diff --git a/arbeit/bibma.bib b/arbeit/bibma.bib
index e54b214..386d802 100644
--- a/arbeit/bibma.bib
+++ b/arbeit/bibma.bib
@@ -75,3 +75,15 @@
 organization={Springer},
 url={https://www.lri.fr/~hansen/proceedings/2014/PPSN/papers/8672/86720465.pdf}
 }
+@article{gaussNewton,
+author = {Donald W. Marquardt},
+title = {An Algorithm for Least-Squares Estimation of Nonlinear Parameters},
+journal = {Journal of the Society for Industrial and Applied Mathematics},
+volume = {11},
+number = {2},
+pages = {431--441},
+year = {1963},
+doi = {10.1137/0111030},
+URL = {https://doi.org/10.1137/0111030},
+eprint = {https://doi.org/10.1137/0111030}
+}
diff --git a/arbeit/files/erklaerung.aux b/arbeit/files/erklaerung.aux
index 8940204..c257395 100644
--- a/arbeit/files/erklaerung.aux
+++ b/arbeit/files/erklaerung.aux
@@ -22,15 +22,15 @@
 \setcounter{ContinuedFloat}{0}
 \setcounter{float@type}{16}
 \setcounter{lstnumber}{1}
-\setcounter{NAT@ctr}{8}
+\setcounter{NAT@ctr}{9}
 \setcounter{AM@survey}{0}
 \setcounter{r@tfl@t}{0}
 \setcounter{subfigure}{0}
 \setcounter{subtable}{0}
-\setcounter{@todonotes@numberoftodonotes}{2}
+\setcounter{@todonotes@numberoftodonotes}{4}
 \setcounter{Item}{0}
 \setcounter{Hfootnote}{2}
-\setcounter{bookmark@seq@number}{19}
+\setcounter{bookmark@seq@number}{21}
 \setcounter{algorithm}{0}
 \setcounter{ALC@unique}{0}
 \setcounter{ALC@line}{0}
diff --git a/arbeit/ma.md b/arbeit/ma.md
index a3d40c7..6c90176 100644
--- a/arbeit/ma.md
+++ b/arbeit/ma.md
@@ -1,5 +1,5 @@
 ---
-fontsize: 11pt
+fontsize: 12pt
 ---
 
 \chapter*{How to read this Thesis}
@@ -36,6 +36,7 @@ We will replicate the same setup on the same meshes but use \acf{FFD} instead of
 work as a predictor given the different deformation scheme.
 
 ## What is \acf{FFD}?
+\label{sec:intro:ffd}
 
 First of all we have to establish how a \ac{FFD} works and why this is a good
 tool for deforming meshes in the first place. For simplicity we only summarize
@@ -114,11 +115,11 @@ mesh albeit the downsides.
 
 ## What is evolutional optimization?
 
-
+\change[inline]{Write this section}
 
 ## Advantages of evolutional algorithms
 
-\improvement[inline]{Needs citations}
+\change[inline]{Needs citations}
 The main advantage of evolutional algorithms is the ability to find optima of
 general functions just with the help of a given error-function (or
 fitness-function in this domain). This avoids the general pitfalls of
@@ -197,12 +198,116 @@ $\|\vec{G}\|_F = 1$, whereby $\|\cdot\|_F$ denotes the Frobenius-Norm.
 
 # Implementation of \acf{FFD}
 
-## Was ist FFD?
+In general, B-Splines have two free parameters $d$ and $\tau$.
+
+As we usually work with regular grids in our \ac{FFD} we define $\tau$
+statically as
+$$\tau_i = \nicefrac{i}{n}$$
+whereby $n$ is the number of control-points in that direction.
+
+$d$ defines the *degree* of the B-Spline-Function (which determines how often
+this function is continuously differentiable) and for our purposes we fix $d$
+to $3$, but give the formulas for the general case so they can be adapted
+quite freely.
+
+
+## Adaption of \ac{FFD}
+
+As we have established in Section \ref{sec:intro:ffd} we can define an
+\ac{FFD}-displacement as
+\begin{equation}
+\Delta_x(u) = \sum_i N_{i,d,\tau_i}(u) \Delta_x c_i
+\end{equation}
+
+Note that we only sum up the $\Delta$-displacements in the control points $c_i$ to get
+the change in position of the point we are interested in.
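To make the displacement sum above concrete, here is a minimal sketch in Python (not taken from the thesis code): it assumes the regular knot vector $\tau_i = \nicefrac{i}{n}$ and degree $d = 3$ described above, and the names `basis` and `displacement` are purely illustrative.

```python
def basis(i, d, u, tau):
    """Cox-de Boor recursion for the B-Spline basis function N_{i,d,tau}(u)."""
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    left, right = 0.0, 0.0
    if tau[i + d] != tau[i]:          # the 0/0 convention: skip vanishing terms
        left = (u - tau[i]) / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        right = (tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1]) * basis(i + 1, d - 1, u, tau)
    return left + right

def displacement(u, delta_c, d=3):
    """Delta_x(u) = sum_i N_{i,d,tau}(u) * delta_c[i] on the regular knots tau_i = i/n."""
    n = len(delta_c)
    tau = [i / n for i in range(n + d + 1)]   # uniform knot vector, as in the text
    return sum(basis(i, d, u, tau) * delta_c[i] for i in range(n))

# example: displacement of a point at u = 0.7 for made-up control-point offsets
print(displacement(0.7, [0.0, 0.1, 0.3, 0.0, -0.2, 0.1]))
```

The two divisor checks implement the usual $\nicefrac{0}{0} := 0$ convention of the Cox-de Boor recursion; everything else is a direct transcription of the sum above.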
+
+In this way every deformed vertex is defined by
+$$
+\textrm{Deform}(v_x) = v_x + \Delta_x(u)
+$$
+with $u \in [0..1[$ being the variable that connects the high-detailed
+vertex-mesh to the low-detailed control-grid. To actually calculate the new
+position of the vertex we first have to calculate the $u$-value for each
+vertex. This is achieved by finding out the parametrization of $v$ in terms of
+$c_i$
+$$
+v_x = \sum_i N_{i,d,\tau_i}(u) c_i
+$$
+
+As the B-Spline-functions are smooth and convex we can just differentiate with
+respect to $u$, yielding
+
+\begin{eqnarray*}
+& \frac{\partial}{\partial u} & v_x - \sum_i N_{i,d,\tau_i}(u) c_i \\
+& = & - \sum_i \left( \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u) \right) c_i
+\end{eqnarray*}
+
+and do a gradient-descent to approximate the value of $u$ up to an $\epsilon$ of $0.0001$.
+
+For this we use the Gauss-Newton algorithm \cite{gaussNewton} as the solution to
+this problem may not be unique, because we usually have far more vertices
+than control points ($\#v \gg \#c$).
+
+## Adaption of \ac{FFD} for a 3D-Mesh
 \label{3dffd}
 
-- Definition
-- Wieso Newton-Optimierung?
-  - Was folgt daraus?
+This is a straightforward extension of the 1D-method presented in the previous
+section. This time, however, things get a bit more complicated: as we have a
+3-dimensional grid we may have a different number of control-points in each
+direction.
+
+Given $n,m,o$ control points in $x,y,z$-direction, each point on the curve is
+defined by
+$$V(u,v,w) = \sum_{i=0}^{n-d-2} \sum_{j=0}^{m-d-2} \sum_{k=0}^{o-d-2} N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot C_{ijk}.$$
+
+In this case we have three different B-Splines (one for each dimension) and also
+three variables $u,v,w$ for each vertex we want to approximate.
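Before moving on to the 3D case, the per-vertex parametrization step described a few paragraphs above (finding the $u$ with $v_x = \sum_i N_{i,d,\tau}(u) c_i$, iterated until the stated $\epsilon = 0.0001$) can be sketched as follows. This is only an illustration, not the thesis implementation: it uses scipy's `BSpline` for the curve and its derivative, and the control points, start value `u0` and clamping to the valid knot span are assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

d = 3
c = np.array([0.0, 0.2, 0.1, 0.5, 0.9, 1.0])   # example control points (made up)
n = len(c)
tau = np.arange(n + d + 1) / n                 # regular knots tau_i = i/n, as in the text

curve = BSpline(tau, c, d)      # u -> sum_i N_{i,d,tau}(u) * c_i
dcurve = curve.derivative()     # analytic derivative of the spline

def parametrize(v_x, u0=0.75, eps=1e-4, max_iter=100):
    """Newton-type iteration on the residual r(u) = v_x - curve(u)."""
    u = u0
    for _ in range(max_iter):
        r = v_x - float(curve(u))
        if abs(r) < eps:
            break
        u += r / float(dcurve(u))          # Newton step, since r'(u) = -curve'(u)
        u = min(max(u, tau[d]), tau[n])    # stay inside the valid knot span
    return u

print(parametrize(0.4))   # u with curve(u) close to 0.4
```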
+
+Given a target vertex $\vec{p}^*$ and an initial guess $\vec{p}=V(u,v,w)$
+we define the error-function for the gradient-descent as:
+
+$$Err(u,v,w,\vec{p}^{*}) = \vec{p}^{*} - V(u,v,w)$$
+
+And the partial version for just one direction as
+
+$$Err_x(u,v,w,\vec{p}^{*}) = p^{*}_x - \sum_{i=0}^{n-d-2} \sum_{j=0}^{m-d-2} \sum_{k=0}^{o-d-2} {C_{ijk}}_x N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) $$
+
+To solve this we take the partial derivatives, as before:
+
+$$
+\begin{array}{rl}
+  \displaystyle \frac{\partial Err_x}{\partial u} & = \displaystyle \frac{\partial}{\partial u} \left( p^{*}_x - \sum_{i=0}^{n-d-2} \sum_{j=0}^{m-d-2} \sum_{k=0}^{o-d-2} {C_{ijk}}_x N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \right) \\
+  & = \displaystyle - \sum_{i=0}^{n-d-2} \sum_{j=0}^{m-d-2} \sum_{k=0}^{o-d-2} {C_{ijk}}_x N'_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w)
+\end{array}
+$$
+
+The other partial derivatives follow the same pattern yielding the Jacobian:
+
+$$
+J(Err(u,v,w)) =
+\left(
+\begin{array}{ccc}
+\frac{\partial Err_x}{\partial u} & \frac{\partial Err_x}{\partial v} & \frac{\partial Err_x}{\partial w} \\
+\frac{\partial Err_y}{\partial u} & \frac{\partial Err_y}{\partial v} & \frac{\partial Err_y}{\partial w} \\
+\frac{\partial Err_z}{\partial u} & \frac{\partial Err_z}{\partial v} & \frac{\partial Err_z}{\partial w}
+\end{array}
+\right)
+$$
+
+\unsure[inline]{Should I add an informal complete derivative?\newline
+Like leaving out Sums and $i,j,k$-Indices to make it obvious what derivative belongs
+where in what case?}
+
+With the Gauss-Newton algorithm we iterate the formula
+$$J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)$$
+and use Cramer's rule to invert the small Jacobian and solve this system of
+linear equations.
+
+
+## Is the parametrization sensible?
+
+- Drawbacks of the parametrization
+  - Deformation around a control point can be controlled much more directly.
+  - => DM-FFD?
 
 ## Test Scenario: 1D Function Approximation
diff --git a/arbeit/ma.pdf b/arbeit/ma.pdf
index 6d23dbd..499ccd1 100644
Binary files a/arbeit/ma.pdf and b/arbeit/ma.pdf differ
diff --git a/arbeit/ma.tex b/arbeit/ma.tex
index 1615d88..9690d2c 100644
--- a/arbeit/ma.tex
+++ b/arbeit/ma.tex
@@ -2,7 +2,7 @@ % abstracton : Abstract mit Ueberschrift
 \documentclass[
 a4paper, % default
-11pt, % default = 11pt
+12pt, % default = 11pt
 BCOR6mm, % Bindungskorrektur bei Klebebindung 6mm, bei Lochen BCOR8.25mm
 twoside, % default, 2seitig
 titlepage,
@@ -30,6 +30,7 @@ xcolor=dvipsnames,
 %\setlength{\parindent}{0pt} % kein einzug bei absaetzen
 %\setlength{\lineskip}{1ex plus0.5ex minus0.5ex} % dafür abstand zwischen absätzen (funktioniert noch nicht)
 % \renewcommand{\familydefault}{\sfdefault}
+\setstretch{1.44} % 1.5-facher zeilenabstand
 
 %%%%%%%%%%%%%%% Header - Footer %%%%%%%%%%%%%%%
 % ### Für 2 Seitig (option twopage):
@@ -162,6 +163,8 @@ deformation scheme.
 
 \section{\texorpdfstring{What is \acf{FFD}?}{What is ?}}\label{what-is}
 
+\label{sec:intro:ffd}
+
 First of all we have to establish how a \ac{FFD} works and why this is a good
 tool for deforming meshes in the first place. For simplicity we only summarize
 the 1D-case from \cite{spitzmuller1996bezier} here and go
@@ -243,10 +246,12 @@ high-polygon mesh albeit the downsides.
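To illustrate the Gauss-Newton update described in the ma.md hunk above (solving $J \cdot \Delta(u,v,w) = -Err$ with Cramer's rule on the small $3\times 3$ Jacobian), a self-contained sketch could look like this. The values of `J` and `err` are placeholders, not output of the thesis code.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def gauss_newton_step(J, err):
    """Solve J * delta = -err via Cramer's rule; returns (du, dv, dw)."""
    rhs = [-e for e in err]
    d = det3(J)
    if abs(d) < 1e-12:
        raise ValueError("Jacobian is (numerically) singular")
    delta = []
    for k in range(3):
        # replace column k of J by the right-hand side and take the determinant ratio
        Jk = [[rhs[r] if c == k else J[r][c] for c in range(3)] for r in range(3)]
        delta.append(det3(Jk) / d)
    return delta

# toy example with made-up numbers
J = [[2.0, 0.1, 0.0], [0.0, 1.5, 0.2], [0.1, 0.0, 1.0]]
err = [0.05, -0.02, 0.01]
du, dv, dw = gauss_newton_step(J, err)
print(du, dv, dw)
```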
 \section{What is evolutional
 optimization?}\label{what-is-evolutional-optimization}
 
+\change[inline]{Write this section}
+
 \section{Advantages of evolutional
 algorithms}\label{advantages-of-evolutional-algorithms}
 
-\improvement[inline]{Needs citations} The main advantage of evolutional
+\change[inline]{Needs citations} The main advantage of evolutional
 algorithms is the ability to find optima of general functions just
 with the help of a given error-function (or fitness-function in this
 domain). This avoids the general pitfalls of gradient-based procedures, which
@@ -330,18 +335,125 @@ Frobenius-Norm.
 
 \chapter{\texorpdfstring{Implementation of
 \acf{FFD}}{Implementation of }}\label{implementation-of}
 
-\section{Was ist FFD?}\label{was-ist-ffd}
+In general, B-Splines have two free parameters \(d\) and \(\tau\).
+
+As we usually work with regular grids in our \ac{FFD} we define \(\tau\)
+statically as \[\tau_i = \nicefrac{i}{n}\] whereby \(n\) is the number
+of control-points in that direction.
+
+\(d\) defines the \emph{degree} of the B-Spline-Function (which
+determines how often this function is continuously differentiable) and
+for our purposes we fix \(d\) to \(3\), but give the formulas for the
+general case so they can be adapted quite freely.
+
+\section{\texorpdfstring{Adaption of
+\ac{FFD}}{Adaption of }}\label{adaption-of}
+
+As we have established in Section \ref{sec:intro:ffd} we can define an
+\ac{FFD}-displacement as
+
+\begin{equation}
+\Delta_x(u) = \sum_i N_{i,d,\tau_i}(u) \Delta_x c_i
+\end{equation}
+
+Note that we only sum up the \(\Delta\)-displacements in the control
+points \(c_i\) to get the change in position of the point we are
+interested in.
+
+In this way every deformed vertex is defined by \[
+\textrm{Deform}(v_x) = v_x + \Delta_x(u)
+\] with \(u \in [0..1[\) being the variable that connects the
+high-detailed vertex-mesh to the low-detailed control-grid. To actually
+calculate the new position of the vertex we first have to calculate the
+\(u\)-value for each vertex. This is achieved by finding out the
+parametrization of \(v\) in terms of \(c_i\) \[
+v_x = \sum_i N_{i,d,\tau_i}(u) c_i
+\]
+
+As the B-Spline-functions are smooth and convex we can just
+differentiate with respect to \(u\), yielding
+
+\begin{eqnarray*}
+& \frac{\partial}{\partial u} & v_x - \sum_i N_{i,d,\tau_i}(u) c_i \\
+& = & - \sum_i \left( \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u) \right) c_i
+\end{eqnarray*}
+
+and do a gradient-descent to approximate the value of \(u\) up to an
+\(\epsilon\) of \(0.0001\).
+
+For this we use the Gauss-Newton algorithm \cite{gaussNewton} as the
+solution to this problem may not be unique, because we usually have far
+more vertices than control points (\(\#v \gg \#c\)).
+
+\section{\texorpdfstring{Adaption of \ac{FFD} for a
+3D-Mesh}{Adaption of for a 3D-Mesh}}\label{adaption-of-for-a-3d-mesh}
 \label{3dffd}
 
+This is a straightforward extension of the 1D-method presented in the
+previous section. This time, however, things get a bit more
+complicated: as we have a 3-dimensional grid we may have a different
+number of control-points in each direction.
+
+Given \(n,m,o\) control points in \(x,y,z\)-direction, each point on
+the curve is defined by
+\[V(u,v,w) = \sum_{i=0}^{n-d-2} \sum_{j=0}^{m-d-2} \sum_{k=0}^{o-d-2} N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot C_{ijk}.\]
+
+In this case we have three different B-Splines (one for each dimension)
+and also three variables \(u,v,w\) for each vertex we want to
+approximate.
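A direct transcription of the trivariate sum $V(u,v,w)$ shown above, again only as an illustration: it uses scipy's `BSpline.basis_element` for the individual $N_{i,d,\tau}$, a made-up control lattice `C`, and one basis function per control point with a knot vector of length $n+d+1$ (a slightly simpler index convention than the $n-d-2$ upper bounds in the formula).

```python
import numpy as np
from scipy.interpolate import BSpline

d = 3
n, m, o = 5, 6, 7                         # control points per direction (example values)
C = np.random.rand(n, m, o, 3)            # control lattice C_ijk in R^3 (made up)

def knots(count):
    return np.arange(count + d + 1) / count      # tau_i = i/count, as in the text

def basis(i, tau, t):
    """Single basis function N_{i,d,tau} evaluated at t (0 outside its support)."""
    b = BSpline.basis_element(tau[i:i + d + 2], extrapolate=False)(t)
    return 0.0 if np.isnan(b) else float(b)

def V(u, v, w):
    """Triple tensor-product sum over all control points."""
    tu, tv, tw = knots(n), knots(m), knots(o)
    point = np.zeros(3)
    for i in range(n):
        for j in range(m):
            for k in range(o):
                point += basis(i, tu, u) * basis(j, tv, v) * basis(k, tw, w) * C[i, j, k]
    return point

print(V(0.7, 0.5, 0.6))
```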
+
+Given a target vertex \(\vec{p}^*\) and an initial guess
+\(\vec{p}=V(u,v,w)\) we define the error-function for the
+gradient-descent as:
+
+\[Err(u,v,w,\vec{p}^{*}) = \vec{p}^{*} - V(u,v,w)\]
+
+And the partial version for just one direction as
+
+\[Err_x(u,v,w,\vec{p}^{*}) = p^{*}_x - \sum_{i=0}^{n-d-2} \sum_{j=0}^{m-d-2} \sum_{k=0}^{o-d-2} {C_{ijk}}_x N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \]
+
+To solve this we take the partial derivatives, as before:
+
+\[
+\begin{array}{rl}
+  \displaystyle \frac{\partial Err_x}{\partial u} & = \displaystyle \frac{\partial}{\partial u} \left( p^{*}_x - \sum_{i=0}^{n-d-2} \sum_{j=0}^{m-d-2} \sum_{k=0}^{o-d-2} {C_{ijk}}_x N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \right) \\
+  & = \displaystyle - \sum_{i=0}^{n-d-2} \sum_{j=0}^{m-d-2} \sum_{k=0}^{o-d-2} {C_{ijk}}_x N'_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w)
+\end{array}
+\]
+
+The other partial derivatives follow the same pattern yielding the
+Jacobian:
+
+\[
+J(Err(u,v,w)) =
+\left(
+\begin{array}{ccc}
+\frac{\partial Err_x}{\partial u} & \frac{\partial Err_x}{\partial v} & \frac{\partial Err_x}{\partial w} \\
+\frac{\partial Err_y}{\partial u} & \frac{\partial Err_y}{\partial v} & \frac{\partial Err_y}{\partial w} \\
+\frac{\partial Err_z}{\partial u} & \frac{\partial Err_z}{\partial v} & \frac{\partial Err_z}{\partial w}
+\end{array}
+\right)
+\]
+
+\unsure[inline]{Should I add an informal complete derivative?\newline
+Like leaving out Sums and $i,j,k$-Indices to make it obvious what
+derivative belongs where in what case?}
+
+With the Gauss-Newton algorithm we iterate the formula
+\[J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)\]
+and use Cramer's rule to invert the small Jacobian and solve this
+system of linear equations.
+
+\section{Is the parametrization sensible?}\label{parametrisierung-sinnvoll}
+
 \begin{itemize}
 \tightlist
 \item
-  Definition
+  Drawbacks of the parametrization
 \item
-  Wieso Newton-Optimierung?
+  Deformation around a control point can be controlled much more directly.
 \item
-  Was folgt daraus?
+  =\textgreater{} DM-FFD?
 \end{itemize}
 
 \section{Test Scenario: 1D Function
diff --git a/arbeit/settings/commands.tex b/arbeit/settings/commands.tex
index 5c377ec..8fc60a3 100644
--- a/arbeit/settings/commands.tex
+++ b/arbeit/settings/commands.tex
@@ -1,59 +1,59 @@
 \newcommand\exampleend{\hfill$\diamond$}
 
 % ##### DCJ stuff #####
-\newcommand\dcaj{double cut and join\xspace}
-\newcommand\dcjindel{d_{\DCJ}^{id}}
-\newcommand\dcj{d_{\DCJ}}
-\renewcommand\gg{\mathcal G}
-\newcommand\del{\mathcal A}
-\newcommand\ins{\mathcal B}
-\newcommand\clean{\varnothing}
-\newcommand\gdel{\mathcal G_{\!A}}
-\newcommand\gins{\mathcal G_{\!B}}
-\newcommand\ag{\AG(A,B)}
-\renewcommand\r{\lambda} % # runs
-\newcommand\R{\Lambda} % # runs after clustering
-\newcommand\dr{\Delta\r}
-\newcommand\drr{\Delta\r^{\!\rho}}
-\newcommand\DR{\Delta\R}
-\newcommand\DRR{\Delta\R^{\!\rho}}
-%\newcommand\lab[1]{\lambda(#1)}
-\newcommand\redu[1]{\left.#1\right|_\mathcal G}
-\newcommand\h[1]{#1^h}
-\renewcommand\t[1]{#1^t}
-\renewcommand\a{$A$\xspace}
-\renewcommand\b{$B$\xspace}
-\renewcommand\O{\mathcal O}
-\renewcommand\AA{$A\!A$\xspace}
-\newcommand\AB{$A\!B$\xspace}
-\newcommand\BA{$B\!A$\xspace}
-\newcommand\BB{$B\!B$\xspace}
-\newcommand\ab{\del\ins}
-\newcommand\ba{\ins\del}
-\def\aa{A\!A}
-\def\bb{B\!B}
-\def\AAab{A\!A_{\!\del\!\ins}}
-\def\AAa{A\!A_{\!\del}}
-\def\AAb{A\!A_{\ins}}
-\def\BBab{B\!B_{\!\!\del\!\ins}}
-\def\BBa{B\!B_{\!\!\del}}
-\def\BBb{B\!B_{\ins}}
-\def\ABab{A\!B_{\!\!\del\!\ins}}
-\def\ABba{A\!B_{\ins\!\del}}
-\def\ABa{A\!B_{\!\!\del}}
-\def\ABb{A\!B_\ins}
-\def\ABm{A\!B_\bullet}
-\def\AAo{A\!A_{\!\varnothing}}
-\def\ABo{A\!B_{\!\varnothing}}
-\def\BBo{B\!B_{\!\varnothing}}
-\def\xx{A\!B_\times}
-\def\aba{\del\!\gr{\ins\!\del}}
-\def\bab{\ins\!\gr{\del\!\ins}}
-\newcommand\clusterize{accumulate\xspace}
-\newcommand\clustering{accumulating\xspace} %aggregate, unite/unifying
-\newcommand\clusterized{accumulated\xspace}
-\newcommand\clusterization{accumulation\xspace}
-\renewcommand\v[1]{V(#1)}
-\newcommand\vset[2]{\v{#1}=\set{#2}}
+% \newcommand\dcaj{double cut and join\xspace}
+% \newcommand\dcjindel{d_{\DCJ}^{id}}
+% \newcommand\dcj{d_{\DCJ}}
+% \renewcommand\gg{\mathcal G}
+% \newcommand\del{\mathcal A}
+% \newcommand\ins{\mathcal B}
+% \newcommand\clean{\varnothing}
+% \newcommand\gdel{\mathcal G_{\!A}}
+% \newcommand\gins{\mathcal G_{\!B}}
+% \newcommand\ag{\AG(A,B)}
+% \renewcommand\r{\lambda} % # runs
+% \newcommand\R{\Lambda} % # runs after clustering
+% \newcommand\dr{\Delta\r}
+% \newcommand\drr{\Delta\r^{\!\rho}}
+% \newcommand\DR{\Delta\R}
+% \newcommand\DRR{\Delta\R^{\!\rho}}
+% %\newcommand\lab[1]{\lambda(#1)}
+% \newcommand\redu[1]{\left.#1\right|_\mathcal G}
+% \newcommand\h[1]{#1^h}
+% \renewcommand\t[1]{#1^t}
+% \renewcommand\a{$A$\xspace}
+% \renewcommand\b{$B$\xspace}
+% \renewcommand\O{\mathcal O}
+% \renewcommand\AA{$A\!A$\xspace}
+% \newcommand\AB{$A\!B$\xspace}
+% \newcommand\BA{$B\!A$\xspace}
+% \newcommand\BB{$B\!B$\xspace}
+% \newcommand\ab{\del\ins}
+% \newcommand\ba{\ins\del}
+% \def\aa{A\!A}
+% \def\bb{B\!B}
+% \def\AAab{A\!A_{\!\del\!\ins}}
+% \def\AAa{A\!A_{\!\del}}
+% \def\AAb{A\!A_{\ins}}
+% \def\BBab{B\!B_{\!\!\del\!\ins}}
+% \def\BBa{B\!B_{\!\!\del}}
+% \def\BBb{B\!B_{\ins}}
+% \def\ABab{A\!B_{\!\!\del\!\ins}}
+% \def\ABba{A\!B_{\ins\!\del}}
+% \def\ABa{A\!B_{\!\!\del}}
+% \def\ABb{A\!B_\ins}
+% \def\ABm{A\!B_\bullet}
+% \def\AAo{A\!A_{\!\varnothing}}
+% \def\ABo{A\!B_{\!\varnothing}}
+% \def\BBo{B\!B_{\!\varnothing}}
+% \def\xx{A\!B_\times}
+% \def\aba{\del\!\gr{\ins\!\del}}
+% \def\bab{\ins\!\gr{\del\!\ins}}
+% \newcommand\clusterize{accumulate\xspace}
+% \newcommand\clustering{accumulating\xspace} %aggregate, unite/unifying
+% \newcommand\clusterized{accumulated\xspace}
+% \newcommand\clusterization{accumulation\xspace}
+% \renewcommand\v[1]{V(#1)}
+% \newcommand\vset[2]{\v{#1}=\set{#2}}
 
 % ##### math #####
 \DeclareMathOperator\AG{AG}
@@ -174,8 +174,8 @@
 \includegraphics[width=1cm]{img/cd}
 \end{center}\vspace{-15pt}\centering\footnotesize\texttt{#1}}}
 \renewcommand\vec[1]{\textbf{#1}}
-\newcommandx{\unsure}[2][1=]{\todo[linecolor=red,backgroundcolor=red!25,bordercolor=red,#1]{#2}}
-\newcommandx{\change}[2][1=]{\todo[linecolor=blue,backgroundcolor=blue!25,bordercolor=blue,#1]{#2}}
-\newcommandx{\info}[2][1=]{\todo[linecolor=OliveGreen,backgroundcolor=OliveGreen!25,bordercolor=OliveGreen,#1]{#2}}
+\newcommandx{\unsure}[2][1=]{\todo[linecolor=red,backgroundcolor=red!25,bordercolor=red,#1]{\textbf{Unsure:} #2}}
+\newcommandx{\change}[2][1=]{\todo[linecolor=blue,backgroundcolor=blue!25,bordercolor=blue,#1]{\textbf{Change:} #2}}
+\newcommandx{\info}[2][1=]{\todo[linecolor=OliveGreen,backgroundcolor=OliveGreen!25,bordercolor=OliveGreen,#1]{\textbf{Info:} #2}}
 \newcommandx{\improvement}[2][1=]{\todo[linecolor=violet,backgroundcolor=violet!25,bordercolor=violet,#1]{#2}}
 \newcommandx{\thiswillnotshow}[2][1=]{\todo[disable,#1]{#2}}
diff --git a/arbeit/settings/packages.tex b/arbeit/settings/packages.tex
index 78f3283..44f27ff 100644
--- a/arbeit/settings/packages.tex
+++ b/arbeit/settings/packages.tex
@@ -15,7 +15,7 @@
 \usepackage{color} %\colorbox
 \usepackage{dsfont} %\mathds
 \usepackage{draftwatermark}
-\SetWatermarkLightness{0.9} % default: 0.8
+\SetWatermarkLightness{0.95} % default: 0.8
 \usepackage{epigraph}
 % \usepackage{euler} % euler: uni, eucal: baake, ohne: standard
 \usepackage{eucal} % euler calligraphy
diff --git a/arbeit/template.tex b/arbeit/template.tex
index 1613eb6..fb8d0a6 100644
--- a/arbeit/template.tex
+++ b/arbeit/template.tex
@@ -30,6 +30,7 @@ xcolor=dvipsnames,
 %\setlength{\parindent}{0pt} % kein einzug bei absaetzen
 %\setlength{\lineskip}{1ex plus0.5ex minus0.5ex} % dafür abstand zwischen absätzen (funktioniert noch nicht)
 % \renewcommand{\familydefault}{\sfdefault}
+\setstretch{1.44} % 1.5-facher zeilenabstand
 
 %%%%%%%%%%%%%%% Header - Footer %%%%%%%%%%%%%%%
 % ### Für 2 Seitig (option twopage):