This commit is contained in:
Nicole Dresselhaus 2017-10-29 19:02:31 +01:00
parent c54b3f2960
commit 103b629b84
Signed by: Drezil
GPG Key ID: 057D94F356F41E25
6 changed files with 193 additions and 100 deletions


@ -2,10 +2,9 @@
\thispagestyle{empty}
\vspace*{\stretch{1}}
\noindent
{\huge Declaration of own work}\\[1cm]
I hereby declare that this thesis is my own work and effort. Where other sources of information have been used, they have been acknowledged.
\\[2cm]
Bielefeld, \today\hspace{\fill}
\parbox[t]{5cm}{\dotfill\\ \centering Stefan Dresselhaus}
\vspace*{\stretch{3}}

BIN  arbeit/img/imp1d3d.png (new binary file, 19 KiB)


@ -125,8 +125,8 @@ Chapter \ref{sec:dis}.
\label{sec:back:ffd}
First of all we have to establish how a \ac{FFD} works and why this is a good
tool for deforming geometric objects (especially meshes in our case) in the
first place. For simplicity we only summarize the 1D--case from
\cite{spitzmuller1996bezier} here and go into the extension to the 3D case in
chapter \ref{3dffd}.
@ -150,7 +150,7 @@ corresponding deformation to generate a deformed object}
\end{figure}
In the 1--dimensional example in figure \ref{fig:bspline}, the control--points
are indicated as red dots and the colour--gradient should hint at the $u$--values
ranging from $0$ to $1$.
We now define a \acf{FFD} by the following:
@ -169,7 +169,7 @@ N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}} N_{i,d-1,\tau}(u) + \frac{\tau_{i+
\end{equation}
If we now multiply every $p_i$ with the corresponding $N_{i,d,\tau_i}(u)$ we get
the contribution of each point $p_i$ to the final curve--point parametrized only
by $u \in [0,1[$. As can be seen from \eqref{eqn:ffd1d2} we only access points
$[p_i..p_{i+d}]$ for any given $i$^[one more for each recursive step.], which gives
us, in combination with choosing $p_i$ and $\tau_i$ in order, only a local
@ -216,18 +216,18 @@ where $\vec{N}$ is the $n \times m$ transformation--matrix (later on called
\end{center}
\caption[B--spline--basis--function as partition of unity]{From \cite[Figure 2.13]{brunet2010contributions}:\newline
\glqq Some interesting properties of the B--splines. On the natural definition domain
of the B--spline ($[k_0,k_4]$ on this figure), the B--spline basis functions sum
up to one (partition of unity). In this example, we use B--splines of degree 2.
The horizontal segment below the abscissa axis represents the domain of
influence of the B--splines basis function, i.e. the interval on which they are
not null. At a given point, there are at most $d+1$ non-zero B--spline basis
functions (compact support).\grqq \newline
Note that Brunet starts his index at $-d$ as opposed to our definition, where we
start at $0$.}
\label{fig:partition_unity}
\end{figure}
Furthermore, B--spline--basis--functions form a partition of unity for all but
the first and last $d$ control-points\cite{brunet2010contributions}. Therefore
we later on use the border-points $d+1$ times, such that $\sum_j n_{i,j} p_j = p_i$
for these points.
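To make this construction tangible, the following is a minimal Python sketch (an illustration under assumptions, not the implementation used later in this thesis): a textbook Cox--de Boor recursion for the basis functions, the assembly of the $n \times m$ matrix $\vec{N}$ for a clamped knot--vector (one way to realize the repetition of the border--points described above), and a numerical check of the partition of unity. The names `basis` and `knots` and the exact normalization of \eqref{eqn:ffd1d2} are assumptions for illustration.

```python
import numpy as np

def basis(i, d, u, knots):
    """Textbook Cox--de Boor recursion for N_{i,d}(u); divisions by zero count as zero."""
    if d == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + d] != knots[i]:
        left = (u - knots[i]) / (knots[i + d] - knots[i]) * basis(i, d - 1, u, knots)
    if knots[i + d + 1] != knots[i + 1]:
        right = (knots[i + d + 1] - u) / (knots[i + d + 1] - knots[i + 1]) * basis(i + 1, d - 1, u, knots)
    return left + right

d = 2                                    # degree of the B-spline
p = np.array([0.0, 1.0, 0.5, 2.0])       # m = 4 control-points of a 1D example
m = len(p)
# clamp the knot-vector at the borders so the basis sums to one everywhere
knots = np.concatenate(([0.0] * d, np.linspace(0.0, 1.0, m - d + 1), [1.0] * d))

us = np.linspace(0.0, 1.0, 50, endpoint=False)               # samples of u in [0,1[
N = np.array([[basis(j, d, u, knots) for j in range(m)] for u in us])

assert np.allclose(N.sum(axis=1), 1.0)   # partition of unity on the whole domain
s = N @ p                                # deformed curve-points, one per sampled u
```

For every sampled $u$ at most $d+1$ entries of the corresponding row of $\vec{N}$ are non--zero, which is exactly the local support discussed above.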
@ -308,9 +308,29 @@ initialized by a random guess or just zero. Further on we need a so--called
space $M$ (usually $M = \mathbb{R}$) along a convergence--function $c : I \mapsto \mathbb{B}$
that terminates the optimization.
Biologically speaking the set $I$ corresponds to the set of possible *genotypes*
while $M$ represents the possible observable *phenotypes*. *Genotypes* define
all initial properties of an individual, but these properties are not directly
observable. It is the genes that evolve over time (and thus correspond to the
parameters we are tweaking in our algorithms, or to the genes in nature), but only
the *phenotypes* make certain behaviour observable (algorithmically through our
*fitness--function*, biologically by the ability to survive and produce
offspring). Any individual in our algorithm thus experiences a biologically
motivated life cycle: it inherits genes from its parents, is modified by occurring
mutations, performs according to a fitness--metric and generates offspring
based on this performance. Therefore each iteration in the while--loop above is
also often called a generation.
One should note that there is a subtle difference between the *fitness--function*
and the so--called *genotype--phenotype--mapping*. The former applies the
*genotype--phenotype--mapping* and then evaluates the performance of an individual,
thus going directly from genes/parameters to a reproduction--probability/score.
In a concrete example the *genotype* can be an arbitrary vector (the genes), the
*phenotype* is then a deformed object, and the performance can be a single
measurement like an air--drag--coefficient. The *genotype--phenotype--mapping*
would then just be the generation of different objects from that
starting--vector, whereas the *fitness--function* would go directly from such a
starting--vector to the coefficient that we want to optimize.
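To make this distinction concrete, here is a small Python sketch; all names and the deformation/measurement functions are purely hypothetical placeholders and not the setup used in the experiments later on:

```python
import numpy as np

def genotype_phenotype_mapping(genes: np.ndarray) -> np.ndarray:
    """Hypothetical mapping: deform a simple base geometry by the gene-vector."""
    base_object = np.linspace(0.0, 1.0, genes.size)   # stand-in for a mesh
    return base_object + genes                        # stand-in for e.g. an FFD

def measure_performance(phenotype: np.ndarray) -> float:
    """Hypothetical measurement on the phenotype, e.g. an air-drag-coefficient."""
    return float(np.sum(phenotype ** 2))

def fitness(genes: np.ndarray) -> float:
    """The fitness-function composes both steps: genes -> phenotype -> score."""
    return measure_performance(genotype_phenotype_mapping(genes))
```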
The main algorithm just repeats the following steps:
@ -335,15 +355,16 @@ can be changed over time. A good overview of this is given in
For example the mutation can consist of merely a single $\sigma$ determining the
strength of the Gaussian perturbation applied to every parameter --- or giving a
different $\sigma$ to every component of those parameters. An even more
sophisticated example would be the \glqq 1/5 success rule\grqq \ from
\cite{rechenberg1973evolutionsstrategie}.
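The following Python sketch illustrates such a self--adapting mutation; the adaptation factor $0.82$ follows a common textbook variant of the 1/5 success rule and is an assumption for illustration, not Rechenberg's exact formulation:

```python
import numpy as np

rng = np.random.default_rng()

def mutate(parent: np.ndarray, sigma: float) -> np.ndarray:
    """Gaussian mutation with a single global step-size sigma."""
    return parent + rng.normal(0.0, sigma, size=parent.shape)

def adapt_sigma(sigma: float, success_rate: float, c: float = 0.82) -> float:
    """1/5 success rule: enlarge the step-size if more than one fifth of the
    recent mutations improved the fitness, shrink it otherwise."""
    if success_rate > 0.2:
        return sigma / c
    if success_rate < 0.2:
        return sigma * c
    return sigma
```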
Also in the selection--function it may not be wise to only take the
best--performing individuals, because it may be that the optimization has to
overcome a barrier of bad fitness to achieve a better local optimum.
Recombination also does not have to be a mere random choice of parents, but can
also take ancestry, distance of genes or groups of individuals into account.
## Advantages of evolutionary algorithms
\label{sec:back:evogood}
@ -364,13 +385,11 @@ are shown in figure \ref{fig:probhard}.
Most of the advantages stem from the fact that a gradient--based procedure has
only one point of observation from where it evaluates the next steps, whereas an
evolutionary strategy starts with a population of guessed solutions. Because an
evolutionary strategy can be modified according to the problem--domain (i.e. by
the ideas given above), it can also approximate very difficult problems in an
efficient manner and even self--tune parameters depending on the ancestry at
runtime^[Some examples of this are explained in detail in
\cite{eiben1999parameter}].
If an analytic best solution exists and is easily computable (i.e. because the
error--function is convex), an evolutionary algorithm is not the right choice.
@ -381,7 +400,7 @@ either not convex or there are so many parameters that an analytic solution
(mostly meaning the equivalence to an exhaustive search) is computationally not
feasible. Here evolutionary optimization has one more advantage as one can at
least get suboptimal solutions fast, which then refine over time and still
converge to a decent solution much faster than an exhaustive search.
## Criteria for the evolvability of linear deformations
\label{sec:intro:rvi}
@ -445,8 +464,8 @@ optimal value and $0$ is the worst value.
On the one hand this criterion should be characteristic of numeric
stability\cite[chapter 2.7]{golub2012matrix} and on the other hand of the
convergence speed of evolutionary algorithms\cite{anrichterEvol} as it is tied
to the notion of locality\cite{weise2012evolutionary,thorhauer2014locality}.
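For reference, a short sketch of how such a criterion could be computed from the deformation--matrix, assuming the usual definition as the inverse condition number $\sigma_{\mathrm{min}}/\sigma_{\mathrm{max}}$ of $\vec{U}$; the function name and the use of numpy are illustrative assumptions:

```python
import numpy as np

def regularity(U: np.ndarray) -> float:
    """Inverse condition number of the deformation-matrix: 1 is optimal, 0 is worst."""
    singular_values = np.linalg.svd(U, compute_uv=False)
    return float(singular_values.min() / singular_values.max())
```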
### Improvement Potential
@ -582,7 +601,7 @@ $$
With the Gauss--Newton algorithm we iterate via the formula
$$J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)$$
and use Cramer's rule for inverting the small Jacobian and solving this system of
linear equations.
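A compact sketch of one such step, taking the $3 \times 3$ Jacobian `J` and the error vector `err` at the current $(u,v,w)$ as inputs and solving for the update via Cramer's rule (an illustration only, not the thesis's actual implementation):

```python
import numpy as np

def gauss_newton_step(J: np.ndarray, err: np.ndarray) -> np.ndarray:
    """Solve J * delta = -err for the 3x3 case via Cramer's rule."""
    det_J = np.linalg.det(J)
    delta = np.empty(3)
    for k in range(3):
        J_k = J.copy()
        J_k[:, k] = -err      # replace the k-th column by the right-hand side
        delta[k] = np.linalg.det(J_k) / det_J
    return delta              # the update for (u, v, w)
```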
As there is no strict upper bound of the number of iterations for this
@ -818,10 +837,9 @@ instead of correlation we flip the sign of the correlation--coefficient for
readability and to have the correlation--coefficients be in the
classification--range given above.
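A sketch of how such a flipped coefficient could be obtained, using scipy for illustration; this is not necessarily the evaluation code used for the experiments:

```python
from scipy.stats import spearmanr

def flipped_spearman(criterion, reconstruction_error):
    """Spearman rank-correlation with flipped sign, so that a positive value
    means: a better criterion goes along with a smaller reconstruction-error."""
    r_s, p_value = spearmanr(criterion, reconstruction_error)
    return -r_s, p_value
```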
For the evolutionary optimization we employ the \acf{CMA--ES} of the shark3.1
library \cite{shark08}, as this algorithm was used by \cite{anrichterEvol} as
well. We leave the parameters at their sensible defaults as further explained in
\cite[Appendix~A: Table~1]{hansen2016cma}.
## Procedure: 1D Function Approximation
@ -1133,7 +1151,8 @@ deformation--matrix.
\includegraphics[width=0.8\textwidth]{img/evolution3d/variability2_boxplot.png}
\caption[Histogram of ranks of high--resolution deformation--matrices]{
Histogram of ranks of various $10 \times 10 \times 10$ grids with $1000$
control--points each, showing in this case how many control--points are actually
used in the calculations.
}
\label{fig:histrank3d}
\end{figure}
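Assuming the ranks in this histogram are obtained directly from the deformation--matrices, the underlying measurement is simply the matrix rank; the function name below is an illustrative assumption:

```python
import numpy as np

def used_control_points(U: np.ndarray) -> int:
    """Rank of the deformation-matrix: the number of control-points that
    contribute independent degrees of freedom to the deformation."""
    return int(np.linalg.matrix_rank(U))
```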
@ -1306,7 +1325,7 @@ before with the regularity. In figure \ref{fig:resimp3d} one can clearly see the
correlation and the spread within each setup and the behaviour when we increase
the number of control--points.
Additionally we give the Spearman--coefficients along with their
p--values in table \ref{tab:3dimp}. Within one scenario we only find a *weak* to
*moderate* correlation between the improvement potential and the fitting error,
but all findings (except for $7 \times 4 \times 4$ and $6 \times 6 \times 6$)
@ -1319,9 +1338,28 @@ quality is naturally tied to the number of control--points.
All in all the improvement potential seems to be a good and sensible measure of
quality, even given gradients of varying quality.
Lastly, a small note on the behaviour of improvement potential and convergence
speed, as we used this in the 1D case to argue why the *regularity* defied our
expectations. In contrast, we want to show that improvement potential cannot
serve as a good predictor of the convergence speed. In figure
\ref{fig:imp1d3d} we show improvement potential against number of iterations
for both scenarios. As one can see, in the 1D scenario we have a *strong*
and *significant* correlation (with $-r_S = -0.72$, $p = 0$), whereas in the 3D
scenario we have an equally *significant* and *strong* effect in the opposite
direction (with $-r_S = 0.69$, $p=0$), so these correlations clearly seem to be
dependent on the scenario and are not suited for generalization.
\begin{figure}[hbt]
\centering
\includegraphics[width=\textwidth]{img/imp1d3d.png}
\caption[Improvement potential and convergence speed for 1D and 3D--scenarios]{
\newline
Left: Improvement potential against convergence speed for the
1D--scenario\newline
Right: Improvement potential against convergence speed for the 3D--scenario
}
\label{fig:imp1d3d}
\end{figure}
# Discussion and outlook
\label{sec:dis}
@ -1355,23 +1393,27 @@ between $0.34$ to $0.87$.
Taking these results into consideration, one can say that *variability* and
*improvement potential* are very good estimates for the quality of a fit using
\acf{FFD} as a deformation function, while we could not reproduce results as
compelling as those of Richter et al. for *regularity and convergence speed*.
One reason for the bad or erratic behaviour of the *regularity*--criterion could
be that in an \ac{FFD}--setting it is likely that some control--points contribute
only negligibly to the whole parametrization, resulting in very small right
singular values of the deformation--matrix $\vec{U}$ that influence the
condition--number and thus the *regularity* in a significant way. Further
research is needed to refine *regularity* so that these problems are addressed,
for example by taking all singular values into account when capturing the notion
of *regularity*.
Richter et al. also compared the behaviour of direct and indirect manipulation
in \cite{anrichterEvol}, whereas we merely used an indirect \ac{FFD}--approach.
As direct manipulations tend to perform better than indirect manipulations, the
usage of \acf{DM--FFD} could also work better with the criteria we examined.
This could also mitigate the problem of small singular values for the
*regularity*: as \ac{DM--FFD} incorporates the parametrization of the points on
the surface, which are the essential part of a direct manipulation, a badly
chosen control--grid matters less, because the offending control--points are
never or only negligibly used to parametrize those surface--points.
\improvement[inline]{Adjust the bibliography links: the DOI overrides the
author's direct links.}

Binary file not shown.


@ -278,10 +278,10 @@ outlook in Chapter \ref{sec:dis}.
\label{sec:back:ffd}
First of all we have to establish how a \ac{FFD} works and why this is a
good tool for deforming geometric objects (especially meshes in our
case) in the first place. For simplicity we only summarize the 1D--case
from \cite{spitzmuller1996bezier} here and go into the extension to the
3D case in chapter~\ref{3dffd}.
The main idea of \ac{FFD} is to create a function
\(s : [0,1[^d \mapsto \mathbb{R}^d\) that spans a certain part of a
@ -302,8 +302,8 @@ corresponding deformation to generate a deformed object}
\end{figure}
In the 1--dimensional example in figure~\ref{fig:bspline}, the
control--points are indicated as red dots and the colour--gradient
should hint at the \(u\)--values ranging from \(0\) to \(1\).
We now define a \acf{FFD} by the following:\\
Given an arbitrary number of points \(p_i\) alongside a line, we map a
@ -324,7 +324,7 @@ N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}} N_{i,d-1,\tau}(u) + \frac{\tau_{i+
If we now multiply every \(p_i\) with the corresponding
\(N_{i,d,\tau_i}(u)\) we get the contribution of each point \(p_i\) to
the final curve--point parametrized only by \(u \in [0,1[\). As can be
seen from \eqref{eqn:ffd1d2} we only access points \([p_i..p_{i+d}]\)
for any given \(i\)\footnote{one more for each recursive step.}, which
gives us, in combination with choosing \(p_i\) and \(\tau_i\) in order,
@ -368,18 +368,18 @@ and \(m\) control--points.
\end{center}
\caption[B--spline--basis--function as partition of unity]{From \cite[Figure 2.13]{brunet2010contributions}:\newline
\glqq Some interesting properties of the B--splines. On the natural definition domain
of the B--spline ($[k_0,k_4]$ on this figure), the B--spline basis functions sum
up to one (partition of unity). In this example, we use B--splines of degree 2.
The horizontal segment below the abscissa axis represents the domain of
influence of the B--splines basis function, i.e. the interval on which they are
not null. At a given point, there are at most $d+1$ non-zero B--spline basis
functions (compact support).\grqq \newline
Note that Brunet starts his index at $-d$ as opposed to our definition, where we
start at $0$.}
\label{fig:partition_unity}
\end{figure}
Furthermore, B--spline--basis--functions form a partition of unity for
all but the first and last \(d\)
control-points\cite{brunet2010contributions}. Therefore we later on use
the border-points \(d+1\) times, such that \(\sum_j n_{i,j} p_j = p_i\)
@ -469,8 +469,32 @@ space \(M\) (usually \(M = \mathbb{R}\)) along a convergence--function
\(c : I \mapsto \mathbb{B}\) that terminates the optimization.
Biologically speaking the set \(I\) corresponds to the set of possible
\emph{genotypes} while \(M\) represents the possible observable
\emph{phenotypes}. \emph{Genotypes} define all initial properties of an
individual, but these properties are not directly observable. It is the
genes that evolve over time (and thus correspond to the parameters we
are tweaking in our algorithms, or to the genes in nature), but only the
\emph{phenotypes} make certain behaviour observable (algorithmically
through our \emph{fitness--function}, biologically by the ability to
survive and produce offspring). Any individual in our algorithm thus
experiences a biologically motivated life cycle: it inherits genes from
its parents, is modified by occurring mutations, performs according to a
fitness--metric and generates offspring based on this performance.
Therefore each iteration in the while--loop above is also often called a
generation.
One should note that there is a subtle difference between the
\emph{fitness--function} and the so--called
\emph{genotype--phenotype--mapping}. The former applies the
\emph{genotype--phenotype--mapping} and then evaluates the performance
of an individual, thus going directly from genes/parameters to a
reproduction--probability/score. In a concrete example the
\emph{genotype} can be an arbitrary vector (the genes), the
\emph{phenotype} is then a deformed object, and the performance can be a
single measurement like an air--drag--coefficient. The
\emph{genotype--phenotype--mapping} would then just be the generation of
different objects from that starting--vector, whereas the
\emph{fitness--function} would go directly from such a starting--vector
to the coefficient that we want to optimize.
The main algorithm just repeats the following steps:
@ -503,16 +527,18 @@ that can be changed over time. A good overview of this is given in
For example the mutation can consist of merely a single \(\sigma\)
determining the strength of the Gaussian perturbation applied to every
parameter --- or giving a different \(\sigma\) to every component of
those parameters. An even more sophisticated example would be the
\glqq 1/5 success rule\grqq ~from \cite{rechenberg1973evolutionsstrategie}.
Also in the selection--function it may not be wise to only take the
best--performing individuals, because it may be that the optimization
has to overcome a barrier of bad fitness to achieve a better local
optimum.
Recombination also does not have to be a mere random choice of parents,
but can also take ancestry, distance of genes or groups of individuals
into account.
\section{Advantages of evolutionary
algorithms}\label{advantages-of-evolutionary-algorithms}
@ -535,13 +561,11 @@ typical problems are shown in figure \ref{fig:probhard}.
Most of the advantages stem from the fact that a gradient--based
procedure has only one point of observation from where it evaluates the
next steps, whereas an evolutionary strategy starts with a population of
guessed solutions. Because an evolutionary strategy can be modified
according to the problem--domain (i.e.~by the ideas given above), it can
also approximate very difficult problems in an efficient manner and even
self--tune parameters depending on the ancestry at runtime\footnote{Some
examples of this are explained in detail in \cite{eiben1999parameter}}.
If an analytic best solution exists and is easily computable
(i.e.~because the error--function is convex), an evolutionary algorithm
@ -553,8 +577,8 @@ problem is either not convex or there are so many parameters that an
analytic solution (mostly meaning the equivalence to an exhaustive
search) is computationally not feasible. Here evolutionary optimization
has one more advantage as one can at least get suboptimal solutions
fast, which then refine over time and still converge to a decent
solution much faster than an exhaustive search.
\section{Criteria for the evolvability of linear
deformations}\label{criteria-for-the-evolvability-of-linear-deformations}
@ -763,7 +787,7 @@ J(Err(u,v,w)) =
With the Gauss--Newton algorithm we iterate via the formula
\[J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)\]
and use Cramer's rule for inverting the small Jacobian and solving this
system of linear equations.
As there is no strict upper bound of the number of iterations for this
@ -1036,11 +1060,11 @@ reconstruction--error) instead of correlation we flip the sign of the
correlation--coefficient for readability and to have the
correlation--coefficients be in the classification--range given above.
For the evolutionary optimization we employ the \acf{CMA--ES} of the
shark3.1 library \cite{shark08}, as this algorithm was used by
\cite{anrichterEvol} as well. We leave the parameters at their sensible
defaults as further explained in
\cite[Appendix~A: Table~1]{hansen2016cma}.
\section{Procedure: 1D Function
Approximation}\label{procedure-1d-function-approximation}
@ -1382,7 +1406,8 @@ the variability via the rank of the deformation--matrix.
\includegraphics[width=0.8\textwidth]{img/evolution3d/variability2_boxplot.png}
\caption[Histogram of ranks of high--resolution deformation--matrices]{
Histogram of ranks of various $10 \times 10 \times 10$ grids with $1000$
control--points each, showing in this case how many control--points are actually
used in the calculations.
}
\label{fig:histrank3d}
\end{figure}
@ -1562,7 +1587,7 @@ we did before with the regularity. In figure \ref{fig:resimp3d} one can
clearly see the correlation and the spread within each setup and the
behaviour when we increase the number of control--points.
Additionally we give the Spearman--coefficients along with their
p--values in table \ref{tab:3dimp}. Within one scenario we only find a
\emph{weak} to \emph{moderate} correlation between the improvement
potential and the fitting error, but all findings (except for
@ -1576,8 +1601,30 @@ control--points.
All in all the improvement potential seems to be a good and sensible
measure of quality, even given gradients of varying quality.
Lastly, a small note on the behaviour of improvement potential and
convergence speed, as we used this in the 1D case to argue why the
\emph{regularity} defied our expectations. In contrast, we want to show
that improvement potential cannot serve as a good predictor of the
convergence speed. In figure \ref{fig:imp1d3d} we show improvement
potential against number of iterations for both scenarios. As one can
see, in the 1D scenario we have a \emph{strong} and \emph{significant}
correlation (with \(-r_S = -0.72\), \(p = 0\)), whereas in the 3D
scenario we have an equally \emph{significant} and \emph{strong} effect
in the opposite direction (with \(-r_S = 0.69\), \(p=0\)), so these
correlations clearly seem to be dependent on the scenario and are not
suited for generalization.
\begin{figure}[hbt]
\centering
\includegraphics[width=\textwidth]{img/imp1d3d.png}
\caption[Improvement potential and convergence speed for 1D and 3D--scenarios]{
\newline
Left: Improvement potential against convergence speed for the
1D--scenario\newline
Right: Improvement potential against convergence speed for the 3D--scenario
}
\label{fig:imp1d3d}
\end{figure}
\chapter{Discussion and outlook}\label{discussion-and-outlook}
@ -1617,28 +1664,32 @@ Richter et al. reported correlations between \(0.34\) to \(0.87\).
Taking these results into consideration, one can say that
\emph{variability} and \emph{improvement potential} are very good
estimates for the quality of a fit using \acf{FFD} as a deformation
function, while we could not reproduce results as compelling as those of
Richter et al. for \emph{regularity and convergence speed}.
One reason for the bad or erratic behaviour of the
\emph{regularity}--criterion could be that in an \ac{FFD}--setting it is
likely that some control--points contribute only negligibly to the whole
parametrization, resulting in very small right singular values of the
deformation--matrix \(\vec{U}\) that influence the condition--number and
thus the \emph{regularity} in a significant way. Further research is
needed to refine \emph{regularity} so that these problems are addressed,
for example by taking all singular values into account when capturing
the notion of \emph{regularity}.
Richter et al. also compared the behaviour of direct and indirect
manipulation in \cite{anrichterEvol}, whereas we merely used an indirect
\ac{FFD}--approach. As direct manipulations tend to perform better than
indirect manipulations, the usage of \acf{DM--FFD} could also work
better with the criteria we examined. This could also mitigate the
problem of small singular values for the \emph{regularity}: as
\ac{DM--FFD} incorporates the parametrization of the points on the
surface, which are the essential part of a direct manipulation, a badly
chosen control--grid matters less, because the offending control--points
are never or only negligibly used to parametrize those surface--points.
\improvement[inline]{Adjust the bibliography links: the DOI overrides
the author's direct links.}
% \backmatter % \backmatter
\cleardoublepage \cleardoublepage


@ -10,8 +10,9 @@
%
%\acro{GPL}{GNU General Public License} --
% License for free software, see \url{http://www.gnu.org/copyleft/gpl.html}.
\acro{CMA--ES}{Covariance Matrix Adaptation Evolution Strategy}
\acro{DM--FFD}{Direct Manipulation Freeform--Deformation}
\acro{FFD}{Freeform--Deformation}
\acro{RBF}{Radial Basis Function}
%