Nicole Dresselhaus 2017-10-29 19:02:31 +01:00
parent c54b3f2960
commit 103b629b84
Signed by: Drezil
GPG Key ID: 057D94F356F41E25
6 changed files with 193 additions and 100 deletions

View File

@ -2,10 +2,9 @@
\thispagestyle{empty}
\vspace*{\stretch{1}}
\noindent
{\huge Declaration of own work}\\[1cm]
I hereby declare that this thesis is my own work and effort. Where other sources of information have been used, they have been acknowledged.
\improvement[inline]{write proper declaration..}
%\\[2cm]
Bielefeld, \today\hspace{\fill}
\parbox[t]{5cm}{\dotfill\\ \centering Stefan Dresselhaus}
\vspace*{\stretch{3}}

BIN  arbeit/img/imp1d3d.png  (new file, 19 KiB; binary file not shown)

View File

@ -125,8 +125,8 @@ Chapter \ref{sec:dis}.
\label{sec:back:ffd}
First of all we have to establish how a \ac{FFD} works and why this is a good
tool for deforming geometric objects (especially meshes in our case) in the
first place. For simplicity we only summarize the 1D--case from
\cite{spitzmuller1996bezier} here and go into the extension to the 3D case in
chapter \ref{3dffd}.
@ -150,7 +150,7 @@ corresponding deformation to generate a deformed object}
\end{figure}
In the 1--dimensional example in figure \ref{fig:bspline}, the control--points
are indicated as red dots and the colour--gradient should hint at the $u$--values
ranging from $0$ to $1$.
We now define a \acf{FFD} by the following:
@ -169,7 +169,7 @@ N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}} N_{i,d-1,\tau}(u) + \frac{\tau_{i+
\end{equation}
If we now multiply every $p_i$ with the corresponding $N_{i,d,\tau_i}(u)$ we get
the contribution of each point $p_i$ to the final curve--point parametrized only
by $u \in [0,1[$. As can be seen from \eqref{eqn:ffd1d2} we only access points
$[p_i..p_{i+d}]$ for any given $i$^[one more for each recursive step.], which gives
us, in combination with choosing $p_i$ and $\tau_i$ in order, only a local
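As a side note, this recursion is easy to sketch in code. The following Python
snippet is not taken from the implementation of this thesis; it uses the textbook
Cox--de Boor form (which may differ in indexing from \eqref{eqn:ffd1d2}) with an
illustrative uniform knot--vector and degree $d = 2$, and it demonstrates the
local support just described: at any given $u$ at most $d+1$ basis--functions are
non--zero.
```python
import numpy as np

def basis(i, d, u, tau):
    """Textbook Cox--de Boor recursion for N_{i,d,tau}(u); 0/0 terms count as zero."""
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = (u - tau[i]) / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        right = (tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1]) * basis(i + 1, d - 1, u, tau)
    return left + right

d = 2                                    # degree of the B--spline
tau = np.linspace(0.0, 1.0, 10)          # illustrative uniform knot--vector
u = 0.42
weights = [basis(i, d, u, tau) for i in range(len(tau) - d - 1)]
print(sum(w > 0 for w in weights))       # at most d+1 = 3 weights are non-zero
```
Each $p_i$ therefore only influences the curve on the knot--span $[\tau_i, \tau_{i+d+1}[$.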
@ -216,18 +216,18 @@ where $\vec{N}$ is the $n \times m$ transformation--matrix (later on called
\end{center}
\caption[B--spline--basis--function as partition of unity]{From \cite[Figure 2.13]{brunet2010contributions}:\newline
\glqq Some interesting properties of the B--splines. On the natural definition domain
of the B--spline ($[k_0,k_4]$ on this figure), the B--Spline basis functions sum
up to one (partition of unity). In this example, we use B--Splines of degree 2.
The horizontal segment below the abscissa axis represents the domain of
influence of the B--splines basis function, i.e. the interval on which they are
not null. At a given point, there are at most $d+1$ non-zero B--Spline basis
functions (compact support).\grqq \newline
Note that Brunet starts his index at $-d$ as opposed to our definition, where we
start at $0$.}
\label{fig:partition_unity}
\end{figure}
Furthermore B--Spline--basis--functions form a partition of unity for all but
the first and last $d$ control--points\cite{brunet2010contributions}. Therefore
we later on use the border-points $d+1$ times, such that $\sum_j n_{i,j} p_j = p_i$
for these points.
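A minimal numerical check of this property (again only a sketch, not code from
this thesis): below, the border knots are repeated $d+1$ times, i.e. the common
clamped--knot--vector analogue of the repetition of border--points described
above, and the basis--functions then indeed sum to one over the whole domain
$[0,1[$.
```python
import numpy as np

def basis(i, d, u, tau):
    # Cox--de Boor recursion as in the previous sketch; 0/0 terms count as zero.
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = (u - tau[i]) / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        right = (tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1]) * basis(i + 1, d - 1, u, tau)
    return left + right

d = 2
# clamped knot--vector: border values repeated d+1 times
tau = np.concatenate([[0.0] * (d + 1), np.linspace(0.2, 0.8, 4), [1.0] * (d + 1)])
n = len(tau) - d - 1                     # number of basis--functions
for u in np.linspace(0.0, 1.0, 20, endpoint=False):
    assert abs(sum(basis(i, d, u, tau) for i in range(n)) - 1.0) < 1e-9
```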
@ -308,9 +308,29 @@ initialized by a random guess or just zero. Further on we need a so--called
space $M$ (usually $M = \mathbb{R}$) along with a convergence--function $c : I \mapsto \mathbb{B}$
that terminates the optimization.
Biologically speaking the set $I$ corresponds to the set of possible *genotypes*
while $M$ represents the possible observable *phenotypes*. *Genotypes* define
all initial properties of an individual, but these properties are not directly
observable. It is the genes that evolve over time (and thus correspond to the
parameters we are tweaking in our algorithms or the genes in nature), but only
the *phenotypes* make certain behaviour observable (algorithmically through our
*fitness--function*, biologically by the ability to survive and produce
offspring). Any individual in our algorithm thus experiences a biologically
motivated life cycle of inheriting genes from its parents, modified by occurring
mutations, performing according to a fitness--metric and generating offspring
based on this performance. Therefore each iteration of the while--loop above is
also often called a generation.
One should note that there is a subtle difference between the *fitness--function*
and the so--called *genotype--phenotype--mapping*. The former directly applies
the *genotype--phenotype--mapping* and evaluates the performance of an individual,
thus going directly from genes/parameters to reproduction--probability/score.
In a concrete example the *genotype* can be an arbitrary vector (the genes), the
*phenotype* is then a deformed object, and the performance can be a single
measurement like an air--drag--coefficient. The *genotype--phenotype--mapping*
would then just be the generation of different objects from that
starting--vector, whereas the *fitness--function* would go directly from such a
starting--vector to the coefficient that we want to optimize.
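To make the distinction concrete in code, here is a purely hypothetical Python
sketch; the deformation and the measurement are stand-ins and not the functions
used later in this thesis.
```python
import numpy as np

def genotype_phenotype_mapping(genes: np.ndarray) -> np.ndarray:
    """Hypothetical mapping: turn a gene vector into a 'deformed object'
    (here just a displaced point cloud standing in for a mesh)."""
    base_shape = np.linspace(0.0, 1.0, genes.size)   # undeformed stand-in object
    return base_shape + 0.1 * genes                  # the observable phenotype

def performance(phenotype: np.ndarray) -> float:
    """Hypothetical single measurement, e.g. a stand-in for an air--drag--coefficient."""
    return float(np.sum(phenotype ** 2))

def fitness(genes: np.ndarray) -> float:
    # the fitness--function goes directly from genes to a score and applies
    # the genotype--phenotype--mapping internally
    return performance(genotype_phenotype_mapping(genes))

print(fitness(np.zeros(4)))              # evaluate one individual
```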
The main algorithm just repeats the following steps:
@ -335,15 +355,16 @@ can be changed over time. A good overview of this is given in
For example the mutation can consist of merely a single $\sigma$ determining the
strength of the Gaussian perturbation applied to every parameter --- or giving a different
$\sigma$ to every component of those parameters. An even more sophisticated
example would be the \glqq 1/5 success rule\grqq \ from
\cite{rechenberg1973evolutionsstrategie}.
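The 1/5 success rule itself is simple enough to sketch as a $(1+1)$--ES; the
objective and the damping constant below are illustrative choices and not taken
from this thesis.
```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                                 # stand-in fitness (minimization)
    return float(np.sum(x ** 2))

x, sigma, c = rng.normal(size=5), 1.0, 0.85    # c is a common damping choice
successes, window = 0, 20
for t in range(1, 1001):
    child = x + sigma * rng.normal(size=x.size)   # Gaussian mutation
    if sphere(child) < sphere(x):                 # (1+1)--selection
        x, successes = child, successes + 1
    if t % window == 0:                           # adapt sigma every `window` steps
        rate = successes / window
        sigma = sigma / c if rate > 1 / 5 else sigma * c
        successes = 0
```
The rule keeps the empirical success rate near $1/5$: a higher rate indicates too
timid steps, so $\sigma$ grows, while a lower rate shrinks it.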
Also in the selection--function it may not be wise to only take the
best--performing individuals, because it may be that the optimization has to
overcome a barrier of bad fitness to achieve a better local optimum.
Recombination also does not have to be mere random choosing of parents, but can
also take ancestry, distance of genes or groups of individuals into account.
## Advantages of evolutionary algorithms
\label{sec:back:evogood}
@ -364,13 +385,11 @@ are shown in figure \ref{fig:probhard}.
Most of the advantages stem from the fact that a gradient--based procedure has
only one point of observation from where it evaluates the next steps, whereas an
evolutionary strategy starts with a population of guessed solutions. Because an
evolutionary strategy can be modified according to the problem--domain (i.e. by
the ideas given above), it can also approximate very difficult problems in an
efficient manner and even self--tune parameters depending on the ancestry at
runtime^[Some examples of this are explained in detail in
\cite{eiben1999parameter}].
If an analytic best solution exists and is easily computable (i.e. because the
error--function is convex) an evolutionary algorithm is not the right choice.
@ -381,7 +400,7 @@ either not convex or there are so many parameters that an analytic solution
(mostly meaning the equivalence to an exhaustive search) is computationally not
feasible. Here evolutionary optimization has one more advantage as one can at
least get suboptimal solutions fast, which then refine over time and still
converge to a decent solution much faster than an exhaustive search.
## Criteria for the evolvability of linear deformations
\label{sec:intro:rvi}
@ -445,8 +464,8 @@ optimal value and $0$ is the worst value.
On the one hand this criterion should be characteristic for numeric
stability\cite[chapter 2.7]{golub2012matrix} and on the other hand for the
convergence speed of evolutionary algorithms\cite{anrichterEvol} as it is tied
to the notion of locality\cite{weise2012evolutionary,thorhauer2014locality}.
### Improvement Potential
@ -582,7 +601,7 @@ $$
With the Gauss--Newton algorithm we iterate via the formula
$$J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)$$
and use Cramer's rule for inverting the small Jacobian and solving this system of
linear equations.
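A single such step can be sketched as follows; `jacobian(u, v, w)` and
`err(u, v, w)` are hypothetical placeholders for the $3 \times 3$ Jacobian and
the residual defined above.
```python
import numpy as np

def cramer_solve(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve the 3x3 system A x = b via Cramer's rule."""
    det_A = np.linalg.det(A)
    x = np.empty(3)
    for k in range(3):
        A_k = A.copy()
        A_k[:, k] = b                    # replace column k by the right-hand side
        x[k] = np.linalg.det(A_k) / det_A
    return x

def gauss_newton_step(u, v, w, jacobian, err):
    # one iteration: J * delta = -Err, solved via Cramer's rule
    delta = cramer_solve(jacobian(u, v, w), -err(u, v, w))
    return u + delta[0], v + delta[1], w + delta[2]
```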
As there is no strict upper bound of the number of iterations for this
@ -818,10 +837,9 @@ instead of correlation we flip the sign of the correlation--coefficient for
readability and to have the correlation--coefficients be in the
classification--range given above.
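In practice this amounts to something like the following sketch using
`scipy.stats.spearmanr`; the data below are synthetic and only illustrate the
sign--flip convention.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
criterion = rng.random(50)                                  # hypothetical quality criterion
error = 1.0 - 0.8 * criterion + rng.normal(0.0, 0.1, 50)    # hypothetical fitting error

r_s, p_value = stats.spearmanr(criterion, error)
print(-r_s, p_value)                     # sign flipped for readability, as described above
```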
For the evolutionary optimization we employ the \acf{CMA--ES} of the shark3.1
library \cite{shark08}, as this algorithm was used by \cite{anrichterEvol} as
well. We leave the parameters at their sensible defaults as further explained in
\cite[Appendix~A: Table~1]{hansen2016cma}.
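The thesis uses the C++ implementation from shark3.1; purely as an illustration
of the same kind of optimization loop in Python, the `pycma` reference
implementation run at its defaults could look as follows (the objective is a
stand-in, not our reconstruction--error).
```python
import cma                               # pycma, standing in for shark3.1 here
import numpy as np

def fitness(genes):                      # illustrative objective only
    return float(np.sum((np.asarray(genes) - 0.5) ** 2))

es = cma.CMAEvolutionStrategy(10 * [0.0], 0.3)   # defaults, analogous to shark's
while not es.stop():
    candidates = es.ask()                # sample one generation
    es.tell(candidates, [fitness(c) for c in candidates])
best = es.result.xbest
```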
## Procedure: 1D Function Approximation
@ -911,7 +929,7 @@ is defined in terms of the normalized rank of the deformation matrix $\vec{U}$:
$V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n}$, whereby $n$ is the number of
vertices.
As all our tested matrices had a constant rank (being $m = x \cdot y$ for a $x \times y$
grid), we have merely plotted the errors in the box plot in figure
\ref{fig:1dvar}.
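Given a concrete deformation--matrix this measure is a one--liner; the matrix
below is synthetic and only illustrates that the value is bounded by $m/n$,
since $\textrm{rank}(\vec{U}) \leq \min(n,m)$.
```python
import numpy as np

def variability(U: np.ndarray) -> float:
    """V(U) = rank(U) / n, with n the number of vertices (rows of the n x m matrix U)."""
    return np.linalg.matrix_rank(U) / U.shape[0]

U = np.random.default_rng(1).random((100, 25))   # hypothetical: 100 vertices, 5 x 5 grid
print(variability(U))                            # at most m/n = 0.25 here
```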
It is also noticeable that although the $7 \times 4$ and $4 \times 7$ grids
@ -1114,7 +1132,7 @@ in brackets for three cases of increasing variability ($\mathrm{X} \in [4,5,7],
Similar to the 1D case all our tested matrices had a constant rank (being
$m = x \cdot y \cdot z$ for a $x \times y \times z$ grid), so we again have merely plotted
the errors in the box plot in figure \ref{fig:3dvar}.
As expected the $\mathrm{X} \times 4 \times 4$ grids performed
slightly better than their $4 \times 4 \times \mathrm{X}$ counterparts with a
@ -1133,7 +1151,8 @@ deformation--matrix.
\includegraphics[width=0.8\textwidth]{img/evolution3d/variability2_boxplot.png}
\caption[Histogram of ranks of high--resolution deformation--matrices]{
Histogram of ranks of various $10 \times 10 \times 10$ grids with $1000$
control--points each, showing in this case how many control--points are actually
used in the calculations.
}
\label{fig:histrank3d}
\end{figure}
@ -1306,7 +1325,7 @@ before with the regularity. In figure \ref{fig:resimp3d} one can clearly see the
correlation and the spread within each setup and the behaviour when we increase
the number of control--points.
Along with this we also give the Spearman--coefficients and their
p--values in table \ref{tab:3dimp}. Within one scenario we only find a *weak* to
*moderate* correlation between the improvement potential and the fitting error,
but all findings (except for $7 \times 4 \times 4$ and $6 \times 6 \times 6$)
@ -1319,9 +1338,28 @@ quality is naturally tied to the number of control--points.
All in all the improvement potential seems to be a good and sensible measure of
quality, even given gradients of varying quality.
Lastly, a small note on the behaviour of improvement potential and convergence
speed, as we used this in the 1D case to argue why the *regularity* defied our
expectations. As a contrast we wanted to show that improvement potential cannot
serve as a good predictor of convergence speed. In figure \ref{fig:imp1d3d} we
show improvement potential against the number of iterations for both scenarios.
As one can see, in the 1D scenario we have a *strong* and *significant*
correlation (with $-r_S = -0.72$, $p = 0$), whereas in the 3D scenario we have
the opposite *significant* and *strong* effect (with $-r_S = 0.69$, $p=0$), so
these correlations clearly depend on the scenario and are not suited for
generalization.
\begin{figure}[hbt]
\centering
\includegraphics[width=\textwidth]{img/imp1d3d.png}
\caption[Improvement potential and convergence speed for 1D and 3D--scenarios]{
\newline
Left: Improvement potential against convergence speed for the
1D--scenario\newline
Right: Improvement potential against convergence speed for the 3D--scenario
}
\label{fig:imp1d3d}
\end{figure}
# Discussion and outlook
\label{sec:dis}
@ -1355,23 +1393,27 @@ between $0.34$ and $0.87$.
Taking these results into consideration, one can say that *variability* and
*improvement potential* are very good estimates for the quality of a fit using
\acf{FFD} as a deformation function, while we could not reproduce the similarly
compelling results of Richter et al. for *regularity and convergence speed*.
One reason for the bad or erratic behaviour of the *regularity*--criterion could
be that in an \ac{FFD}--setting we are likely to have control--points
that are only contributing to the whole parametrization in negligible amounts,
resulting in very small right singular values of the deformation--matrix
$\vec{U}$ that influence the condition--number and thus the *regularity* in a
significant way. Further research is needed to refine *regularity* so that these
problems get addressed, for example by taking all singular values into account
when capturing the notion of *regularity*.
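To illustrate the suspected mechanism on synthetic data (not on matrices from
our experiments): scaling a single column of a random $n \times m$ matrix towards
zero mimics a negligibly contributing control--point and immediately dominates
the condition--number through the smallest singular value.
```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.random((100, 25))                # hypothetical n x m deformation--matrix
U[:, -1] *= 1e-8                         # one control--point contributes almost nothing

s = np.linalg.svd(U, compute_uv=False)
print(s.max() / s.min())                 # condition--number, blown up by the tiny value
print(s.min())                           # the negligible contribution shows up here
```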
Richter et al. also compared the behaviour of direct and indirect manipulation
in \cite{anrichterEvol}, whereas we merely used an indirect \ac{FFD}--approach.
As direct manipulations tend to perform better than indirect manipulations, the
usage of \acf{DM--FFD} could also work better with the criteria we examined.
This could also solve the problem of bad singular values for the *regularity*:
incorporating the parametrization of the points on the surface, which is the
essential part of a direct manipulation, could cancel out a bad control--grid,
as the problematic control--points would never or only negligibly be used to
parametrize those surface--points.
\improvement[inline]{Adjust the bibliography links: the DOI overrides the author's direct links.}

Binary file not shown.

View File

@ -278,10 +278,10 @@ outlook in Chapter \ref{sec:dis}.
\label{sec:back:ffd}
First of all we have to establish how a \ac{FFD} works and why this is a
good tool for deforming geometric objects (especially meshes in our
case) in the first place. For simplicity we only summarize the 1D--case
from \cite{spitzmuller1996bezier} here and go into the extension to the
3D case in chapter~\ref{3dffd}.
The main idea of \ac{FFD} is to create a function
\(s : [0,1[^d \mapsto \mathbb{R}^d\) that spans a certain part of a
@ -302,8 +302,8 @@ corresponding deformation to generate a deformed object}
\end{figure}
In the 1--dimensional example in figure~\ref{fig:bspline}, the
control--points are indicated as red dots and the colour--gradient
should hint at the \(u\)--values ranging from \(0\) to \(1\).
We now define a \acf{FFD} by the following:\\
Given an arbitrary number of points \(p_i\) alongside a line, we map a
@ -324,7 +324,7 @@ N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}} N_{i,d-1,\tau}(u) + \frac{\tau_{i+
If we now multiply every \(p_i\) with the corresponding
\(N_{i,d,\tau_i}(u)\) we get the contribution of each point \(p_i\) to
the final curve--point parametrized only by \(u \in [0,1[\). As can be
seen from \eqref{eqn:ffd1d2} we only access points \([p_i..p_{i+d}]\)
for any given \(i\)\footnote{one more for each recursive step.}, which
gives us, in combination with choosing \(p_i\) and \(\tau_i\) in order,
@ -368,18 +368,18 @@ and \(m\) control--points.
\end{center}
\caption[B--spline--basis--function as partition of unity]{From \cite[Figure 2.13]{brunet2010contributions}:\newline
\glqq Some interesting properties of the B--splines. On the natural definition domain
of the B--spline ($[k_0,k_4]$ on this figure), the B--Spline basis functions sum
up to one (partition of unity). In this example, we use B--Splines of degree 2.
The horizontal segment below the abscissa axis represents the domain of
influence of the B--splines basis function, i.e. the interval on which they are
not null. At a given point, there are at most $d+1$ non-zero B--Spline basis
functions (compact support).\grqq \newline
Note that Brunet starts his index at $-d$ as opposed to our definition, where we
start at $0$.}
\label{fig:partition_unity}
\end{figure}
Furthermore B--Spline--basis--functions form a partition of unity for
all but the first and last \(d\)
control-points\cite{brunet2010contributions}. Therefore we later on use
the border-points \(d+1\) times, such that \(\sum_j n_{i,j} p_j = p_i\)
@ -469,8 +469,32 @@ space \(M\) (usually \(M = \mathbb{R}\)) along with a convergence--function
\(c : I \mapsto \mathbb{B}\) that terminates the optimization.
Biologically speaking the set \(I\) corresponds to the set of possible
\emph{genotypes} while \(M\) represents the possible observable
\emph{phenotypes}. \emph{Genotypes} define all initial properties of an
individual, but these properties are not directly observable. It is the
genes that evolve over time (and thus correspond to the parameters we
are tweaking in our algorithms or the genes in nature), but only the
\emph{phenotypes} make certain behaviour observable (algorithmically
through our \emph{fitness--function}, biologically by the ability to
survive and produce offspring). Any individual in our algorithm thus
experiences a biologically motivated life cycle of inheriting genes from
its parents, modified by occurring mutations, performing according to a
fitness--metric and generating offspring based on this performance.
Therefore each iteration of the while--loop above is also often called a
generation.
One should note that there is a subtle difference between the
\emph{fitness--function} and the so--called
\emph{genotype--phenotype--mapping}. The former directly applies the
\emph{genotype--phenotype--mapping} and evaluates the performance of an
individual, thus going directly from genes/parameters to
reproduction--probability/score. In a concrete example the
\emph{genotype} can be an arbitrary vector (the genes), the
\emph{phenotype} is then a deformed object, and the performance can be a
single measurement like an air--drag--coefficient. The
\emph{genotype--phenotype--mapping} would then just be the generation of
different objects from that starting--vector, whereas the
\emph{fitness--function} would go directly from such a starting--vector
to the coefficient that we want to optimize.
The main algorithm just repeats the following steps:
@ -503,16 +527,18 @@ that can be changed over time. A good overview of this is given in
For example the mutation can consist of merely a single \(\sigma\)
determining the strength of the Gaussian perturbation applied to every parameter ---
or giving a different \(\sigma\) to every component of those parameters.
An even more sophisticated example would be the \glqq 1/5 success
rule\grqq ~from \cite{rechenberg1973evolutionsstrategie}.
Also in the selection--function it may not be wise to only take the
best--performing individuals, because it may be that the optimization
has to overcome a barrier of bad fitness to achieve a better local
optimum.
Recombination also does not have to be mere random choosing of parents,
but can also take ancestry, distance of genes or groups of individuals
into account.
\section{Advantages of evolutionary
algorithms}\label{advantages-of-evolutionary-algorithms}
@ -535,13 +561,11 @@ typical problems are shown in figure \ref{fig:probhard}.
Most of the advantages stem from the fact that a gradient--based
procedure has only one point of observation from where it evaluates the
next steps, whereas an evolutionary strategy starts with a population of
guessed solutions. Because an evolutionary strategy can be modified
according to the problem--domain (i.e.~by the ideas given above), it can
also approximate very difficult problems in an efficient manner and even
self--tune parameters depending on the ancestry at runtime\footnote{Some
examples of this are explained in detail in \cite{eiben1999parameter}}.
If an analytic best solution exists and is easily computable
(i.e.~because the error--function is convex) an evolutionary algorithm
@ -553,8 +577,8 @@ problem is either not convex or there are so many parameters that an
analytic solution (mostly meaning the equivalence to an exhaustive
search) is computationally not feasible. Here evolutionary optimization
has one more advantage as one can at least get suboptimal solutions
fast, which then refine over time and still converge to a decent
solution much faster than an exhaustive search.
\section{Criteria for the evolvability of linear
deformations}\label{criteria-for-the-evolvability-of-linear-deformations}
@ -763,7 +787,7 @@ J(Err(u,v,w)) =
With the Gauss--Newton algorithm we iterate via the formula
\[J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)\]
and use Cramer's rule for inverting the small Jacobian and solving this
system of linear equations.
As there is no strict upper bound of the number of iterations for this
@ -1036,11 +1060,11 @@ reconstruction--error) instead of correlation we flip the sign of the
correlation--coefficient for readability and to have the
correlation--coefficients be in the classification--range given above.
For the evolutionary optimization we employ the \acf{CMA--ES} of the
shark3.1 library \cite{shark08}, as this algorithm was used by
\cite{anrichterEvol} as well. We leave the parameters at their sensible
defaults as further explained in
\cite[Appendix~A: Table~1]{hansen2016cma}.
\section{Procedure: 1D Function
Approximation}\label{procedure-1d-function-approximation}
@ -1139,7 +1163,7 @@ deformation matrix \(\vec{U}\):
\(V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n}\), whereby \(n\) is the
number of vertices. As all our tested matrices had a constant rank
(being \(m = x \cdot y\) for a \(x \times y\) grid), we have merely
plotted the errors in the box plot in figure \ref{fig:1dvar}.
It is also noticeable that although the \(7 \times 4\) and
\(4 \times 7\) grids have a higher variability, they do not perform better
@ -1361,7 +1385,7 @@ in brackets for three cases of increasing variability ($\mathrm{X} \in [4,5,7],
Similar to the 1D case all our tested matrices had a constant rank
(being \(m = x \cdot y \cdot z\) for a \(x \times y \times z\) grid), so
we again have merely plotted the errors in the box plot in figure
\ref{fig:3dvar}.
As expected the \(\mathrm{X} \times 4 \times 4\) grids performed
@ -1382,7 +1406,8 @@ the variability via the rank of the deformation--matrix.
\includegraphics[width=0.8\textwidth]{img/evolution3d/variability2_boxplot.png}
\caption[Histogram of ranks of high--resolution deformation--matrices]{
Histogram of ranks of various $10 \times 10 \times 10$ grids with $1000$
control--points each, showing in this case how many control--points are
actually used in the calculations.
}
\label{fig:histrank3d}
\end{figure}
@ -1562,7 +1587,7 @@ we did before with the regularity. In figure \ref{fig:resimp3d} one can
clearly see the correlation and the spread within each setup and the
behaviour when we increase the number of control--points.
Along with this we also give the Spearman--coefficients and their
p--values in table \ref{tab:3dimp}. Within one scenario we only find a
\emph{weak} to \emph{moderate} correlation between the improvement
potential and the fitting error, but all findings (except for
@ -1576,8 +1601,30 @@ control--points.
All in all the improvement potential seems to be a good and sensible
measure of quality, even given gradients of varying quality.
Lastly, a small note on the behaviour of improvement potential and
convergence speed, as we used this in the 1D case to argue why the
\emph{regularity} defied our expectations. As a contrast we wanted to
show that improvement potential cannot serve as a good predictor of
convergence speed. In figure \ref{fig:imp1d3d} we show improvement
potential against the number of iterations for both scenarios. As one
can see, in the 1D scenario we have a \emph{strong} and
\emph{significant} correlation (with \(-r_S = -0.72\), \(p = 0\)),
whereas in the 3D scenario we have the opposite \emph{significant} and
\emph{strong} effect (with \(-r_S = 0.69\), \(p=0\)), so these
correlations clearly depend on the scenario and are not suited for
generalization.
\begin{figure}[hbt]
\centering
\includegraphics[width=\textwidth]{img/imp1d3d.png}
\caption[Improvement potential and convergence speed for 1D and 3D--scenarios]{
\newline
Left: Improvement potential against convergence speed for the
1D--scenario\newline
Right: Improvement potential against convergence speed for the 3D--scenario
}
\label{fig:imp1d3d}
\end{figure}
\chapter{Discussion and outlook}\label{discussion-and-outlook}
@ -1617,28 +1664,32 @@ Richter et al. reported correlations between \(0.34\) and \(0.87\).
Taking these results into consideration, one can say that
\emph{variability} and \emph{improvement potential} are very good
estimates for the quality of a fit using \acf{FFD} as a deformation
function, while we could not reproduce the similarly compelling results
of Richter et al. for \emph{regularity and convergence speed}.
One reason for the bad or erratic behaviour of the
\emph{regularity}--criterion could be that in an \ac{FFD}--setting we
are likely to have control--points that are only contributing
to the whole parametrization in negligible amounts, resulting in very
small right singular values of the deformation--matrix \(\vec{U}\) that
influence the condition--number and thus the \emph{regularity} in a
significant way. Further research is needed to refine \emph{regularity}
so that these problems get addressed, for example by taking all singular
values into account when capturing the notion of \emph{regularity}.
Richter et al. also compared the behaviour of direct and indirect
manipulation in \cite{anrichterEvol}, whereas we merely used an indirect
\ac{FFD}--approach. As direct manipulations tend to perform better than
indirect manipulations, the usage of \acf{DM--FFD} could also work
better with the criteria we examined. This could also solve the problem
of bad singular values for the \emph{regularity}: incorporating the
parametrization of the points on the surface, which is the essential
part of a direct manipulation, could cancel out a bad control--grid, as
the problematic control--points would never or only negligibly be used
to parametrize those surface--points.
\improvement[inline]{Adjust the bibliography links: the DOI overrides the author's direct links.}
% \backmatter
\cleardoublepage

View File

@ -10,8 +10,9 @@
%
%\acro{GPL}{GNU General Public License} --
% License for free software, see \url{http://www.gnu.org/copyleft/gpl.html}.
\acro{CMA--ES}{Covariance Matrix Adaptation Evolution Strategy}
\acro{DM--FFD}{Direct Manipulation Freeform--Deformation}
\acro{FFD}{Freeform--Deformation}
\acro{RBF}{Radial Basis Function}
%