---
fontsize: 12pt
---
\chapter*{How to read this Thesis}
As a guide through the nomenclature used in the formulas, we prepend this
chapter.
Unless otherwise noted the following holds:

- lowercase letters $x,y,z$
  refer to real variables and represent a point in 3D--space.
- lowercase letters $u,v,w$
  refer to real variables between $0$ and $1$ used as coefficients in a 3D
  B--Spline grid.
- other lowercase letters
  refer to other scalar (real) variables.
- lowercase **bold** letters (e.g. $\vec{x},\vec{y}$)
  refer to 3D coordinates.
- uppercase **BOLD** letters (e.g. $\vec{D}, \vec{M}$)
  refer to matrices.
# Introduction

\improvement[inline]{More images}
Many modern industrial design processes require advanced optimization methods
due to their increased complexity. These designs have to adhere to more and
more degrees of freedom as methods refine and/or other methods are used.
Examples of this are physical domains like aerodynamics (i.e. drag) or fluid
dynamics (i.e. throughput of liquid) --- where the complexity increases with
the temporal and spatial resolution of the simulation --- or known hard
algorithmic problems in informatics (i.e. layouting of circuit boards or
stacking of 3D--objects). Moreover, these are typically not static
environments, as requirements shift over time or from case to case.
Evolutionary algorithms cope especially well with these problem domains while
addressing all the issues at hand\cite{minai2006complex}. One of the main
concerns in these algorithms is the formulation of the problem in terms of a
genome and a fitness function. While one can typically use an arbitrary
cost--function as the fitness--function (i.e. amount of drag, amount of space,
etc.), the translation of the problem--domain into a simple parametric
representation can be challenging.

The quality of such a representation in biological evolution is called
*evolvability*\cite{wagner1996complex} and is at the core of this thesis, as
the parametrization of the problem has serious implications on the convergence
speed and the quality of the solution\cite{Rothlauf2006}.
However, there is no consensus on how *evolvability* is defined and the meaning
varies from context to context\cite{richter2015evolvability}.
As we transfer the results of Richter et al.\cite{anrichterEvol} from using
\acf{RBF} as a representation to manipulate a geometric mesh to the use of
\acf{FFD}, we use the same definition of evolvability as the original author,
namely *regularity*, *variability*, and *improvement potential*. We introduce
these terms in detail in Chapter \ref{sec:intro:rvi}.
In the original publication the author used randomly sampled points weighted
with \acf{RBF} to deform the mesh and showed that the mentioned criteria of
*regularity*, *variability*, and *improvement potential* correlate with the
quality and potential of such an optimization.

We will replicate the same setup on the same meshes but use \acf{FFD} instead
of \acf{RBF} to create a local deformation near the control points and evaluate
whether the evolution--criteria still work as a predictor given the different
deformation scheme, as suspected in \cite{anrichterEvol}.
## Outline of this thesis

First we introduce different topics in isolation in Chapter \ref{sec:back}. We
take an abstract look at the definition of \ac{FFD} for a one--dimensional line
(in \ref{sec:back:ffd}) and discuss why this is a sensible deformation function
(in \ref{sec:back:ffdgood}).

Then we establish some background knowledge of evolutionary algorithms (in
\ref{sec:back:evo}) and why they are useful in our domain (in
\ref{sec:back:evogood}). In a third step we take a look at the definition of
the different evolvability criteria established in \cite{anrichterEvol}.

In Chapter \ref{sec:impl} we take a look at our implementation of \ac{FFD} and
its adaptation for 3D--meshes. Next, in Chapter \ref{sec:eval}, we describe the
different scenarios we use to evaluate the evolvability--criteria,
incorporating all aspects introduced in Chapter \ref{sec:back}. Following that,
we evaluate the results in Chapter \ref{sec:res} with further discussion in
Chapter \ref{sec:dis}.
# Background
\label{sec:back}
## What is \acf{FFD}?
\label{sec:back:ffd}

First of all we have to establish how \ac{FFD} works and why it is a good tool
for deforming meshes in the first place. For simplicity we only summarize the
1D--case from \cite{spitzmuller1996bezier} here and go into the extension to
the 3D--case in Chapter \ref{3dffd}.
Given an arbitrary number of points $p_i$ along a line, we map a scalar
value $\tau_i \in [0,1[$ to each point with $\tau_i < \tau_{i+1} \forall i$.
Given a degree of the target polynomial $d$ we define the basis function
$N_{i,d,\tau_i}(u)$ as follows:

\begin{equation} \label{eqn:ffd1d1}
N_{i,0,\tau}(u) = \begin{cases} 1, & u \in [\tau_i, \tau_{i+1}[ \\ 0, & \mbox{otherwise} \end{cases}
\end{equation}
and
\begin{equation} \label{eqn:ffd1d2}
N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}-\tau_i} N_{i,d-1,\tau}(u) + \frac{\tau_{i+d+1} - u}{\tau_{i+d+1}-\tau_{i+1}} N_{i+1,d-1,\tau}(u)
\end{equation}
If we now multiply every $p_i$ with the corresponding $N_{i,d,\tau_i}(u)$ we
get the contribution of each point $p_i$ to the final curve--point,
parameterized only by $u \in [0,1[$. As can be seen from \eqref{eqn:ffd1d2} we
only access points $[i..i+d]$ for any given $i$^[one more for each recursive
step.], which gives us, in combination with choosing $p_i$ and $\tau_i$ in
order, only a local interference of $d+1$ points.
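Written out in code, the recursion of \eqref{eqn:ffd1d1} and \eqref{eqn:ffd1d2}
becomes a short recursive function. The following Python sketch is illustrative
only (it is not the thesis' implementation) and additionally guards the $0/0$
case that arises at repeated knots:

```python
def basis(i, d, u, tau):
    """B-Spline basis N_{i,d,tau}(u) via the Cox-de Boor recursion
    (eqn. ffd1d1 / ffd1d2); terms with a 0 denominator are treated as 0."""
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = (u - tau[i]) / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        right = ((tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1])
                 * basis(i + 1, d - 1, u, tau))
    return left + right
```

Inside the valid knot span the basis functions are non--negative and sum to
$1$, which doubles as a quick sanity check for any implementation.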
We can even derive this equation straightforwardly for an arbitrary
$N$^[*Warning:* in the case of $d=1$ the recursion--formula yields a $0$
denominator, but $N$ is also $0$. The correct solution for this case is a
derivative of $0$.]:
$$\frac{\partial}{\partial u} N_{i,d,\tau}(u) = \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u)$$

For a B--Spline
$$s(u) = \sum_{i} N_{i,d,\tau_i}(u) p_i$$
these derivations yield $\frac{\partial^d}{\partial u^d} s(u) = 0$.
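The derivative formula can be checked numerically against a finite difference.
A small Python sketch (illustrative only), using the $0$--denominator
convention from the footnote and repeating the Cox-de Boor recursion so the
snippet is self--contained:

```python
def basis(i, d, u, tau):
    # Cox-de Boor recursion (eqn. ffd1d2); 0/0 terms count as 0
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    n = 0.0
    if tau[i + d] != tau[i]:
        n += (u - tau[i]) / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        n += ((tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1])
              * basis(i + 1, d - 1, u, tau))
    return n

def d_basis(i, d, u, tau):
    """Derivative of N_{i,d,tau}; terms with a 0 denominator are taken
    as 0, which covers the d = 1 edge case from the footnote."""
    r = 0.0
    if tau[i + d] != tau[i]:
        r += d / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        r -= d / (tau[i + d + 1] - tau[i + 1]) * basis(i + 1, d - 1, u, tau)
    return r
```

As the basis functions sum to $1$ inside the valid knot span, their derivatives
there must sum to $0$, which is another cheap consistency check.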
Another interesting property of these recursive polynomials is that they are
continuous (given $d \ge 1$), as every $p_i$ gets blended in linearly between
$\tau_i$ and $\tau_{i+d}$ and out linearly between $\tau_{i+1}$ and
$\tau_{i+d+1}$, as can be seen from the two coefficients in every step of the
recursion.
### Why is \ac{FFD} a good deformation function?
\label{sec:back:ffdgood}
The usage of \ac{FFD} as a tool for manipulating meshes follows directly from
the properties of the polynomials and the correspondence to the control points.
Having only a few control points gives the user a nicer high--level interface,
as she only needs to move these points and the model follows in an intuitive
manner. The deformation is smooth, as the underlying polynomial is smooth as
well, and affects as many vertices of the model as needed. Moreover the changes
are always local, so one does not risk a change the user cannot immediately
see.

But there are also disadvantages to this approach. The user loses the ability
to directly influence vertices, and even seemingly simple tasks such as
creating a plateau can be difficult to
achieve\cite[chapter~3.2]{hsu1991dmffd}\cite{hsu1992direct}.
These disadvantages led to the formulation of
\acf{DM--FFD}\cite[chapter~3.3]{hsu1991dmffd}, in which the user directly
interacts with the surface--mesh. All interactions are applied proportionally
to the control--points that make up the parametrization of the
interaction--point itself, yielding a smooth deformation of the surface *at*
the surface without seemingly arbitrarily scattered control--points. Moreover
this increases the efficiency of an evolutionary
optimization\cite{Menzel2006}, which we will use later on.
\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/hsu_fig7.png}
\caption{Figure 7 from \cite{hsu1991dmffd}.}
\label{fig:hsu_fig7}
\end{figure}
But this approach also has downsides, as can be seen in figure
\ref{fig:hsu_fig7}: the tessellation of the invisible grid has a major impact
on the deformation itself.

All in all, \ac{FFD} and \ac{DM--FFD} are still good ways to deform a
high--polygon mesh despite these downsides.

## What is evolutionary optimization?
\label{sec:back:evo}
In this thesis we use an evolutionary optimization strategy to solve the
problem of finding the best parameters for our deformation. This approach,
however, is very generic and we introduce it here in a broader sense.
\begin{algorithm}
\caption{An outline of evolutionary algorithms}
\label{alg:evo}
\begin{algorithmic}
\STATE t := 0;
\STATE initialize $P(0) := \{\vec{a}_1(0),\dots,\vec{a}_\mu(0)\} \in I^\mu$;
\STATE evaluate $F(0) : \{\Phi(x) | x \in P(0)\}$;
\WHILE{$c(F(t)) \neq$ \TRUE}
\STATE recombine: $P'(t) := r(P(t))$;
\STATE mutate: $P''(t) := m(P'(t))$;
\STATE evaluate $F''(t) : \{\Phi(x) | x \in P''(t)\}$
\STATE select: $P(t + 1) := s(P''(t) \cup Q,\Phi)$;
\STATE t := t + 1;
\ENDWHILE
\end{algorithmic}
\end{algorithm}
The general shape of an evolutionary algorithm (adapted from
\cite{back1993overview}) is outlined in Algorithm \ref{alg:evo}. Here, $P(t)$
denotes the population of parameters in step $t$ of the algorithm. The
population contains $\mu$ individuals $a_i$ that fit the shape of the
parameters we are looking for. Typically these are initialized by a random
guess or just zero. Furthermore we need a so--called *fitness--function*
$\Phi : I \mapsto M$ that maps each parameter to a measurable space, along with
a convergence--function $c : I \mapsto \mathbb{B}$ that terminates the
optimization.
The main algorithm just repeats the following steps:

- **Recombine** with a recombination--function $r : I^{\mu} \mapsto I^{\lambda}$ to
  generate new individuals based on the parents' characteristics.
  This makes sure that the next guess is close to the old guess.
- **Mutate** with a mutation--function $m : I^{\lambda} \mapsto I^{\lambda}$ to
  introduce new effects that cannot be produced by mere recombination of the
  parents.
  Typically this just adds minor defects to individual members of the
  population, like adding random gaussian noise or amplifying/dampening random
  parts.
- **Select** with a selection--function $s : (I^\lambda \cup I^{\mu + \lambda},\Phi) \mapsto I^\mu$ that
  selects from the previously generated $I^\lambda$ children and optionally
  also the parents (denoted by the set $Q$ in the algorithm) using the
  fitness--function $\Phi$. The result of this operation is the next population
  of $\mu$ individuals.

All these functions can (and mostly do) have a lot of hidden parameters that
can be changed over time. One can for example start off with a high
mutation--rate that cools off over time (i.e. by lowering the variance of a
gaussian noise).
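The loop of Algorithm \ref{alg:evo} can be sketched in a few lines. The
following Python sketch is a minimal $(\mu + \lambda)$ evolution strategy on a
toy fitness--function; all constants (population sizes, noise level, cooling
factor) are illustrative choices and not the settings used in this thesis:

```python
import random

def evolutionary_optimize(phi, mu=10, lam=40, dim=5, steps=200, sigma=0.5):
    """Minimal (mu + lambda) evolution strategy following alg:evo:
    recombine, mutate with gaussian noise, evaluate, select."""
    rng = random.Random(42)
    # initialize P(0) with mu random guesses
    population = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(steps):
        children = []
        for _ in range(lam):
            a, b = rng.sample(population, 2)
            # recombine: component-wise average of two parents
            child = [(x + y) / 2 for x, y in zip(a, b)]
            # mutate: add gaussian noise to every component
            children.append([x + rng.gauss(0, sigma) for x in child])
        # select: keep the mu best of children and parents (Q = P(t))
        population = sorted(population + children, key=phi)[:mu]
        sigma *= 0.99  # cool down the mutation rate over time
    return population[0]

# toy fitness: squared distance to the origin
best = evolutionary_optimize(lambda v: sum(x * x for x in v))
```

The decaying `sigma` is exactly the cooling mutation--rate mentioned above; the
elitist selection guarantees that the best fitness never gets worse.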

## Advantages of evolutionary algorithms
\label{sec:back:evogood}

The main advantage of evolutionary algorithms is the ability to find optima of
general functions just with the help of a given fitness--function. This avoids
most problems of simple gradient--based procedures, which often target the same
error--function that measures the fitness but can easily get stuck in local
optima.
Components and techniques for evolutionary algorithms are specifically known to
help with different problems arising in the domain of
optimization\cite{weise2012evolutionary}. An overview of the typical problems
is shown in figure \ref{fig:probhard}.

\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/weise_fig3.png}
\caption{Fig.~3. taken from \cite{weise2012evolutionary}}
\label{fig:probhard}
\end{figure}

Most of the advantages stem from the fact that a gradient--based procedure has
only one point of observation from where it evaluates the next steps, whereas
an evolutionary strategy starts with a population of guessed solutions. Because
an evolutionary strategy modifies the solutions randomly, keeps the best and
purges the worst, it can also pursue multiple different hypotheses at the same
time, where local optima die out in the face of other, better candidates.
If an analytic best solution exists and is easily computable (e.g. because the
error--function is convex) an evolutionary algorithm is not the right choice.
Although both converge to the same solution, the analytic one is usually
faster. But in reality many problems have no analytic solution, because the
problem is either not convex or there are so many parameters that an analytic
solution (mostly meaning the equivalence to an exhaustive search) is
computationally not feasible. Here evolutionary optimization has one more
advantage, as one can at least obtain suboptimal solutions fast, which then
refine over time.
## Criteria for the evolvability of linear deformations
\label{sec:intro:rvi}

### Variability

In \cite{anrichterEvol} *variability* is defined as
$$V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n},$$
whereby $\vec{U}$ is the $m \times n$ deformation--matrix used to map the $m$
control points onto the $n$ vertices.

Given $n = m$, an identical number of control--points and vertices, this
quotient will be $=1$ if all control points are independent of each other and
the solution is trivial: move every control--point onto a target--point.

In practice the value of $V(\vec{U})$ is typically $\ll 1$, because there are
only few control--points for many vertices, so $m \ll n$.
Additionally, in our setup we connect neighbouring control--points in a grid,
so each control point is not independent, but typically depends on $4^d$
control--points for a $d$--dimensional control mesh.
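Following the notation above ($\vec{U}$ an $m \times n$ matrix), *variability*
is a one--liner. A sketch using numpy, not the thesis' actual code:

```python
import numpy as np

def variability(U):
    """V(U) = rank(U) / n for an m x n deformation matrix U
    (m control points, n vertices, following the thesis' notation)."""
    return np.linalg.matrix_rank(U) / U.shape[1]
```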

### Regularity

*Regularity* is defined\cite{anrichterEvol} as
$$R(\vec{U}) := \frac{1}{\kappa(\vec{U})} = \frac{\sigma_{min}}{\sigma_{max}}$$
where $\sigma_{min}$ and $\sigma_{max}$ are the smallest and greatest right
singular values of the deformation--matrix $\vec{U}$.
As we deform the given object only based on the parameters as $\vec{p} \mapsto
f(\vec{x} + \vec{U}\vec{p})$, this makes sure that $\|\vec{U}\vec{p}\| \propto
\|\vec{p}\|$ when $\kappa(\vec{U}) \approx 1$. The inversion of
$\kappa(\vec{U})$ is only performed to map the criterion--range to $[0..1]$,
where $1$ is the optimal value and $0$ is the worst value.
This criterion should be characteristic for numeric stability on the one
hand\cite[chapter 2.7]{golub2012matrix} and for the convergence speed of
evolutionary algorithms on the other hand\cite{anrichterEvol}, as it is tied to
the notion of locality\cite{weise2012evolutionary,thorhauer2014locality}.
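*Regularity* can be sketched directly via the singular value decomposition
(again an illustrative numpy sketch, not the thesis code):

```python
import numpy as np

def regularity(U):
    """R(U) = 1 / kappa(U) = sigma_min / sigma_max, computed from the
    singular values of the deformation matrix U."""
    s = np.linalg.svd(U, compute_uv=False)
    return s[-1] / s[0]  # svd returns singular values in descending order
```

A rank--deficient $\vec{U}$ yields $\sigma_{min} = 0$ and thus $R = 0$, the
worst value, consistent with the definition above.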

### Improvement Potential

In contrast to the general nature of *variability* and *regularity*, which are
agnostic of the fitness--function at hand, the third criterion should reflect a
notion of potential.

As during optimization some kind of gradient $g$ is available to suggest a
direction worth pursuing, we use this to guess how much change can be achieved
in the given direction.
The definition of the *improvement potential* $P$ is\cite{anrichterEvol}:
$$
P(\vec{U}) := 1 - \|(\vec{1} - \vec{U}\vec{U}^+)\vec{G}\|^2_F
$$
given some approximate $n \times d$ fitness--gradient $\vec{G}$, normalized to
$\|\vec{G}\|_F = 1$, whereby $\|\cdot\|_F$ denotes the Frobenius--norm.
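*Improvement potential* can be sketched with the Moore--Penrose pseudoinverse.
The sketch below assumes $\vec{U}$ maps control--point displacements to vertex
displacements, so that $\vec{U}\vec{U}^+$ projects $\vec{G}$ onto the
deformations reachable through $\vec{U}$; it is an illustration, not the
thesis' implementation:

```python
import numpy as np

def improvement_potential(U, G):
    """P(U) = 1 - ||(I - U U^+) G||_F^2 with ||G||_F = 1.
    U U^+ projects the gradient G onto the column space of U."""
    G = G / np.linalg.norm(G)                   # normalize to ||G||_F = 1
    residual = G - U @ np.linalg.pinv(U) @ G    # part of G that U cannot express
    return 1.0 - np.linalg.norm(residual) ** 2
```

A gradient fully inside the span of $\vec{U}$ yields $P = 1$; a gradient
orthogonal to it yields $P = 0$.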

# Implementation of \acf{FFD}
\label{sec:impl}
The general formulation of B--Splines has two free parameters $d$ and $\tau$
which must be chosen beforehand.

As we usually work with regular grids in our \ac{FFD}, we define $\tau$
statically as $\tau_i = \nicefrac{i}{n}$, whereby $n$ is the number of
control--points in that direction.

$d$ defines the *degree* of the B--Spline--function (the number of times this
function is differentiable). For our purposes we fix $d$ to $3$, but give the
formulas for the general case so they can be adapted quite freely.
## Adaptation of \ac{FFD}

As we have established in Chapter \ref{sec:back:ffd}, we can define an
\ac{FFD}--displacement as
\begin{equation}
\Delta_x(u) = \sum_i N_{i,d,\tau_i}(u) \Delta_x c_i
\end{equation}

Note that we only sum up the $\Delta$--displacements in the control points
$c_i$ to get the change in position of the point we are interested in.
In this way every deformed vertex is defined by
$$
\textrm{Deform}(v_x) = v_x + \Delta_x(u)
$$
with $u \in [0..1[$ being the variable that connects the high--detailed
vertex--mesh to the low--detailed control--grid. To actually calculate the new
position of a vertex we first have to calculate its $u$--value. This is
achieved by finding the parametrization of $v$ in terms of the $c_i$
$$
v_x \overset{!}{=} \sum_i N_{i,d,\tau_i}(u) c_i
$$
so we can minimize the error between the two:
$$
\underset{u}{\argmin}\,Err(u,v_x) = \underset{u}{\argmin}\,2 \cdot \|v_x - \sum_i N_{i,d,\tau_i}(u) c_i\|^2_2
$$
As this error--term is quadratic we just derive by $u$, yielding
$$
\begin{array}{rl}
\frac{\partial}{\partial u} & v_x - \sum_i N_{i,d,\tau_i}(u) c_i \\
= & - \sum_i \left( \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u) \right) c_i
\end{array}
$$
and do a gradient--descent to approximate the value of $u$ up to an $\epsilon$
of $0.0001$.
For this we use the Gauss--Newton algorithm\cite{gaussNewton}, as the solution
to this problem may not be unique, because we usually have way more vertices
than control points ($\#v~\gg~\#c$).
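The 1D parametrization step can be sketched as follows. Start value, clamping
and iteration cap are illustrative choices, not the ones used in the actual
implementation; the basis function and its derivative are repeated so the
sketch is self--contained:

```python
def basis(i, d, u, tau):
    # Cox-de Boor recursion (eqn. ffd1d2); 0/0 terms count as 0
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    n = 0.0
    if tau[i + d] != tau[i]:
        n += (u - tau[i]) / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        n += ((tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1])
              * basis(i + 1, d - 1, u, tau))
    return n

def d_basis(i, d, u, tau):
    # derivative of the basis function; 0/0 terms count as 0
    r = 0.0
    if tau[i + d] != tau[i]:
        r += d / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        r -= d / (tau[i + d + 1] - tau[i + 1]) * basis(i + 1, d - 1, u, tau)
    return r

def find_u(v_x, c, tau, d, u=0.5, eps=1e-4, max_iter=100):
    """Gauss-Newton iteration for the 1D parametrization v_x = s(u)."""
    m = len(c)
    lo, hi = tau[d], tau[m] - 1e-9  # u must stay inside the valid knot span
    for _ in range(max_iter):
        s = sum(basis(i, d, u, tau) * c[i] for i in range(m))
        ds = sum(d_basis(i, d, u, tau) * c[i] for i in range(m))
        err = v_x - s
        if abs(err) < eps or ds == 0.0:
            break
        u = min(max(u + err / ds, lo), hi)  # Gauss-Newton step, clamped
    return u
```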

## Adaptation of \ac{FFD} for a 3D--Mesh
\label{3dffd}
This is a straightforward extension of the 1D--method presented in the last
chapter. But this time things get a bit more complicated. As we have a
3--dimensional grid, we may have a different number of control--points in each
direction.

Given $n,m,o$ control points in $x,y,z$--direction, each point on the curve is
defined by
$$V(u,v,w) = \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot C_{ijk}.$$
In this case we have three different B--Splines (one for each dimension) and
also three variables $u,v,w$ for each vertex we want to approximate.

Given a target vertex $\vec{p}^*$ and an initial guess $\vec{p}=V(u,v,w)$, we
define the error--function for the gradient--descent as:
$$Err(u,v,w,\vec{p}^{*}) = \vec{p}^{*} - V(u,v,w)$$
and the partial version for just one direction as
$$Err_x(u,v,w,\vec{p}^{*}) = p^{*}_x - \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x $$
To solve this we derive partially, like before:
$$
\begin{array}{rl}
\displaystyle \frac{\partial Err_x}{\partial u} & p^{*}_x - \displaystyle \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x \\
= & \displaystyle - \sum_i \sum_j \sum_k N'_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x
\end{array}
$$
The other partial derivatives follow the same pattern yielding the Jacobian:
$$
J(Err(u,v,w)) =
\left(
\begin{array}{ccc}
\frac{\partial Err_x}{\partial u} & \frac{\partial Err_x}{\partial v} & \frac{\partial Err_x}{\partial w} \\
\frac{\partial Err_y}{\partial u} & \frac{\partial Err_y}{\partial v} & \frac{\partial Err_y}{\partial w} \\
\frac{\partial Err_z}{\partial u} & \frac{\partial Err_z}{\partial v} & \frac{\partial Err_z}{\partial w}
\end{array}
\right)
$$
$$
\scriptsize
=
\left(
\begin{array}{ccc}
- \displaystyle \sum_{i,j,k} N'_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_x &- \displaystyle \sum_{i,j,k} N_{i}(u) N'_{j}(v) N_{k}(w) \cdot {c_{ijk}}_x & - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N'_{k}(w) \cdot {c_{ijk}}_x \\
- \displaystyle \sum_{i,j,k} N'_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_y &- \displaystyle \sum_{i,j,k} N_{i}(u) N'_{j}(v) N_{k}(w) \cdot {c_{ijk}}_y & - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N'_{k}(w) \cdot {c_{ijk}}_y \\
- \displaystyle \sum_{i,j,k} N'_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_z &- \displaystyle \sum_{i,j,k} N_{i}(u) N'_{j}(v) N_{k}(w) \cdot {c_{ijk}}_z & - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N'_{k}(w) \cdot {c_{ijk}}_z
\end{array}
\right)
$$

With the Gauss--Newton algorithm we iterate via the formula
$$J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)$$
and use Cramer's rule for inverting the small Jacobian and solving this system
of linear equations.
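Since the Jacobian is only $3 \times 3$, Cramer's rule is cheap and direct. A
small illustrative sketch of this solving step:

```python
def det3(m):
    # determinant of a 3x3 matrix via cofactor expansion
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_cramer(J, b):
    """Solve the 3x3 system J * delta = b via Cramer's rule, as done for
    the Gauss-Newton update of (u, v, w)."""
    d = det3(J)
    if d == 0.0:
        raise ValueError("singular Jacobian")
    result = []
    for k in range(3):
        # replace column k of J with the right-hand side b
        Jk = [[b[r] if c == k else J[r][c] for c in range(3)] for r in range(3)]
        result.append(det3(Jk) / d)
    return result
```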

## Deformation Grid

As mentioned in chapter \ref{sec:back:evo}, the way of choosing the
representation to map the general problem (mesh--fitting/optimization in our
case) into a parameter--space is very important for the quality and runtime of
evolutionary algorithms\cite{Rothlauf2006}.
Because our control--points are arranged in a grid, we can accurately represent
each vertex--point inside the grid's volume with proper B--Spline--coefficients
between $[0,1[$ and --- as a consequence --- we have to embed our object into
it (or create constant "dummy"--points outside).

The great advantages of B--Splines are the locality, the direct impact of each
control point without having a $1:1$--correlation, and a smooth deformation.
While the advantages are great, the issues arise from the problem of deciding
where to place the control--points and how many to use.

One would normally think that the more control--points you add, the better the
result will be, but this is not the case for our B--Splines. Given any point
$p$, only $2 \cdot (d-1)$ control--points contribute to the parametrization of
that point^[Normally these are $d-1$ to each side, but at the boundaries the
number gets increased to the inside to meet the required smoothness].
This means that a high resolution can have many control--points that are not
contributing to any point on the surface and are thus completely irrelevant to
the solution.
\begin{figure}[!ht]
\begin{center}
\includegraphics{img/enoughCP.png}
\end{center}
\caption[Example of a high resolution control--grid]{A high resolution
($10 \times 10$) grid of control--points over a circle. Yellow/green points
contribute to the parametrization, red points don't.\newline
An example--point (blue) is solely determined by the position of the green
control--points.}
\label{fig:enoughCP}
\end{figure}

We illustrate this phenomenon in figure \ref{fig:enoughCP}, where the four red
central points are not relevant for the parametrization of the circle.

\unsure[inline]{mention that one can simply remove the zero--columns from
$\vec{D}$?}

For our tests we chose different uniformly sized grids and added gaussian noise
onto each control--point^[For the special case of the outer layer we only
applied noise away from the object] to simulate different starting--conditions.

\unsure[inline]{reference to DM--FFD?}

# Scenarios for testing evolvability criteria using \acf{FFD}
\label{sec:eval}
## Test Scenario: 1D Function Approximation

### Optimization Scenario

- plane -> template--fit

### Matching in 1D

- trivial

### Particularities of the evaluation

- the analytic solution is the single best one
- is the result constant even under noise?
- add a normalized one--vector to the gradient
- a cone emerges

## Test Scenario: 3D Function Approximation

### Optimization Scenario

- ball to Mario

### Matching in 3D

- alternating optimization

### Particularities of the optimization

- the analytic solution is only valid up to the optimization of the first points
- the criteria hold up nonetheless
# Evaluation of Scenarios
\label{sec:res}

## Spearman/Pearson Metrics

- What are they?
- Why should we care about them?
- Why is monotonicity sufficient?
- Did we show this?
- statistics, images, etc.

## Results of 1D Function Approximation

\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/evolution1d/20171005-all_appended.png}
\caption{Results 1D}
\end{figure}
<!-- ![Improvement potential vs steps](img/evolution1d/20170830-evolution1D_5x5_100Times-all_improvement-vs-steps.png) -->
<!-- -->
<!-- ![Improvement potential vs evolutional error](img/evolution1d/20170830-evolution1D_5x5_100Times-all_improvement-vs-evo-error.png) -->
<!-- -->
<!-- ![Regularity vs steps](img/evolution1d/20170830-evolution1D_5x5_100Times-all_regularity-vs-steps.png) -->
## Results of 3D Function Approximation

\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/evolution3d/20171007_3dFit_all_append.png}
\caption{Results 3D}
\end{figure}
<!-- ![Improvement potential vs steps](img/evolution3d/20170926_3dFit_both_improvement-vs-steps.png) -->
<!-- -->
<!-- ![Improvement potential vs evolutional -->
<!-- error](img/evolution3d/20170926_3dFit_both_improvement-vs-evo-error.png) -->
<!-- -->
<!-- ![Regularity vs steps](img/evolution3d/20170926_3dFit_both_regularity-vs-steps.png) -->

# Conclusion
\label{sec:dis}

HAHA .. as if -.-