---
fontsize: 11pt
---
\chapter*{How to read this Thesis}
This chapter serves as a guide to the nomenclature used in the formulas
throughout this thesis.
Unless otherwise noted, the following conventions hold:
- lowercase letters $x,y,z$
refer to real variables and represent a point in 3D space.
- lowercase letters $u,v,w$
refer to real variables between $0$ and $1$ used as coefficients in a 3D
B-spline grid.
- other lowercase letters
refer to other scalar (real) variables.
- lowercase **bold** letters (e.g. $\vec{x},\vec{y}$)
refer to 3D coordinates.
- uppercase **BOLD** letters (e.g. $\vec{D}, \vec{M}$)
refer to matrices.
# Introduction
In this Master's thesis we extend a previously proposed concept for
predicting the evolvability of \acf{FFD} given a
deformation matrix\cite{anrichterEvol}. In the original publication the author
used randomly sampled points weighted with \acf{RBF} to deform the mesh and
defined three different criteria that can be calculated prior to running an
evolutionary optimization algorithm to assess the quality and potential of such
an optimization.
We replicate the same setup on the same meshes, but use \acf{FFD} instead of
\acf{RBF} to create the deformation, and evaluate whether the evolvability
criteria still work as predictors given the different deformation scheme.
## What is \acf{FFD}?
First of all we have to establish how \ac{FFD} works and why it is a good
tool for deforming meshes in the first place. For simplicity we only summarize
the 1D case from \cite{spitzmuller1996bezier} here and extend it to the 3D
case in chapter \ref{3dffd}.
Given an arbitrary number of points $p_i$ along a line, we map a scalar
value $\tau_i \in [0,1[$ to each point with $\tau_i < \tau_{i+1}\;\forall i$.
Given a degree $d$ of the target polynomial we define the curve
$N_{i,d,\tau_i}(u)$ as follows:
\begin{equation} \label{eqn:ffd1d1}
N_{i,0,\tau}(u) = \begin{cases} 1, & u \in [\tau_i, \tau_{i+1}[ \\ 0, & \mbox{otherwise} \end{cases}
\end{equation}
and
\begin{equation} \label{eqn:ffd1d2}
N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}-\tau_i} N_{i,d-1,\tau}(u) + \frac{\tau_{i+d+1} - u}{\tau_{i+d+1}-\tau_{i+1}} N_{i+1,d-1,\tau}(u)
\end{equation}
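The recursion of \eqref{eqn:ffd1d1} and \eqref{eqn:ffd1d2} translates directly into code. The following is a minimal Python sketch (for illustration only, not the implementation used later in this thesis), using the usual convention that a $0/0$ term counts as $0$:

```python
def basis(i, d, tau, u):
    """B-spline basis function N_{i,d,tau}(u) via the Cox-de Boor recursion.

    tau is the non-decreasing knot vector; terms with a vanishing
    denominator are treated as 0 by convention.
    """
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = (u - tau[i]) / (tau[i + d] - tau[i]) * basis(i, d - 1, tau, u)
    if tau[i + d + 1] != tau[i + 1]:
        right = ((tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1])
                 * basis(i + 1, d - 1, tau, u))
    return left + right
```

For $u$ inside the valid knot range the basis functions sum to $1$, reflecting that each curve point is a convex combination of at most $d+1$ control points.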
If we now multiply every $p_i$ with its corresponding $N_{i,d,\tau_i}(u)$, we
get the contribution of each point $p_i$ to the final curve point,
parameterized only by $u \in [0,1[$. As can be seen from \eqref{eqn:ffd1d2},
for any given $i$ we only access the points $[i..i+d]$^[one more for each
recursive step.], which, in combination with choosing $p_i$ and $\tau_i$ in
order, gives us a local influence of only $d+1$ points.
We can even derive this equation in a straightforward manner for an arbitrary
$N$^[*Warning:* in the case of $d=1$ the recursion formula yields a $0$
denominator, but $N$ is also $0$. The right solution for this case is a
derivative of $0$.]:
$$\frac{\partial}{\partial u} N_{i,d,\tau}(u) = \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u)$$
For a B-spline
$$s(u) = \sum_{i} N_{i,d,\tau_i}(u) p_i$$
these derivatives yield $\frac{\partial^{d+1}}{\partial u^{d+1}} s(u) = 0$, as
each differentiation lowers the degree of the basis functions by one.
Another interesting property of these recursive polynomials is that they are
continuous (given $d \ge 1$), as every $p_i$ gets blended in linearly between
$\tau_i$ and $\tau_{i+d}$ and blended out linearly between $\tau_{i+1}$ and
$\tau_{i+d+1}$, as can be seen from the two coefficients in every step of the
recursion.
### Why is \ac{FFD} a good deformation function?
The usage of \ac{FFD} as a tool for manipulating meshes follows directly from
the properties of the polynomials and the correspondence to the control points.
Having only a few control points gives the user a nicer high-level interface,
as she only needs to move these points and the model follows in an intuitive
manner. The deformation is smooth, as the underlying polynomial is smooth as
well, and affects as many vertices of the model as needed. Moreover the changes
are always local, so one does not risk any change that the user cannot
immediately see.
But there are also disadvantages to this approach. The user loses the ability
to directly influence vertices, and even seemingly simple tasks such as
creating a plateau can be difficult to
achieve\cite[chapter~3.2]{hsu1991dmffd}\cite{hsu1992direct}.
These disadvantages led to the formulation of
\acf{DM-FFD}\cite[chapter~3.3]{hsu1991dmffd}, in which the user directly
interacts with the surface mesh. All interactions are applied proportionally
to the control points that make up the parametrization of the
interaction point itself, yielding a smooth deformation of the surface *at* the
surface without seemingly arbitrarily scattered control points. Moreover this
increases the efficiency of an evolutionary optimization\cite{Menzel2006},
which we will use later on.
But this approach also has downsides, as can be seen in
\cite[figure~7]{hsu1991dmffd}\unsure{insert figure here?}: the tessellation
of the invisible grid has a major impact on the deformation itself.
All in all, \ac{FFD} and \ac{DM-FFD} are still good ways to deform a
high-polygon mesh despite these downsides.
## What is evolutionary optimization?
## Advantages of evolutionary algorithms
\improvement[inline]{Needs citations}
The main advantage of evolutionary algorithms is the ability to find optima of
general functions with nothing but a given error function (or
fitness function, in this domain). This avoids the general pitfalls of
gradient-based procedures, which often target the same error function as an
evolutionary algorithm but can get stuck in local optima.
This is mostly due to the fact that a gradient-based procedure has only one
point of observation from which it evaluates the next steps, whereas an
evolution strategy starts with a population of guessed solutions. Because an
evolution strategy modifies the solutions randomly, keeps the best solutions
and purges the worst, it can also pursue multiple different hypotheses at the
same time, where the local optima die out in the face of other, better
candidates.
If an analytic best solution exists (e.g. because the error function is
convex), an evolutionary algorithm is not the right choice. Although both
converge to the same solution, the analytic one is usually faster. But in
reality many problems have no analytic solution, because the problem is not
convex. Here evolutionary optimization has one more advantage: one gets bad
solutions fast, which refine over time.
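The mutate-select loop described above can be sketched in a few lines of Python. This is a minimal $(\mu+\lambda)$ evolution strategy for illustration only, not the optimization setup used in the experiments of this thesis:

```python
import random

def evolve(fitness, dim, mu=5, lam=20, sigma=0.3, steps=100, seed=1):
    """Minimal (mu+lambda) evolution strategy (minimization).

    Starts from a population of guessed solutions, mutates them with
    Gaussian noise, keeps the best mu individuals and purges the rest.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(mu)]
    for _ in range(steps):
        # each child is a randomly perturbed copy of a random parent
        children = [[x + rng.gauss(0.0, sigma) for x in rng.choice(pop)]
                    for _ in range(lam)]
        # elitist selection: best mu of parents and children survive
        pop = sorted(pop + children, key=fitness)[:mu]
    return pop[0]
```

For example, `evolve(lambda p: sum(x * x for x in p), dim=2)` approaches the minimum of the sphere function without ever evaluating a gradient, only the fitness function itself.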
## Criteria for the evolvability of linear deformations
### Variability
In \cite{anrichterEvol} variability is defined as
$$V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n},$$
whereby $\vec{U}$ is the $n \times m$ deformation matrix used to map the $m$
control points onto the $n$ vertices.
Given $n = m$, i.e. an identical number of control points and vertices, this
quotient will be $=1$ if all control points are independent of each other and
the solution is to trivially move every control point onto a target point.
In practice the value of $V(\vec{U})$ is typically $\ll 1$, because
there are only a few control points for many vertices, so $m \ll n$.
Additionally, in our setup we connect neighbouring control points in a grid, so
the control points are not independent and each vertex typically depends on
$4^d$ control points for a $d$-dimensional control mesh.
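The quotient $V(\vec{U}) = \textrm{rank}(\vec{U})/n$ can be computed directly. A minimal pure-Python sketch follows (for illustration only; in practice one would use a numerically robust rank computation, e.g. via an SVD):

```python
def matrix_rank(rows, eps=1e-9):
    """Rank of a matrix (given as a list of rows) via Gaussian
    elimination with partial pivoting."""
    a = [list(r) for r in rows]
    m, n = len(a), len(a[0])
    rank = 0
    for col in range(n):
        if rank == m:
            break
        # pick the row with the largest entry in this column as pivot
        piv = max(range(rank, m), key=lambda r: abs(a[r][col]))
        if abs(a[piv][col]) < eps:
            continue  # column is (numerically) dependent, skip it
        a[rank], a[piv] = a[piv], a[rank]
        for r in range(rank + 1, m):
            f = a[r][col] / a[rank][col]
            for c in range(col, n):
                a[r][c] -= f * a[rank][c]
        rank += 1
    return rank

def variability(U):
    """V(U) = rank(U) / n, with one row of U per vertex."""
    return matrix_rank(U) / len(U)
```

For instance, a $4 \times 2$ matrix with two independent columns yields $V = 2/4 = 0.5$: only half of the vertex degrees of freedom are reachable.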
### Regularity
Regularity is defined\cite{anrichterEvol} as
$$R(\vec{U}) := \frac{1}{\kappa(\vec{U})} = \frac{\sigma_{min}}{\sigma_{max}}$$
where $\sigma_{min}$ and $\sigma_{max}$ are the smallest and greatest right
singular values of the deformation matrix $\vec{U}$.
As we deform the given object only based on the parameters as $\vec{p} \mapsto
f(\vec{x} + \vec{U}\vec{p})$, this makes sure that $\|\vec{U}\vec{p}\| \propto
\|\vec{p}\|$ when $\kappa(\vec{U}) \approx 1$. The inversion of
$\kappa(\vec{U})$ is only performed to map the criterion range to $[0..1]$,
where $1$ is the optimal value and $0$ is the worst.
This criterion should be characteristic for numerical stability on the one
hand\cite[chapter 2.7]{golub2012matrix} and for the convergence speed of
evolutionary algorithms on the other\cite{anrichterEvol}, as it is tied to the
notion of locality\cite{weise2012evolutionary,thorhauer2014locality}.
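As a small numeric illustration (a toy matrix of our own choosing, not one from the experiments): for the diagonal matrix
$$\vec{U} = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$$
the singular values are simply the diagonal entries, so $\sigma_{max} = 2$, $\sigma_{min} = 1$ and therefore $R(\vec{U}) = \frac{1}{2}$. A parameter change along the first axis moves the result twice as far as an equally large change along the second axis; it is exactly this anisotropy of the parameter space that the criterion penalizes.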
### Improvement Potential
In contrast to the general nature of variability and regularity, which are
agnostic of the fitness function at hand, the third criterion should reflect a
notion of potential.
As some kind of gradient $g$ is available during optimization to suggest a
direction worth pursuing, we use it to guess how much change can be achieved in
the given direction.
The definition of the improvement potential $P$ is\cite{anrichterEvol}:
$$
P(\vec{U}) := 1 - \|(\vec{1} - \vec{U}\vec{U}^+)\vec{G}\|^2_F
$$
given some approximate $n \times d$ fitness gradient $\vec{G}$, normalized to
$\|\vec{G}\|_F = 1$, whereby $\|\cdot\|_F$ denotes the Frobenius norm.
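To build some intuition (a short sanity check of our own): $\vec{U}\vec{U}^+$ is the orthogonal projection onto the column space of $\vec{U}$, so $(\vec{1} - \vec{U}\vec{U}^+)\vec{G}$ is the part of the gradient that no choice of parameters $\vec{p}$ can realize. In the two extreme cases
$$
\vec{G} \subseteq \operatorname{im}(\vec{U}) \;\Rightarrow\; (\vec{1} - \vec{U}\vec{U}^+)\vec{G} = 0 \;\Rightarrow\; P(\vec{U}) = 1,
$$
i.e. every column of $\vec{G}$ lies in the column space of $\vec{U}$ and the deformation can follow the gradient perfectly, whereas for $\vec{G}$ orthogonal to $\operatorname{im}(\vec{U})$ we get $P(\vec{U}) = 1 - \|\vec{G}\|^2_F = 0$, i.e. the deformation cannot follow the gradient at all.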
# Implementation of \acf{FFD}
## What is FFD?
\label{3dffd}
- Definition
- Why Newton optimization?
- What follows from this?
## Test Scenario: 1D Function Approximation
### Optimization Scenario
- plane -> template fit
### Matching in 1D
- Trivial
### Particularities of the Evaluation
- the analytic solution is the single best one
- is the result also constant under noise?
- add a normalized 1-vector to the gradient
- a cone emerges
## Test Scenario: 3D Function Approximation
### Optimization Scenario
- ball to Mario
### Matching in 3D
- alternating optimization
### Particularities of the Optimization
- analytic solution only valid until the first points are optimized
- criteria still good nevertheless
# Evaluation of Scenarios
## Spearman/Pearson Metrics
- What are they?
- Why should they interest us?
- Why is monotonicity sufficient?
- Have we shown that?
- statistics, figures, etc.!
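As background for this chapter, both metrics can be sketched in a few lines of pure Python (textbook formulas, not the evaluation code used for the results below). Spearman's $\rho$ is just Pearson's $r$ applied to ranks, which is why it only requires a monotone, not a linear, relation:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient r (linear correlation)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def ranks(xs):
    """Ranks starting at 1; ties get the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman's rho: Pearson's r on the ranks of the data."""
    return pearson(ranks(xs), ranks(ys))
```

For the monotone but nonlinear pair $x_i = i$, $y_i = i^2$, Spearman yields a perfect $\rho = 1$ while Pearson stays below $1$, which is exactly the distinction that matters when only a monotone relation between criterion and outcome is claimed.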
## Results of 1D Function Approximation
\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/evolution1d/20170830-evolution1D_5x5_100Times-all_appended.png}
\caption{Results of the 1D function approximation}
\end{figure}
<!-- ![Improvement potential vs steps](img/evolution1d/20170830-evolution1D_5x5_100Times-all_improvement-vs-steps.png) -->
<!-- -->
<!-- ![Improvement potential vs evolutional error](img/evolution1d/20170830-evolution1D_5x5_100Times-all_improvement-vs-evo-error.png) -->
<!-- -->
<!-- ![Regularity vs steps](img/evolution1d/20170830-evolution1D_5x5_100Times-all_regularity-vs-steps.png) -->
## Results of 3D Function Approximation
\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/evolution3d/20170926_3dFit_both_append.png}
\caption{Results of the 3D function approximation}
\end{figure}
<!-- ![Improvement potential vs steps](img/evolution3d/20170926_3dFit_both_improvement-vs-steps.png) -->
<!-- -->
<!-- ![Improvement potential vs evolutional -->
<!-- error](img/evolution3d/20170926_3dFit_both_improvement-vs-evo-error.png) -->
<!-- -->
<!-- ![Regularity vs steps](img/evolution3d/20170926_3dFit_both_regularity-vs-steps.png) -->
# Conclusion
HAHA .. as if -.-