% bibtotoc[numbered] : Literaturv. wird in Inhaltsv. aufgenommen
% abstracton : Abstract mit Ueberschrift
\documentclass[
a4paper, % default
12pt, % default = 11pt
BCOR6mm, % Bindungskorrektur bei Klebebindung 6mm, bei Lochen BCOR8.25mm
twoside, % default, 2seitig
titlepage,
% pagesize=auto
% openany, % Kapitel koennen auch auf geraden Seiten starten
% draft % schneller kompilieren, Bild-dummy
% appendixprefix % Anhang mit Bezeichner
xcolor=dvipsnames,
]{scrbook}
%%%%%%%%%%%%%%% Literaturverzeichnisstil %%%%%%%%%%%%%%%
% achtung, auch \bibstyle, unten, anpassen!
% \usepackage[square]{natbib} % fuer bibstyle natdin/ see ../natbib.pdf
%%%%%%%%%%%%%%% Packages %%%%%%%%%%%%%%%
\input{settings/packages}
\makeindex
%%%%%%%%%%%%%%% Graphics %%%%%%%%%%%%%%%
\graphicspath{{pics/}}
%%%%%%%%%%%%%%% Globale Einstellungen %%%%%%%%%%%%%%%
\input{settings/commands}
\input{settings/environments}
%\setlength{\parindent}{0pt} % kein einzug bei absaetzen
%\setlength{\lineskip}{1ex plus0.5ex minus0.5ex} % dafür Abstand zwischen Absätzen (funktioniert noch nicht)
% \renewcommand{\familydefault}{\sfdefault}
\setstretch{1.44} % 1.5-facher zeilenabstand
%%%%%%%%%%%%%%% Header - Footer %%%%%%%%%%%%%%%
% ### Für 2-seitig (Option twoside):
\usepackage{fancyhdr}%http://www.tug.org/tex-archive/info/german/fancyhdr
\pagestyle{fancy} % must be called before the following renewcommands !!!
\fancyhead{} % Alte Definition loeschen
\fancyfoot{} % dito
\renewcommand{\chaptermark}[1]{\markboth{\chaptername\ \thechapter{}: #1}{}}
\renewcommand{\sectionmark}[1]{\markright{\thesection{}~~#1}}
% % um das hard codierte makeuppercase zu verhindern
\fancyhead[EL]{\textrm{\nouppercase\leftmark}}% Even=linke Seiten und dort links, also außen das \leftmark
\fancyhead[OR]{\textrm{\nouppercase\rightmark}}% Odd=rechte Seiten und dort rechts, also aussen das \rightmark
\fancyfoot[RO,LE]{\thepage} % Seitenzahl : rechts ungerade, links gerade
% ### für 1-seitig
%\usepackage{fancyhdr} %
%\lhead{\textsf{\nouppercase\leftmark}}
%\chead{}
%\rhead{\textsf{\nouppercase\rightmark}}
%\lfoot{}
%\cfoot{\textsf{\thepage}}
%\rfoot{}
\setkomafont{sectioning}{\rmfamily\bfseries}
\setcounter{tocdepth}{3}
%\setcounter{secnumdepth}{3}
% \input{settings/hyphenation} %% Manchmal bricht latex nicht richtig um. hier trennregeln rein.
% \includeonly{%
% % files/0_titlepage.tex
% % files/1_0_introduction,%
% % files/2_0_knownDCJ,%
% % files/3_0_DCJIndels,%
% % files/4_0_DCJIndels_1comps,%
% files/5_0_DCJIndels_2comps,%
% % files/6_0_implementation,%
% % files/7_0_evaluation%
% % ,files/8_0_conclusion%
% }
%%%%%%%%%%%%%%% PANDOC-needed defs %%%%%%%%%%
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
%disable "Redefining ngerman shorthand"-Message
% \makeatletter
% \patchcmd{\pdfstringdef}
% {\csname HyPsd@babel@}
% {\let\bbl@info\@gobble\csname HyPsd@babel@}
% {}{}
% \makeatother
%%%%%%%%%%%%%%% Hauptdokument %%%%%%%%%%%%%%%
\begin{document}
% ###### Autoref definitions (hyperref package)#####
\def\subtableautorefname{Table}
\def\algorithmautorefname{Algorithm}
\def\chapterautorefname{Chapter}
\def\sectionautorefname{Section}
\def\definitionautorefname{Definition}
\def\exampleautorefname{Example}
\def\observationautorefname{Observation}
\def\propositionautorefname{Proposition}
\def\lemmaautorefname{Lemma}
% in diesem Dokument nicht verwendet:
% \def\subsectionautorefname{Subsection}
% \def\Subsubsectionautorefname{Subsubsection}
% \def\subfigureautorefname{Figure}
% \def\claimautorefname{Claim}
%%%%%%%%%%%%%%% Deckblatt %%%%%%%%%%%%%%%
\extratitle{}
\input{files/titlepage}
%\input{files/titlepage.pdf} % Rueckseite leer
% \input{files/0_deckblatt/title}
\pagestyle{empty} % Rueckseite leer
%
%%%%%%%%%%%%%%% Verzeichnisse %%%%%%%%%%%%%%%
\frontmatter % Abstrakte Gliederungsebene: Anfang des Buches
\tableofcontents % Rueckseite leer
%\lstlistoflistings % fuer listingsverzeichnis mit package listings
%%%%%%%%%%%%%%% Hauptteil %%%%%%%%%%%%%%%
% Insgesamt ca. 60-100 Seiten, davon mindestens 50% eigene Arbeit
\mainmatter %Abstrakte Gliederungsebene: Hauptteil des Buches
\pagestyle{fancy}
\pagenumbering{arabic}
\chapter*{How to read this Thesis}
We prepend this chapter as a guide to the nomenclature used in the formulas
throughout this thesis.
Unless otherwise noted, the following conventions hold:
\begin{itemize}
\tightlist
\item
lowercase letters \(x,y,z\)\\
refer to real variables and represent a point in 3D space.
\item
lowercase letters \(u,v,w\)\\
refer to real variables between \(0\) and \(1\) used as parameters
in the 3D B-Spline grid.
\item
other lowercase letters\\
refer to other scalar (real) variables.
\item
lowercase \textbf{bold} letters (e.g. \(\vec{x},\vec{y}\))\\
refer to 3D coordinates.
\item
uppercase \textbf{BOLD} letters (e.g. \(\vec{D}, \vec{M}\))\\
refer to matrices.
\end{itemize}
\chapter{Introduction}\label{introduction}
\improvement[inline]{More motivation, goal of the thesis, why all this?\newline
Why do we investigate this at all? \cmark \newline
Structure of the thesis? \xmark \newline
More pictures}
Many modern industrial design processes require advanced optimization
methods due to their increased complexity. These designs have to adhere to
more and more degrees of freedom as methods are refined and/or different
methods are used. Examples for this are physical domains like aerodynamics
(e.g.~drag) or fluid dynamics (e.g.~throughput of liquid) -- where the
complexity increases with the temporal and spatial resolution of the
simulation -- or hard algorithmic problems in computer science
(e.g.~the layout of circuit boards or the stacking of 3D-objects). Moreover,
these are typically not static environments: requirements shift over
time or from case to case.
Evolutionary algorithms cope especially well with these problem domains
while addressing all the issues at hand\cite{minai2006complex}. One of
the main concerns with these algorithms is the formulation of the problem
in terms of a genome and a fitness function. While one can typically use
an arbitrary cost function as the fitness function (e.g.~amount of
drag, amount of space, etc.), the translation of the problem domain into
a simple parametric representation can be challenging.
The quality of such a representation in biological evolution is called
\emph{evolvability}\cite{wagner1996complex} and is at the core of this
thesis. However, there is no consensus on how \emph{evolvability} is
defined and the meaning varies from context to
context\cite{richter2015evolvability}.
As we transfer the results of Richter et al.\cite{anrichterEvol} from
using \acf{RBF} as a representation for manipulating a geometric mesh to
the use of \acf{FFD}, we will use the same definition of evolvability
as the original authors, namely \emph{regularity}, \emph{variability},
and \emph{improvement potential}. We introduce these terms in detail in
Section \ref{sec:intro:rvi}.
In the original publication the authors used randomly sampled points
weighted with \acf{RBF} to deform the mesh and showed that the mentioned
criteria of \emph{regularity}, \emph{variability}, and \emph{improvement
potential} correlate with the quality and potential of such an
optimization.
We will replicate the same setup on the same meshes but use \acf{FFD}
instead of \acf{RBF} to create a local deformation near the control
points and evaluate whether the evolvability criteria still work as a
predictor given the different deformation scheme, as suspected in
\cite{anrichterEvol}.
\section{Outline of this thesis}\label{outline-of-this-thesis}
\improvement[inline]{Introduce the chapters: content? goal?}
\chapter{Background}\label{background}
\section{\texorpdfstring{What is \acf{FFD}?}{What is ?}}\label{what-is}
\label{sec:intro:ffd}
First of all we have to establish how \ac{FFD} works and why it is a
good tool for deforming meshes in the first place. For simplicity we
only summarize the 1D case from \cite{spitzmuller1996bezier} here and go
into the extension to the 3D case in Section \ref{3dffd}.
Given an arbitrary number of points \(p_i\) along a line, we map a
scalar value \(\tau_i \in [0,1[\) to each point with
\(\tau_i < \tau_{i+1}\ \forall i\). Given a degree \(d\) of the target
polynomial we define the B-Spline basis functions \(N_{i,d,\tau_i}(u)\) as follows:
\begin{equation} \label{eqn:ffd1d1}
N_{i,0,\tau}(u) = \begin{cases} 1, & u \in [\tau_i, \tau_{i+1}[ \\ 0, & \mbox{otherwise} \end{cases}
\end{equation}
and
\begin{equation} \label{eqn:ffd1d2}
N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}-\tau_i} N_{i,d-1,\tau}(u) + \frac{\tau_{i+d+1} - u}{\tau_{i+d+1}-\tau_{i+1}} N_{i+1,d-1,\tau}(u)
\end{equation}
If we now multiply every \(p_i\) with the corresponding
\(N_{i,d,\tau_i}(u)\) we get the contribution of each point \(p_i\) to
the final curve point parameterized only by \(u \in [0,1[\). As can be
seen from \eqref{eqn:ffd1d2} we only access points \([i..i+d]\) for any
given \(i\)\footnote{one more for each recursive step.}, which gives us,
in combination with choosing \(p_i\) and \(\tau_i\) in order, only a
local influence of \(d+1\) points.
We can even differentiate this equation straightforwardly for an arbitrary
\(N\)\footnote{\emph{Warning:} in the case of \(d=1\) the
recursion formula yields a \(0\) denominator, but \(N\) is also \(0\).
The right solution for this case is a derivative of \(0\).}:
\[\frac{\partial}{\partial u} N_{i,d,\tau}(u) = \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u)\]
For a B-Spline \[s(u) = \sum_{i} N_{i,d,\tau_i}(u) p_i\] these
derivatives yield \(\frac{\partial^{d+1}}{\partial u^{d+1}} s(u) = 0\), as each
differentiation lowers the degree of the piecewise polynomials by one.
Another interesting property of these recursive polynomials is that they
are continuous (given \(d \ge 1\)), as every \(p_i\) gets blended in
linearly between \(\tau_i\) and \(\tau_{i+d}\) and blended out linearly between
\(\tau_{i+1}\) and \(\tau_{i+d+1}\), as can be seen from the two
coefficients in every step of the recursion.
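To make the recursion concrete, the following short Python sketch evaluates
\(N_{i,d,\tau}(u)\) directly from \eqref{eqn:ffd1d1} and \eqref{eqn:ffd1d2}. It
is only meant as an illustration of the formulas and their local support, not
as the implementation used in this thesis; the names \texttt{basis},
\texttt{spline\_point} and \texttt{tau} are placeholders.
\begin{verbatim}
# Minimal sketch of the recursion from eqn:ffd1d1 / eqn:ffd1d2.
# tau is the sorted knot vector, i the index, d the degree.
def basis(i, d, u, tau):
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = ((u - tau[i]) / (tau[i + d] - tau[i])
                * basis(i, d - 1, u, tau))
    if tau[i + d + 1] != tau[i + 1]:
        right = ((tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1])
                 * basis(i + 1, d - 1, u, tau))
    return left + right

# A curve point s(u) = sum_i N_{i,d,tau}(u) * p_i.
# Assumes len(tau) >= len(p) + d + 1.
def spline_point(u, p, d, tau):
    return sum(basis(i, d, u, tau) * p[i] for i in range(len(p)))
\end{verbatim}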
\subsection{\texorpdfstring{Why is \ac{FFD} a good deformation
function?}{Why is a good deformation function?}}\label{why-is-a-good-deformation-function}
The usage of \ac{FFD} as a tool for manipulating meshes follows directly from
the properties of the polynomials and the correspondence to the control
points. Having only a few control points gives the user a nicer
high-level interface, as she only needs to move these points and the
model follows in an intuitive manner. The deformation is smooth as the
underlying polynomial is smooth as well and affects as many vertices of the
model as needed. Moreover the changes are always local, so there is no risk
of a change that the user cannot immediately see.
But there are also disadvantages to this approach. The user loses the
ability to directly influence vertices, and even seemingly simple tasks
such as creating a plateau can be difficult to
achieve\cite[chapter~3.2]{hsu1991dmffd}\cite{hsu1992direct}.
These disadvantages led to the formulation of
\acf{DM-FFD}\cite[chapter~3.3]{hsu1991dmffd} in which the user directly
interacts with the surface-mesh. All interactions are applied
proportionally to the control points that make up the parametrization of
the interaction point itself, yielding a smooth deformation of the
surface \emph{at} the surface without seemingly arbitrarily scattered
control points. Moreover this increases the efficiency of an
evolutionary optimization\cite{Menzel2006}, which we will use later on.
\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/hsu_fig7.png}
\caption{Figure 7 from \cite{hsu1991dmffd}.}
\label{fig:hsu_fig7}
\end{figure}
But this approach also has downsides, as can be seen in Figure
\ref{fig:hsu_fig7}: the tessellation of the invisible grid has a
major impact on the deformation itself.
All in all \ac{FFD} and \ac{DM-FFD} are still good ways to deform a
high-polygon mesh despite these downsides.
\section{What is evolutionary
optimization?}\label{what-is-evolutional-optimization}
\change[inline]{Write this section}
\section{Advantages of evolutionary
algorithms}\label{advantages-of-evolutional-algorithms}
\change[inline]{Needs citations} The main advantage of evolutionary
algorithms is the ability to find optima of general functions just with
the help of a given error function (or fitness function in this domain).
This avoids the general pitfalls of gradient-based procedures, which
often target the same error function as an evolutionary algorithm, but
can get stuck in local optima.
This is mostly due to the fact that a gradient-based procedure has only
one point of observation from which it evaluates the next steps, whereas
an evolutionary strategy starts with a population of guessed solutions.
Because an evolutionary strategy modifies the solutions randomly, keeps
the best solutions and purges the worst, it can also pursue multiple
different hypotheses at the same time, where the local optima die out in
the face of other, better candidates.
If an analytic best solution exists (e.g.~because the error function is
convex) an evolutionary algorithm is not the right choice. Although both
converge to the same solution, the analytic one is usually faster. But
in reality many problems have no analytic solution, because the problem
is not convex. Here evolutionary optimization has one more advantage, as
one quickly gets rough solutions, which are refined over time.
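To illustrate the principle (and not the specific algorithm used later in this
thesis), the following Python sketch implements a minimal \((\mu+\lambda)\)
evolution strategy that minimizes an arbitrary fitness (error) function; all
names and parameter values are placeholders.
\begin{verbatim}
import random

def evolve(fitness, init, mu=10, lam=40, sigma=0.1, gens=100):
    # population of mu randomly perturbed copies of an initial guess
    pop = [[x + random.gauss(0, sigma) for x in init]
           for _ in range(mu)]
    for _ in range(gens):
        # lam offspring, each a random mutation of a random parent
        offspring = [[x + random.gauss(0, sigma)
                      for x in random.choice(pop)]
                     for _ in range(lam)]
        # (mu+lam)-selection: keep the best, purge the worst
        pop = sorted(pop + offspring, key=fitness)[:mu]
    return pop[0]
\end{verbatim}
Calling \texttt{evolve} with an error function and an initial guess returns
the best candidate found after the given number of generations.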
\section{Criteria for the evolvability of linear
deformations}\label{criteria-for-the-evolvability-of-linear-deformations}
\label{sec:intro:rvi}
\subsection{Variability}\label{variability}
In \cite{anrichterEvol} variability is defined as
\[V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n},\] whereby \(\vec{U}\)
is the \(n \times m\) deformation matrix used to map the \(m\) control
points onto the \(n\) vertices.
Given \(n = m\), an identical number of control points and vertices,
this quotient will be \(=1\) if all control points are independent of
each other and the solution is trivially to move every control point
onto its target point.
In practice the value of \(V(\vec{U})\) is typically \(\ll 1\), because
there are only few control points for many vertices, so \(m \ll n\).
Additionally, in our setup we connect neighbouring control points in a
grid, so the control points are not independent of each other and each
vertex typically depends on \(4^d\) control points for a \(d\)-dimensional
control grid.
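Assuming the deformation matrix \(\vec{U}\) is available as a NumPy array, this
criterion can be computed in a single line; the following is a sketch for
clarification only, not the actual implementation.
\begin{verbatim}
import numpy as np

def variability(U):
    # V(U) = rank(U) / n, with n = number of vertices (rows of U)
    return np.linalg.matrix_rank(U) / U.shape[0]
\end{verbatim}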
\subsection{Regularity}\label{regularity}
Regularity is defined\cite{anrichterEvol} as
\[R(\vec{U}) := \frac{1}{\kappa(\vec{U})} = \frac{\sigma_{min}}{\sigma_{max}}\]
where \(\sigma_{min}\) and \(\sigma_{max}\) are the smallest and
largest singular values of the deformation matrix \(\vec{U}\).
As we deform the given object only based on the parameters as
\(\vec{p} \mapsto f(\vec{x} + \vec{U}\vec{p})\), this ensures that
\(\|\vec{Up}\| \propto \|\vec{p}\|\) when \(\kappa(\vec{U}) \approx 1\).
The inversion of \(\kappa(\vec{U})\) is only performed to map the
criterion range to \([0..1]\), where \(1\) is the optimal value and
\(0\) is the worst value.
This criterion should be characteristic for numerical stability on the one
hand\cite[chapter 2.7]{golub2012matrix} and for the convergence speed of
evolutionary algorithms on the other hand\cite{anrichterEvol}, as it is
tied to the notion of
locality\cite{weise2012evolutionary,thorhauer2014locality}.
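Again assuming \(\vec{U}\) is given as a NumPy array, a minimal sketch (for
illustration only) computes regularity from the singular values:
\begin{verbatim}
import numpy as np

def regularity(U):
    # R(U) = 1/kappa(U) = sigma_min / sigma_max
    sigma = np.linalg.svd(U, compute_uv=False)  # descending order
    return sigma[-1] / sigma[0]
\end{verbatim}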
\subsection{Improvement Potential}\label{improvement-potential}
In contrast to the general nature of variability and regularity, which
are agnostic of the fitness function at hand, the third criterion should
reflect a notion of potential.
As during optimization some kind of gradient \(g\) is available to
suggest a direction worth pursuing, we use this to guess how much change
can be achieved in the given direction.
The definition of the improvement potential \(P\)
is\cite{anrichterEvol}: \[
P(\vec{U}) := 1 - \|(\vec{1} - \vec{UU}^+)\vec{G}\|^2_F
\] given some approximate \(n \times d\) fitness gradient \(\vec{G}\),
normalized to \(\|\vec{G}\|_F = 1\), whereby \(\|\cdot\|_F\) denotes the
Frobenius norm.
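A minimal sketch, assuming \(\vec{U}\) and the fitness gradient \(\vec{G}\)
are given as NumPy arrays (illustration only, not the actual implementation):
\begin{verbatim}
import numpy as np

def improvement_potential(U, G):
    # P(U) = 1 - ||(1 - U U^+) G||_F^2  with  ||G||_F = 1
    G = G / np.linalg.norm(G)          # Frobenius norm for matrices
    residual = G - U @ (np.linalg.pinv(U) @ G)
    return 1.0 - np.linalg.norm(residual) ** 2
\end{verbatim}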
\chapter{\texorpdfstring{Implementation of
\acf{FFD}}{Implementation of }}\label{implementation-of}
The general formulation of B-Splines has two free parameters \(d\) and
\(\tau\) which must be chosen beforehand.
As we usually work with regular grids in our \ac{FFD} we define \(\tau\)
statically as \(\tau_i = \nicefrac{i}{n}\), whereby \(n\) is the number
of control points in that direction.
\(d\) defines the \emph{degree} of the B-Spline function (a B-Spline of
degree \(d\) is \(d-1\) times continuously differentiable) and for our
purposes we fix \(d\) to \(3\), but give the formulas for the general case
so they can be adapted quite freely.
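For instance, assuming the control points in one direction are indexed from
\(0\) to \(n-1\), a direction with \(n = 4\) control points uses the knots
\(\tau_0 = 0\), \(\tau_1 = \nicefrac{1}{4}\), \(\tau_2 = \nicefrac{2}{4}\)
and \(\tau_3 = \nicefrac{3}{4}\) under this convention.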
\section{\texorpdfstring{Adaption of
\ac{FFD}}{Adaption of }}\label{adaption-of}
As we have established in Section \ref{sec:intro:ffd} we can define an
\ac{FFD}-displacement as
\begin{equation}
\Delta_x(u) = \sum_i N_{i,d,\tau_i}(u) \Delta_x c_i
\end{equation}
Note that we only sum up the \(\Delta\)-displacements in the control
points \(c_i\) to get the change in position of the point we are
interested in.
In this way every deformed vertex is defined by \[
\textrm{Deform}(v_x) = v_x + \Delta_x(u)
\] with \(u \in [0..1[\) being the variable that connects the
high-detailed vertex-mesh to the low-detailed control-grid. To actually
calculate the new position of the vertex we first have to determine the
\(u\)-value for each vertex. This is achieved by determining the
parametrization of \(v\) in terms of \(c_i\) \[
v_x \overset{!}{=} \sum_i N_{i,d,\tau_i}(u) c_i
\] so we can minimize the error between those two: \[
\underset{u}{\argmin}\,Err(u,v_x) = \underset{u}{\argmin}\,2 \cdot \|v_x - \sum_i N_{i,d,\tau_i}(u) c_i\|^2_2
\]
As this error term is quadratic we simply differentiate with respect to \(u\), yielding
\begin{eqnarray*}
& \frac{\partial}{\partial u} & v_x - \sum_i N_{i,d,\tau_i}(u) c_i \\
& = & - \sum_i \left( \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u) \right) c_i
\end{eqnarray*}
and perform a gradient descent to approximate the value of \(u\) up to an
\(\epsilon\) of \(0.0001\).
For this we use the Gauss-Newton algorithm\cite{gaussNewton}, as the
solution to this problem may not be unique, because we usually
have far more vertices than control points (\(\#v \gg \#c\)).
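The following Python sketch illustrates this per-vertex fit. It assumes the
\texttt{basis} function from the sketch in Section \ref{sec:intro:ffd} is
available; \texttt{basis\_deriv} implements the derivative formula given
there, and all names, the starting value and the step control are
placeholders rather than the actual implementation.
\begin{verbatim}
def basis_deriv(i, d, u, tau):
    # derivative formula for N_{i,d,tau}(u) from the background chapter
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = d / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        right = (d / (tau[i + d + 1] - tau[i + 1])
                 * basis(i + 1, d - 1, u, tau))
    return left - right

def fit_u(v_x, c, d, tau, u=0.5, eps=1e-4, max_iter=100):
    for _ in range(max_iter):
        # residual r(u) = v_x - sum_i N_i(u) c_i and its derivative
        r = v_x - sum(basis(i, d, u, tau) * c[i]
                      for i in range(len(c)))
        dr = -sum(basis_deriv(i, d, u, tau) * c[i]
                  for i in range(len(c)))
        if abs(r) < eps or dr == 0.0:
            break
        u -= r / dr   # Gauss-Newton step for one scalar parameter
    return u
\end{verbatim}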
\section{\texorpdfstring{Adaption of \ac{FFD} for a
3D-Mesh}{Adaption of for a 3D-Mesh}}\label{adaption-of-for-a-3d-mesh}
\label{3dffd}
This is a straightforward extension of the 1D-method presented in the
last section. But this time things get a bit more complicated. As we
have a 3-dimensional grid we may have a different number of
control points in each direction.
Given \(n,m,o\) control points in \(x,y,z\)-direction, each point inside
the deformed volume is defined by
\[V(u,v,w) = \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot C_{ijk}.\]
In this case we have three different B-Splines (one for each dimension)
and also three variables \(u,v,w\) for each vertex we want to approximate.
Given a target vertex \(\vec{p}^*\) and an initial guess
\(\vec{p}=V(u,v,w)\) we define the error function for the
gradient descent as
\[Err(u,v,w,\vec{p}^{*}) = \vec{p}^{*} - V(u,v,w)\]
and its partial version for just one direction (here \(x\)) as
\[Err_x(u,v,w,\vec{p}^{*}) = p^{*}_x - \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x \]
To solve this we take the partial derivatives, as before:
\[
\begin{array}{rl}
\displaystyle \frac{\partial Err_x}{\partial u} & = \displaystyle \frac{\partial}{\partial u} \left( p^{*}_x - \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x \right) \\
 & = \displaystyle - \sum_i \sum_j \sum_k N'_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x
\end{array}
\]
The other partial derivatives follow the same pattern yielding the
Jacobian:
\[
J(Err(u,v,w)) =
\left(
\begin{array}{ccc}
\frac{\partial Err_x}{\partial u} & \frac{\partial Err_x}{\partial v} & \frac{\partial Err_x}{\partial w} \\
\frac{\partial Err_y}{\partial u} & \frac{\partial Err_y}{\partial v} & \frac{\partial Err_y}{\partial w} \\
\frac{\partial Err_z}{\partial u} & \frac{\partial Err_z}{\partial v} & \frac{\partial Err_z}{\partial w}
\end{array}
\right)
\] \[
\scriptsize
=
\left(
\begin{array}{ccc}
- \displaystyle \sum_{i,j,k} N'_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_x &- \displaystyle \sum_{i,j,k} N_{i}(u) N'_{j}(v) N_{k}(w) \cdot {c_{ijk}}_x & - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N'_{k}(w) \cdot {c_{ijk}}_x \\
- \displaystyle \sum_{i,j,k} N'_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_y &- \displaystyle \sum_{i,j,k} N_{i}(u) N'_{j}(v) N_{k}(w) \cdot {c_{ijk}}_y & - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N'_{k}(w) \cdot {c_{ijk}}_y \\
- \displaystyle \sum_{i,j,k} N'_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_z &- \displaystyle \sum_{i,j,k} N_{i}(u) N'_{j}(v) N_{k}(w) \cdot {c_{ijk}}_z & - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N'_{k}(w) \cdot {c_{ijk}}_z
\end{array}
\right)
\]
With the Gauss-Newton algorithm we iterate via the formula
\[J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)\]
and use Cramer's rule to invert the small Jacobian and solve this
system of linear equations.
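As a final illustration, a minimal Python sketch of one such update using
Cramer's rule on the \(3 \times 3\) system; \texttt{J} and \texttt{err} are
assumed to be the Jacobian and error vector computed as above, and the
function name is a placeholder:
\begin{verbatim}
import numpy as np

def gauss_newton_step(J, err):
    # solve J * delta = -err for delta = (du, dv, dw) via Cramer's rule
    det_j = np.linalg.det(J)
    delta = np.empty(3)
    for col in range(3):
        J_col = J.copy()
        J_col[:, col] = -err      # replace one column by the rhs
        delta[col] = np.linalg.det(J_col) / det_j
    return delta                  # add to (u, v, w) and iterate
\end{verbatim}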
\section{Is the parametrization sensible?}\label{parametrisierung-sinnvoll}
\begin{itemize}
\tightlist
\item
Disadvantages of the parametrization
\item
The deformation around a control point can be controlled much more directly.
\item
=\textgreater{} DM-FFD?
\end{itemize}
\chapter{\texorpdfstring{Scenarios for testing evolvability criteria
using
\acf{FFD}}{Scenarios for testing evolvability criteria using }}\label{scenarios-for-testing-evolvability-criteria-using}
\section{Test Scenario: 1D Function
Approximation}\label{test-scenario-1d-function-approximation}
\subsection{Optimization scenario}\label{optimierungszenario}
\begin{itemize}
\tightlist
\item
Plane -\textgreater{} template fit
\end{itemize}
\subsection{Matching in 1D}\label{matching-in-1d}
\begin{itemize}
\tightlist
\item
Trivial
\end{itemize}
\subsection{Particularities of the
evaluation}\label{besonderheiten-der-auswertung}
\begin{itemize}
\tightlist
\item
The analytical solution is the single best one
\item
Is the result constant even under noise?
\item
Add a normalized 1-vector onto the gradient
\begin{itemize}
\tightlist
\item
A cone emerges
\end{itemize}
\end{itemize}
\section{Test Scenario: 3D Function
Approximation}\label{test-scenario-3d-function-approximation}
\subsection{Optimization scenario}\label{optimierungsszenario}
\begin{itemize}
\tightlist
\item
Ball to Mario
\end{itemize}
\subsection{Matching in 3D}\label{matching-in-3d}
\begin{itemize}
\tightlist
\item
Alternating optimization
\end{itemize}
\subsection{Particularities of the
optimization}\label{besonderheiten-der-optimierung}
\begin{itemize}
\tightlist
\item
The analytical solution is only valid up to the optimization of the first points
\item
The criteria are still good
\end{itemize}
\chapter{Evaluation of Scenarios}\label{evaluation-of-scenarios}
\section{Spearman/Pearson metrics}\label{spearmanpearson-metriken}
\begin{itemize}
\tightlist
\item
What is this?
\item
Why should this interest us?
\item
Why is monotonicity sufficient?
\item
Have we shown this?
\item
Statistics, pictures, blah!
\end{itemize}
\section{Results of 1D Function
Approximation}\label{results-of-1d-function-approximation}
\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/evolution1d/20170830-evolution1D_5x5_100Times-all_appended.png}
\caption{Results of the 1D function approximation}
\end{figure}
\section{Results of 3D Function
Approximation}\label{results-of-3d-function-approximation}
\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/evolution3d/20170926_3dFit_both_append.png}
\caption{Results of the 3D function approximation}
\end{figure}
\chapter{Conclusion}\label{schluss}
HAHA .. as if -.-
\backmatter
\cleardoublepage
\renewcommand\thesection{\Roman{section}}
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\setcounter{section}{1} % reset section to 1 so it starts I, II, III,...
\chapter*{Appendix}
\addcontentsline{toc}{chapter}{\protect\numberline{}Appendix}
\pagenumbering{roman}
%%%%%%%%%%%%%%% Literaturverzeichnis %%%%%%%%%%%%%%%
\bibliographystyle{natdin} % \bibliographystyle{natdin}
\bibliography{bibma}
\addcontentsline{toc}{section}{\protect\numberline{\thesection}Bibliography} % Literaturverzeichnis in das Inhaltsverzeichnis aufnehmen
\addtocounter{section}{1}
\newpage
%%%%%%%%%%%%%%% Anhang %%%%%%%%%%%%%%%
% \clearpage %spaeter alles wieder rein
% % \input{files/appendix}
\input{settings/abkuerzungen}
\addcontentsline{toc}{section}{\protect\numberline{\thesection}Abbreviations}
\addtocounter{section}{1}
\newpage
% \listofalgorithms
% \addcontentsline{toc}{section}{\protect\numberline{\thesection}List of Algorithms}
% \addtocounter{section}{1}
% \newpage
%
\listoffigures
% \listoftables
\listoftodos
\addcontentsline{toc}{section}{\protect\numberline{\thesection}TODOs}
\addtocounter{section}{1}
\newpage
% \printindex
%%%%%%%%%%%%%%% Erklaerung %%%%%%%%%%%%%%%
% *\input{settings/declaration}
\include{files/erklaerung}
\end{document}