
---
fontsize: 12pt
---

\chapter*{How to read this Thesis}

We prepend this chapter as a guide through the nomenclature used in the formulas.

Unless otherwise noted the following holds:

  • lowercase letters x,y,z
    refer to real variables and represent the coordinates of a point in 3D--space.
  • lowercase letters u,v,w
    refer to real variables between 0 and 1 used as coefficients in a 3D B--Spline grid.
  • other lowercase letters
    refer to other scalar (real) variables.
  • lowercase bold letters (e.g. \vec{x},\vec{y})
    refer to 3D coordinates.
  • uppercase BOLD letters (e.g. \vec{D}, \vec{M})
    refer to matrices.

Introduction

\improvement[inline]{More images}

Many modern industrial design processes require advanced optimization methods due to the increased complexity resulting from more and more degrees of freedom as methods refine and/or other methods are used. Examples of this are physical domains like aerodynamics (e.g. drag) and fluid dynamics (e.g. throughput of a liquid) --- where the complexity increases with the temporal and spatial resolution of the simulation --- or hard algorithmic problems in computer science (e.g. layouting of circuit boards or stacking of 3D--objects). Moreover, these are typically not static environments; requirements shift over time or from case to case.

Evolutionary algorithms cope especially well with these problem domains while addressing all the issues at hand\cite{minai2006complex}. One of the main concerns in these algorithms is the formulation of the problem in terms of a genome and a fitness--function. While one can typically use an arbitrary cost--function as the fitness--function (e.g. amount of drag, amount of space, etc.), the translation of the problem--domain into a simple parametric representation (the genome) can be challenging.

This translation is often necessary as the target of the optimization may have too many degrees of freedom. In the example of an aerodynamic simulation of drag on an object, such object designs tend to have a high number of vertices to adhere to various requirements (visual, practical, physical, etc.). A simpler representation of the same object in only a few parameters that manipulate the whole in a sensible manner is desirable, as this often decreases the computation time significantly.

Additionally one can exploit the fact that drag in this case is especially sensitive to non--smooth surfaces, so that a smooth local manipulation of the surface as a whole is more advantageous than merely random manipulation of the vertices.

The quality of such a low--dimensional representation in biological evolution is strongly tied to the notion of evolvability\cite{wagner1996complex}, as the parametrization of the problem has serious implications on the convergence speed and the quality of the solution\cite{Rothlauf2006}. However, there is no consensus on how evolvability is defined and the meaning varies from context to context\cite{richter2015evolvability}, so there is a need for measurable criteria that allow us to compare different representations and to learn from and improve upon them.

One example of such a general representation of an object is to generate random points and represent the vertices of the object as distances to these points --- for example via \acf{RBF}. If one (or the algorithm) moves such a point, the object gets deformed locally (due to the \ac{RBF}). As this results in a simple mapping from the parameter--space onto the object, one can try out different representations of the same object and evaluate their evolvability. This is exactly what Richter et al.\cite{anrichterEvol} have done.

As we transfer the results of Richter et al.\cite{anrichterEvol} from using \acf{RBF} as a representation to manipulate geometric objects to the use of \acf{FFD}, we will use the same definition of evolvability the original authors used, namely regularity, variability, and improvement potential. We introduce these terms in detail in Chapter \ref{sec:intro:rvi}. In the original publication the authors could show a correlation between these evolvability--criteria and the quality and convergence speed of such an optimization.

We will replicate the same setup on the same objects but use \acf{FFD} instead of \acf{RBF} to create a local deformation near the control points, and evaluate whether the evolvability--criteria still work as a predictor for the evolvability of the representation given the different deformation scheme, as suspected in \cite{anrichterEvol}.

First we introduce different topics in isolation in Chapter \ref{sec:back}. We take an abstract look at the definition of \ac{FFD} for a one--dimensional line (in \ref{sec:back:ffd}) and discuss why this is a sensible deformation function (in \ref{sec:back:ffdgood}). Then we establish some background--knowledge on evolutionary algorithms (in \ref{sec:back:evo}) and why they are useful in our domain (in \ref{sec:back:evogood}). In a third step we take a look at the definition of the different evolvability criteria established in \cite{anrichterEvol}.

In Chapter \ref{sec:impl} we take a look at our implementation of \ac{FFD} and the adaptations for 3D--meshes that we used.

Next, in Chapter \ref{sec:eval}, we describe the different scenarios we use to evaluate the different evolvability--criteria, incorporating all aspects introduced in Chapter \ref{sec:back}. Following that, we evaluate the results in Chapter \ref{sec:res} with further discussion in Chapter \ref{sec:dis}.

Background

\label{sec:back}

What is \acf{FFD}?

\label{sec:back:ffd}

First of all we have to establish how an \ac{FFD} works and why it is a good tool for deforming geometric objects (especially meshes in our case) in the first place. For simplicity we only summarize the 1D--case from \cite{spitzmuller1996bezier} here and go into the extension to the 3D--case in Chapter \ref{3dffd}.

The main idea of \ac{FFD} is to create a function $s : [0,1[^d \mapsto \mathbb{R}^d$ that spans a certain part of a vector--space and is linearly parametrized by some special control points p_i and a constant attribution--function a_i(u), so


s(u) = \sum_i a_i(u) p_i

can be thought of as a representation of the inside of the convex hull generated by the control points, where each point can be accessed by the corresponding u \in [0,1[.

\begin{figure}[!ht] \begin{center} \includegraphics[width=0.7\textwidth]{img/B-Splines.png} \end{center} \caption[Example of B-Splines]{Example of a parametrization of a line with corresponding deformation to generate a deformed object} \label{fig:bspline} \end{figure}

In the example in figure \ref{fig:bspline}, the control--points are indicated as red dots and the color--gradient hints at the $u$--values ranging from 0 to 1.

We now define a \acf{FFD} by the following:
Given an arbitrary number of points p_i along a line, we map a scalar value \tau_i \in [0,1[ to each point with $\tau_i < \tau_{i+1} \forall i$ according to the position of p_i on said line. Additionally, given a degree of the target polynomial d, we define the curve N_{i,d,\tau_i}(u) as follows:

\begin{equation} \label{eqn:ffd1d1} N_{i,0,\tau}(u) = \begin{cases} 1, & u \in [\tau_i, \tau_{i+1}[ \\ 0, & \mbox{otherwise} \end{cases} \end{equation} and \begin{equation} \label{eqn:ffd1d2} N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}-\tau_i} N_{i,d-1,\tau}(u) + \frac{\tau_{i+d+1} - u}{\tau_{i+d+1}-\tau_{i+1}} N_{i+1,d-1,\tau}(u) \end{equation}

If we now multiply every p_i with the corresponding N_{i,d,\tau_i}(u) we get the contribution of each point p_i to the final curve--point parameterized only by u \in [0,1[. As can be seen from \eqref{eqn:ffd1d2} we only access points [p_i..p_{i+d}] for any given $i$^[one more for each recursive step.], which gives us, in combination with choosing p_i and \tau_i in order, only a local influence of d+1 points.
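As the recursion \eqref{eqn:ffd1d2} is central to everything that follows, here is a minimal Python sketch of it. This is not the implementation used for this thesis; the function name and the convention of treating vanishing denominators as $0/0 := 0$ are our own:

```python
def n_basis(i, d, u, tau):
    """B--Spline basis N_{i,d,tau}(u) via the Cox--de Boor recursion."""
    if d == 0:
        # indicator function on the half-open knot interval [tau_i, tau_{i+1}[
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    left = right = 0.0
    # a vanishing denominator only occurs where the corresponding factor
    # N is zero anyway, so we define 0/0 := 0
    if tau[i + d] > tau[i]:
        left = (u - tau[i]) / (tau[i + d] - tau[i]) * n_basis(i, d - 1, u, tau)
    if tau[i + d + 1] > tau[i + 1]:
        right = ((tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1])
                 * n_basis(i + 1, d - 1, u, tau))
    return left + right
```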

The derivative of $N$ can also be computed in a straightforward manner for arbitrary $N$^[Warning: in the case of d=1 the recursion--formula yields a $0$ denominator, but N is also 0. The right solution for this case is a derivative of $0$.]:

\frac{\partial}{\partial u} N_{i,d,\tau}(u) = \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u)

For a B--Spline

s(u) = \sum_{i} N_{i,d,\tau_i}(u) p_i

these derivatives eventually yield \frac{\partial^{d+1}}{\partial u^{d+1}} s(u) = 0, as each differentiation lowers the polynomial degree by one.

Another interesting property of these recursive polynomials is that they are continuous (given d \ge 1), as every p_i gets blended in between \tau_i and \tau_{i+d} and out between \tau_{i+1} and \tau_{i+d+1}, as can be seen from the two coefficients in every step of the recursion.

\improvement[inline]{Mention further properties: \newline convex hull \newline \sum_i N_i = 1? \newline Images of the basis functions for visualization. }

For a given number of points v_1,\dots,v_n with parameters u_1,\dots,u_n we can then calculate the contributions n_{i,j}~:=~N_{j,d,\tau}(u_i) of each control point p_j to get the projection from the control--point--space into the object--space:


v_i = \sum_j n_{i,j} \cdot p_j = \vec{n}_i^{T} \vec{p}

or written for all points at the same time:


\vec{v} = \vec{N} \vec{p}

where \vec{N} is the n \times m transformation--matrix (later on called deformation matrix) for n object--space--points and m control--points.
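Building on the `n_basis` sketch above, the matrix \vec{N} can be assembled row by row. Again this is only an illustrative sketch, assuming the parameters u_i of the vertices are already known; the uniform knot vector in the usage example is our own choice:

```python
import numpy as np

def deformation_matrix(us, m, d, tau):
    """Assemble the n x m matrix N with entries n_{i,j} = N_{j,d,tau}(u_i),
    so that the deformed points follow from v = N @ p."""
    return np.array([[n_basis(j, d, u, tau) for j in range(m)] for u in us])

# usage sketch: n = 100 sample parameters, m = 7 control points, degree d = 3,
# with a uniform knot vector of m + d + 1 entries
us  = np.linspace(0.0, 1.0, 100, endpoint=False)
tau = np.linspace(0.0, 1.0, 7 + 3 + 1)
N   = deformation_matrix(us, 7, 3, tau)
```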

Why is \ac{FFD} a good deformation function?

\label{sec:back:ffdgood}

The usage of \ac{FFD} as a tool for manipulating meshes follows directly from the properties of the polynomials and the correspondence to the control points. Having only a few control points gives the user a nicer high--level interface, as she only needs to move these points and the model follows in an intuitive manner. The deformation is smooth as the underlying polynomial is smooth as well and affects as many vertices of the model as needed. Moreover the changes are always local, so there is no risk of a change the user cannot immediately see.

But there are also disadvantages to this approach. The user loses the ability to directly influence vertices, and even seemingly simple tasks such as creating a plateau can be difficult to achieve\cite[chapter~3.2]{hsu1991dmffd}\cite{hsu1992direct}.

These disadvantages led to the formulation of \acf{DM--FFD}\cite[chapter~3.3]{hsu1991dmffd}, in which the user directly interacts with the surface--mesh. All interactions are applied proportionally to the control--points that make up the parametrization of the interaction--point itself, yielding a smooth deformation of the surface at the point of interaction without seemingly arbitrarily scattered control--points. Moreover this increases the efficiency of an evolutionary optimization\cite{Menzel2006}, which we will use later on.

\begin{figure}[!ht] \includegraphics[width=\textwidth]{img/hsu_fig7.png} \caption{Figure 7 from \cite{hsu1991dmffd}.} \label{fig:hsu_fig7} \end{figure}

But this approach also has downsides, as can be seen in figure \ref{fig:hsu_fig7}: the tessellation of the invisible grid has a major impact on the deformation itself.

All in all, \ac{FFD} and \ac{DM--FFD} are still good ways to deform a high--polygon mesh despite these downsides.

What is evolutionary optimization?

\label{sec:back:evo}

In this thesis we use an evolutionary optimization strategy to solve the problem of finding the best parameters for our deformation. This approach, however, is very generic and we introduce it here in a broader sense.

\begin{algorithm}
\caption{An outline of evolutionary algorithms}
\label{alg:evo}
\begin{algorithmic}
\STATE t := 0;
\STATE initialize P(0) := \{\vec{a}_1(0),\dots,\vec{a}_\mu(0)\} \in I^\mu;
\STATE evaluate F(0) : \{\Phi(x) | x \in P(0)\};
\WHILE{c(F(t)) \neq \TRUE}
\STATE recombine: P'(t) := r(P(t));
\STATE mutate: P''(t) := m(P'(t));
\STATE evaluate F''(t) : \{\Phi(x) | x \in P''(t)\};
\STATE select: P(t + 1) := s(P''(t) \cup Q,\Phi);
\STATE t := t + 1;
\ENDWHILE
\end{algorithmic}
\end{algorithm}

The general shape of an evolutionary algorithm (adapted from \cite{back1993overview}) is outlined in Algorithm \ref{alg:evo}. Here, P(t) denotes the population of parameters in step t of the algorithm. The population contains \mu individuals a_i that fit the shape of the parameters we are looking for. Typically these are initialized by a random guess or just zero. Further on we need a so--called fitness--function $\Phi : I \mapsto M$\improvement{What are I and M?\newline Relation to genotype/phenotype} that maps each individual into a measurable space, along with a convergence--function $c : I \mapsto \mathbb{B}$ that terminates the optimization.

The main algorithm just repeats the following steps:

  • Recombine with a recombination--function r : I^{\mu} \mapsto I^{\lambda} to generate new individuals based on the parents' characteristics.
    This makes sure that the next guess is close to the old guess.
  • Mutate with a mutation--function m : I^{\lambda} \mapsto I^{\lambda} to introduce new effects that cannot be produced by mere recombination of the parents.
    Typically this just adds minor defects to individual members of the population, like adding random Gaussian noise or amplifying/dampening random parts.
  • Select with a selection--function s : (I^\lambda \cup I^{\mu + \lambda},\Phi) \mapsto I^\mu that selects from the previously generated I^\lambda children and optionally also the parents (denoted by the set Q in the algorithm) using the fitness--function \Phi. The result of this operation is the next population of \mu individuals.

All these functions can (and mostly do) have a lot of hidden parameters that can be changed over time. One can for example start off with a high mutation--rate that cools off over time (i.e. by lowering the variance of a Gaussian noise).
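To make the interplay of these operators concrete, the following is a minimal Python sketch of a (\mu + \lambda)--style loop in the shape of Algorithm \ref{alg:evo}. All names are our own placeholders, the operators are deliberately problem--agnostic, and the termination criterion c is simplified to a fixed generation cap:

```python
def evolve(mu, lam, phi, init, recombine, mutate, max_gen=100):
    """Minimal (mu + lambda) evolutionary loop; phi is the fitness
    function Phi (here: lower is better), init creates one individual."""
    population = [init() for _ in range(mu)]              # P(0)
    for t in range(max_gen):                              # stand-in for c(F(t))
        children = [mutate(recombine(population)) for _ in range(lam)]
        # select the mu best from the children plus the parents (Q = P(t))
        population = sorted(children + population, key=phi)[:mu]
    return population[0]                                  # best individual found
```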

Advantages of evolutionary algorithms

\label{sec:back:evogood}

The main advantage of evolutionary algorithms is the ability to find optima of general functions just with the help of a given fitness--function. This avoids most problems of simple gradient--based procedures, which often target the same error--function that measures the fitness, but can easily get stuck in local optima.

Components and techniques of evolutionary algorithms are specifically known to help with different problems arising in the domain of optimization\cite{weise2012evolutionary}. An overview of the typical problems is shown in figure \ref{fig:probhard}.

\begin{figure}[!ht] \includegraphics[width=\textwidth]{img/weise_fig3.png} \caption{Fig.~3. taken from \cite{weise2012evolutionary}} \label{fig:probhard} \end{figure}

Most of the advantages stem from the fact that a gradient--based procedure has only one point of observation from which it evaluates the next steps, whereas an evolutionary strategy starts with a population of guessed solutions. Because an evolutionary strategy modifies the solutions randomly, keeps the best and purges the worst, it can also pursue multiple different hypotheses at the same time, where local optima die out in the face of other, better candidates.

If an analytic best solution exists and is easily computable (e.g. because the error--function is convex), an evolutionary algorithm is not the right choice. Although both converge to the same solution, the analytic one is usually faster.

But in reality many problems have no analytic solution, because the problem is either not convex or there are so many parameters that an analytic solution (mostly meaning the equivalent of an exhaustive search) is computationally not feasible. Here evolutionary optimization has one more advantage, as one can at least obtain suboptimal solutions fast, which then refine over time.

Criteria for the evolvability of linear deformations

\label{sec:intro:rvi}

\improvement[inline]{Nomenclature. What is \vec{U}? Brief matrix representation of the problem \& reference back to the FFD chapter.}

Variability

In \cite{anrichterEvol} variability is defined as

V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n},

whereby \vec{U} is the n \times m deformation--matrix \unsure{Not (n\cdot d) \times m? Because of u,v,w?} used to map the $m$ control points onto the n vertices.

Given n = m, an identical number of control--points and vertices, this quotient will be 1 if all control points are independent of each other and the solution is to trivially move every control--point onto a target--point.

In practice the value of V(\vec{U}) is typically \ll 1, because there are only few control--points for many vertices, i.e. m \ll n.
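Assuming numpy's rank estimation is an acceptable stand--in for \textrm{rank}(\vec{U}), the criterion can be sketched as:

```python
import numpy as np

def variability(U):
    """V(U) = rank(U) / n for the n x m deformation matrix U."""
    return np.linalg.matrix_rank(U) / U.shape[0]
```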

Regularity

Regularity is defined\cite{anrichterEvol} as

R(\vec{U}) := \frac{1}{\kappa(\vec{U})} = \frac{\sigma_{min}}{\sigma_{max}}

where \sigma_{min} and \sigma_{max} are the smallest and largest singular values of the deformation--matrix \vec{U}.

As we deform the given object only based on the parameters as $\vec{p} \mapsto f(\vec{x} + \vec{U}\vec{p})$, this makes sure that $|\vec{Up}| \propto |\vec{p}|$ when \kappa(\vec{U}) \approx 1. The inversion of $\kappa(\vec{U})$ is only performed to map the criterion--range to [0..1], where 1 is the optimal value and 0 is the worst value.

This criterion should be characteristic for numeric stability on the one hand\cite[chapter 2.7]{golub2012matrix} and for the convergence speed of evolutionary algorithms on the other hand\cite{anrichterEvol}, as it is tied to the notion of locality\cite{weise2012evolutionary,thorhauer2014locality}.
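A corresponding sketch via the singular values of \vec{U}:

```python
import numpy as np

def regularity(U):
    """R(U) = 1 / kappa(U) = sigma_min / sigma_max, computed via the SVD."""
    sigma = np.linalg.svd(U, compute_uv=False)
    return sigma.min() / sigma.max()
```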

Improvement Potential

In contrast to the general nature of variability and regularity, which are agnostic of the fitness--function at hand, the third criterion should reflect a notion of potential with respect to that fitness--function.

As during optimization some kind of gradient g is available to suggest a direction worth pursuing, we use this to guess how much change can be achieved in the given direction.

The definition for an improvement potential P is\cite{anrichterEvol}:


P(\vec{U}) := 1 - \|(\vec{1} - \vec{U}\vec{U}^+)\vec{G}\|^2_F

given some approximate n \times d fitness--gradient \vec{G}, normalized to \|\vec{G}\|_F = 1, whereby \|\cdot\|_F denotes the Frobenius--Norm.
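A sketch using the Moore--Penrose pseudo--inverse \vec{U}^+ (here numpy's `pinv`); normalizing \vec{G} inside the function is our own convenience choice:

```python
import numpy as np

def improvement_potential(U, G):
    """P(U) = 1 - ||(1 - U U^+) G||_F^2 for a fitness gradient G."""
    G = G / np.linalg.norm(G)                  # normalize: ||G||_F = 1
    residual = G - U @ np.linalg.pinv(U) @ G   # part of G unreachable through U
    return 1.0 - np.linalg.norm(residual) ** 2
```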

Implementation of \acf{FFD}

\label{sec:impl}

The general formulation of B--Splines has two free parameters d and $\tau$ which must be chosen beforehand.

As we usually work with regular grids in our \ac{FFD}, we define $\tau$ statically as \tau_i = \nicefrac{i}{n}, whereby n is the number of control--points in that direction.

d defines the degree of the B--Spline--function (and thereby how often the function is continuously differentiable). For our purposes we fix d to 3, but give the formulas for the general case so they can be adapted quite freely.

Adaptation of \ac{FFD}

As we have established in Chapter \ref{sec:back:ffd}, we can define an \ac{FFD}--displacement as \begin{equation} \Delta_x(u) = \sum_i N_{i,d,\tau_i}(u) \Delta_x c_i \end{equation}

Note that we only sum up the $\Delta$--displacements in the control points c_i to get the change in position of the point we are interested in.

In this way every deformed vertex is defined by


\textrm{Deform}(v_x) = v_x + \Delta_x(u)

with u \in [0..1[ being the variable that connects the high--detailed vertex--mesh to the low--detailed control--grid. To actually calculate the new position of the vertex we first have to calculate the $u$--value for each vertex. This is achieved by finding out the parametrization of v in terms of c_i


v_x \overset{!}{=} \sum_i N_{i,d,\tau_i}(u) c_i

so we can minimize the error between those two:


\underset{u}{\argmin}\,Err(u,v_x) = \underset{u}{\argmin}\,\|v_x - \sum_i N_{i,d,\tau_i}(u) c_i\|^2_2

As this error--term is quadratic we differentiate the residual with respect to u, yielding


\begin{array}{rl}
\frac{\partial}{\partial u} & \left( v_x - \sum_i N_{i,d,\tau_i}(u) c_i \right) \\
= & - \sum_i \left( \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u) \right) c_i
\end{array}

and do a gradient--descent to approximate the value of u up to an \epsilon of 0.0001.

For this we use the Gauss--Newton algorithm\cite{gaussNewton}, as the solution to this problem may not be unique, because we usually have far more vertices than control points (\#v~\gg~\#c).
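A minimal sketch of this parametrization step, reusing `n_basis` from above; the derivative follows the closed form given earlier, and the initial guess and tolerance defaults are our own choices:

```python
def n_basis_deriv(i, d, u, tau):
    """Derivative of N_{i,d,tau}(u); 0/0 terms are treated as 0,
    matching the footnote for the d = 1 case."""
    left = (d / (tau[i + d] - tau[i]) * n_basis(i, d - 1, u, tau)
            if tau[i + d] > tau[i] else 0.0)
    right = (d / (tau[i + d + 1] - tau[i + 1]) * n_basis(i + 1, d - 1, u, tau)
             if tau[i + d + 1] > tau[i + 1] else 0.0)
    return left - right

def find_u(vx, c, d, tau, u0=0.5, eps=1e-4, max_iter=100):
    """Newton/Gauss--Newton iteration for Err(u) = vx - s(u) in 1D."""
    u = u0
    for _ in range(max_iter):
        s = sum(n_basis(i, d, u, tau) * ci for i, ci in enumerate(c))
        err = vx - s
        if abs(err) < eps:
            break
        ds = sum(n_basis_deriv(i, d, u, tau) * ci for i, ci in enumerate(c))
        if ds == 0.0:
            break                # flat spot: no useful Newton step
        u += err / ds            # Err'(u) = -s'(u), hence the sign
    return u
```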

Adaptation of \ac{FFD} for a 3D--Mesh

\label{3dffd}

This is a straightforward extension of the 1D--method presented in the last chapter, but this time things get a bit more complicated. As we have a 3--dimensional grid, we may have a different number of control--points in each direction.

Given n,m,o control points in $x,y,z$--direction, each point in the volume is defined by

V(u,v,w) = \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot C_{ijk}.

In this case we have three different B--Splines (one for each dimension) and also 3 variables u,v,w for each vertex we want to approximate.
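Such a trivariate evaluation of V(u,v,w) can be sketched by reusing `n_basis` from the 1D case; the (n, m, o, 3) array layout of the control points is our own convention:

```python
import numpy as np

def trivariate_point(u, v, w, C, d, tau_u, tau_v, tau_w):
    """Evaluate V(u,v,w) = sum_{i,j,k} N_i(u) N_j(v) N_k(w) * C[i,j,k]
    for an (n, m, o, 3) array C of control points."""
    n, m, o, _ = C.shape
    point = np.zeros(3)
    for i in range(n):
        Nu = n_basis(i, d, u, tau_u)
        if Nu == 0.0:
            continue                       # exploit the local support
        for j in range(m):
            Nv = n_basis(j, d, v, tau_v)
            if Nv == 0.0:
                continue
            for k in range(o):
                point += Nu * Nv * n_basis(k, d, w, tau_w) * C[i, j, k]
    return point
```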

Given a target vertex \vec{p}^* and an initial guess $\vec{p}=V(u,v,w)$ we define the error--function for the gradient--descent as:

Err(u,v,w,\vec{p}^{*}) = \vec{p}^{*} - V(u,v,w)

And the partial version for just one direction as

Err_x(u,v,w,\vec{p}^{*}) = p^{*}_x - \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x 

To solve this we take the partial derivatives, as before:


\begin{array}{rl}
    \displaystyle \frac{\partial Err_x}{\partial u} = & \displaystyle \frac{\partial}{\partial u} \left( p^{*}_x - \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x \right) \\
  = & \displaystyle - \sum_i \sum_j \sum_k N'_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x
\end{array}

The other partial derivatives follow the same pattern yielding the Jacobian:


J(Err(u,v,w)) = 
\left(
\begin{array}{ccc}
\frac{\partial Err_x}{\partial u} & \frac{\partial Err_x}{\partial v} & \frac{\partial Err_x}{\partial w} \\
\frac{\partial Err_y}{\partial u} & \frac{\partial Err_y}{\partial v} & \frac{\partial Err_y}{\partial w} \\
\frac{\partial Err_z}{\partial u} & \frac{\partial Err_z}{\partial v} & \frac{\partial Err_z}{\partial w}
\end{array}
\right)

\scriptsize
=
\left(
\begin{array}{ccc}
- \displaystyle \sum_{i,j,k} N'_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_x &- \displaystyle \sum_{i,j,k} N_{i}(u) N'_{j}(v) N_{k}(w) \cdot {c_{ijk}}_x & - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N'_{k}(w) \cdot {c_{ijk}}_x \\
- \displaystyle \sum_{i,j,k} N'_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_y &- \displaystyle \sum_{i,j,k} N_{i}(u) N'_{j}(v) N_{k}(w) \cdot {c_{ijk}}_y & - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N'_{k}(w) \cdot {c_{ijk}}_y \\
- \displaystyle \sum_{i,j,k} N'_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_z &- \displaystyle \sum_{i,j,k} N_{i}(u) N'_{j}(v) N_{k}(w) \cdot {c_{ijk}}_z & - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N'_{k}(w) \cdot {c_{ijk}}_z
\end{array}
\right)

With the Gauss--Newton algorithm we iterate via the formula

J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)

and use Cramer's rule to invert the small Jacobian and solve this system of linear equations.
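A sketch of this iteration, taking the spline evaluation V (e.g. `trivariate_point` from above) and the 3 \times 3 Jacobian J as callables; the column replacement implements Cramer's rule for the system as stated, and all names are our own:

```python
import numpy as np

def gauss_newton_3d(p_target, V, J, x0, eps=1e-4, max_iter=100):
    """Iterate J(Err) * delta = -Err and update (u, v, w) until convergence."""
    x = np.asarray(x0, dtype=float)            # current guess (u, v, w)
    for _ in range(max_iter):
        err = p_target - V(*x)                 # Err(u, v, w)
        if np.linalg.norm(err) < eps:
            break
        Jm = J(*x)                             # 3x3 Jacobian of Err
        det = np.linalg.det(Jm)
        delta = np.empty(3)
        for k in range(3):                     # Cramer's rule: swap in column k
            Jk = Jm.copy()
            Jk[:, k] = -err                    # right-hand side is -Err
            delta[k] = np.linalg.det(Jk) / det
        x = x + delta
    return x
```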

Deformation Grid

As mentioned in chapter \ref{sec:back:evo}, the way of choosing the representation to map the general problem (mesh--fitting/optimization in our case) into a parameter--space is very important for the quality and runtime of evolutionary algorithms\cite{Rothlauf2006}.

Because our control--points are arranged in a grid, we can accurately represent each vertex--point inside the grid's volume with proper B--Spline--coefficients in [0,1[ and --- as a consequence --- we have to embed our object into it (or create constant "dummy"--points outside).

The great advantages of B--Splines are the locality, the direct impact of each control point without a $1:1$--correlation, and a smooth deformation. While the advantages are great, the issues arise from the problem of deciding where to place the control--points and how many to use.

One would normally think that the more control--points you add, the better the result will be, but this is not the case for our B--Splines. Given any point $p$, only 2 \cdot (d-1) control--points contribute to the parametrization of that point^[Normally these are d-1 to each side, but at the boundaries the number gets increased to the inside to meet the required smoothness]. This means that a high resolution can have many control--points that do not contribute to any point on the surface and are thus completely irrelevant to the solution.

\begin{figure}[!ht] \begin{center} \includegraphics{img/enoughCP.png} \end{center} \caption[Example of a high resolution control--grid]{A high--resolution (10 \times 10) grid of control--points over a circle. Yellow/green points contribute to the parametrization, red points do not.\newline An example point (blue) is solely determined by the position of the green control--points.} \label{fig:enoughCP} \end{figure}

We illustrate this phenomenon in figure \ref{fig:enoughCP}, where the four red central points are not relevant for the parametrization of the circle.
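In practice such irrelevant control--points can be read off directly from the deformation matrix, as their columns are identically zero. A small sketch, assuming the matrix \vec{N} from the earlier sections:

```python
import numpy as np

def relevant_control_points(N, tol=1e-12):
    """Indices of control points that contribute to at least one vertex,
    i.e. columns of the deformation matrix with a non-zero entry."""
    return np.flatnonzero(np.abs(N).sum(axis=0) > tol)
```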

\unsure[inline]{mention that one can simply remove the zero--columns from \vec{D}?}

For our tests we chose different uniformly sized grids and added Gaussian noise to each control--point^[For the special case of the outer layer we only applied noise away from the object] to simulate different starting conditions.

\unsure[inline]{reference to DM--FFD?}

Scenarios for testing evolvability criteria using \acf{FFD}

\label{sec:eval}

Test Scenario: 1D Function Approximation

Optimization scenario

  • Plane -> template fit

Matching in 1D

  • Trivial

Particularities of the evaluation

  • The analytic solution is the single best one
    • Is the result also constant under noise?
    • Add a normalized 1--vector onto the gradient
      • A cone emerges

Test Scenario: 3D Function Approximation

Optimization scenario

  • Ball to Mario

Matching in 3D

  • Alternating optimization

Particularities of the optimization

  • The analytic solution is only valid up to the optimization of the first points
  • The criteria hold up well nevertheless

Evaluation of Scenarios

\label{sec:res}

Spearman/Pearson metrics

  • What are they?
  • Why should we care about them?
  • Why does monotonicity suffice?
  • Did we show this?
  • Statistics, figures, etc.!

Results of 1D Function Approximation

\begin{figure}[!ht] \includegraphics[width=\textwidth]{img/evolution1d/20171005-all_appended.png} \caption{Results 1D}

\end{figure}

Results of 3D Function Approximation

\begin{figure}[!ht] \includegraphics[width=\textwidth]{img/evolution3d/20171007_3dFit_all_append.png} \caption{Results 3D}

\end{figure}

Conclusion

\label{sec:dis}

HAHA .. as if -.-