---
fontsize: 11pt
---
\chapter*{How to read this Thesis}
As a guide through the nomenclature used in the formulas, we prepend this chapter.
Unless otherwise noted, the following holds:

- lowercase letters $x,y,z$
  refer to real variables and represent a point in 3D space.
- lowercase letters $u,v,w$
  refer to real variables between $0$ and $1$ used as coefficients in a 3D B-Spline grid.
- other lowercase letters
  refer to other scalar (real) variables.
- lowercase **bold** letters (e.g. $\vec{x},\vec{y}$)
  refer to 3D coordinates.
- uppercase **BOLD** letters (e.g. $\vec{D}, \vec{M}$)
  refer to matrices.
# Introduction
In this Master's Thesis we try to extend a previously proposed concept for predicting
the evolvability of \acf{FFD} given a deformation matrix\cite{anrichterEvol}.
In the original publication the author used randomly sampled points weighted with
\acf{RBF} to deform the mesh and defined three different criteria that can be
calculated prior to running an evolutionary optimization algorithm to assess the
quality and potential of such an optimization.
We will replicate the same setup on the same meshes, but use \acf{FFD} instead of
\acf{RBF} to create the deformation, and evaluate whether the evolvability criteria still
work as predictors given the different deformation scheme.
## What is \acf{FFD}?
First of all we have to establish how \ac{FFD} works and why it is a good
tool for deforming meshes in the first place. For simplicity we only summarize the
1D case from \cite{spitzmuller1996bezier} here and go into the extension to the 3D case in chapter \ref{3dffd}.

Given an arbitrary number of points $p_i$ along a line, we map a scalar
value $\tau_i \in [0,1[$ to each point with $\tau_i < \tau_{i+1}$ for all $i$.
Given a degree $d$ of the target polynomial, we define the basis functions $N_{i,d,\tau}(u)$ as follows:
\begin{equation} \label{eqn:ffd1d1}
N_{i,0,\tau}(u) = \begin{cases} 1, & u \in [\tau_i, \tau_{i+1}[ \\ 0, & \mbox{otherwise} \end{cases}
\end{equation}
and
\begin{equation} \label{eqn:ffd1d2}
N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}-\tau_i} N_{i,d-1,\tau}(u) + \frac{\tau_{i+d+1} - u}{\tau_{i+d+1}-\tau_{i+1}} N_{i+1,d-1,\tau}(u)
\end{equation}
If we now multiply every $p_i$ with the corresponding $N_{i,d,\tau}(u)$, we get the contribution of each
point $p_i$ to the final curve point, parameterized only by $u \in [0,1[$.
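
To make the recursion concrete, here is a minimal Python sketch that evaluates \eqref{eqn:ffd1d1} and \eqref{eqn:ffd1d2} directly; the example points $p_i$ and the uniform knot vector $\tau$ are illustrative assumptions, not values used elsewhere in this thesis.

```python
# Minimal sketch of the recursion from (ffd1d1)/(ffd1d2); values are assumptions.

def basis(i, d, u, tau):
    """B-Spline basis function N_{i,d,tau}(u)."""
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    # A 0/0 term is taken as 0 by convention (repeated knots).
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = (u - tau[i]) / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        right = ((tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1])
                 * basis(i + 1, d - 1, u, tau))
    return left + right

def curve_point(u, points, tau, d):
    """Evaluate s(u) = sum_i N_{i,d,tau}(u) * p_i."""
    return sum(basis(i, d, u, tau) * p for i, p in enumerate(points))

# Example: a cubic (d = 3) spline over five 1D points and a uniform knot vector.
d = 3
points = [0.0, 1.0, 3.0, 2.0, 4.0]
tau = [j / (len(points) + d) for j in range(len(points) + d + 1)]
print(curve_point(0.5, points, tau, d))
```
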
As can be seen from \eqref{eqn:ffd1d2} we only access the points $[i..i+d]$ for any given $i$^[one more for each recursive step.], which,
in combination with choosing $p_i$ and $\tau_i$ in order, gives us only a local influence of $d+1$ points.
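
This locality can be checked numerically with the sketch from above: moving a single control point only changes the curve inside the interior of the support of its basis function. The index $4$ and the sampling of $u$ below are arbitrary choices.

```python
# Local support check, reusing basis, curve_point, points, tau, d from above:
# moving p_4 changes the curve exactly where N_{4,d,tau}(u) is non-zero.
moved = list(points)
moved[4] += 1.0
lo, hi = tau[d], tau[len(points)]   # valid parameter domain of the curve
for k in range(100):
    u = lo + k / 100 * (hi - lo)
    changed = curve_point(u, moved, tau, d) != curve_point(u, points, tau, d)
    # Strict inequality on the left: N_{4,d,tau} is continuous for d >= 1
    # and therefore still 0 at tau[4] itself.
    assert changed == (tau[4] < u < tau[4 + d + 1])
```
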
We can even derive this equation straightforwardly for an arbitrary $N$^[*Warning:* in the case of $d=1$ the recursion formula yields a $0$ denominator, but $N$ is also $0$. The correct solution for this case is a derivative of $0$.]:
$$\frac{\partial}{\partial u} N_{i,d,\tau}(u) = \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u)$$
For a B-Spline
$$s(u) = \sum_{i} N_{i,d,\tau}(u) p_i$$
each application of this rule lowers the degree by one, so the $d$-th derivative of $s(u)$ is piecewise constant and all higher derivatives vanish: $\frac{\partial^{d+1}}{\partial u^{d+1}} s(u) = 0$.
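
Reusing `basis` and the example values from the sketch above, this derivative formula can be cross-checked against a central finite difference; the evaluation point and tolerance below are arbitrary choices.

```python
# Derivative of N_{i,d,tau}(u); 0/0 terms are taken as 0 (the d = 1 caveat).
def basis_derivative(i, d, u, tau):
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = d / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        right = d / (tau[i + d + 1] - tau[i + 1]) * basis(i + 1, d - 1, u, tau)
    return left - right

u, h = 0.51, 1e-6
for i in range(len(points)):
    analytic = basis_derivative(i, d, u, tau)
    numeric = (basis(i, d, u + h, tau) - basis(i, d, u - h, tau)) / (2 * h)
    assert abs(analytic - numeric) < 1e-6
```
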
Another interesting property of these recursive polynomials is that they are continuous (given $d \ge 1$), as every $p_i$ gets
blended in linearly between $\tau_i$ and $\tau_{i+d}$ and blended out linearly between $\tau_{i+1}$ and $\tau_{i+d+1}$,
as can be seen from the two coefficients in every step of the recursion.
### Why is \ac{FFD} a good deformation function?
The usage of \ac{FFD} as a tool for manipulating meshes follows directly from the properties of the polynomials and the correspondence to
the control points.
Having only a few control points gives the user a nicer high-level interface, as she only needs to move these points and the
model follows in an intuitive manner. The deformation is smooth, as the underlying polynomial is smooth as well, and affects as many
vertices of the model as needed. Moreover the changes are always local, so the user does not risk any changes she cannot immediately see.
But there are also disadvantages to this approach. The user loses the ability to directly influence vertices, and even seemingly simple tasks such as
creating a plateau can be difficult to achieve\cite[chapter~3.2]{hsu1991dmffd}\cite{hsu1992direct}.
These disadvantages led to the formulation of \acf{DM-FFD}\cite[chapter~3.3]{hsu1991dmffd}, in which the user directly interacts with the surface mesh.
All interactions are applied proportionally to the control points that make up the parametrization of the interaction point
itself, yielding a smooth deformation of the surface *at* the surface without seemingly arbitrarily scattered control points.
Moreover this increases the efficiency of an evolutionary optimization\cite{Menzel2006}, which we will use later on.
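
The core mechanism behind this, as we read \cite{hsu1992direct}, can be sketched as a least-squares problem: the desired displacement of the selected surface points is distributed onto the control points via the pseudoinverse of the matrix of basis-function values. The matrix and displacements below are made-up values for illustration.

```python
import numpy as np

# Rows: surface points the user drags; columns: control points.
# B[k, i] = N_{i,d,tau}(u_k) for the parameter u_k of surface point k.
B = np.array([[0.2, 0.6, 0.2, 0.0],
              [0.0, 0.3, 0.6, 0.1]])   # illustrative basis-function values
delta_s = np.array([1.0, -0.5])        # desired surface-point displacements

# The pseudoinverse yields the minimum-norm control-point update that
# realizes delta_s (exactly, if it is realizable at all).
delta_p = np.linalg.pinv(B) @ delta_s
print(delta_p)
print(B @ delta_p)                     # reproduces delta_s
```
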
But this approach also has downsides, as can be seen in \cite[figure~7]{hsu1991dmffd}\todo{insert the figure here?}, as the tessellation of
the invisible grid has a major impact on the deformation itself.
All in all, \ac{FFD} and \ac{DM-FFD} are still good ways to deform a high-polygon mesh despite these downsides.
## What is evolutionary optimization?
## Why is evolutionary optimization so cool?
The main advantage of evolutionary algorithms is the ability to find optima of general functions just with the help of a given
error function (or fitness function, as it is called in this domain). This avoids the general pitfalls of gradient-based procedures, which often
target the same error function as an evolutionary algorithm but can get stuck in local optima.

This is mostly due to the fact that a gradient-based procedure has only one point of observation from which it evaluates the next
steps, whereas an evolutionary strategy starts with a population of guessed solutions. Because an evolutionary strategy modifies
the solutions randomly, keeps the best ones and purges the worst, it can also pursue multiple different hypotheses at the same time,
where the local optima die out in the face of other, better candidates.

If an analytic best solution exists (e.g. because the error function is convex), an evolutionary algorithm is not the right choice: although
both converge to the same solution, the analytic one is usually faster. But in reality many problems have no analytic solution, because
the problem is not convex. Here evolutionary optimization has one more advantage, as one gets bad solutions fast, which refine over time.
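
To make the "modify randomly, keep the best, purge the worst" loop concrete, here is a minimal $(\mu+\lambda)$ evolution strategy sketch; the fitness function, population sizes and mutation strength are illustrative assumptions, not the setup used later in this thesis.

```python
import random

def evolve(fitness, dim, mu=10, lam=40, sigma=0.1, generations=200):
    """Minimal (mu + lambda) evolution strategy; lower fitness is better."""
    population = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        # Each child is a randomly perturbed copy of a random parent.
        children = [[x + random.gauss(0, sigma) for x in random.choice(population)]
                    for _ in range(lam)]
        # Keep the best mu individuals, purge the rest.
        population = sorted(population + children, key=fitness)[:mu]
    return population[0]

# Example: a non-convex error function with two global optima at (+-1, 0).
best = evolve(lambda x: (x[0] ** 2 - 1) ** 2 + x[1] ** 2, dim=2)
print(best)
```
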
## Evolvability criteria
- condition number etc. (see the sketch below)
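
As a placeholder for this outline item: the condition number of a matrix is the ratio of its largest to its smallest singular value; a minimal numpy sketch, where the deformation matrix is a random stand-in.

```python
import numpy as np

# Condition number of an (illustrative, random) deformation matrix U.
# Values near 1 indicate a well-conditioned deformation.
U = np.random.rand(20, 6)        # e.g. 20 vertices, 6 control points
s = np.linalg.svd(U, compute_uv=False)   # singular values, descending
kappa = s[0] / s[-1]
print(kappa, np.linalg.cond(U))  # the two agree (2-norm condition number)
```
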
# Main part
## What is FFD?
\label{3dffd}
- Definition
- Why Newton optimization?
- What follows from this?
## Introducing the scenarios
### 1D
#### Optimization scenario
- plane -> template fit
#### Matching in 1D
- trivial
#### Particularities of the evaluation
- the analytic solution is the single best one
- is the result constant under noise as well?
- add a normalized one-vector to the gradient
- a cone emerges
### 3D
#### Optimization scenario
- ball to Mario
#### Matching in 3D
- alternating optimization
#### Particularities of the optimization
- the analytic solution is only valid until the optimization of the first points
- the criteria are still good
# Evaluation
## Spearman/Pearson metrics
- What are they? (see the sketch after this list)
- Why should we care about them?
- Why is monotonicity sufficient?
- Did we show this?
- statistics, figures, etc.!
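
As a placeholder for this outline item, both metrics are available in scipy; the criterion and fitness values below are made-up data for illustration.

```python
import numpy as np
from scipy import stats

# Made-up data: one criterion value per deformation setup, and the fitness
# an evolutionary run achieved with that setup.
criterion = np.array([0.9, 1.4, 2.1, 3.3, 4.0])
fitness = np.array([0.80, 0.74, 0.71, 0.55, 0.50])

# Pearson measures linear correlation; Spearman measures only monotone
# correlation, which is the weaker requirement of the two.
print(stats.pearsonr(criterion, fitness))
print(stats.spearmanr(criterion, fitness))
```
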
# Conclusion
HAHA .. as if -.-