one more page

This commit is contained in:
Nicole Dresselhaus 2017-09-27 22:06:39 +02:00
parent 1bbe1682c8
commit c0399a9499
Signed by: Drezil
GPG Key ID: 057D94F356F41E25
5 changed files with 70 additions and 10 deletions


@ -25,3 +25,28 @@
year={1991},
url={https://cs.brown.edu/research/pubs/theses/masters/1991/hsu.pdf},
}
@article{hsu1992direct,
title={Direct Manipulation of Free-Form Deformations},
author={Hsu, William M and Hughes, John F and Kaufman, Henry},
journal={Computer Graphics},
volume={26},
number={2},
year={1992},
url={http://graphics.cs.brown.edu/~jfh/papers/Hsu-DMO-1992/paper.pdf},
}
@inproceedings{Menzel2006,
author = {Menzel, Stefan and Olhofer, Markus and Sendhoff, Bernhard},
title = {Direct Manipulation of Free Form Deformation in Evolutionary Design Optimisation},
booktitle = {Proceedings of the 9th International Conference on Parallel Problem Solving from Nature},
series = {PPSN'06},
year = {2006},
isbn = {3-540-38990-3, 978-3-540-38990-3},
location = {Reykjavik, Iceland},
pages = {352--361},
numpages = {10},
url = {http://dx.doi.org/10.1007/11844297_36},
doi = {10.1007/11844297_36},
acmid = {2079770},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
}


@ -22,12 +22,12 @@
\setcounter{ContinuedFloat}{0}
\setcounter{float@type}{16}
\setcounter{lstnumber}{1}
\setcounter{NAT@ctr}{3}
\setcounter{NAT@ctr}{5}
\setcounter{AM@survey}{0}
\setcounter{r@tfl@t}{0}
\setcounter{subfigure}{0}
\setcounter{subtable}{0}
\setcounter{@todonotes@numberoftodonotes}{3}
\setcounter{@todonotes@numberoftodonotes}{1}
\setcounter{Item}{0}
\setcounter{Hfootnote}{2}
\setcounter{bookmark@seq@number}{16}


@ -78,24 +78,39 @@ model follows in an intuitive manner. The deformation is smooth as the underlyin
vertices of the model as needed. Moreover, the changes are always local, so the user does not risk any change that they cannot immediately see.
But there are also disadvantages to this approach. The user loses the ability to directly influence vertices, and even seemingly simple tasks such as
creating a plateau can be difficult to achieve\cite[chapter~3.2]{hsu1991dmffd}\todo{cite [24] aus \ref{anrichterEvol}}.
creating a plateau can be difficult to achieve\cite[chapter~3.2]{hsu1991dmffd}\cite{hsu1992direct}.
These disadvantages led to the formulation of \acf{DM-FFD}\cite[chapter~3.3]{hsu1991dmffd}, in which the user directly interacts with the surface-mesh.
All interactions will be applied proportionally to the control-points that make up the parametrization of the interaction-point
itself, yielding a smooth deformation of the surface *at* the surface without seemingly arbitrarily scattered control-points.
Moreover this increases the efficiency of an evolutionary optimization\todo{cite [25] aus \ref{anrichterEvol}}, which we will use later on.
Moreover this increases the efficiency of an evolutionary optimization\cite{Menzel2006}, which we will use later on.
But this approach also has downsides, as can be seen in \cite[figure~7]{hsu1991dmffd}\todo{insert figure here?}, since the tessellation of
the invisible grid has a major impact on the deformation itself.
All in all, \ac{FFD} and \ac{DM-FFD} are still good ways to deform a high-polygon mesh despite these downsides.
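The proportional distribution of an interaction onto the control-points can be sketched as follows; this is only a minimal illustration assuming the usual least-squares formulation, and the symbols $B$, $b_j$, $P_j$ and $\Delta p$ are introduced here rather than taken from the cited papers. A surface point $p$ is parametrized by its control-points $P_j$ through basis-weights $b_j$, and a desired displacement $\Delta p$ of $p$ is mapped back onto control-point displacements via the pseudoinverse $B^{+}$ of the weight-matrix:
$$ p = \sum_j b_j\,P_j, \qquad \Delta P = B^{+}\,\Delta p. $$
Control-points with a larger weight in the parametrization of the interaction-point thus receive a proportionally larger share of the change.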
## What is evaluational optimization?
## What is evolutionary optimization?
## Why is evo-Opt so cool?
The main advantage of evolutionary algorithms is the ability to find optima of general functions just with the help of a given
error-function (or fitness-function in this domain). This avoids the general pitfalls of gradient-based procedures, which often
target the same error-function as an evolutionary algorithm, but can get stuck in local optima.
This is mostly due to the fact that a gradient-based procedure has only one point of observation from which it evaluates the next
steps, whereas an evolutionary strategy starts with a population of guessed solutions. Because an evolutionary strategy modifies
the solutions randomly, keeps the best ones and purges the worst, it can also pursue multiple different hypotheses at the same time,
where local optima die out in the face of other, better candidates.
If an analytic best solution exists (e.g. because the error-function is convex), an evolutionary algorithm is not the right choice: although
both converge to the same solution, the analytic one is usually faster. But in reality many problems have no analytic solution because
they are not convex. Here evolutionary optimization has one more advantage, as you quickly get rough solutions which refine over time.
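As a rough illustration of the mechanism described above, here is a minimal sketch of such a population-based strategy; the function and parameter names and the toy error-function are hypothetical, and this is not the concrete optimizer used later in this work.

```python
import random

def evolve(error, population, generations=100, offspring=20, sigma=0.1):
    """Minimal (mu + lambda) evolution-strategy sketch: mutate randomly,
    keep the best solutions, purge the worst (lower error is better)."""
    mu = len(population)
    for _ in range(generations):
        children = []
        for _ in range(offspring):
            parent = random.choice(population)
            # modify the solution randomly (Gaussian mutation)
            children.append([x + random.gauss(0.0, sigma) for x in parent])
        # keep only the mu best candidates among parents and children
        population = sorted(population + children, key=error)[:mu]
    return population[0]

# usage on a toy, non-convex error-function with two local optima
start = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(10)]
best = evolve(lambda v: (v[0] ** 2 - 1) ** 2 + v[1] ** 2, start)
```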
## Evolvability criteria
- Condition number etc.

Binary file not shown.


@ -220,7 +220,7 @@ any change that a user cannot immediately see.
But there are also disadvantages to this approach. The user loses the
ability to directly influence vertices, and even seemingly simple tasks
such as creating a plateau can be difficult to
achieve\cite[chapter~3.2]{hsu1991dmffd}\todo{cite [24] aus \ref{anrichterEvol}}.
achieve\cite[chapter~3.2]{hsu1991dmffd}\cite{hsu1992direct}.
These disadvantages led to the formulation of
\acf{DM-FFD}\cite[chapter~3.3]{hsu1991dmffd}, in which the user directly
@ -229,8 +229,7 @@ proportionally to the control-points that make up the parametrization of
the interaction-point itself, yielding a smooth deformation of the
surface \emph{at} the surface without seemingly arbitrarily scattered
control-points. Moreover this increases the efficiency of an
evolutionary optimization\todo{cite [25] aus \ref{anrichterEvol}}, which
we will use later on.
evolutionary optimization\cite{Menzel2006}, which we will use later on.
But this approach also has downsides, as can be seen in
\cite[figure~7]{hsu1991dmffd}\todo{insert figure here?}, since the
@ -240,11 +239,32 @@ itself.
All in all, \ac{FFD} and \ac{DM-FFD} are still good ways to deform a
high-polygon mesh despite these downsides.
\section{What is evaluational
optimization?}\label{what-is-evaluational-optimization}
\section{What is evolutionary
optimization?}\label{what-is-evolutional-optimization}
\section{Why is evo-Opt so cool?}\label{wieso-ist-evo-opt-so-cool}
The main advantage of evolutionary algorithms is the ability to find
optima of general functions just with the help of a given error-function
(or fitness-function in this domain). This avoids the general pitfalls
of gradient-based procedures, which often target the same error-function
as an evolutionary algorithm, but can get stuck in local optima.
This is mostly due to the fact that a gradient-based procedure has only
one point of observation from which it evaluates the next steps, whereas
an evolutionary strategy starts with a population of guessed solutions.
Because an evolutionary strategy modifies the solutions randomly, keeps
the best ones and purges the worst, it can also pursue multiple
different hypotheses at the same time, where local optima die out in
the face of other, better candidates.
If an analytic best solution exists (e.g.~because the error-function is
convex), an evolutionary algorithm is not the right choice: although
both converge to the same solution, the analytic one is usually faster.
But in reality many problems have no analytic solution because they are
not convex. Here evolutionary optimization has one more advantage, as
you quickly get rough solutions which refine over time.
\section{Evolvability criteria}\label{evolvierbarkeitskriterien}
\begin{itemize}