added evo-section
parent b2fd6972f4
commit cc50202cce
@@ -115,3 +115,14 @@ eprint = {https://doi.org/10.1137/0111030}
  organization={IEEE},
  url={http://www.graphics.uni-bielefeld.de/publications/cec15.pdf}
}
@article{back1993overview,
  title={An overview of evolutionary algorithms for parameter optimization},
  author={B{\"a}ck, Thomas and Schwefel, Hans-Paul},
  journal={Evolutionary Computation},
  volume={1},
  number={1},
  pages={1--23},
  year={1993},
  publisher={MIT Press},
  url={https://www.researchgate.net/profile/Hans-Paul_Schwefel/publication/220375001_An_Overview_of_Evolutionary_Algorithms_for_Parameter_Optimization/links/543663d00cf2dc341db30452.pdf}
}
51 arbeit/ma.md
@@ -174,7 +174,56 @@ mesh albeit the downsides.

## What is evolutionary optimization?
\label{sec:back:evo}

\change[inline]{Write this section}
In this thesis we use an evolutionary optimization strategy to solve the
problem of finding the best parameters for our deformation. This approach,
however, is very generic, and we introduce it here in a broader sense.

\begin{algorithm}
\caption{An outline of evolutionary algorithms}
\label{alg:evo}
\begin{algorithmic}
\STATE $t := 0$;
\STATE initialize $P(0) := \{\vec{a}_1(0),\dots,\vec{a}_\mu(0)\} \in I^\mu$;
\STATE evaluate $F(0) := \{\Phi(x) | x \in P(0)\}$;
\WHILE{$c(F(t)) \neq$ \TRUE}
\STATE recombine: $P'(t) := r(P(t))$;
\STATE mutate: $P''(t) := m(P'(t))$;
\STATE evaluate: $F''(t) := \{\Phi(x) | x \in P''(t)\}$;
\STATE select: $P(t + 1) := s(P''(t) \cup Q,\Phi)$;
\STATE $t := t + 1$;
\ENDWHILE
\end{algorithmic}
\end{algorithm}

The general shape of an evolutionary algorithm (adapted from
\cite{back1993overview}) is outlined in Algorithm \ref{alg:evo}. Here, $P(t)$
denotes the population of parameters in step $t$ of the algorithm. The
population contains $\mu$ individuals $\vec{a}_i$ that fit the shape of the
parameters we are looking for. Typically these are initialized by a random
guess or just zero. Furthermore we need a so-called *fitness-function*
$\Phi : I \mapsto M$ that maps each parameter-set into a measurable space,
along with a convergence-function $c : I \mapsto \mathbb{B}$ that terminates
the optimization.

The main algorithm then repeats the following steps:

- **Recombine** with a recombination-function $r : I^{\mu} \mapsto I^{\lambda}$ to
  generate new individuals based on the parents' characteristics.
  This makes sure that the next guess is close to the old guess.
- **Mutate** with a mutation-function $m : I^{\lambda} \mapsto I^{\lambda}$ to
  introduce new effects that cannot be produced by mere recombination of the
  parents.
  Typically this just adds minor defects to individual members of the population,
  like adding random gaussian noise or amplifying/dampening random parts.
- **Select** with a selection-function $s : (I^\lambda \cup I^{\mu + \lambda},\Phi) \mapsto I^\mu$ that
  selects from the previously generated $I^\lambda$ children and optionally also
  the parents (denoted by the set $Q$ in the algorithm) using the
  fitness-function $\Phi$. The result of this operation is the next population
  of $\mu$ individuals.
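The steps above can be sketched in Python. The concrete choices here — the sphere function as fitness, averaging recombination, fixed-variance gaussian mutation and a $(\mu + \lambda)$-selection — are assumptions made for the sake of the example and not taken from the text:

```python
import random

random.seed(42)
MU, LAMBDA, DIM = 10, 40, 5          # population size, offspring count, genes

def fitness(a):                      # stand-in fitness-function Phi: the
    return sum(x * x for x in a)     # sphere function, to be minimized

def converged(fvals):                # stand-in convergence-function c
    return min(fvals) < 1e-3

def recombine(parents):              # r: average two random parents per child
    children = []
    for _ in range(LAMBDA):
        p, q = random.sample(parents, 2)
        children.append([(x + y) / 2 for x, y in zip(p, q)])
    return children

def mutate(children, sigma=0.1):     # m: add gaussian noise to every gene
    return [[x + random.gauss(0.0, sigma) for x in c] for c in children]

def select(pool):                    # s: keep the mu fittest individuals
    return sorted(pool, key=fitness)[:MU]

population = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(MU)]
t = 0
while not converged([fitness(a) for a in population]) and t < 200:
    children = mutate(recombine(population))      # P'(t), then P''(t)
    population = select(children + population)    # Q = P(t), i.e. elitism
    t += 1
```

Including the parents in the selection pool (here `children + population`) corresponds to $Q = P(t)$ in the algorithm and guarantees the best fitness never degrades between generations.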

All these functions can (and mostly do) have a lot of hidden parameters that
can be changed over time. One can, for example, start off with a high
mutation-rate that cools off over time (i.e. by lowering the variance of a
gaussian noise). As the recombination and selection-steps are usually pure

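Such a cooling mutation-rate can be sketched as follows; the exponential schedule and its decay factor of `0.99` are assumptions for illustration, not values taken from the text:

```python
import random

def sigma(t, sigma0=1.0, decay=0.99):
    """Mutation-rate at generation t: the variance of the gaussian
    noise cools off exponentially (the decay factor is an assumption)."""
    return sigma0 * decay ** t

def mutate(children, t):
    """Gaussian mutation whose noise shrinks as the optimization cools,
    so early generations explore while late ones fine-tune."""
    s = sigma(t)
    return [[x + random.gauss(0.0, s) for x in child] for child in children]
```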
## Advantages of evolutionary algorithms
\label{sec:back:evogood}

BIN arbeit/ma.pdf
Binary file not shown.
@@ -148,20 +148,20 @@ Unless otherwise noted the following holds:
\tightlist
\item
  lowercase letters \(x,y,z\)\\
  refer to real variables and represent a point in 3D--Space.
\item
  lowercase letters \(u,v,w\)\\
  refer to real variables between \(0\) and \(1\) used as coefficients
  in a 3D B--Spline grid.
\item
  other lowercase letters\\
  refer to other scalar (real) variables.
\item
  lowercase \textbf{bold} letters (e.g. \(\vec{x},\vec{y}\))\\
  refer to 3D coordinates
\item
  uppercase \textbf{BOLD} letters (e.g. \(\vec{D}, \vec{M}\))\\
  refer to Matrices
\end{itemize}

\chapter{Introduction}\label{introduction}

@@ -331,7 +331,68 @@ optimization?}\label{what-is-evolutional-optimization}

\label{sec:back:evo}

\change[inline]{Write this section} In this thesis we use an
evolutionary optimization strategy to solve the problem of finding the
best parameters for our deformation. This approach, however, is very
generic, and we introduce it here in a broader sense.

\begin{algorithm}
\caption{An outline of evolutionary algorithms}
\label{alg:evo}
\begin{algorithmic}
\STATE $t := 0$;
\STATE initialize $P(0) := \{\vec{a}_1(0),\dots,\vec{a}_\mu(0)\} \in I^\mu$;
\STATE evaluate $F(0) := \{\Phi(x) | x \in P(0)\}$;
\WHILE{$c(F(t)) \neq$ \TRUE}
\STATE recombine: $P'(t) := r(P(t))$;
\STATE mutate: $P''(t) := m(P'(t))$;
\STATE evaluate: $F''(t) := \{\Phi(x) | x \in P''(t)\}$;
\STATE select: $P(t + 1) := s(P''(t) \cup Q,\Phi)$;
\STATE $t := t + 1$;
\ENDWHILE
\end{algorithmic}
\end{algorithm}

The general shape of an evolutionary algorithm (adapted from
\cite{back1993overview}) is outlined in Algorithm \ref{alg:evo}. Here,
\(P(t)\) denotes the population of parameters in step \(t\) of the
algorithm. The population contains \(\mu\) individuals \(\vec{a}_i\) that
fit the shape of the parameters we are looking for. Typically these are
initialized by a random guess or just zero. Furthermore we need a
so-called \emph{fitness-function} \(\Phi : I \mapsto M\) that maps each
parameter-set into a measurable space, along with a convergence-function
\(c : I \mapsto \mathbb{B}\) that terminates the optimization.

The main algorithm then repeats the following steps:

\begin{itemize}
\tightlist
\item
  \textbf{Recombine} with a recombination-function
  \(r : I^{\mu} \mapsto I^{\lambda}\) to generate new individuals based
  on the parents' characteristics.\\
  This makes sure that the next guess is close to the old guess.
\item
  \textbf{Mutate} with a mutation-function
  \(m : I^{\lambda} \mapsto I^{\lambda}\) to introduce new effects that
  cannot be produced by mere recombination of the parents.\\
  Typically this just adds minor defects to individual members of the
  population, like adding random gaussian noise or amplifying/dampening
  random parts.
\item
  \textbf{Select} with a selection-function
  \(s : (I^\lambda \cup I^{\mu + \lambda},\Phi) \mapsto I^\mu\) that
  selects from the previously generated \(I^\lambda\) children and
  optionally also the parents (denoted by the set \(Q\) in the
  algorithm) using the fitness-function \(\Phi\). The result of this
  operation is the next population of \(\mu\) individuals.
\end{itemize}

All these functions can (and mostly do) have a lot of hidden parameters
that can be changed over time. One can, for example, start off with a
high mutation-rate that cools off over time (i.e.~by lowering the
variance of a gaussian noise). As the recombination and selection-steps
are usually pure

\section{Advantages of evolutionary
algorithms}\label{advantages-of-evolutional-algorithms}
