diff --git a/arbeit/img/enoughCP.png b/arbeit/img/enoughCP.png new file mode 100644 index 0000000..7f90931 Binary files /dev/null and b/arbeit/img/enoughCP.png differ diff --git a/arbeit/img/enoughCP.svg b/arbeit/img/enoughCP.svg new file mode 100644 index 0000000..bc6b349 --- /dev/null +++ b/arbeit/img/enoughCP.svg @@ -0,0 +1,676 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/arbeit/ma.md b/arbeit/ma.md index dccb653..b0b06d8 100644 --- a/arbeit/ma.md +++ b/arbeit/ma.md @@ -35,7 +35,7 @@ informatics (i.e. layouting of circuit boards or stacking of 3D--objects). Moreover these are typically not static environments but requirements shift over time or from case to case. -Evolutional algorithms cope especially well with these problem domains while +Evolutionary algorithms cope especially well with these problem domains while addressing all the issues at hand\cite{minai2006complex}. One of the main concerns in these algorithms is the formulation of the problems in terms of a genome and a fitness function. While one can typically use an arbitrary @@ -72,7 +72,7 @@ First we introduce different topics in isolation in Chapter \ref{sec:back}. We take an abstract look at the definition of \ac{FFD} for a one--dimensional line (in \ref{sec:back:ffd}) and discuss why this is a sensible deformation function (in \ref{sec:back:ffdgood}). -Then we establish some background--knowledge of evolutional algorithms (in +Then we establish some background--knowledge of evolutionary algorithms (in \ref{sec:back:evo}) and why this is useful in our domain (in \ref{sec:back:evogood}). In a third step we take a look at the definition of the different evolvability @@ -173,15 +173,15 @@ on the deformation itself. 
All in all \ac{FFD} and \ac{DM--FFD} are still good ways to deform a
high--polygon mesh despite the downsides.
 
-## What is evolutional optimization?
+## What is evolutionary optimization?
\label{sec:back:evo}
 
-In this thesis we are using an evolutional optimization strategy to solve the
+In this thesis we use an evolutionary optimization strategy to solve the
problem of finding the best parameters for our deformation. This approach,
however, is very generic and we introduce it here in a broader sense.
 
\begin{algorithm}
-\caption{An outline of evolutional algorithms}
+\caption{An outline of evolutionary algorithms}
\label{alg:evo}
\begin{algorithmic}
\STATE t := 0;
@@ -197,7 +197,7 @@ however, is very generic and we introduce it here in a broader sense.
\end{algorithmic}
\end{algorithm}
 
-The general shape of an evolutional algorithm (adapted from
+The general shape of an evolutionary algorithm (adapted from
\cite{back1993overview}) is outlined in Algorithm \ref{alg:evo}. Here, $P(t)$
denotes the population of parameters in step $t$ of the algorithm. The
population contains $\mu$ individuals $a_i$ that fit the shape of the parameters
@@ -227,16 +227,16 @@ can be changed over time. One can for example start off with a high
mutation--rate that cools off over time (i.e. by lowering the variance of the
Gaussian noise).
 
-## Advantages of evolutional algorithms
+## Advantages of evolutionary algorithms
\label{sec:back:evogood}
 
-The main advantage of evolutional algorithms is the ability to find optima of
+The main advantage of evolutionary algorithms is the ability to find optima of
general functions just with the help of a given fitness--function. This avoids
most problems of simple gradient--based procedures, which often target the same
-error--function which measures the fitness, as an evolutional algorithm, but can
+error--function that measures the fitness as an evolutionary algorithm does, but can
easily get stuck in local optima.
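
In code, the outline of Algorithm \ref{alg:evo} corresponds to the following
minimal sketch. The concrete population type, the operators and the termination
test are illustrative placeholders, not the implementation used in this thesis:

```python
import random

random.seed(1)

def evolve(mu, lam, fitness, mutate, recombine, init, terminated):
    # P(0): mu randomly initialised individuals
    t = 0
    population = [init() for _ in range(mu)]
    while not terminated(population, t):
        # build lambda offspring P'(t) via recombination and mutation
        parents = [recombine(*random.sample(population, 2)) for _ in range(lam)]
        offspring = [mutate(p) for p in parents]
        # (mu + lambda)-selection: keep the mu fittest of parents and offspring
        population = sorted(population + offspring, key=fitness, reverse=True)[:mu]
        t += 1
    return max(population, key=fitness)

# toy usage: maximise the fitness f(x) = -(x - 3)^2, optimum at x = 3
best = evolve(
    mu=10, lam=20,
    fitness=lambda x: -(x - 3.0) ** 2,
    mutate=lambda x: x + random.gauss(0.0, 0.5),
    recombine=lambda a, b: (a + b) / 2.0,
    init=lambda: random.uniform(-10.0, 10.0),
    terminated=lambda pop, t: t >= 100,
)
```

The $(\mu + \lambda)$--selection shown here keeps the best individuals across
generations; other selection schemes fit the same skeleton.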
-Components and techniques for evolutional algorithms are specifically known to
+Components and techniques for evolutionary algorithms are specifically known to
help with different problems arising in the domain of
optimization\cite{weise2012evolutionary}. An overview of the typical problems
is shown in figure \ref{fig:probhard}.
 
@@ -249,20 +249,20 @@ is shown in figure \ref{fig:probhard}.
 
Most of the advantages stem from the fact that a gradient--based procedure has
only one point of observation from where it evaluates the next steps, whereas an
-evolutional strategy starts with a population of guessed solutions. Because an
-evolutional strategy modifies the solution randomly, keeps the best solutions
+evolutionary strategy starts with a population of guessed solutions. Because an
+evolutionary strategy modifies the solutions randomly, keeps the best solutions
and purges the worst, it can also target multiple different hypotheses at the
same time, where the local optima die out in the face of other, better
candidates.
 
If an analytic best solution exists and is easily computable (i.e. because the
-error--function is convex) an evolutional algorithm is not the right choice.
+error--function is convex) an evolutionary algorithm is not the right choice.
Although both converge to the same solution, the analytic one is usually faster.
 
But in reality many problems have no analytic solution, because the problem is
either not convex or there are so many parameters that an analytic solution
(mostly meaning the equivalence to an exhaustive search) is computationally not
-feasible. Here evolutional optimization has one more advantage as you can at
+feasible. Here evolutionary optimization has one more advantage, as one can at
least get suboptimal solutions fast, which are then refined over time.
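
This difference between a single observation point and a population of guesses
can be illustrated numerically. The bumpy error--function below is a made--up
toy example, not one of the fitness--functions used later:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # bumpy error function: global minimum at x = 0, many local minima
    return x * x + 10.0 * (1.0 - np.cos(x))

def descend(x, lr=0.01, steps=2000):
    # plain gradient descent, using f'(x) = 2x + 10 sin(x)
    for _ in range(steps):
        x -= lr * (2.0 * x + 10.0 * np.sin(x))
    return x

# one point of observation: descent from x = 8 stalls in a local minimum
stuck = descend(8.0)

# a population of guessed solutions: some land in the global basin
population = rng.uniform(-10.0, 10.0, 50)
best = min((descend(x) for x in population), key=f)
```

Gradient descent from the single start is trapped in the local minimum near
$x \approx 5$, while the best of the $50$ descended guesses reaches the global
optimum at $x = 0$.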
## Criteria for the evolvability of linear deformations @@ -300,9 +300,9 @@ is only performed to map the criterion--range to $[0..1]$, whereas $1$ is the optimal value and $0$ is the worst value. This criterion should be characteristic for numeric stability on the on -hand\cite[chapter 2.7]{golub2012matrix} and for convergence speed of evolutional -algorithms on the other hand\cite{anrichterEvol} as it is tied to the notion of -locality\cite{weise2012evolutionary,thorhauer2014locality}. +hand\cite[chapter 2.7]{golub2012matrix} and for convergence speed of +evolutionary algorithms on the other hand\cite{anrichterEvol} as it is tied to +the notion of locality\cite{weise2012evolutionary,thorhauer2014locality}. ### Improvement Potential @@ -439,17 +439,55 @@ and use Cramers rule for inverting the small Jacobian and solving this system of linear equations. -## Parametrisierung sinnvoll? +## Deformation Grid -- Nachteile von Parametrisierung - - wie in kap. \ref{sec:back:evo} zu sehen, ist Parametrisierung - wichtig\cite{Rothlauf2006}. - - Parametrisierung zwar lokal, aber nicht 1:1 - - Deformation ist um einen Kontrollpunkt viel direkter zu steuern. - - => DM--FFD kann abhelfen, further study. - - Schlechte Parametrisierung sorgt dafür, dass CP u.U. nicht zur - Parametrisierung verwendet werden. +As mentioned in chapter \ref{sec:back:evo}, the way of choosing the +representation to map the general problem (mesh--fitting/optimization in our +case) into a parameter-space it very important for the quality and runtime of +evolutionary algorithms\cite{Rothlauf2006}. +Because our control--points are arranged in a grid, we can accurately represent +each vertex--point inside the grids volume with proper B--Spline--coefficients +between $[0,1[$ and --- as a consequence --- we have to embed our object into it +(or create constant "dummy"-points outside). 
+
+The great advantages of B--Splines are their locality, the direct (but not
+$1:1$) impact of each control--point, and a smooth deformation. The issues
+arise from deciding where to place the control--points and how many to use.
+
+One might expect that the more control--points we add, the better the result
+will be, but this is not the case for our B--Splines. Given any point $p$
+only the $2 \cdot (d-1)$ control--points contribute to the parametrization of
+that point^[Normally these are $d-1$ to each side, but at the boundaries the
+number gets increased towards the inside to meet the required smoothness].
+This means that a high resolution can have many control--points that do not
+contribute to any point on the surface and are thus completely irrelevant to
+the solution.
+
+\begin{figure}[!ht]
+\begin{center}
+\includegraphics{img/enoughCP.png}
+\end{center}
+\caption{A high--resolution ($10 \times 10$) grid of control--points over a
+circle. Yellow/green points contribute to the parametrization, red points do
+not.\newline
+An example point (blue) is solely determined by the position of the green
+control--points.}
+\label{fig:enoughCP}
+\end{figure}
+
+
+We illustrate this phenomenon in figure \ref{fig:enoughCP}, where the four red
+central points are not relevant for the parametrization of the circle.
+
+\unsure[inline]{mention that the zero--columns can simply be removed from
+$\vec{D}$?}
+
+For our tests we chose differently sized uniform grids and added Gaussian noise
+onto each control--point^[For the special case of the outer layer we only
+applied noise away from the object] to simulate different starting conditions.
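
The support argument above can be made concrete with a short sketch. It
assumes a clamped, uniform knot vector and the usual local support of
degree--$d$ B--Splines (the $d+1$ control--points of the containing knot
span); the function name and the numbers are illustrative:

```python
import numpy as np

def contributing(samples, n_cp, degree=3):
    # distinct knot values of a clamped, uniform knot vector
    spans = np.linspace(0.0, 1.0, n_cp - degree + 1)
    used = set()
    for u in samples:
        # index of the knot span containing u (clamped to the last span)
        s = min(int(np.searchsorted(spans, u, side="right")) - 1,
                n_cp - degree - 1)
        # only the control points s .. s+degree have non-zero basis at u
        used.update(range(s, s + degree + 1))
    return used

# a curve that only occupies the middle of the parameter domain
samples = np.linspace(0.3, 0.7, 50)
used = contributing(samples, n_cp=10)
unused = set(range(10)) - used
```

For a curve that only occupies the middle of the parameter domain, the
outermost control--points never receive a non--zero coefficient, analogous to
the red points of figure \ref{fig:enoughCP}.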
+
+\unsure[inline]{reference DM--FFD here?}
 
# Scenarios for testing evolvability criteria using \acf{FFD}
\label{sec:eval}
diff --git a/arbeit/ma.pdf b/arbeit/ma.pdf
index 1efacfd..d9901c9 100644
Binary files a/arbeit/ma.pdf and b/arbeit/ma.pdf differ
diff --git a/arbeit/ma.tex b/arbeit/ma.tex
index 10f1767..fbcef07 100644
--- a/arbeit/ma.tex
+++ b/arbeit/ma.tex
@@ -179,7 +179,7 @@ simulation --- or known hard algorithmic problems in informatics
these are typically not static environments but requirements shift
over time or from case to case.
 
-Evolutional algorithms cope especially well with these problem domains
+Evolutionary algorithms cope especially well with these problem domains
while addressing all the issues at hand\cite{minai2006complex}. One of
the main concerns in these algorithms is the formulation of the
problems in terms of a genome and a fitness function. While one can typically use
@@ -220,7 +220,7 @@ First we introduce different topics in isolation in Chapter \ref{sec:back}.
We take an abstract look at the definition of \ac{FFD} for a
one--dimensional line (in \ref{sec:back:ffd}) and discuss why this is a
sensible deformation function (in \ref{sec:back:ffdgood}). Then we
-establish some background--knowledge of evolutional algorithms (in
+establish some background--knowledge of evolutionary algorithms (in
\ref{sec:back:evo}) and why this is useful in our domain (in
\ref{sec:back:evogood}). In a third step we take a look at the
definition of the different evolvability criteria established in
@@ -328,18 +328,18 @@ major impact on the deformation itself.
 
All in all \ac{FFD} and \ac{DM--FFD} are still good ways to deform a
high--polygon mesh despite the downsides.
-\section{What is evolutional
-optimization?}\label{what-is-evolutional-optimization}
+\section{What is evolutionary
+optimization?}\label{what-is-evolutionary-optimization}
 
\label{sec:back:evo}
 
-In this thesis we are using an evolutional optimization strategy to
+In this thesis we use an evolutionary optimization strategy to
solve the problem of finding the best parameters for our deformation.
This approach, however, is very generic and we introduce it here in a
broader sense.
 
\begin{algorithm}
-\caption{An outline of evolutional algorithms}
+\caption{An outline of evolutionary algorithms}
\label{alg:evo}
\begin{algorithmic}
\STATE t := 0;
@@ -355,7 +355,7 @@ broader sense.
\end{algorithmic}
\end{algorithm}
 
-The general shape of an evolutional algorithm (adapted from
+The general shape of an evolutionary algorithm (adapted from
\cite{back1993overview}) is outlined in Algorithm \ref{alg:evo}. Here,
\(P(t)\) denotes the population of parameters in step \(t\) of the
algorithm. The population contains \(\mu\) individuals \(a_i\) that fit
@@ -395,19 +395,19 @@ that can be changed over time. One can for example start off with a
high mutation--rate that cools off over time (i.e.~by lowering the
variance of the Gaussian noise).
 
-\section{Advantages of evolutional
-algorithms}\label{advantages-of-evolutional-algorithms}
+\section{Advantages of evolutionary
+algorithms}\label{advantages-of-evolutionary-algorithms}
 
\label{sec:back:evogood}
 
-The main advantage of evolutional algorithms is the ability to find
+The main advantage of evolutionary algorithms is the ability to find
optima of general functions just with the help of a given
fitness--function. This avoids most problems of simple gradient--based
procedures, which often target the same error--function that measures
-the fitness, as an evolutional algorithm, but can easily get stuck in
+the fitness as an evolutionary algorithm does, but can easily get stuck in
local optima.
-Components and techniques for evolutional algorithms are specifically
+Components and techniques for evolutionary algorithms are specifically
known to help with different problems arising in the domain of
optimization\cite{weise2012evolutionary}. An overview of the typical
problems is shown in figure \ref{fig:probhard}.
 
@@ -420,21 +420,21 @@ problems is shown in figure \ref{fig:probhard}.
 
Most of the advantages stem from the fact that a gradient--based
procedure has only one point of observation from where it evaluates the
-next steps, whereas an evolutional strategy starts with a population of
-guessed solutions. Because an evolutional strategy modifies the solution
-randomly, keeps the best solutions and purges the worst, it can also
-target multiple different hypothesis at the same time where the local
-optima die out in the face of other, better candidates.
+next steps, whereas an evolutionary strategy starts with a population of
+guessed solutions. Because an evolutionary strategy modifies the
+solutions randomly, keeps the best solutions and purges the worst, it can
+also target multiple different hypotheses at the same time, where the
+local optima die out in the face of other, better candidates.
 
If an analytic best solution exists and is easily computable
-(i.e.~because the error--function is convex) an evolutional algorithm is
-not the right choice. Although both converge to the same solution, the
-analytic one is usually faster.
+(i.e.~because the error--function is convex) an evolutionary algorithm
+is not the right choice. Although both converge to the same solution,
+the analytic one is usually faster.
 
But in reality many problems have no analytic solution, because the
problem is either not convex or there are so many parameters that an
analytic solution (mostly meaning the equivalence to an exhaustive
-search) is computationally not feasible. 
Here evolutionary optimization
has one more advantage, as one can at least get suboptimal solutions
fast, which are then refined over time.
 
@@ -478,7 +478,7 @@ criterion--range to \([0..1]\), whereas \(1\) is the optimal value and
 
This criterion should be characteristic for numeric stability on the one
hand\cite[chapter 2.7]{golub2012matrix} and for convergence speed of
-evolutional algorithms on the other hand\cite{anrichterEvol} as it is
+evolutionary algorithms on the other hand\cite{anrichterEvol} as it is
tied to the notion of
locality\cite{weise2012evolutionary,thorhauer2014locality}.
 
@@ -617,7 +617,55 @@ With the Gauss--Newton algorithm we iterate via the formula
and use Cramer's rule for inverting the small Jacobian and solving this
system of linear equations.
 
-\section{Parametrisierung sinnvoll?}\label{parametrisierung-sinnvoll}
+\section{Deformation Grid}\label{deformation-grid}
+
+As mentioned in chapter \ref{sec:back:evo}, the way of choosing the
+representation to map the general problem (mesh--fitting/optimization in
+our case) into a parameter-space is very important for the quality and
+runtime of evolutionary algorithms\cite{Rothlauf2006}.
+
+Because our control--points are arranged in a grid, we can accurately
+represent each vertex--point inside the grid's volume with proper
+B--Spline--coefficients between \([0,1[\) and --- as a consequence ---
+we have to embed our object into it (or create constant ``dummy''-points
+outside).
+
+The great advantages of B--Splines are their locality, the direct (but
+not \(1:1\)) impact of each control--point, and a smooth deformation.
+The issues arise from deciding where to place the control--points and
+how many to use.
+
+One might expect that the more control--points we add, the
+better the result will be, but this is not the case for our B--Splines. 
+Given any point \(p\) only the \(2 \cdot (d-1)\) control--points
+contribute to the parametrization of that point\footnote{Normally these
+  are \(d-1\) to each side, but at the boundaries the number gets
+  increased towards the inside to meet the required smoothness}. This
+means that a high resolution can have many control--points that do not
+contribute to any point on the surface and are thus completely
+irrelevant to the solution.
+
+\begin{figure}[!ht]
+\begin{center}
+\includegraphics{img/enoughCP.png}
+\end{center}
+\caption{A high--resolution ($10 \times 10$) grid of control--points over a
+circle. Yellow/green points contribute to the parametrization, red points do
+not.\newline
+An example point (blue) is solely determined by the position of the green
+control--points.}
+\label{fig:enoughCP}
+\end{figure}
+
+We illustrate this phenomenon in figure \ref{fig:enoughCP}, where the
+four red central points are not relevant for the parametrization of the
+circle.
+
+\unsure[inline]{mention that the zero--columns can simply be removed from
+$\vec{D}$?}
+
+For our tests we chose differently sized uniform grids and added
+Gaussian noise onto each control--point\footnote{For the special case of
+  the outer layer we only applied noise away from the object} to
+simulate different starting conditions.
 
\begin{itemize}
\tightlist
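
The noisy starting grids described above can be generated along the following
lines. The grid layout, the noise level and the exact meaning of "away from
the object" are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grid(nx, ny, sigma=0.05):
    # uniform control grid over [0,1]^2
    gx, gy = np.meshgrid(np.linspace(0.0, 1.0, nx),
                         np.linspace(0.0, 1.0, ny), indexing="ij")
    grid = np.stack([gx, gy], axis=-1)
    noise = rng.normal(0.0, sigma, grid.shape)
    # outer layer: only move away from the embedded object (here: the centre)
    boundary = np.ones((nx, ny), dtype=bool)
    boundary[1:-1, 1:-1] = False
    outward = np.sign(grid - 0.5)
    noise[boundary] = np.abs(noise[boundary]) * outward[boundary]
    return grid + noise

grid = noisy_grid(10, 10)
```

Interior control--points receive unrestricted Gaussian offsets, while the
outer layer is only pushed outwards, so the embedded object stays inside the
grid volume.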