<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<title>
Evaluation of the Performance of Randomized FFD Control Grids: Master Thesis
</title>
<link rel="stylesheet" href="./template/revealjs/css/reveal.css">
<!-- Theme of AG CG (derived from reveal's white.css) -->
<link rel="stylesheet" href="./template/agcg.css">
<!-- font needed for chalkboard buttons -->
<link rel="stylesheet" href="./template/font-awesome/css/font-awesome.min.css">
<!-- Setup code formatting with highlight.js -->
<link rel="stylesheet" href="./template/revealjs/css/highlight/xcode.css">
<!-- stuff for quiz -->
<script src="https://www.gstatic.com/charts/loader.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script src="https://davidshimjs.github.io/qrcodejs/qrcode.min.js"></script>
<!-- Printing and PDF exports -->
<script>
var link = document.createElement( 'link' );
link.rel = 'stylesheet';
link.type = 'text/css';
link.href = window.location.search.match( /print-pdf/gi ) ? './template/revealjs/css/print/pdf.css' : './template/revealjs/css/print/paper.css';
document.getElementsByTagName( 'head' )[0].appendChild( link );
// MARIO version
if (window.location.search.match( /print-pdf/gi ))
{
var link = document.createElement( 'link' );
link.rel = 'stylesheet';
link.type = 'text/css';
link.href = './template/agcg-pdf.css';
document.getElementsByTagName( 'head' )[0].appendChild( link );
}
</script>
<!-- MathJax config -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
jax: ["input/TeX","output/HTML-CSS"],
TeX: {
Macros: {
R: "{\\mathrm{{I}\\kern-.15em{R}}}",
abs: ['\\left\\lvert #1 \\right\\rvert', 1],
norm: ['\\left\\Vert #1 \\right\\Vert', 1],
iprod: ['\\left\\langle #1 \\right\\rangle', 1],
vec: ['\\mathbf{#1}', 1],
mat: ['\\mathbf{#1}', 1],
trans: ['{#1}\\mkern-1mu^{\\mathsf{T}}', 1],
matrix: ['\\begin{bmatrix} #1 \\end{bmatrix}', 1],
vector: ['\\begin{pmatrix} #1 \\end{pmatrix}', 1],
of: ['\\mkern{-2mu}\\left( #1 \\right)', 1]
}
},
"HTML-CSS": {
styles: {
".reveal section .MathJax_Display": { margin: "0.5em 0em" },
".reveal table .MathJax_Display": { margin: "0em" }
},
scale: 95
}
});
</script>
</head>
<body>
<!-- here come the slides -->
<div class="reveal">
<div class="slides">
<!-- Title slide -->
<section class="white-on-blue">
<div class="title"> Evaluation of the Performance of Randomized FFD Control Grids </div>
<div class="subtitle"> Master Thesis </div>
<div class="author"> Stefan Dresselhaus </div>
<div class="affiliation"> Graphics &amp; Geometry Group </div>
</section>
<!-- Table of Contents -->
<!-- all the slides from markdown document: DO NOT INDENT THE body LINE!!! -->
<section id="introduction" class="slide level1">
<h1>Introduction</h1>
<p>Many modern industrial design processes require advanced optimization methods due to the increased complexity resulting from more and more degrees of freedom as methods are refined or replaced. Examples for this are physical domains like aerodynamics (e.g. drag) or fluid dynamics (e.g. throughput of liquid) — where the complexity increases with the temporal and spatial resolution of the simulation — or known hard algorithmic problems in computer science (e.g. layouting of circuit boards or stacking of 3D objects). Moreover, these are typically not static environments; requirements shift over time or from case to case.</p>
<p>Evolutionary algorithms cope especially well with these problem domains while addressing all the issues at hand. One of the main concerns in these algorithms is the formulation of the problem in terms of a <em>genome</em> and a <em>fitness function</em>. While one can typically use an arbitrary cost function as the <em>fitness function</em> (e.g. amount of drag, amount of space, etc.), the translation of the problem domain into a simple parametric representation (the <em>genome</em>) can be challenging.</p>
<p>This translation is often necessary, as the target of the optimization may have too many degrees of freedom for a reasonable computation. In the example of an aerodynamic simulation of drag on an object, such object designs tend to have a high number of vertices to adhere to various requirements (visual, practical, physical, etc.). A simpler representation of the same object in only a few parameters that manipulate the whole in a sensible manner is desirable, as this often decreases the computation time significantly.</p>
<p>Additionally, one can exploit the fact that drag in this case is especially sensitive to non-smooth surfaces, so a smooth local manipulation of the surface as a whole is more advantageous than a merely random manipulation of individual vertices.</p>
<p>The quality of such a low-dimensional representation is, in biological evolution, strongly tied to the notion of <em>evolvability</em>, as the parametrization of the problem has serious implications for the convergence speed and the quality of the solution. However, there is no consensus on how <em>evolvability</em> is defined, and its meaning varies from context to context. As a consequence there is a need for measurable criteria, so that we are able to compare different representations and learn from and improve upon them.</p>
<p>One example of such a general representation of an object is to generate random points and represent the vertices of an object as distances to these points — for example via radial basis functions (RBF). If one (or the algorithm) moves such a point, the object gets deformed only locally (due to the local support of the RBF). As this results in a simple mapping from the parameter space onto the object, one can try out different representations of the same object and evaluate which criteria may be suited to describe this notion of <em>evolvability</em>. This is exactly what Richter et al. have done.</p>
<p>As we transfer the results of Richter et al. from using RBF as a representation to manipulate geometric objects to the use of FFD, we will use the same definition of <em>evolvability</em> the original author used, namely <em>regularity</em>, <em>variability</em>, and <em>improvement potential</em>. We introduce these terms in detail in Chapter . In the original publication the author could show a correlation between these evolvability criteria and the quality and convergence speed of such an optimization.</p>
<p>We will replicate the same setup on the same objects, but use FFD instead of RBF to create a local deformation near the control points, and evaluate whether the evolvability criteria still work as a predictor for the <em>evolvability</em> of the representation given the different deformation scheme, as suspected in .</p>
<p>First we introduce the different topics in isolation in Chapter . We take an abstract look at the definition of FFD for a one-dimensional line (in ) and discuss why this is a sensible deformation function (in ). Then we establish some background knowledge on evolutionary algorithms (in ) and why they are useful in our domain (in ), followed by the definition of the different evolvability criteria established in (in ).</p>
<p>In Chapter we take a look at our implementation of FFD and the adaptation for 3D meshes that was used. Next, in Chapter , we describe the different scenarios we use to evaluate the different evolvability criteria, incorporating all aspects introduced in Chapter . Following that, we evaluate the results in Chapter , with further discussion, summary, and outlook in Chapter .</p>
</section>
<section id="background" class="slide level1">
<h1>Background</h1>
<section id="what-is" class="level2">
<h2>What is FFD?</h2>
<p>First of all we have to establish how FFD works and why it is a good tool for deforming geometric objects (especially meshes in our case) in the first place. For simplicity we only summarize the 1D case here and go into the extension to the 3D case in chapter .</p>
<p>The main idea of FFD is to create a function <span class="math inline">\(s : [0,1[^d \mapsto \mathbb{R}^d\)</span> that spans a certain part of a vector space and is only linearly parametrized by some special control points <span class="math inline">\(p_i\)</span> and a constant attribution function <span class="math inline">\(a_i(u)\)</span>, so <span class="math display">\[
s(\vec{u}) = \sum_i a_i(\vec{u}) \vec{p_i}
\]</span> can be thought of as a representation of the inside of the convex hull generated by the control points, where each position inside can be accessed by the right <span class="math inline">\(u \in [0,1[^d\)</span>.</p>
<p>In the one-dimensional example in figure , the control points are indicated as red dots and the colour gradient hints at the <span class="math inline">\(u\)</span> values ranging from <span class="math inline">\(0\)</span> to <span class="math inline">\(1\)</span>.</p>
<p>We now define a B-spline by the following:<br />
Given an arbitrary number of points <span class="math inline">\(p_i\)</span> along a line, we map a scalar value <span class="math inline">\(\tau_i \in [0,1[\)</span> to each point with <span class="math inline">\(\tau_i &lt; \tau_{i+1} \forall i\)</span> according to the position of <span class="math inline">\(p_i\)</span> on said line. Additionally, given a degree <span class="math inline">\(d\)</span> of the target polynomial, we define the curve <span class="math inline">\(N_{i,d,\tau_i}(u)\)</span> as follows:</p>
<span class="math display">\[\begin{equation} \label{eqn:ffd1d1}
N_{i,0,\tau}(u) = \begin{cases} 1, &amp; u \in [\tau_i, \tau_{i+1}[ \\ 0, &amp; \mbox{otherwise} \end{cases}
\end{equation}\]</span>
<p>and <span class="math display">\[\begin{equation} \label{eqn:ffd1d2}
N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}-\tau_i} N_{i,d-1,\tau}(u) + \frac{\tau_{i+d+1} - u}{\tau_{i+d+1}-\tau_{i+1}} N_{i+1,d-1,\tau}(u)
\end{equation}\]</span></p>
<p>If we now multiply every <span class="math inline">\(p_i\)</span> with the corresponding <span class="math inline">\(N_{i,d,\tau_i}(u)\)</span>, we get the contribution of each point <span class="math inline">\(p_i\)</span> to the final curve point, parametrized only by <span class="math inline">\(u \in [0,1[\)</span>. As can be seen from , we only access the points <span class="math inline">\([p_i..p_{i+d}]\)</span> for any given <span class="math inline">\(i\)</span><a href="#/fn1" class="footnote-ref" id="fnref1"><sup>1</sup></a>, which gives us, in combination with choosing <span class="math inline">\(p_i\)</span> and <span class="math inline">\(\tau_i\)</span> in order, only a local interference of <span class="math inline">\(d+1\)</span> points.</p>
<p>We can even differentiate this equation straightforwardly for an arbitrary <span class="math inline">\(N\)</span><a href="#/fn2" class="footnote-ref" id="fnref2"><sup>2</sup></a>:</p>
<p><span class="math display">\[\frac{\partial}{\partial u} N_{i,d,r}(u) = \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u)\]</span></p>
<p>For a B-spline <span class="math display">\[s(u) = \sum_{i} N_{i,d,\tau_i}(u) p_i\]</span> repeated differentiation yields <span class="math inline">\(\left(\frac{\partial}{\partial u}\right)^{d+1} s(u) = 0\)</span>.</p>
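<p>To make the recursion and its derivative concrete, here is a small sketch in Python. The function names <code>N</code> and <code>dN</code> and the uniform knot vector are our own illustration; the thesis itself gives no code.</p>

```python
def N(i, d, u, tau):
    """Cox-de Boor recursion for the B-spline basis N_{i,d,tau}(u)."""
    if d == 0:
        # Indicator function on the knot interval [tau_i, tau_{i+1}[.
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    # By convention a term over a zero-length knot interval contributes 0.
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = (u - tau[i]) / (tau[i + d] - tau[i]) * N(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        right = (tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1]) * N(i + 1, d - 1, u, tau)
    return left + right

def dN(i, d, u, tau):
    """Derivative of N_{i,d,tau}(u), following the formula above."""
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = d / (tau[i + d] - tau[i]) * N(i, d - 1, u, tau)
    if tau[i + d + 1] != tau[i + 1]:
        right = d / (tau[i + d + 1] - tau[i + 1]) * N(i + 1, d - 1, u, tau)
    return left - right
```

<p>With uniform knots, summing the basis functions in the interior of the knot range illustrates the partition-of-unity property discussed below, and the derivatives sum to zero there.</p>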
<p>Another interesting property of these recursive polynomials is that they are continuous (given <span class="math inline">\(d \ge 1\)</span>), as every <span class="math inline">\(p_i\)</span> gets blended in between <span class="math inline">\(\tau_i\)</span> and <span class="math inline">\(\tau_{i+d}\)</span> and blended out between <span class="math inline">\(\tau_{i+1}\)</span> and <span class="math inline">\(\tau_{i+d+1}\)</span>, as can be seen from the two coefficients in every step of the recursion.</p>
<p>This means that all changes are only a local linear combination of the control points <span class="math inline">\(p_i\)</span> to <span class="math inline">\(p_{i+d+1}\)</span>, and consequently this yields the convex-hull property of B-splines — meaning that no matter how we choose our coefficients, the resulting points all have to lie inside the convex hull of the control points.</p>
<p>For a given point <span class="math inline">\(s_i\)</span> we can then calculate the contributions <span class="math inline">\(u_{i,j}~:=~N_{j,d,\tau}(u_i)\)</span> of each control point <span class="math inline">\(p_j\)</span> to get the projection from the control-point space into the object space: <span class="math display">\[
s_i = \sum_j u_{i,j} \cdot p_j = \vec{n}_i^{T} \vec{p}
\]</span> or written for all points at the same time: <span class="math display">\[
\vec{s} = \vec{U} \vec{p}
\]</span> where <span class="math inline">\(\vec{U}\)</span> is the <span class="math inline">\(n \times m\)</span> transformation matrix (later on called <strong>deformation matrix</strong>) for <span class="math inline">\(n\)</span> object-space points and <span class="math inline">\(m\)</span> control points.</p>
<p>Furthermore, B-spline basis functions form a partition of unity for all but the first and last <span class="math inline">\(d\)</span> control points. Therefore we later on use the border points <span class="math inline">\(d+1\)</span> times, such that <span class="math inline">\(\sum_j u_{i,j} p_j = p_i\)</span> for these points.</p>
<p>The locality of the influence of each control point and the partition of unity were beautifully pictured by Brunet; we include the picture here as figure .</p>
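<p>The matrix form <span class="math inline">\(\vec{s} = \vec{U}\vec{p}\)</span> can be sketched as follows (our own Python/numpy, not from the thesis; the basis function is restated compactly so the sketch is self-contained):</p>

```python
import numpy as np

def N(i, d, u, tau):
    # Cox-de Boor recursion; terms over zero-length knot intervals contribute 0.
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    a = (u - tau[i]) / (tau[i + d] - tau[i]) if tau[i + d] != tau[i] else 0.0
    b = (tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1]) if tau[i + d + 1] != tau[i + 1] else 0.0
    return a * N(i, d - 1, u, tau) + b * N(i + 1, d - 1, u, tau)

def deformation_matrix(us, m, d, tau):
    """n x m matrix U with U[i, j] = N_{j,d,tau}(u_i) for parameters u_i."""
    return np.array([[N(j, d, u, tau) for j in range(m)] for u in us])

# Moving the control points p by delta_p then moves the curve points linearly:
#   delta_s = U @ delta_p
```

<p>Note that each row of <code>U</code> has at most <span class="math inline">\(d+1\)</span> non-zero entries, which is exactly the locality discussed above.</p>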
<section id="why-is-a-good-deformation-function" class="level3">
<h3>Why is FFD a good deformation function?</h3>
<p>The usage of FFD as a tool for manipulating meshes follows directly from the properties of the polynomials and the correspondence to the control points. Having only a few control points gives the user a nicer high-level interface, as she only needs to move these points and the model follows in an intuitive manner. The deformation is smooth, as the underlying polynomial is smooth as well, and it affects as many vertices of the model as needed. Moreover, the changes are always local, so the user does not risk any change she cannot immediately see.</p>
<p>But there are also disadvantages to this approach. The user loses the ability to directly influence vertices, and even seemingly simple tasks such as creating a plateau can be difficult to achieve.</p>
<p>These disadvantages led to the formulation of direct manipulation FFD (DM-FFD), in which the user directly interacts with the surface mesh. All interactions are applied proportionally to the control points that make up the parametrization of the interaction point itself, yielding a smooth deformation of the surface <em>at</em> the surface without seemingly arbitrarily scattered control points. Moreover, this increases the efficiency of an evolutionary optimization, which we will use later on.</p>
<p>But this approach also has downsides, as can be seen in figure : the tessellation of the invisible grid has a major impact on the deformation itself.</p>
<p>All in all, FFD and DM-FFD are still good ways to deform a high-polygon mesh despite these downsides.</p>
</section>
</section>
<section id="what-is-evolutionary-optimization" class="level2">
<h2>What is evolutionary optimization?</h2>
<p>In this thesis we use an evolutionary optimization strategy to solve the problem of finding the best parameters for our deformation. This approach, however, is very generic, and we introduce it here in a broader sense.</p>
<p>The general shape of an evolutionary algorithm (adapted from ) is outlined in Algorithm . Here, <span class="math inline">\(P(t)\)</span> denotes the population of parameters in step <span class="math inline">\(t\)</span> of the algorithm. The population contains <span class="math inline">\(\mu\)</span> individuals <span class="math inline">\(a_i\)</span> from the set of possible individuals <span class="math inline">\(I\)</span> that fit the shape of the parameters we are looking for. Typically these are initialized by a random guess or just zero. Furthermore, we need a so-called <em>fitness function</em> <span class="math inline">\(\Phi : I \mapsto M\)</span> that maps each individual to a measurable space <span class="math inline">\(M\)</span> (usually <span class="math inline">\(M = \mathbb{R}\)</span>), along with a convergence function <span class="math inline">\(c : I \mapsto \mathbb{B}\)</span> that terminates the optimization.</p>
<p>Biologically speaking, the set <span class="math inline">\(I\)</span> corresponds to the set of possible <em>genotypes</em>, while <span class="math inline">\(M\)</span> represents the possible observable <em>phenotypes</em>. <em>Genotypes</em> define all initial properties of an individual, but their properties are not directly observable. It is the genes that evolve over time (and thus correspond to the parameters we are tweaking in our algorithms or the genes in nature), but only the <em>phenotypes</em> make certain behaviour observable (algorithmically through our <em>fitness function</em>, biologically by the ability to survive and produce offspring). Any individual in our algorithm thus experiences a biologically motivated life cycle: inheriting genes from the parents, being modified by mutations, performing according to a fitness metric, and generating offspring based on this. Therefore each iteration of the while-loop above is also often called a generation.</p>
<p>One should note that there is a subtle difference between the <em>fitness function</em> and the so-called <em>genotype-phenotype mapping</em>. The former directly applies the <em>genotype-phenotype mapping</em> and evaluates the performance of an individual, thus going directly from genes/parameters to reproduction probability/score. In a concrete example the <em>genotype</em> can be an arbitrary vector (the genes), the <em>phenotype</em> is then a deformed object, and the performance can be a single measurement like an air-drag coefficient. The <em>genotype-phenotype mapping</em> would then just be the generation of different objects from that starting vector, whereas the <em>fitness function</em> would go directly from such a starting vector to the coefficient that we want to optimize.</p>
<p>The main algorithm just repeats the following steps:</p>
<ul>
<li><strong>Recombine</strong> with a recombination function <span class="math inline">\(r : I^{\mu} \mapsto I^{\lambda}\)</span> to generate <span class="math inline">\(\lambda\)</span> new individuals based on the characteristics of the <span class="math inline">\(\mu\)</span> parents.<br />
This makes sure that the next guess is close to the old guess.</li>
<li><strong>Mutate</strong> with a mutation function <span class="math inline">\(m : I^{\lambda} \mapsto I^{\lambda}\)</span> to introduce new effects that cannot be produced by mere recombination of the parents.<br />
Typically this just adds minor defects to individual members of the population, like adding random Gaussian noise or amplifying/dampening random parts.</li>
<li><strong>Select</strong> with a selection function <span class="math inline">\(s : (I^\lambda \cup I^{\mu + \lambda},\Phi) \mapsto I^\mu\)</span> that selects from the previously generated <span class="math inline">\(I^\lambda\)</span> children and optionally also the parents (denoted by the set <span class="math inline">\(Q\)</span> in the algorithm) using the <em>fitness function</em> <span class="math inline">\(\Phi\)</span>. The result of this operation is the next population of <span class="math inline">\(\mu\)</span> individuals.</li>
</ul>
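<p>The recombine/mutate/select loop described above can be sketched as a minimal <span class="math inline">\((\mu + \lambda)\)</span> evolution strategy. The sphere fitness function and all parameter values here are illustrative choices of ours, not from the thesis:</p>

```python
import random

def evolve(fitness, dim, mu=10, lam=40, sigma=0.1, generations=100):
    """Minimize `fitness` over R^dim with a simple (mu + lambda) ES."""
    # Initialize mu individuals by a random guess.
    population = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        # Recombine: average the genes of two random parents.
        children = []
        for _ in range(lam):
            a, b = random.sample(population, 2)
            children.append([(x + y) / 2 for x, y in zip(a, b)])
        # Mutate: add Gaussian noise to every gene.
        children = [[x + random.gauss(0, sigma) for x in c] for c in children]
        # Select: keep the mu best of parents and children ("plus" selection).
        population = sorted(population + children, key=fitness)[:mu]
    return population[0]
```

<p>A convergence function <span class="math inline">\(c\)</span> is replaced here by a fixed generation count for brevity; a call like <code>evolve(lambda ind: sum(x * x for x in ind), dim=3)</code> should drive the sphere function close to its optimum at the origin.</p>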
<p>All these functions can (and mostly do) have a lot of hidden parameters that can be changed over time. A good overview of this is given in , so we only give a small excerpt here.</p>
<p>For example, the mutation can consist of merely a single <span class="math inline">\(\sigma\)</span> determining the strength of the Gaussian defects in every parameter — or of a different <span class="math inline">\(\sigma\)</span> for every component of those parameters. An even more sophisticated example would be the 1/5 success rule from .</p>
<p>Also, in the selection function it may not be wise to only take the best-performing individuals, because the optimization may have to overcome a barrier of bad fitness to achieve a better local optimum.</p>
<p>Recombination also does not have to be a mere random choice of parents; it can also take ancestry, distance of genes, or groups of individuals into account.</p>
</section>
<section id="advantages-of-evolutionary-algorithms" class="level2">
<h2>Advantages of evolutionary algorithms</h2>
<p>The main advantage of evolutionary algorithms is the ability to find optima of general functions just with the help of a given <em>fitness function</em>. Components and techniques for evolutionary algorithms are specifically known to help with different problems arising in the domain of optimization. An overview of the typical problems is shown in figure .</p>
<p>Most of the advantages stem from the fact that a gradient-based procedure usually has only one point of observation from which it evaluates the next steps, whereas an evolutionary strategy starts with a population of guessed solutions. Because an evolutionary strategy can be modified according to the problem domain (i.e. by the ideas given above), it can also approximate very difficult problems in an efficient manner and even self-tune parameters depending on the ancestry at runtime<a href="#/fn3" class="footnote-ref" id="fnref3"><sup>3</sup></a>.</p>
<p>If an analytic best solution exists and is easily computable (e.g. because the error function is convex), an evolutionary algorithm is not the right choice. Although both converge to the same solution, the analytic one is usually faster.</p>
<p>But in reality many problems have no analytic solution, because the problem is either not convex or there are so many parameters that an analytic solution (mostly meaning the equivalence to an exhaustive search) is computationally not feasible. Here evolutionary optimization has one more advantage, as one can at least get suboptimal solutions fast, which then refine over time and still converge to a decent solution much faster than an exhaustive search.</p>
</section>
<section id="criteria-for-the-evolvability-of-linear-deformations" class="level2">
<h2>Criteria for the evolvability of linear deformations</h2>
<p>As we have established in chapter , we can describe a deformation by the formula <span class="math display">\[
\vec{S} = \vec{U}\vec{P}
\]</span> where <span class="math inline">\(\vec{S}\)</span> is an <span class="math inline">\(n \times d\)</span> matrix of vertices<a href="#/fn4" class="footnote-ref" id="fnref4"><sup>4</sup></a>, <span class="math inline">\(\vec{U}\)</span> contains the deformation coefficients calculated during parametrization, and <span class="math inline">\(\vec{P}\)</span> is an <span class="math inline">\(m \times d\)</span> matrix of control points that we interact with during deformation.</p>
<p>We can also think of the deformation in terms of differences from the original coordinates <span class="math display">\[
\Delta \vec{S} = \vec{U} \cdot \Delta \vec{P}
\]</span> which is isomorphic to the former due to the linearity of the deformation. Seen this way, the behaviour of the deformation lies solely in the entries of <span class="math inline">\(\vec{U}\)</span>, which is why the three criteria focus on this matrix.</p>
<section id="variability" class="level3">
<h3>Variability</h3>
<p>In , <em>variability</em> is defined as <span class="math display">\[\mathrm{variability}(\vec{U}) := \frac{\mathrm{rank}(\vec{U})}{n},\]</span> whereby <span class="math inline">\(\vec{U}\)</span> is the <span class="math inline">\(n \times m\)</span> deformation matrix used to map the <span class="math inline">\(m\)</span> control points onto the <span class="math inline">\(n\)</span> vertices.</p>
<p>Given <span class="math inline">\(n = m\)</span>, i.e. an identical number of control points and vertices, this quotient will be <span class="math inline">\(=1\)</span> if all control points are independent of each other, and the solution is to trivially move every control point onto a target point.</p>
<p>In practice the value of <span class="math inline">\(\mathrm{variability}(\vec{U})\)</span> is typically <span class="math inline">\(\ll 1\)</span>, because there are only few control points for many vertices, so <span class="math inline">\(m \ll n\)</span>.</p>
<p>This criterion should correlate with the degrees of freedom the given parametrization has. This can be seen from the fact that <span class="math inline">\(\mathrm{rank}(\vec{U})\)</span> is limited by <span class="math inline">\(\min(m,n)\)</span> and — as <span class="math inline">\(n\)</span> is constant — can never exceed <span class="math inline">\(n\)</span>.</p>
<p>The rank itself is also interesting, as control points could theoretically be placed on top of each other or be linearly dependent in another way — but both cases lower the rank below the number of control points <span class="math inline">\(m\)</span> and are thus measurable by the <em>variability</em>.</p>
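<p>The definition translates directly into code; a one-line numpy reading (our own sketch, not from the thesis):</p>

```python
import numpy as np

def variability(U):
    """variability(U) = rank(U) / n for an n x m deformation matrix U."""
    return np.linalg.matrix_rank(U) / U.shape[0]
```

<p>Duplicated or otherwise linearly dependent control points lower the rank of <code>U</code> and thus the returned value, exactly as described above.</p>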
</section>
<section id="regularity" class="level3">
<h3>Regularity</h3>
<p><em>Regularity</em> is defined as <span class="math display">\[\mathrm{regularity}(\vec{U}) := \frac{1}{\kappa(\vec{U})} = \frac{\sigma_{min}}{\sigma_{max}}\]</span> where <span class="math inline">\(\sigma_{min}\)</span> and <span class="math inline">\(\sigma_{max}\)</span> are the smallest and largest singular values of the deformation matrix <span class="math inline">\(\vec{U}\)</span>.</p>
<p>As we deform the given object based only on the parameters as <span class="math inline">\(\vec{p} \mapsto f(\vec{x} + \vec{U}\vec{p})\)</span>, this makes sure that <span class="math inline">\(\|\vec{Up}\| \propto \|\vec{p}\|\)</span> when <span class="math inline">\(\kappa(\vec{U}) \approx 1\)</span>. The inversion of <span class="math inline">\(\kappa(\vec{U})\)</span> is only performed to map the criterion range to <span class="math inline">\([0..1]\)</span>, where <span class="math inline">\(1\)</span> is the optimal value and <span class="math inline">\(0\)</span> is the worst.</p>
<p>On the one hand this criterion should be characteristic for numeric stability and on the other hand for the convergence speed of evolutionary algorithms as it is tied to the notion of locality.</p>
</section>
<section id="improvement-potential" class="level3">
<h3>Improvement Potential</h3>
<p>In contrast to the general nature of <em>variability</em> and <em>regularity</em>, which are agnostic of the <em>fitness function</em> at hand, the third criterion should reflect a notion of the potential for optimization, taking a guess into account.</p>
<p>Most of the time some kind of gradient <span class="math inline">\(g\)</span> is available to suggest a direction worth pursuing, either from a previous iteration or by educated guessing. We use this to estimate how much change can be achieved in the given direction.</p>
<p>The definition for an <em>improvement potential</em> <span class="math inline">\(P\)</span> is: <span class="math display">\[
\mathrm{potential}(\vec{U}) := 1 - \|(\vec{1} - \vec{UU}^+)\vec{G}\|^2_F
\]</span> given some approximate <span class="math inline">\(n \times d\)</span> fitness gradient <span class="math inline">\(\vec{G}\)</span>, normalized to <span class="math inline">\(\|\vec{G}\|_F = 1\)</span>, whereby <span class="math inline">\(\|\cdot\|_F\)</span> denotes the Frobenius norm.</p>
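<p>This formula, too, can be read off almost verbatim in numpy; <span class="math inline">\(\vec{U}\vec{U}^+\)</span> projects <span class="math inline">\(\vec{G}\)</span> onto the column space of <span class="math inline">\(\vec{U}\)</span> (our own sketch, names ours):</p>

```python
import numpy as np

def improvement_potential(U, G):
    """potential(U) = 1 - ||(I - U U+) G||_F^2 for a gradient guess G."""
    G = G / np.linalg.norm(G)  # normalize to unit Frobenius norm
    # (I - U U+) G written without forming the n x n projector explicitly:
    residual = G - U @ (np.linalg.pinv(U) @ G)
    return 1.0 - np.linalg.norm(residual) ** 2
```

<p>A gradient that lies entirely in the span of the deformation yields potential <span class="math inline">\(1\)</span>; a gradient the deformation cannot express at all yields <span class="math inline">\(0\)</span>.</p>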
</section>
</section>
</section>
<section id="implementation-of" class="slide level1">
<h1>Implementation of FFD</h1>
<p>The general formulation of B-splines has two free parameters <span class="math inline">\(d\)</span> and <span class="math inline">\(\tau\)</span> which must be chosen beforehand.</p>
<p>As we usually work with regular grids in our FFD, we define <span class="math inline">\(\tau\)</span> statically as <span class="math inline">\(\tau_i = \frac{i}{n}\)</span>, whereby <span class="math inline">\(n\)</span> is the number of control points in that direction.</p>
<p><span class="math inline">\(d\)</span> defines the <em>degree</em> of the B-spline function (a degree-<span class="math inline">\(d\)</span> B-spline is <span class="math inline">\(d-1\)</span> times continuously differentiable). For our purposes we fix <span class="math inline">\(d\)</span> to <span class="math inline">\(3\)</span>, but give the formulas for the general case so they can be adapted quite freely.</p>
<section id="adaption-of" class="level2">
<h2>Adaption of FFD</h2>
<p>As we have established in Chapter , we can define an FFD displacement as <span class="math display">\[\begin{equation}
\Delta_x(u) = \sum_i N_{i,d,\tau_i}(u) \Delta_x c_i
\end{equation}\]</span></p>
<p>Note that we only sum up the <span class="math inline">\(\Delta\)</span>-displacements in the control points <span class="math inline">\(c_i\)</span> to get the change in position of the point we are interested in.</p>
<p>In this way every deformed vertex is defined by <span class="math display">\[
\textrm{Deform}(v_x) = v_x + \Delta_x(u)
\]</span> with <span class="math inline">\(u \in [0,1[\)</span> being the variable that connects the high-detailed vertex mesh to the low-detailed control grid. To actually calculate the new position of the vertex, we first have to calculate the <span class="math inline">\(u\)</span> value for each vertex. This is achieved by finding the parametrization of <span class="math inline">\(v\)</span> in terms of the <span class="math inline">\(c_i\)</span>: <span class="math display">\[
v_x \overset{!}{=} \sum_i N_{i,d,\tau_i}(u) c_i
\]</span> so we can minimize the error between those two: <span class="math display">\[
\underset{u}{\mathrm{argmin}}\,Err(u,v_x) = \underset{u}{\mathrm{argmin}}\,2 \cdot \|v_x - \sum_i N_{i,d,\tau_i}(u) c_i\|^2_2
\]</span> As this error term is quadratic, we simply differentiate with respect to <span class="math inline">\(u\)</span>, yielding <span class="math display">\[
\begin{array}{rl}
\frac{\partial}{\partial u} &amp; v_x - \sum_i N_{i,d,\tau_i}(u) c_i \\
= &amp; - \sum_i \left( \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u) \right) c_i
\end{array}
\]</span> and do a gradient descent to approximate the value of <span class="math inline">\(u\)</span> up to an <span class="math inline">\(\epsilon\)</span> of <span class="math inline">\(0.0001\)</span>.</p>
<p>For this we employ the Gauss-Newton algorithm, which converges to the least-squares solution. An exact solution of this problem is impossible most of the time, because we usually have far more vertices than control points (<span class="math inline">\(\#v~\gg~\#c\)</span>).</p>
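<p>A one-dimensional sketch of this parameter search (our own Python, not the thesis implementation): for brevity, the B-spline sum is replaced by an arbitrary differentiable curve <code>s</code>, and its derivative is taken numerically instead of via the analytic formula above.</p>

```python
def find_u(s, v, u0=0.5, eps=1e-4, max_iter=100):
    """Approximate the parameter u with s(u) ~ v via 1D Gauss-Newton."""
    u = u0
    for _ in range(max_iter):
        residual = v - s(u)
        if abs(residual) < eps:  # stop once the error drops below epsilon
            break
        h = 1e-6
        ds = (s(u + h) - s(u - h)) / (2 * h)  # numeric derivative of the curve
        u += residual / ds                    # 1D Gauss-Newton step
        u = min(max(u, 0.0), 1.0 - 1e-9)      # keep u inside [0,1[
    return u
```

<p>In one dimension the Gauss-Newton step degenerates to <code>residual / derivative</code>, i.e. Newton's method on the residual; the full implementation uses the analytic B-spline derivative instead of the numeric one.</p>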
</section>
<section id="adaption-of-for-a-3dmesh" class="level2">
<h2>Adaption of FFD for a 3D Mesh</h2>
<p>This is a straightforward extension of the 1D method presented in the last chapter. But this time things get a bit more complicated: as we have a 3-dimensional grid, we may have a different number of control points in each direction.</p>
<p>Given <span class="math inline">\(n,m,o\)</span> control points in <span class="math inline">\(x,y,z\)</span> direction, each point on the curve is defined by <span class="math display">\[V(u,v,w) = \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot C_{ijk}.\]</span></p>
<p>In this case we have three different B-splines (one for each dimension) and also three variables <span class="math inline">\(u,v,w\)</span> for each vertex we want to approximate.</p>
<p>Given a target vertex <span class="math inline">\(\vec{p}^*\)</span> and an initial guess <span class="math inline">\(\vec{p}=V(u,v,w)\)</span>, we define the error function for the gradient descent as:</p>
<p><span class="math display">\[Err(u,v,w,\vec{p}^{*}) = \vec{p}^{*} - V(u,v,w)\]</span></p>
<p>And the partial version for just one direction as</p>
<p><span class="math display">\[Err_x(u,v,w,\vec{p}^{*}) = p^{*}_x - \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x \]</span></p>
<p>To solve this, we take the partial derivatives as before:</p>
<p><span class="math display">\[
\begin{array}{rl}
\displaystyle \frac{\partial Err_x}{\partial u} = &amp; \displaystyle \frac{\partial}{\partial u} \left( p^{*}_x - \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x \right) \\
= &amp; \displaystyle - \sum_i \sum_j \sum_k N&#39;_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x
\end{array}
\]</span></p>
<p>The other partial derivatives follow the same pattern, yielding the Jacobian:</p>
<p><span class="math display">\[
J(Err(u,v,w)) =
\left(
\begin{array}{ccc}
\frac{\partial Err_x}{\partial u} &amp; \frac{\partial Err_x}{\partial v} &amp; \frac{\partial Err_x}{\partial w} \\
\frac{\partial Err_y}{\partial u} &amp; \frac{\partial Err_y}{\partial v} &amp; \frac{\partial Err_y}{\partial w} \\
\frac{\partial Err_z}{\partial u} &amp; \frac{\partial Err_z}{\partial v} &amp; \frac{\partial Err_z}{\partial w}
\end{array}
\right)
\]</span> <span class="math display">\[
\scriptsize
=
\left(
\begin{array}{ccc}
- \displaystyle \sum_{i,j,k} N&#39;_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_x &amp;- \displaystyle \sum_{i,j,k} N_{i}(u) N&#39;_{j}(v) N_{k}(w) \cdot {c_{ijk}}_x &amp; - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N&#39;_{k}(w) \cdot {c_{ijk}}_x \\
- \displaystyle \sum_{i,j,k} N&#39;_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_y &amp;- \displaystyle \sum_{i,j,k} N_{i}(u) N&#39;_{j}(v) N_{k}(w) \cdot {c_{ijk}}_y &amp; - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N&#39;_{k}(w) \cdot {c_{ijk}}_y \\
- \displaystyle \sum_{i,j,k} N&#39;_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_z &amp;- \displaystyle \sum_{i,j,k} N_{i}(u) N&#39;_{j}(v) N_{k}(w) \cdot {c_{ijk}}_z &amp; - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N&#39;_{k}(w) \cdot {c_{ijk}}_z
\end{array}
\right)
\]</span></p>
<p>With the Gauss-Newton algorithm we iterate via the formula <span class="math display">\[J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)\]</span> and use Cramer’s rule to invert the small Jacobian and solve this system of linear equations.</p>
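<p>A single iteration of this scheme is small enough to write out. The sketch below (our own naming, pure Python) solves the <span class="math inline">\(3 \times 3\)</span> system <span class="math inline">\(J \cdot \Delta = -Err\)</span> with Cramer’s rule and applies the update to <span class="math inline">\((u,v,w)\)</span>:</p>

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer_solve(J, rhs):
    """Solve J * delta = rhs for a 3x3 Jacobian via Cramer's rule."""
    dj = det3(J)
    if dj == 0.0:
        raise ValueError("singular Jacobian")
    delta = []
    for col in range(3):
        # replace column `col` of J with the right-hand side
        m = [[rhs[r] if c == col else J[r][c] for c in range(3)] for r in range(3)]
        delta.append(det3(m) / dj)
    return delta

def gauss_newton_step(uvw, err, J):
    """One iteration: (u, v, w) += delta, where J * delta = -err."""
    delta = cramer_solve(J, [-e for e in err])
    return [x + d for x, d in zip(uvw, delta)]
```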
<p>As there is no strict upper bound on the number of iterations for this algorithm, we simply iterate until the result is within the <span class="math inline">\(\epsilon\)</span>-error given above. In practice this took, depending on the shape of the object and the grid, about <span class="math inline">\(3\)</span> to <span class="math inline">\(5\)</span> iterations.</p>
<p>Another issue we observed in our implementation is that multiple local optima may exist on self-intersecting grids. We avoid this problem by defining self-intersecting grids to be <em>invalid</em> and not testing any of them.</p>
<p>This is not as big a problem as it sounds at first: a self-intersection means that control points farther away from a given vertex have more influence on the deformation than control points closer to it. This contradicts the notion of locality that we want to achieve and deemed beneficial for a good behaviour of the evolutionary algorithm.</p>
<div id="deformation-grid">
<div style="width:50%;float:left">
</div>
<div style="width:50%;float:left">
<p>As mentioned in chapter , the choice of representation used to map the general problem (mesh fitting/optimization in our case) into a parameter space is very important for the quality and runtime of evolutionary algorithms.</p>
</div>
<div style="clear: both">
</div>
</div>
<p>Because our control points are arranged in a grid, we can accurately represent each vertex inside the grid’s volume with proper B-spline coefficients in <span class="math inline">\([0,1[\)</span>; as a consequence we have to embed our object into it (or create constant “dummy” points outside).</p>
<p>The great advantage of B-splines is the local, direct impact of each control point without a <span class="math inline">\(1:1\)</span> correlation, combined with a smooth deformation. While the advantages are great, the issues arise from the problem of deciding where to place the control points and how many to place at all.</p>
<p>One would normally think that the more control points are added, the better the result will be, but this is not the case for our B-splines. Given any point <span class="math inline">\(\vec{p}\)</span>, only <span class="math inline">\(2 \cdot (d-1)\)</span> control points contribute to the parametrization of that point<a href="#/fn5" class="footnote-ref" id="fnref5"><sup>5</sup></a>. This means that a high resolution can have many control points that do not contribute to any point on the surface and are thus completely irrelevant to the solution.</p>
<p>We illustrate this phenomenon in figure , where the red central points are not relevant for the parametrization of the circle. This leads to artefacts in the deformation matrix <span class="math inline">\(\vec{U}\)</span>, as the columns corresponding to those control points are <span class="math inline">\(0\)</span>.</p>
<p>This also needlessly increases complexity, as the parameters corresponding to those points will never have any effect, but a naive algorithm will still try to optimize them, yielding numeric artefacts at best and non-terminating or ill-defined solutions<a href="#/fn6" class="footnote-ref" id="fnref6"><sup>6</sup></a> at worst.</p>
<p>One can of course neglect those columns and their corresponding control points, but this raises the question of why they were introduced in the first place. We will address this in a special scenario in .</p>
<p>For our tests we chose differently sized uniform grids and added noise to each control point<a href="#/fn7" class="footnote-ref" id="fnref7"><sup>7</sup></a> to simulate different starting conditions.</p>
</section>
</section>
<section id="scenarios-for-testing-evolvabilitycriteria-using" class="slide level1">
<h1>Scenarios for Testing Evolvability Criteria</h1>
<p>In our experiments we use the same two testing scenarios that were also used by Richter et al. The first scenario deforms a plane into a shape originally defined by Giannelli et al.: we set up control points in a 2-dimensional manner and merely deform the height coordinate to get the resulting shape.</p>
<p>In the second scenario we increase the degrees of freedom significantly by using a 3-dimensional control grid to deform a sphere into a face, so each control point has three degrees of freedom, in contrast to the first scenario.</p>
<section id="test-scenario-1d-function-approximation" class="level2">
<h2>Test Scenario: 1D Function Approximation</h2>
<p>In this scenario we used the shape defined by Giannelli et al., which is also used by Richter et al., with the same discretization of <span class="math inline">\(150 \times 150\)</span> points for a total of <span class="math inline">\(n = 22\,500\)</span> vertices. The shape is given by the following definition <span class="math display">\[\begin{equation}
t(x,y) =
\begin{cases}
0.5 \cos(4\pi \cdot q^{0.5}) + 0.5 &amp; q(x,y) &lt; \frac{1}{16},\\
2(y-x) &amp; 0 &lt; y-x &lt; 0.5,\\
1 &amp; 0.5 &lt; y - x
\end{cases}
\end{equation}\]</span><!-- </> --> with <span class="math inline">\((x,y) \in [0,2] \times [0,1]\)</span> and <span class="math inline">\(q(x,y)=(x-1.5)^2 + (y-0.5)^2\)</span>, which we have visualized in figure .</p>
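<p>The target shape can be evaluated directly from this definition. The following pure-Python sketch mirrors the three cases above; note that the definition leaves points outside all three cases unspecified, so we assume a height of <span class="math inline">\(0\)</span> there:</p>

```python
import math

def t(x, y):
    """Target height for the 1D function-approximation scenario."""
    q = (x - 1.5) ** 2 + (y - 0.5) ** 2
    if q < 1.0 / 16.0:
        return 0.5 * math.cos(4.0 * math.pi * math.sqrt(q)) + 0.5
    if 0.0 < y - x < 0.5:
        return 2.0 * (y - x)
    if 0.5 < y - x:
        return 1.0
    return 0.0  # assumption: regions not covered by the definition are flat

# discretize to 150 x 150 points on [0,2] x [0,1], giving n = 22 500 vertices
grid = [[t(2.0 * i / 149.0, j / 149.0) for j in range(150)] for i in range(150)]
```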
<p>As the starting plane we used the same shape, but set all <span class="math inline">\(z\)</span>-coordinates to <span class="math inline">\(0\)</span>, yielding a flat plane that is already partially correct.</p>
<p>Regarding the <em>fitnessfunction</em> <span class="math inline">\(\mathrm{f}(\vec{p})\)</span>, we use the very simple approach of calculating the squared distances for each corresponding vertex <span class="math display">\[\begin{equation}
\mathrm{f}(\vec{p}) = \sum_{i=1}^{n} \|(\vec{Up})_i - t_i\|_2^2 = \|\vec{Up} - \vec{t}\|^2 \rightarrow \min
\end{equation}\]</span> where <span class="math inline">\(t_i\)</span> are the target vertices corresponding to the parametrized source vertices<a href="#/fn8" class="footnote-ref" id="fnref8"><sup>8</sup></a> under the current deformation parameters <span class="math inline">\(\vec{p} = (p_1,\dots, p_m)\)</span>. This one-to-one correspondence is possible because we have exactly the same number of source and target vertices due to our setup of just flattening the object.</p>
<p>This formula is also the least-squares approximation error, for which we can compute the analytic solution <span class="math inline">\(\vec{p^{*}} = \vec{U^+}\vec{t}\)</span>, yielding the correct gradient along which the evolutionary optimizer should move.</p>
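<p>For small instances the least-squares solution <span class="math inline">\(\vec{p^{*}} = \vec{U^+}\vec{t}\)</span> can be computed from the normal equations <span class="math inline">\(\vec{U}^\top\vec{U}\,\vec{p} = \vec{U}^\top\vec{t}\)</span>. A pure-Python sketch assuming <span class="math inline">\(\vec{U}\)</span> has full column rank (a real implementation would use an SVD-based pseudoinverse, which also handles rank deficiency):</p>

```python
def lstsq(U, t):
    """Solve min ||U p - t||^2 via the normal equations U^T U p = U^T t."""
    n, m = len(U), len(U[0])
    # build A = U^T U and b = U^T t
    A = [[sum(U[r][i] * U[r][j] for r in range(n)) for j in range(m)] for i in range(m)]
    b = [sum(U[r][i] * t[r] for r in range(n)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    p = [0.0] * m
    for i in reversed(range(m)):
        p[i] = (b[i] - sum(A[i][j] * p[j] for j in range(i + 1, m))) / A[i][i]
    return p
```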
</section>
<section id="test-scenario-3d-function-approximation" class="level2">
<h2>Test Scenario: 3D Function Approximation</h2>
<p>In contrast to the 1-dimensional scenario before, the 3-dimensional scenario is much more complex, not only because we have more degrees of freedom on each control point, but also because the <em>fitness function</em> we will use has no known analytic solution and multiple local minima.</p>
<p>First of all we introduce the setup: we are given a triangulated model of a sphere consisting of <span class="math inline">\(10\,807\)</span> vertices that we want to deform into the target model of a face with a total of <span class="math inline">\(12\,024\)</span> vertices. Both of these models can be seen in figure .</p>
<p>In contrast to the 1D case we cannot map the source and target vertices in a one-to-one correspondence, which we especially need for the approximation of the fitting error. Hence we define the error of one vertex as the distance to the closest vertex of the respective other model and sum up the errors from source and target.</p>
<p>We therefore define the <em>fitness function</em> to be:</p>
<span class="math display">\[\begin{equation}
\mathrm{f}(\vec{P}) = \frac{1}{n} \underbrace{\sum_{i=1}^n \|\vec{c_T(s_i)} -
\vec{s_i}\|_2^2}_{\textrm{source--to--target--distance}}
+ \frac{1}{m} \underbrace{\sum_{i=1}^m \|\vec{c_S(t_i)} -
\vec{t_i}\|_2^2}_{\textrm{target--to--source--distance}}
+ \lambda \cdot \textrm{regularization}(\vec{P})
\label{eq:fit3d}
\end{equation}\]</span>
<p>where <span class="math inline">\(\vec{c_T(s_i)}\)</span> denotes the target vertex corresponding to the source vertex <span class="math inline">\(\vec{s_i}\)</span> and <span class="math inline">\(\vec{c_S(t_i)}\)</span> denotes the source vertex that corresponds to the target vertex <span class="math inline">\(\vec{t_i}\)</span>. Note that the target vertices are given and fixed by the target model of the face we want to deform into, whereas the source vertices vary depending on the chosen parameters <span class="math inline">\(\vec{P}\)</span>, as those get calculated by the previously introduced formula <span class="math inline">\(\vec{S} = \vec{UP}\)</span> with <span class="math inline">\(\vec{S}\)</span> being the <span class="math inline">\(n \times 3\)</span> matrix of source vertices, <span class="math inline">\(\vec{U}\)</span> the <span class="math inline">\(n \times m\)</span> matrix of calculated coefficients (analogous to the 1D case), and finally <span class="math inline">\(\vec{P}\)</span> being the <span class="math inline">\(m \times 3\)</span> matrix of the control grid defining the whole deformation.</p>
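<p>Leaving the regularization aside, the two distance terms of this <em>fitness function</em> are two nearest-neighbour sums. A naive <span class="math inline">\(\mathcal{O}(n \cdot m)\)</span> pure-Python sketch with our own naming (a practical implementation would use a spatial search structure such as a k-d tree):</p>

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def closest_sq_dist(p, verts):
    """Squared distance from p to its nearest neighbour in verts."""
    return min(sq_dist(p, v) for v in verts)

def fitness(source, target):
    """Source-to-target plus target-to-source distance, each averaged."""
    s2t = sum(closest_sq_dist(s, target) for s in source) / len(source)
    t2s = sum(closest_sq_dist(t, source) for t in target) / len(target)
    return s2t + t2s
```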
<p>As regularization term we add a weighted Laplacian of the deformation that has been used before by Aschenbach et al. on similar models and was shown to lead to a more precise fit. The Laplacian <span class="math display">\[\begin{equation}
\mathrm{regularization}(\vec{P}) = \frac{1}{\sum_i A_i} \sum_{i=1}^n A_i \cdot \left( \sum_{\vec{s}_j \in \mathcal{N}(\vec{s}_i)} w_j \cdot \|\Delta \vec{s}_j - \Delta \vec{s}_i\|^2 \right)
\label{eq:reg3d}
\end{equation}\]</span> is determined by the cotangent-weighted displacements <span class="math inline">\(w_j\)</span> of the vertices <span class="math inline">\(\mathcal{N}(s_i)\)</span> connected to <span class="math inline">\(s_i\)</span>, where <span class="math inline">\(A_i\)</span> is the Voronoi area of the corresponding vertex <span class="math inline">\(\vec{s_i}\)</span>. We leave out the <span class="math inline">\(\vec{R}_i\)</span> term from the original paper as our deformation is merely linear.</p>
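<p>To illustrate the structure of this term, the following heavily simplified sketch sets the cotangent weights <span class="math inline">\(w_j\)</span> and the Voronoi areas <span class="math inline">\(A_i\)</span> to <span class="math inline">\(1\)</span> (both assumptions; the actual implementation uses the true weights and areas), leaving only the averaged sum of squared displacement differences between neighbouring vertices:</p>

```python
def laplacian_regularization(displacements, neighbours):
    """Simplified Laplacian: sum over all vertices of the squared
    displacement differences to their neighbours, with w_j = A_i = 1."""
    def sq_norm(v):
        return sum(x * x for x in v)

    total = 0.0
    for i, nbrs in enumerate(neighbours):
        di = displacements[i]
        total += sum(
            sq_norm([a - b for a, b in zip(displacements[j], di)]) for j in nbrs
        )
    # with all areas set to 1 the normalization is just the vertex count
    return total / len(displacements)
```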
<p>This regularization weight gives us a measure of stiffness for the material, which we influence via the <span class="math inline">\(\lambda\)</span> coefficient to start out with a stiff material that gets more flexible with each iteration. As a side effect this also limits the effects of overaggressive movement of the control points at the beginning of the fitting process and thus should limit the generation of ill-defined grids mentioned in section .</p>
</section>
</section>
<section id="evaluation-of-scenarios" class="slide level1">
<h1>Evaluation of Scenarios</h1>
<p>To compare our results to the ones given by Richter et al., we also use Spearman’s rank correlation coefficient. As opposed to other popular coefficients like the Pearson correlation coefficient, which measures a linear relationship between variables, Spearman’s coefficient assesses how well an arbitrary monotonic function can describe the relationship between two variables, without making any assumptions about the frequency distribution of the variables.</p>
<p>As we don’t have any prior knowledge of whether any of the criteria are linear, and we are just interested in a monotonic relation between the criteria and their predictive power, Spearman’s coefficient fits our scenario best and was also used before by Richter et al.</p>
<p>For the interpretation of these values we follow the same interpretation used in , based on : the coefficient intervals <span class="math inline">\(r_S \in [0,0.2[\)</span>, <span class="math inline">\([0.2,0.4[\)</span>, <span class="math inline">\([0.4,0.6[\)</span>, <span class="math inline">\([0.6,0.8[\)</span>, and <span class="math inline">\([0.8,1]\)</span> are classified as <em>very weak</em>, <em>weak</em>, <em>moderate</em>, <em>strong</em> and <em>very strong</em>. We interpret p-values smaller than <span class="math inline">\(0.01\)</span> as <em>significant</em> and cut off the precision of p-values after four decimal digits (thus often reporting a p-value of <span class="math inline">\(0\)</span> for p-values <span class="math inline">\(&lt; 10^{-4}\)</span>).</p>
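<p>Spearman’s coefficient itself is simply the Pearson correlation of the rank vectors. A pure-Python sketch without tie handling (ties would require average ranks; library implementations also provide the p-value, which we omit here):</p>

```python
def ranks(xs):
    """1-based rank of each value; assumes no ties for simplicity."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman's r_S as the Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx = sum(rx) / n
    my = sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```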
<p>As we are looking for anti-correlation (i.e. our criterion should be maximized, indicating a minimal result in, for example, the reconstruction error) instead of correlation, we flip the sign of the correlation coefficient for readability, so the correlation coefficients fall into the classification range given above.</p>
<p>For the evolutionary optimization we employ the of the shark3.1 library , as this algorithm was used by as well. We leave the parameters at their sensible defaults as further explained in .</p>
<section id="procedure-1d-function-approximation" class="level2">
<h2>Procedure: 1D Function Approximation</h2>
<p>For our setup we first compute the coefficients of the deformation matrix and use the formulas for <em>variability</em> and <em>regularity</em> to get our predictions. Afterwards we solve the problem analytically to get the (normalized) correct gradient, which we use as the guess for the <em>improvement potential</em>. To further test the <em>improvement potential</em> we also consider a distorted gradient <span class="math inline">\(\vec{g}_{\mathrm{d}}\)</span>: <span class="math display">\[
\vec{g}_{\mathrm{d}} = \frac{\mu \vec{g}_{\mathrm{c}} + (1-\mu)\mathbb{1}}{\|\mu \vec{g}_{\mathrm{c}} + (1-\mu) \mathbb{1}\|}
\]</span> where <span class="math inline">\(\mathbb{1}\)</span> is the vector consisting of <span class="math inline">\(1\)</span> in every dimension, <span class="math inline">\(\vec{g}_\mathrm{c} = \vec{p^{*}} - \vec{p}\)</span> is the calculated correct gradient, and <span class="math inline">\(\mu\)</span> is used to blend between <span class="math inline">\(\vec{g}_\mathrm{c}\)</span> and <span class="math inline">\(\mathbb{1}\)</span>. As we always start with <span class="math inline">\(p = \mathbb{0}\)</span>, we can shorten the definition of <span class="math inline">\(\vec{g}_\mathrm{c}\)</span> to <span class="math inline">\(\vec{g}_\mathrm{c} = \vec{p^{*}}\)</span>.</p>
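<p>The distorted gradient is a direct translation of this formula. A sketch (assuming the blended vector is non-zero so the normalization is well-defined):</p>

```python
def distorted_gradient(g_c, mu):
    """Blend the correct gradient g_c with the all-ones vector, renormalize."""
    blended = [mu * g + (1.0 - mu) * 1.0 for g in g_c]
    norm = sum(x * x for x in blended) ** 0.5
    return [x / norm for x in blended]
```

For <span class="math inline">\(\mu = 1\)</span> this returns the normalized correct gradient; for <span class="math inline">\(\mu = 0\)</span> it degenerates to the normalized all-ones vector.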
<p>We then set up a regular 2-dimensional grid around the object with the desired grid resolutions. To generate a test case we then move the grid vertices randomly inside the xy-plane. As self-intersecting grids get tricky to solve with our implemented Newton’s method (see section ), we avoid the generation of such self-intersecting grids for our test cases.</p>
<p>To achieve this we generated a Gaussian-distributed number with <span class="math inline">\(\mu = 0, \sigma=0.25\)</span> and clamped it to the range <span class="math inline">\([-0.25,0.25]\)</span>. We chose such an <span class="math inline">\(r \in [-0.25,0.25]\)</span> per dimension and moved the control points by that factor towards their respective neighbours<a href="#/fn9" class="footnote-ref" id="fnref9"><sup>9</sup></a>.</p>
<p>In other words we set <span class="math display">\[\begin{equation*}
p_i =
\begin{cases}
p_i + (p_i - p_{i-1}) \cdot r, &amp; \textrm{if } r \textrm{ negative} \\
p_i + (p_{i+1} - p_i) \cdot r, &amp; \textrm{if } r \textrm{ positive}
\end{cases}
\end{equation*}\]</span> in each dimension separately.</p>
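<p>This displacement rule is easy to state in code. A sketch for one dimension, with our own naming; we assume boundary points stay fixed and only inner points are moved, and we keep the random draw separate so the move itself stays deterministic:</p>

```python
import random

def clamped_gauss(sigma=0.25, bound=0.25):
    """Gaussian sample with mu = 0, clamped to [-bound, bound]."""
    return max(-bound, min(bound, random.gauss(0.0, sigma)))

def jitter(points, r):
    """Move each inner point by factor r towards its left (r < 0)
    or right (r > 0) neighbour, following the case distinction above."""
    out = list(points)
    for i in range(1, len(points) - 1):
        if r < 0:
            out[i] = points[i] + (points[i] - points[i - 1]) * r
        else:
            out[i] = points[i] + (points[i + 1] - points[i]) * r
    return out
```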
<p>An example of such a test case for a <span class="math inline">\(7 \times 4\)</span> grid can be seen in figure .</p>
</section>
<section id="results-of-1d-function-approximation" class="level2">
<h2>Results of 1D Function Approximation</h2>
<p>In the case of our 1D optimization problem we have the luxury of knowing the analytical solution to the given problem set. We use this to experimentally evaluate the quality criteria we introduced before. As an evolutionary optimization is partially a random process, we use the analytical solution as a stopping criterion. We measure the convergence speed as the number of iterations the evolutionary algorithm needs to get within <span class="math inline">\(1.05 \times\)</span> of the optimal solution.</p>
<p>We used different regular grids, manipulated as explained in Section , with a different number of control points. As our grids have to be the product of two integers, we compared a <span class="math inline">\(5 \times 5\)</span> grid with <span class="math inline">\(25\)</span> control points to a <span class="math inline">\(4 \times 7\)</span> and a <span class="math inline">\(7 \times 4\)</span> grid with <span class="math inline">\(28\)</span> control points. This was done to measure the impact an improper setup could have and how well this is reflected in the criteria we are examining.</p>
<p>Additionally we measured the effect of increasing the total resolution of the grid by taking a closer look at <span class="math inline">\(5 \times 5\)</span>, <span class="math inline">\(7 \times 7\)</span> and <span class="math inline">\(10 \times 10\)</span> grids.</p>
<section id="variability-1" class="level3">
<h3>Variability</h3>
<p><em>Variability</em> should characterize the potential for design-space exploration and is defined in terms of the normalized rank of the deformation matrix <span class="math inline">\(\vec{U}\)</span>: <span class="math inline">\(V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n}\)</span>, where <span class="math inline">\(n\)</span> is the number of vertices. As all our tested matrices had a constant rank (being <span class="math inline">\(m = x \cdot y\)</span> for an <span class="math inline">\(x \times y\)</span> grid), we have merely plotted the errors in the box plot in figure .</p>
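<p><em>Variability</em> only needs a rank computation. A pure-Python sketch via Gaussian elimination with a numerical tolerance (in practice one would use an SVD-based rank computation):</p>

```python
def rank(M, tol=1e-10):
    """Numerical rank of a matrix (list of rows) via row reduction."""
    A = [list(row) for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for col in range(cols):
        if r == rows:
            break
        # pick the pivot row with the largest magnitude in this column
        piv = max(range(r, rows), key=lambda i: abs(A[i][col]))
        if abs(A[piv][col]) < tol:
            continue  # numerically zero column below row r
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, rows):
            f = A[i][col] / A[r][col]
            for c in range(col, cols):
                A[i][c] -= f * A[r][c]
        r += 1
    return r

def variability(U):
    """V(U) = rank(U) / n, with n the number of vertices (rows of U)."""
    return rank(U) / len(U)
```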
<p>It is also noticeable that although the <span class="math inline">\(7 \times 4\)</span> and <span class="math inline">\(4 \times 7\)</span> grids have a higher <em>variability</em>, they do not perform better than the <span class="math inline">\(5 \times 5\)</span> grid. The <span class="math inline">\(7 \times 4\)</span> and <span class="math inline">\(4 \times 7\)</span> grids also differ distinctly from each other, with a mean<span class="math inline">\(\pm\)</span>sigma of <span class="math inline">\(233.09 \pm 12.32\)</span> for the former and <span class="math inline">\(286.32 \pm 22.36\)</span> for the latter, although they have the same number of control points. This is an indication of the impact a proper or improper grid setup can have. We do not draw scientific conclusions from these findings, as more research on non-square grids seems necessary.</p>
<p>Leaving the issue of the grid layout aside, we focused on grids having the same number of control points in every dimension. For the <span class="math inline">\(5 \times 5\)</span>, <span class="math inline">\(7 \times 7\)</span> and <span class="math inline">\(10 \times 10\)</span> grids we found a <em>very strong</em> correlation (<span class="math inline">\(-r_S = 0.94, p = 0\)</span>) between the <em>variability</em> and the evolutionary error.</p>
</section>
<section id="regularity-1" class="level3">
<h3>Regularity</h3>
<p><em>Regularity</em> should correspond to the convergence speed (measured in iteration steps of the evolutionary algorithm) and is computed as the inverse condition number <span class="math inline">\(\kappa(\vec{U})\)</span> of the deformation matrix.</p>
<p>As can be seen from table , we could only show a <em>weak</em> correlation in the case of a <span class="math inline">\(5 \times 5\)</span> grid. As we increase the number of control points, the correlation gets worse until it is completely random within a single dataset. Taking all presented datasets into account we even get a <em>strong</em> correlation of <span class="math inline">\(- r_S = -0.72, p = 0\)</span>, which is opposed to our expectations.</p>
<p>To explain this discrepancy we took a closer look at what caused this high number of iterations. In figure we also plotted the <em>improvement potential</em> against the steps next to the <em>regularity</em> plot. Our theory is that the <em>very strong</em> correlation (<span class="math inline">\(-r_S = -0.82, p=0\)</span>) between <em>improvement potential</em> and number of iterations hints that the employed algorithm simply takes longer to converge to a better solution (as seen in figures and ), offsetting any gain the regularity measurement could achieve.</p>
</section>
<section id="improvement-potential-1" class="level3">
<h3>Improvement Potential</h3>
<p>The <em>improvement potential</em> should correlate with the quality of the fitting result. We plotted the results for the tested grid sizes <span class="math inline">\(5 \times 5\)</span>, <span class="math inline">\(7 \times 7\)</span> and <span class="math inline">\(10 \times 10\)</span> in figure . We tested the <span class="math inline">\(4 \times 7\)</span> and <span class="math inline">\(7 \times 4\)</span> grids as well, but omitted them from the plot.</p>
<p>Additionally we tested the results for a distorted gradient described in with a <span class="math inline">\(\mu\)</span>-value of <span class="math inline">\(0.25\)</span>, <span class="math inline">\(0.5\)</span>, <span class="math inline">\(0.75\)</span>, and <span class="math inline">\(1.0\)</span> for the <span class="math inline">\(5 \times 5\)</span> grid and with a <span class="math inline">\(\mu\)</span>-value of <span class="math inline">\(0.5\)</span> for all other cases.</p>
<p>All results show the identical <em>very strong</em> and <em>significant</em> correlation with a Spearman coefficient of <span class="math inline">\(- r_S = 1.0\)</span> and a p-value of <span class="math inline">\(0\)</span>.</p>
<p>These results indicate that <span class="math inline">\(\|\mathbb{1} - \vec{U}\vec{U}^{+}\|_F\)</span> is close to <span class="math inline">\(0\)</span>, reducing the impact of any kind of gradient. Nevertheless, the improvement potential seems suited for making educated guesses about the quality of a fit, even lacking an exact gradient.</p>
</section>
</section>
<section id="procedure-3d-function-approximation" class="level2">
<h2>Procedure: 3D Function Approximation</h2>
<p>As explained in detail in section , we do not know the analytical solution to the global optimum. Additionally we have the problem of finding the right correspondences between the original sphere model and the target model, as they consist of <span class="math inline">\(10\,807\)</span> and <span class="math inline">\(12\,024\)</span> vertices respectively, so we cannot establish a one-to-one correspondence between them as we did in the one-dimensional case.</p>
<p>Initially we set up the correspondences <span class="math inline">\(\vec{c_T(\dots)}\)</span> and <span class="math inline">\(\vec{c_S(\dots)}\)</span> to be the respectively closest vertices of the other model. We then calculate the analytical solution given these correspondences via <span class="math inline">\(\vec{P^{*}} = \vec{U^+}\vec{T}\)</span>, and also use this first solution as the guessed gradient for the calculation of the <em>improvement potential</em>, as the optimal solution is not known. We then let the evolutionary algorithm run until it is within <span class="math inline">\(1.05\)</span> times the error of this solution and afterwards recalculate the correspondences <span class="math inline">\(\vec{c_T(\dots)}\)</span> and <span class="math inline">\(\vec{c_S(\dots)}\)</span>.</p>
<p>For the next step we then halve the regularization impact <span class="math inline">\(\lambda\)</span> (starting at <span class="math inline">\(1\)</span>) of our <em>fitness function</em> () and calculate the next incremental solution <span class="math inline">\(\vec{P^{*}} = \vec{U^+}\vec{T}\)</span> with the updated correspondences (again mapping each vertex to its closest neighbour in the respective other model) to get our next target error. We repeat this process as long as the target error keeps decreasing and use the number of these iterations as a measure of the convergence speed. As the resulting evolutionary error without regularization is in the numeric range of <span class="math inline">\(\approx 100\)</span>, whereas the regularization is numerically <span class="math inline">\(\approx 7000\)</span>, we need at least <span class="math inline">\(10\)</span> to <span class="math inline">\(15\)</span> iterations until the regularization effect wears off.</p>
<p>The grid we use for our experiments is very coarse due to computational limitations. We are not interested in a good reconstruction, but in an estimate of whether the mentioned evolvability criteria are good predictors.</p>
<p>In figure we show an example setup of the scene with a <span class="math inline">\(4\times 4\times 4\)</span> grid. Identically to the 1-dimensional scenario before, we create a regular grid and move the control points in the exact same random manner between their neighbours as described in section , but in three instead of two dimensions<a href="#/fn10" class="footnote-ref" id="fnref10"><sup>10</sup></a>.</p>
<p>As is clearly visible from figure , the target model has many vertices in the facial area, at the ears and in the neck region. Therefore we chose to increase the grid resolutions for our tests in two different dimensions and see how well the criteria predict a suboptimal placement of these control points.</p>
</section>
<section id="results-of-3d-function-approximation" class="level2">
<h2>Results of 3D Function Approximation</h2>
<p>In the 3D approximation we tried to further evaluate the impact of the grid layout on the overall criteria. As the target model has many vertices concentrated in the facial area, we start from a <span class="math inline">\(4 \times 4 \times 4\)</span> grid and only increase the number of control points in one dimension, yielding a resolution of <span class="math inline">\(7 \times 4 \times 4\)</span> and <span class="math inline">\(4 \times 4 \times 7\)</span> respectively. We visualized those two grids in figure .</p>
<p>To evaluate the performance of the evolvability criteria we also tested the more neutral resolutions of <span class="math inline">\(4 \times 4 \times 4\)</span>, <span class="math inline">\(5 \times 5 \times 5\)</span>, and <span class="math inline">\(6 \times 6 \times 6\)</span>, similar to the 1D setup.</p>
<section id="variability-2" class="level3">
<h3>Variability</h3>
<p>Similarly to the 1D case, all our tested matrices had a constant rank (being <span class="math inline">\(m = x \cdot y \cdot z\)</span> for an <span class="math inline">\(x \times y \times z\)</span> grid), so we again have merely plotted the errors in the box plot in figure .</p>
<p>As expected, the <span class="math inline">\(\mathrm{X} \times 4 \times 4\)</span> grids performed slightly better than their <span class="math inline">\(4 \times 4 \times \mathrm{X}\)</span> counterparts, with a mean<span class="math inline">\(\pm\)</span>sigma of <span class="math inline">\(101.25 \pm 7.45\)</span> to <span class="math inline">\(102.89 \pm 6.74\)</span> for <span class="math inline">\(\mathrm{X} = 5\)</span> and <span class="math inline">\(85.37 \pm 7.12\)</span> to <span class="math inline">\(89.22 \pm 6.49\)</span> for <span class="math inline">\(\mathrm{X} = 7\)</span>.</p>
<p>Interestingly both variants end up closer in terms of fitting error than we anticipated, which shows that the evolutionary algorithm we employed is capable of correcting a purposefully badly created grid. This also confirms that in our cases the number of control points is more important for quality than their placement, which is captured by the <em>variability</em> via the rank of the deformation matrix.</p>
<p>Overall the correlation between <em>variability</em> and fitness error was <em>significant</em> and <em>very strong</em> in all our tests. The detailed correlation coefficients are given in table alongside their p-values.</p>
<p>As introduced in section and visualized in figure , not all control points necessarily contribute to the parametrization of our 3D model. Because we are starting from a sphere, some control points are too far away from the surface to contribute to the deformation at all.</p>
<p>One can already see in 2D in figure that this effect occurs with a regular <span class="math inline">\(9 \times 9\)</span> grid on a perfect circle. To make sure we observe this, we evaluated the <em>variability</em> for 100 randomly moved <span class="math inline">\(10 \times 10 \times 10\)</span> grids on the sphere we start out with.</p>
<p>As the <em>variability</em> is defined by <span class="math inline">\(\frac{\mathrm{rank}(\vec{U})}{n}\)</span>, we can easily recover the rank of the deformation matrix <span class="math inline">\(\vec{U}\)</span>. The results are shown in the histogram in figure . Especially in the centre of the sphere and in the corners of our grid we effectively lose control points for our parametrization.</p>
<p>This of course yields a worse error than if those control points were put to use, and one should expect a loss in quality, evident in a higher reconstruction error compared to a grid where they are used. Sadly we could not run an in-depth test on this due to computational limitations.</p>
<p>Nevertheless this supports the notion that <em>variability</em> is a good measure for the overall quality of a fit.</p>
</section>
<section id="regularity-2" class="level3">
<h3>Regularity</h3>
<p>Opposed to the predictions of <em>variability</em> our test on <em>regularity</em> gave a mixed result — similar to the 1Dcase.</p>
<p>In roughly half of the scenarios we have a <em>significant</em>, but <em>weak</em> to <em>moderate</em> correlation between <em>regularity</em> and number of iterations. On the other hand in the scenarios where we increased the number of controlpoints, namely <span class="math inline">\(125\)</span> for the <span class="math inline">\(5 \times 5 \times 5\)</span> grid and <span class="math inline">\(216\)</span> for the <span class="math inline">\(6 \times 6 \times 6\)</span> grid we found a <em>significant</em>, but <em>weak</em> <strong>anti</strong>correlation when taking all three tests into account<a href="#/fn11" class="footnote-ref" id="fnref11"><sup>11</sup></a>, which seem to contradict the findings/trends for the sets with <span class="math inline">\(64\)</span>, <span class="math inline">\(80\)</span>, and <span class="math inline">\(112\)</span> controlpoints (first two rows of table ).</p>
<p>Taking all results together, we only find a <em>very weak</em>, but <em>significant</em> link between <em>regularity</em> and the number of iterations needed for the algorithm to converge.</p>
<p>As can be seen in figure , increasing the number of control-points improves the convergence-speed. The regularity-criterion first behaves as we would like it to, but then switches to behave exactly opposite to our expectations, as can be seen in the first three plots: while the number of control-points increases from red to green to blue and the number of iterations decreases, the <em>regularity</em> seems to increase at first, but then decreases again at higher grid-resolutions.</p>
<p>This can be an artefact of the definition of <em>regularity</em>: it is defined as the inverse condition-number of the deformation-matrix <span class="math inline">\(\vec{U}\)</span>, i.e. the ratio <span class="math inline">\(\frac{\sigma_{\mathrm{min}}}{\sigma_{\mathrm{max}}}\)</span> between the smallest and the largest singular value.</p>
<p>As we observed in the previous section, we cannot guarantee that each control-point has an effect (see figure ), so a vanishingly small minimal singular value occurring at higher grid-resolutions is the likely culprit.</p>
<p>Adding to this, we also noted that in the case of the <span class="math inline">\(10 \times 10 \times 10\)</span>-grid the <em>regularity</em> was always <span class="math inline">\(0\)</span>, as a non-contributing control-point yields a <span class="math inline">\(0\)</span>-column in the deformation-matrix, forcing <span class="math inline">\(\sigma_\mathrm{min} = 0\)</span>. A better definition of <em>regularity</em> (e.g. using the smallest non-zero singular value) could solve this particular issue, but would not fix the trend we noticed above.</p>
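<p>Both the original definition of <em>regularity</em> and the suggested non-zero variant can be sketched via the SVD in NumPy. Again this is an illustrative sketch under our own naming, not the thesis implementation; the toy matrix carries a <span class="math inline">\(0\)</span>-column to reproduce the degenerate case:</p>

```python
import numpy as np

def regularity(U: np.ndarray) -> float:
    """Regularity = inverse condition-number sigma_min / sigma_max."""
    s = np.linalg.svd(U, compute_uv=False)  # sorted descending
    return s[-1] / s[0]

def regularity_nonzero(U: np.ndarray, eps: float = 1e-12) -> float:
    """Variant: use the smallest *non-zero* singular value, so a
    non-contributing control-point (a 0-column) does not force 0."""
    s = np.linalg.svd(U, compute_uv=False)
    s = s[s > eps]  # drop (numerically) zero singular values
    return s[-1] / s[0]

# Deformation-matrix with a 0-column, i.e. a control-point
# that has no effect on the parametrization at all:
U = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
print(regularity(U))          # 0.0 -- the degenerate case
print(regularity_nonzero(U))  # 0.5 -- ratio of remaining values 1/2
```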
</section>
<section id="improvement-potential-2" class="level3">
<h3>Improvement Potential</h3>
<p>In contrast to the 1D-scenario, we do not know the optimal solution to the given problem, and for the calculation we only use the initial gradient produced by the initial correlation between both objects. This gradient changes with every iteration and quickly deviates from our first guess. This is the reason we do not try to create artificially bad gradients, as we already have a broad range of gradient qualities anyway.</p>
<p>We plotted our findings on the <em>improvement potential</em> in a similar way as we did before with the <em>regularity</em>. In figure one can clearly see the correlation, the spread within each setup, and the behaviour when we increase the number of control-points.</p>
<p>We also give the Spearman-coefficients and their p-values in table . Within one scenario we only find a <em>weak</em> to <em>moderate</em> correlation between the <em>improvement potential</em> and the fitting error, but all findings (except for <span class="math inline">\(7 \times 4 \times 4\)</span> and <span class="math inline">\(6 \times 6 \times 6\)</span>) are significant.</p>
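<p>The reported coefficients are plain Spearman rank correlations. A minimal sketch with SciPy on synthetic data follows; the arrays are made up for illustration (they are not the thesis measurements), only the statistic itself matches what the tables report:</p>

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-run measurements for one scenario:
improvement_potential = rng.uniform(0.0, 1.0, size=100)
# Fitting error that decreases with higher potential, plus noise,
# to mimic a weak-to-moderate monotone relation.
fitting_error = 1.0 - improvement_potential + rng.normal(0.0, 0.5, 100)

# Spearman's r_s with its p-value, as listed in the tables.
r_s, p = spearmanr(improvement_potential, fitting_error)
print(f"r_s = {r_s:.2f}, p = {p:.3g}")
```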
<p>If we take multiple datasets into account, the correlation is <em>very strong</em> and <em>significant</em>. This is good, as it functions as a litmus-test: the quality is naturally tied to the number of control-points.</p>
<p>All in all, the <em>improvement potential</em> seems to be a good and sensible measure of quality, even given gradients of varying quality.</p>
<p>Lastly, a small note on the behaviour of <em>improvement potential</em> and convergence speed, as we used this relationship in the 1D-case to argue why the <em>regularity</em> defied our expectations. As a contrast, we wanted to show that <em>improvement potential</em> cannot serve for good predictions of the convergence speed. In figure we show <em>improvement potential</em> against the number of iterations for both scenarios. As one can see, in the 1D-scenario we have a <em>strong</em> and <em>significant</em> correlation (with <span class="math inline">\(-r_S = -0.72\)</span>, <span class="math inline">\(p = 0\)</span>), whereas in the 3D-scenario we have the opposite <em>significant</em> and <em>strong</em> effect (with <span class="math inline">\(-r_S = 0.69\)</span>, <span class="math inline">\(p=0\)</span>), so these correlations clearly depend on the scenario and are not suited for generalization.</p>
</section>
</section>
</section>
<section id="discussion-and-outlook" class="slide level1">
<h1>Discussion and outlook</h1>
<p>In this thesis we took a look at the different criteria for <em>evolvability</em> as introduced by Richter et al., namely <em>variability</em>, <em>regularity</em>, and <em>improvement potential</em>, under different setup-conditions. Where Richter et al. used , we employed to set up a low-complexity parametrization of a more complex vertex-mesh.</p>
<p>In our findings we could show in the 1D-scenario that there were statistically <em>significant</em>, <em>very strong</em> correlations between <em>variability and fitting error</em> (<span class="math inline">\(0.94\)</span>) and between <em>improvement potential and fitting error</em> (<span class="math inline">\(1.0\)</span>), with results comparable to those of Richter et al. (<span class="math inline">\(0.31\)</span> to <span class="math inline">\(0.88\)</span> for the former and <span class="math inline">\(0.75\)</span> to <span class="math inline">\(0.99\)</span> for the latter), whereas we found only <em>weak</em> correlations for <em>regularity and convergence-speed</em> (<span class="math inline">\(0.28\)</span>) as opposed to Richter et al. with <span class="math inline">\(0.39\)</span> to <span class="math inline">\(0.91\)</span>.<a href="#/fn12" class="footnote-ref" id="fnref12"><sup>12</sup></a></p>
<p>For the 3D-scenario our results show a <em>very strong</em>, <em>significant</em> correlation between <em>variability and fitting error</em> with <span class="math inline">\(0.89\)</span> to <span class="math inline">\(0.94\)</span>, which is well in line with the findings of Richter et al. (<span class="math inline">\(0.65\)</span> to <span class="math inline">\(0.95\)</span>). The correlation between <em>improvement potential and fitting error</em> behaves similarly, with our findings having a significant coefficient of <span class="math inline">\(0.3\)</span> to <span class="math inline">\(0.95\)</span>, depending on the grid-resolution, compared to the <span class="math inline">\(0.61\)</span> to <span class="math inline">\(0.93\)</span> from Richter et al. In the case of the correlation of <em>regularity and convergence speed</em> we found very different (and often not significant) correlations and anti-correlations ranging from <span class="math inline">\(-0.25\)</span> to <span class="math inline">\(0.46\)</span>, whereas Richter et al. reported correlations between <span class="math inline">\(0.34\)</span> and <span class="math inline">\(0.87\)</span>.</p>
<p>Taking these results into consideration, one can say that <em>variability</em> and <em>improvement potential</em> are very good estimates for the quality of a fit using as a deformation function, while we could not reproduce results as compelling as those of Richter et al. for <em>regularity and convergence speed</em>.</p>
<p>One reason for the bad or erratic behaviour of the <em>regularity</em>-criterion could be that in an setting there is a likelihood of control-points that contribute only negligibly to the whole parametrization, resulting in very small singular values of the deformation-matrix <span class="math inline">\(\vec{U}\)</span> that influence the condition-number, and thus the <em>regularity</em>, in a significant way. Further research is needed to refine <em>regularity</em> so that these problems are addressed, for example by taking all singular values into account when capturing the notion of <em>regularity</em>.</p>
<p>Richter et al. also compared the behaviour of direct and indirect manipulation in , whereas we merely used an indirect approach. As direct manipulation tends to perform better than indirect manipulation, the usage of could also work better with the criteria we examined. This could also solve the problem of bad singular values for the <em>regularity</em>, as the incorporation of the parametrization of the points on the surface (the essential part of a direct manipulation) could cancel out a bad control-grid, because the bad control-points are never, or only negligibly, used to parametrize those surface-points.</p>
</section>
<section class="footnotes">
<hr />
<ol>
<li id="fn1"><p>One more for each recursive step.<a href="#/fnref1" class="footnote-back"></a></p></li>
<li id="fn2"><p><em>Warning:</em> in the case of <span class="math inline">\(d=1\)</span> the recursion-formula yields a <span class="math inline">\(0\)</span> denominator, but <span class="math inline">\(N\)</span> is also <span class="math inline">\(0\)</span>. The right solution for this case is a derivative of <span class="math inline">\(0\)</span>.<a href="#/fnref2" class="footnote-back"></a></p></li>
<li id="fn3"><p>Some examples of this are explained in detail in <a href="#/fnref3" class="footnote-back"></a></p></li>
<li id="fn4"><p>We use <span class="math inline">\(\vec{S}\)</span> in this notation, as we will use this parametrization of a source-mesh to manipulate <span class="math inline">\(\vec{S}\)</span> into a target-mesh <span class="math inline">\(\vec{T}\)</span> via <span class="math inline">\(\vec{P}\)</span>.<a href="#/fnref4" class="footnote-back"></a></p></li>
<li id="fn5"><p>Normally these are <span class="math inline">\(d-1\)</span> to each side, but at the boundaries border-points are used multiple times to meet the number of points required.<a href="#/fnref5" class="footnote-back"></a></p></li>
<li id="fn6"><p>One example would be when parts of an algorithm depend on the inverse of the minimal singular value, leading to a division by <span class="math inline">\(0\)</span>.<a href="#/fnref6" class="footnote-back"></a></p></li>
<li id="fn7"><p>For the special case of the outer layer we only applied noise away from the object, so the object is still confined in the convex hull of the control-points.<a href="#/fnref7" class="footnote-back"></a></p></li>
<li id="fn8"><p>The parametrization is encoded in <span class="math inline">\(\vec{U}\)</span> and the initial position of the control-points. See <a href="#/fnref8" class="footnote-back"></a></p></li>
<li id="fn9"><p>Note: on the edges this displacement is only applied outwards by flipping the sign of <span class="math inline">\(r\)</span>, if appropriate.<a href="#/fnref9" class="footnote-back"></a></p></li>
<li id="fn10"><p>Again, we flip the signs at the edges, if necessary, to keep the object in the convex hull.<a href="#/fnref10" class="footnote-back"></a></p></li>
<li id="fn11"><p>Displayed as <span class="math inline">\(Y \times Y \times Y\)</span><a href="#/fnref11" class="footnote-back"></a></p></li>
<li id="fn12"><p>We only took statistically <em>significant</em> results into consideration when compiling these numbers. Details are given in the respective chapters.<a href="#/fnref12" class="footnote-back"></a></p></li>
</ol>
</section>
</div>
</div>
<script src="./template/revealjs/lib/js/head.min.js"></script>
<script src="./template/revealjs/js/reveal.js"></script>
<script>
// More info https://github.com/hakimel/reveal.js#configuration
Reveal.initialize({
// reveal settings
controls: false,
progress: false,
slideNumber: true,
history: true,
center: false,
transition: 'none',
viewDistance: 2, // otherwise videos start early
width: 1024,
height: 768,
minScale: 0.2,
maxScale: 5, // if this threshold is reached, the chalkboard drawing will be wrongly positioned. hence large threshold!
// use local mathjax installation
math: { mathjax: './template/mathjax/MathJax.js', config: 'TeX-AMS_HTML-full' },
// setup chalkboard
chalkboard: {
src: "presentation.json",
readOnly: false,
theme: "chalkboard",
color: [ 'rgba(255,0,0,1)', 'rgba(255,255,255,1)' ],
background: [ 'rgba(0,0,0,0)' , './template/my-chalkboard/img/blackboard.png' ],
pen: [ './template/my-chalkboard/img/boardmarker.png', './template/my-chalkboard/img/chalk.png' ],
},
// setup reveal-menu
menu: {
side: 'right',
numbers: false,
titleSelector: 'h1',
hideMissingTitles: false,
markers: false,
custom: false,
themes: false,
transitions: false,
openButton: false,
openSlideNumber: true,
keyboard: true
},
// keyboard shortcuts
keyboard: {
40: function() { Reveal.next(); }, // down arrow: next slide
38: function() { Reveal.prev(); }, // up arrow: prev slide
67: function() { RevealChalkboard.toggleNotesCanvas() }, // c: draw on slides
84: function() { RevealChalkboard.toggleChalkboard() }, // t: draw on blackboard
69: function() { RevealChalkboard.toggleSponge() }, // e: toggle eraser
8: function() { RevealChalkboard.clear() }, // BACKSPACE: clear chalkboard
46: function() { RevealChalkboard.reset() }, // DELETE: reset chalkboard
68: function() { RevealChalkboard.download() }, // d: download chalkboard drawing
66: function() { RevealQuiz.toggleChart() } , // b: show quiz results
},
// load plugins
dependencies: [
{ src: './template/revealjs/plugin/math/math.js' },
{ src: './template/revealjs/plugin/notes/notes.js', async: true },
{ src: './template/revealjs/plugin/highlight/highlight.js', async: true, callback: function() {
var code_blocks = document.querySelectorAll('code');
for( var i = 0, len = code_blocks.length; i < len; i++ ) hljs.highlightBlock(code_blocks[i]);
}},
{ src: './template/revealjs/plugin/menu/menu.js' },
{ src: './template/my-chalkboard/chalkboard.js' }, // do not load this async ('ready' event is missing, print wont work)
{ src: './template/my-zoom/zoom.js', async: true },
{ src: './template/quiz/quiz.js', async: true }
]
});
</script>
</body>
</html>