finished presentation

Nicole Dresselhaus 2017-11-13 01:54:48 +01:00
parent 43ba0fa612
commit 2ca64a0c54
Signed by: Drezil
GPG Key ID: 057D94F356F41E25
3 changed files with 739 additions and 1679 deletions

@ -24,11 +24,6 @@
<link rel="stylesheet" href="./template/revealjs/css/highlight/xcode.css"> <link rel="stylesheet" href="./template/revealjs/css/highlight/xcode.css">
<!-- stuff for quiz -->
<script src="https://www.gstatic.com/charts/loader.js"></script>
<script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script src="http://davidshimjs.github.com/qrcodejs/qrcode.min.js"></script>
<!-- Printing and PDF exports -->
<script>
@ -68,6 +63,9 @@
of: ['\\mkern{-2mu}\\left( #1 \\right)', 1]
}
},
tex2jax: {
skipTags: ["script","noscript","style","textarea"],
},
"HTML-CSS": { "HTML-CSS": {
styles: { ".reveal section .MathJax_Display": { margin: "0.5em 0em" } }, styles: { ".reveal section .MathJax_Display": { margin: "0.5em 0em" } },
styles: { ".reveal table .MathJax_Display": { margin: "0em" } }, styles: { ".reveal table .MathJax_Display": { margin: "0em" } },
@ -101,173 +99,159 @@
<!-- all the slides from markdown document: DO NOT INDENT THE body LINE!!! -->
<section id="introduction" class="slide level1">
<h1>Introduction</h1>
<p>Many modern industrial design processes require advanced optimization methods due to the increased complexity resulting from more and more degrees of freedom as methods refine and/or other methods are used. Examples of this are physical domains like aerodynamics (e.g. drag) or fluid dynamics (e.g. throughput of liquid) — where the complexity increases with the temporal and spatial resolution of the simulation — or known hard algorithmic problems in informatics (e.g. layouting of circuit boards or stacking of 3D-objects). Moreover, these are typically not static environments: requirements shift over time or from case to case.</p>
<p>Evolutionary algorithms cope especially well with these problem domains while addressing all the issues at hand. One of the main concerns in these algorithms is the formulation of the problems in terms of a <em>genome</em> and a <em>fitness-function</em>. While one can typically use an arbitrary cost-function as the <em>fitness-function</em> (e.g. amount of drag, amount of space, etc.), the translation of the problem-domain into a simple parametric representation (the <em>genome</em>) can be challenging.</p>
<p>This translation is often necessary as the target of the optimization may have too many degrees of freedom for a reasonable computation. In the example of an aerodynamic simulation of drag onto an object, such object-designs tend to have a high number of vertices to adhere to various requirements (visual, practical, physical, etc.). A simpler representation of the same object in only a few parameters that manipulate the whole in a sensible manner is desirable, as this often decreases the computation time significantly.</p>
<p>Additionally one can exploit the fact that drag in this case is especially sensitive to non-smooth surfaces, so that a smooth local manipulation of the surface as a whole is more advantageous than merely random manipulation of the vertices.</p>
<p>The quality of such a low-dimensional representation in biological evolution is strongly tied to the notion of <em>evolvability</em>, as the parametrization of the problem has serious implications for the convergence speed and the quality of the solution. However, there is no consensus on how <em>evolvability</em> is defined and the meaning varies from context to context. As a consequence there is a need for criteria we can measure, so that we are able to compare different representations to learn and improve upon these.</p>
<p>One example of such a general representation of an object is to generate random points and represent vertices of an object as distances to these points — for example via . If one (or the algorithm) moves such a point, the object gets deformed only locally (due to the ). As this results in a simple mapping from the parameter-space onto the object, one can try out different representations of the same object and evaluate which criteria may be suited to describe this notion of <em>evolvability</em>. This is exactly what Richter et al. have done.</p>
<p>As we transfer the results of Richter et al. from using as a representation to manipulate geometric objects to the use of , we will use the same definition of <em>evolvability</em> the original author used, namely <em>regularity</em>, <em>variability</em>, and <em>improvement potential</em>. We introduce these terms in detail in Chapter . In the original publication the author could show a correlation between these evolvability-criteria and the quality and convergence speed of such an optimization.</p>
<p>We will replicate the same setup on the same objects but use instead of to create a local deformation near the control-points and evaluate whether the evolvability-criteria still work as a predictor for <em>evolvability</em> of the representation given the different deformation scheme, as suspected in .</p>
<p>First we introduce different topics in isolation in Chapter . We take an abstract look at the definition of for a one-dimensional line (in ) and discuss why this is a sensible deformation function (in ). Then we establish some background-knowledge of evolutionary algorithms (in ) and why this is useful in our domain (in ), followed by the definition of the different evolvability-criteria established in (in ).</p>
<p>In Chapter we take a look at our implementation of and the adaptation for 3D-meshes that we used. Next, in Chapter , we describe the different scenarios we use to evaluate the different evolvability-criteria, incorporating all aspects introduced in Chapter . Following that, we evaluate the results in Chapter , with further discussion, summary and outlook in Chapter .</p>
</section>
<section id="background" class="slide level1">
<h1>Background</h1>
<section id="what-is" class="level2">
<h2>What is ?</h2>
<p>First of all we have to establish how a works and why it is a good tool for deforming geometric objects (especially meshes in our case) in the first place. For simplicity we only summarize the 1D-case here and go into the extension to the 3D-case in chapter .</p>
<p>The main idea of is to create a function <span class="math inline">\(s : [0,1[^d \mapsto \mathbb{R}^d\)</span> that spans a certain part of a vector-space and is only linearly parametrized by some special control-points <span class="math inline">\(p_i\)</span> and a constant attribution-function <span class="math inline">\(a_i(u)\)</span>, so <span class="math display">\[
s(\vec{u}) = \sum_i a_i(\vec{u}) \vec{p_i}
\]</span> can be thought of as a representation of the inside of the convex hull generated by the control-points, where each position inside can be accessed by the right <span class="math inline">\(u \in [0,1[^d\)</span>.</p>
<p>In the 1-dimensional example in figure , the control-points are indicated as red dots and the colour-gradient hints at the <span class="math inline">\(u\)</span>-values ranging from <span class="math inline">\(0\)</span> to <span class="math inline">\(1\)</span>.</p>
<p>We now define a by the following:<br />
Given an arbitrary number of points <span class="math inline">\(p_i\)</span> along a line, we map a scalar value <span class="math inline">\(\tau_i \in [0,1[\)</span> to each point with <span class="math inline">\(\tau_i &lt; \tau_{i+1} \forall i\)</span> according to the position of <span class="math inline">\(p_i\)</span> on said line. Additionally, given a degree of the target polynomial <span class="math inline">\(d\)</span>, we define the curve <span class="math inline">\(N_{i,d,\tau_i}(u)\)</span> as follows:</p>
<span class="math display">\[\begin{equation} \label{eqn:ffd1d1}
N_{i,0,\tau}(u) = \begin{cases} 1, &amp; u \in [\tau_i, \tau_{i+1}[ \\ 0, &amp; \mbox{otherwise} \end{cases}
\end{equation}\]</span>
<p>and <span class="math display">\[\begin{equation} \label{eqn:ffd1d2}
N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}-\tau_i} N_{i,d-1,\tau}(u) + \frac{\tau_{i+d+1} - u}{\tau_{i+d+1}-\tau_{i+1}} N_{i+1,d-1,\tau}(u)
\end{equation}\]</span></p>
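<p>For illustration, this recursion translates almost directly into code. The following is a minimal sketch (Python; the function name and the assumption of a strictly increasing knot-vector <code>tau</code> are ours, not part of the implementation discussed later):</p>
<pre><code class="python">def basis(i, d, u, tau):
    # N_{i,0,tau}(u): indicator of the knot-interval [tau_i, tau_{i+1})
    if d == 0:
        return 1.0 if tau[i] &lt;= u &lt; tau[i + 1] else 0.0
    # recursion: blend two bases of degree d-1 (assumes strictly increasing tau)
    left = (u - tau[i]) / (tau[i + d] - tau[i]) * basis(i, d - 1, u, tau)
    right = (tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1]) * basis(i + 1, d - 1, u, tau)
    return left + right</code></pre>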
<p>If we now multiply every <span class="math inline">\(p_i\)</span> with the corresponding <span class="math inline">\(N_{i,d,\tau_i}(u)\)</span> we get the contribution of each point <span class="math inline">\(p_i\)</span> to the final curve-point parametrized only by <span class="math inline">\(u \in [0,1[\)</span>. As can be seen from , we only access points <span class="math inline">\([p_i..p_{i+d}]\)</span> for any given <span class="math inline">\(i\)</span><a href="#/fn1" class="footnote-ref" id="fnref1"><sup>1</sup></a>, which gives us, in combination with choosing <span class="math inline">\(p_i\)</span> and <span class="math inline">\(\tau_i\)</span> in order, only a local interference of <span class="math inline">\(d+1\)</span> points.</p>
<p>We can even derive this equation straightforwardly for an arbitrary <span class="math inline">\(N\)</span><a href="#/fn2" class="footnote-ref" id="fnref2"><sup>2</sup></a>:</p>
<p><span class="math display">\[\frac{\partial}{\partial u} N_{i,d,\tau}(u) = \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u)\]</span></p>
<p>For a B-Spline <span class="math display">\[s(u) = \sum_{i} N_{i,d,\tau_i}(u) p_i\]</span> these derivations yield <span class="math inline">\(\left(\frac{\partial}{\partial u}\right)^d s(u) = 0\)</span>.</p>
<p>Another interesting property of these recursive polynomials is that they are continuous (given <span class="math inline">\(d \ge 1\)</span>), as every <span class="math inline">\(p_i\)</span> gets blended in between <span class="math inline">\(\tau_i\)</span> and <span class="math inline">\(\tau_{i+d}\)</span> and out between <span class="math inline">\(\tau_{i+1}\)</span> and <span class="math inline">\(\tau_{i+d+1}\)</span>, as can be seen from the two coefficients in every step of the recursion.</p>
<p>This means that all changes are only a local linear combination between the control-points <span class="math inline">\(p_i\)</span> to <span class="math inline">\(p_{i+d+1}\)</span> and consequently this yields the convex-hull-property of B-Splines — meaning that, no matter how we choose our coefficients, the resulting points all have to lie inside the convex hull of the control-points.</p>
<p>For a given point <span class="math inline">\(s_i\)</span> we can then calculate the contributions <span class="math inline">\(u_{i,j}~:=~N_{j,d,\tau}\)</span> of each control-point <span class="math inline">\(p_j\)</span> to get the projection from the control-point-space into the object-space: <span class="math display">\[
s_i = \sum_j u_{i,j} \cdot p_j = \vec{u}_i^{T} \vec{p}
\]</span> or, written for all points at the same time: <span class="math display">\[
\vec{s} = \vec{U} \vec{p}
\]</span> where <span class="math inline">\(\vec{U}\)</span> is the <span class="math inline">\(n \times m\)</span> transformation-matrix (later on called <strong>deformation matrix</strong>) for <span class="math inline">\(n\)</span> object-space-points and <span class="math inline">\(m\)</span> control-points.</p>
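<p>Assembled naively, this matrix could be computed as in the following sketch (Python with NumPy; <code>basis</code> refers to the sketch above, all names are illustrative):</p>
<pre><code class="python">import numpy as np

def deformation_matrix(us, m, d, tau):
    # U[i, j] = N_{j,d,tau}(u_i): contribution of control-point j to object-point i
    U = np.zeros((len(us), m))
    for i, u in enumerate(us):
        for j in range(m):
            U[i, j] = basis(j, d, u, tau)
    return U

# the projection is then simply: s = U @ p</code></pre>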
<p>Furthermore, B-Spline-basis-functions form a partition of unity for all but the first and last <span class="math inline">\(d\)</span> control-points. Therefore we later on use the border-points <span class="math inline">\(d+1\)</span> times, such that <span class="math inline">\(\sum_j u_{i,j} p_j = p_i\)</span> for these points.</p>
<p>The locality of the influence of each control-point and the partition of unity was beautifully pictured by Brunet, which we include here as figure .</p>
<section id="why-is-a-good-deformation-function" class="level3">
<h3>Why is a good deformation function?</h3>
<p>The usage of as a tool for manipulating follows directly from the properties of the polynomials and the correspondence to the control-points. Having only a few control-points gives the user a nicer high-level interface, as she only needs to move these points and the model follows in an intuitive manner. The deformation is smooth, as the underlying polynomial is smooth as well, and affects as many vertices of the model as needed. Moreover, the changes are always local, so one does not risk any change that a user cannot immediately see.</p>
<p>But there are also disadvantages to this approach. The user loses the ability to directly influence vertices, and even seemingly simple tasks such as creating a plateau can be difficult to achieve.</p>
<p>These disadvantages led to the formulation of , in which the user directly interacts with the surface-mesh. All interactions are applied proportionally to the control-points that make up the parametrization of the interaction-point itself, yielding a smooth deformation of the surface <em>at</em> the surface without seemingly arbitrarily scattered control-points. Moreover, this increases the efficiency of an evolutionary optimization, which we will use later on.</p>
<p>But this approach also has downsides, as can be seen in figure : the tessellation of the invisible grid has a major impact on the deformation itself.</p>
<p>All in all, and are still good ways to deform a high-polygon mesh despite the downsides.</p>
</section>
</section>
<section id="what-is-evolutionary-optimization" class="level2">
<h2>What is evolutionary optimization?</h2>
<p>In this thesis we are using an evolutionary optimization strategy to solve the problem of finding the best parameters for our deformation. This approach, however, is very generic and we introduce it here in a broader sense.</p>
<p>The general shape of an evolutionary algorithm (adapted from ) is outlined in Algorithm . Here, <span class="math inline">\(P(t)\)</span> denotes the population of parameters in step <span class="math inline">\(t\)</span> of the algorithm. The population contains <span class="math inline">\(\mu\)</span> individuals <span class="math inline">\(a_i\)</span> from the set of possible individuals <span class="math inline">\(I\)</span> that fit the shape of the parameters we are looking for. Typically these are initialized by a random guess or just zero. Furthermore we need a so-called <em>fitness-function</em> <span class="math inline">\(\Phi : I \mapsto M\)</span> that maps each parameter to a measurable space <span class="math inline">\(M\)</span> (usually <span class="math inline">\(M = \mathbb{R}\)</span>), along with a convergence-function <span class="math inline">\(c : I \mapsto \mathbb{B}\)</span> that terminates the optimization.</p>
<p>Biologically speaking, the set <span class="math inline">\(I\)</span> corresponds to the set of possible <em>genotypes</em> while <span class="math inline">\(M\)</span> represents the possible observable <em>phenotypes</em>. <em>Genotypes</em> define all initial properties of an individual, but their properties are not directly observable. It is the genes that evolve over time (and thus correspond to the parameters we are tweaking in our algorithms or the genes in nature), but only the <em>phenotypes</em> make certain behaviour observable (algorithmically through our <em>fitness-function</em>, biologically by the ability to survive and produce offspring). Any individual in our algorithm thus experiences a biologically motivated life cycle: inheriting genes from the parents, modified by mutations, performing according to a fitness-metric, and generating offspring based on this. Therefore each iteration in the while-loop above is also often called a generation.</p>
<p>One should note that there is a subtle difference between the <em>fitness-function</em> and a so-called <em>genotype-phenotype-mapping</em>. The first one directly applies the <em>genotype-phenotype-mapping</em> and evaluates the performance of an individual, thus going directly from genes/parameters to reproduction-probability/score. In a concrete example the <em>genotype</em> can be an arbitrary vector (the genes), the <em>phenotype</em> is then a deformed object, and the performance can be a single measurement like an air-drag-coefficient. The <em>genotype-phenotype-mapping</em> would then just be the generation of different objects from that starting-vector, whereas the <em>fitness-function</em> would go directly from such a starting-vector to the coefficient that we want to optimize.</p>
<p>The main algorithm just repeats the following steps:</p>
<ul>
<li><strong>Recombine</strong> with a recombination-function <span class="math inline">\(r : I^{\mu} \mapsto I^{\lambda}\)</span> to generate <span class="math inline">\(\lambda\)</span> new individuals based on the characteristics of the <span class="math inline">\(\mu\)</span> parents.<br />
This makes sure that the next guess is close to the old guess.</li>
<li><strong>Mutate</strong> with a mutation-function <span class="math inline">\(m : I^{\lambda} \mapsto I^{\lambda}\)</span> to introduce new effects that cannot be produced by mere recombination of the parents.<br />
Typically this just adds minor defects to individual members of the population, like adding random Gaussian noise or amplifying/dampening random parts.</li>
<li><strong>Selection</strong> takes a selection-function <span class="math inline">\(s : (I^\lambda \cup I^{\mu + \lambda},\Phi) \mapsto I^\mu\)</span> that selects from the previously generated <span class="math inline">\(I^\lambda\)</span> children and optionally also the parents (denoted by the set <span class="math inline">\(Q\)</span> in the algorithm) using the <em>fitness-function</em> <span class="math inline">\(\Phi\)</span>. The result of this operation is the next population of <span class="math inline">\(\mu\)</span> individuals.</li>
</ul>
<ul>
<li>Many modern industrial design processes require advanced optimization methods due to increased complexity</li>
<li>Examples are
<ul>
<li>physical domains
<ul>
<li>aerodynamics (e.g. drag)</li>
<li>fluid dynamics (e.g. throughput of liquid)</li>
</ul></li>
<li>NP-hard problems
<ul>
<li>layouting of circuit boards</li>
<li>stacking of 3D-objects</li>
</ul></li>
</ul></li>
</ul>
<p>All these functions can (and mostly do) have a lot of hidden parameters that can be changed over time. A good overview of this is given in , so we only give a small excerpt here.</p>
<p>For example, the mutation can consist of merely a single <span class="math inline">\(\sigma\)</span> determining the strength of the Gaussian defects in every parameter — or giving a different <span class="math inline">\(\sigma\)</span> to every component of those parameters. An even more sophisticated example would be the 1/5 success rule from .</p>
<p>Also, in the selection-function it may not be wise to only take the best-performing individuals, because it may be that the optimization has to overcome a barrier of bad fitness to achieve a better local optimum.</p>
<p>Recombination also does not have to be a mere random choosing of parents, but can also take ancestry, distance of genes or groups of individuals into account.</p>
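<p>As a sketch, the generic loop could look as follows (Python; all operators are placeholders for the problem-specific choices described above, and a plus-selection over parents and children is assumed):</p>
<pre><code class="python">def evolve(init, fitness, recombine, mutate, select, done, mu, lam):
    # P: population of mu individuals, F: their fitness-values
    P = [init() for _ in range(mu)]
    F = [fitness(a) for a in P]
    while not done(F):
        children = mutate(recombine(P, lam))   # lambda new individuals
        P = select(P + children, fitness, mu)  # plus-selection: children and parents
        F = [fitness(a) for a in P]
    return P, F</code></pre>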
</section>
<section id="advantages-of-evolutionary-algorithms" class="level2">
<h2>Advantages of evolutionary algorithms</h2>
<p>The main advantage of evolutionary algorithms is the ability to find optima of general functions just with the help of a given <em>fitness-function</em>. Components and techniques for evolutionary algorithms are specifically known to help with different problems arising in the domain of optimization. An overview of the typical problems is shown in figure .</p>
<p>Most of the advantages stem from the fact that a gradient-based procedure usually has only one point of observation from which it evaluates the next steps, whereas an evolutionary strategy starts with a population of guessed solutions. Because an evolutionary strategy can be modified according to the problem-domain (e.g. by the ideas given above) it can also approximate very difficult problems in an efficient manner and even self-tune parameters depending on the ancestry at runtime<a href="#/fn3" class="footnote-ref" id="fnref3"><sup>3</sup></a>.</p>
<p>If an analytic best solution exists and is easily computable (e.g. because the error-function is convex) an evolutionary algorithm is not the right choice. Although both converge to the same solution, the analytic one is usually faster.</p>
<p>But in reality many problems have no analytic solution, because the problem is either not convex or there are so many parameters that an analytic solution (mostly meaning the equivalence to an exhaustive search) is computationally not feasible. Here evolutionary optimization has one more advantage, as one can at least get suboptimal solutions fast, which then refine over time and still converge to a decent solution much faster than an exhaustive search.</p>
</section>
<section id="motivation" class="slide level1">
<h1>Motivation</h1>
<ul>
<li>Evolutionary algorithms cope especially well with these problem domains <figure class="" style=""><img src="../arbeit/img/Evo_overview.png" style=""></img><figcaption>Example of the use of evolutionary algorithms in automotive design</figcaption></figure></li>
<li>But formulation can be tricky</li>
</ul>
</section>
<section id="criteria-for-the-evolvability-of-linear-deformations" class="level2"> <section id="motivation-1" class="slide level1">
<h2>Criteria for the evolvability of linear deformations</h2> <h1>Motivation</h1>
<ul>
<p>As we have established in chapter , we can describe a deformation by the formula <span class="math display">\[ <li>Problems tend to be very complex
\vec{S} = \vec{U}\vec{P} <ul>
\]</span> where <span class="math inline">\(\vec{S}\)</span> is a <span class="math inline">\(n \times d\)</span> matrix of vertices<a href="#/fn4" class="footnote-ref" id="fnref4"><sup>4</sup></a>, <span class="math inline">\(\vec{U}\)</span> are the (during parametrization) calculated deformationcoefficients and <span class="math inline">\(P\)</span> is a <span class="math inline">\(m \times d\)</span> matrix of controlpoints that we interact with during deformation.</p> <li>i.e. a surface with <span class="math inline">\(n\)</span> vertices has <span class="math inline">\(3\cdot n\)</span> Degrees of Freedom (DoF).</li>
<p>We can also think of the deformation in terms of differences from the original coordinates <span class="math display">\[ </ul></li>
\Delta \vec{S} = \vec{U} \cdot \Delta \vec{P} <li>Need for a small-dimensional representation that manipulates the high-dimensional problem-space.</li>
\]</span> which is isomorphic to the former due to the linearity of the deformation. One can see in this way, that the way the deformation behaves lies solely in the entries of <span class="math inline">\(\vec{U}\)</span>, which is why the three criteria focus on this.</p> <li>We concentrate on smooth deformations (<span class="math inline">\(C^3\)</span>-continuous)</li>
<section id="variability" class="level3"> <li>But what representation is good?</li>
<h3>Variability</h3> </ul>
<p>In , <em>variability</em> is defined as <span class="math display">\[\mathrm{variability}(\vec{U}) := \frac{\mathrm{rank}(\vec{U})}{n},\]</span> whereby <span class="math inline">\(\vec{U}\)</span> is the <span class="math inline">\(n \times m\)</span> deformation-matrix used to map the <span class="math inline">\(m\)</span> control-points onto the <span class="math inline">\(n\)</span> vertices.</p>
<p>Given <span class="math inline">\(n = m\)</span>, an identical number of control-points and vertices, this quotient will be <span class="math inline">\(=1\)</span> if all control-points are independent of each other and the solution is to trivially move every control-point onto a target-point.</p>
<p>In practice the value of <span class="math inline">\(\mathrm{variability}(\vec{U})\)</span> is typically <span class="math inline">\(\ll 1\)</span>, because there are only few control-points for many vertices, so <span class="math inline">\(m \ll n\)</span>.</p>
<p>This criterion should correlate with the degrees of freedom the given parametrization has. This can be seen from the fact that <span class="math inline">\(\mathrm{rank}(\vec{U})\)</span> is limited by <span class="math inline">\(\min(m,n)\)</span> and — as <span class="math inline">\(n\)</span> is constant — can never exceed <span class="math inline">\(n\)</span>.</p>
<p>The rank itself is also interesting, as control-points could theoretically be placed on top of each other or be linearly dependent in another way — but both cases lower the rank below the number of control-points <span class="math inline">\(m\)</span> and are thus measurable by the <em>variability</em>.</p>
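<p>Numerically this criterion is cheap to evaluate; a sketch (Python with NumPy, assuming the deformation-matrix <code>U</code> from before):</p>
<pre><code class="python">import numpy as np

def variability(U):
    n = U.shape[0]                       # number of vertices
    return np.linalg.matrix_rank(U) / n  # in [0, 1]</code></pre>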
</section>
<section id="regularity" class="level3">
<h3>Regularity</h3>
<p><em>Regularity</em> is defined as <span class="math display">\[\mathrm{regularity}(\vec{U}) := \frac{1}{\kappa(\vec{U})} = \frac{\sigma_{min}}{\sigma_{max}}\]</span> where <span class="math inline">\(\sigma_{min}\)</span> and <span class="math inline">\(\sigma_{max}\)</span> are the smallest and greatest right singular values of the deformation-matrix <span class="math inline">\(\vec{U}\)</span>.</p>
<p>As we deform the given object only based on the parameters as <span class="math inline">\(\vec{p} \mapsto f(\vec{x} + \vec{U}\vec{p})\)</span>, this makes sure that <span class="math inline">\(\|\vec{Up}\| \propto \|\vec{p}\|\)</span> when <span class="math inline">\(\kappa(\vec{U}) \approx 1\)</span>. The inversion of <span class="math inline">\(\kappa(\vec{U})\)</span> is only performed to map the criterion-range to <span class="math inline">\([0..1]\)</span>, where <span class="math inline">\(1\)</span> is the optimal value and <span class="math inline">\(0\)</span> is the worst value.</p>
<p>On the one hand this criterion should be characteristic for numeric stability and on the other hand for the convergence speed of evolutionary algorithms, as it is tied to the notion of locality.</p>
</section>
<section id="what-representation-is-good" class="slide level1">
<h1>What representation is good?</h1>
<ul>
<li>In biological evolution this measure is called <em>evolvability</em>.
<ul>
<li>no consensus on definition</li>
<li>meaning varies from context to context</li>
<li>measurable?</li>
</ul></li>
<li>Measure depends on representation as well.</li>
</ul>
</section>
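<p>A sketch of the <em>regularity</em>-criterion defined above (Python with NumPy; assumes the deformation-matrix <code>U</code>, names are illustrative):</p>
<pre><code class="python">import numpy as np

def regularity(U):
    sigma = np.linalg.svd(U, compute_uv=False)  # singular values, descending
    return sigma[-1] / sigma[0]                 # 1 / kappa(U), in [0, 1]</code></pre>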
<section id="improvement-potential" class="level3"> <section id="rbf-and-ffd" class="slide level1">
<h3>Improvement Potential</h3> <h1>RBF and FFD</h1>
<p>In contrast to the general nature of <em>variability</em> and <em>regularity</em>, which are agnostic of the <em>fitnessfunction</em> at hand, the third criterion should reflect a notion of the potential for optimization, taking a guess into account.</p> <ul>
<p>Most of the times some kind of gradient <span class="math inline">\(g\)</span> is available to suggest a direction worth pursuing; either from a previous iteration or by educated guessing. We use this to guess how much change can be achieved in the given direction.</p> <li>Andreas Richter uses Radial Basis Functions (RBF) to smoothly deform meshes</li>
<p>The definition for an <em>improvement potential</em> <span class="math inline">\(P\)</span> is: <span class="math display">\[ </ul>
\mathrm{potential}(\vec{U}) := 1 - \|(\vec{1} - \vec{UU}^+)\vec{G}\|^2_F <p><figure class="" style=""><img src="../arbeit/img/deformations.png" style=""></img><figcaption>Example of RBFbased deformation and FFD targeting the same mesh.</figcaption></figure></p>
\]</span> given some approximate <span class="math inline">\(n \times d\)</span> fitnessgradient <span class="math inline">\(\vec{G}\)</span>, normalized to <span class="math inline">\(\|\vec{G}\|_F = 1\)</span>, whereby <span class="math inline">\(\|\cdot\|_F\)</span> denotes the FrobeniusNorm.</p>
</section> </section>
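<p>The <em>improvement potential</em> defined above can be sketched with the Moore-Penrose pseudo-inverse (Python with NumPy; names are illustrative):</p>
<pre><code class="python">import numpy as np

def improvement_potential(U, G):
    G = G / np.linalg.norm(G)                   # normalize: ||G||_F = 1
    residual = G - U @ (np.linalg.pinv(U) @ G)  # (1 - U U^+) G
    return 1.0 - np.linalg.norm(residual) ** 2  # squared Frobenius-norm</code></pre>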
<section id="rbf-and-ffd-1" class="slide level1">
<h1>RBF and FFD</h1>
<ul>
<li>My master thesis transferred his idea to Freeform-Deformation (FFD)
<ul>
<li>same setup</li>
<li>same measurements</li>
<li>same results?</li>
</ul></li>
</ul>
<p><figure class="" style=""><img src="../arbeit/img/deformations.png" style=""></img><figcaption>Example of RBFbased deformation and FFD targeting the same mesh.</figcaption></figure></p>
</section> </section>
<section id="outline" class="slide level1">
<h1>Outline</h1>
<ul>
<li><strong>What is FFD?</strong></li>
<li>What is evolutionary optimization?</li>
<li>How to measure evolvability?</li>
<li>Scenarios</li>
<li>Results</li>
</ul>
</section>
</section>
<section id="implementation-of" class="slide level1"> <section id="what-is-ffd" class="slide level1">
<h1>Implementation of </h1> <h1>What is FFD?</h1>
<ul>
<p>The general formulation of BSplines has two free parameters <span class="math inline">\(d\)</span> and <span class="math inline">\(\tau\)</span> which must be chosen beforehand.</p> <li>Create a function <span class="math inline">\(s : [0,1[^d \mapsto \mathbb{R}^d\)</span> that is parametrized by some special controlpoints <span class="math inline">\(p_i\)</span> with coefficient functions <span class="math inline">\(a_i(u)\)</span>: <span class="math display">\[
<p>As we usually work with regular grids in our we define <span class="math inline">\(\tau\)</span> statically as <span class="math inline">\(\tau_i = \nicefrac{i}{n}\)</span> whereby <span class="math inline">\(n\)</span> is the number of controlpoints in that direction.</p> s(\vec{u}) = \sum_i a_i(\vec{u}) \vec{p_i}
<p><span class="math inline">\(d\)</span> defines the <em>degree</em> of the BSplineFunction (the number of times this function is differentiable) and for our purposes we fix <span class="math inline">\(d\)</span> to <span class="math inline">\(3\)</span>, but give the formulas for the general case so it can be adapted quite freely.</p> \]</span></li>
<section id="adaption-of" class="level2"> <li>All points inside the convex hull of <span class="math inline">\(\vec{p_i}\)</span> accessed by the right <span class="math inline">\(u \in [0,1[^d\)</span>.</li>
<h2>Adaption of </h2> </ul>
<p><figure class="" style=""><img src="../arbeit/img/B-Splines.png" style=""></img><figcaption>Example of a parametrization of a line with corresponding deformation to generate a deformed objet</figcaption></figure></p>
<p>As we have established in Chapter we can define an displacement as <span class="math display">\[\begin{equation}
\Delta_x(u) = \sum_i N_{i,d,\tau_i}(u) \Delta_x c_i
\end{equation}\]</span></p>
<p>Note that we only sum up the <span class="math inline">\(\Delta\)</span>displacements in the controlpoints <span class="math inline">\(c_i\)</span> to get the change in position of the point we are interested in.</p>
<p>In this way every deformed vertex is defined by <span class="math display">\[
\textrm{Deform}(v_x) = v_x + \Delta_x(u)
\]</span> with <span class="math inline">\(u \in [0..1[\)</span> being the variable that connects the highdetailed vertexmesh to the lowdetailed controlgrid. To actually calculate the new position of the vertex we first have to calculate the <span class="math inline">\(u\)</span>value for each vertex. This is achieved by finding out the parametrization of <span class="math inline">\(v\)</span> in terms of <span class="math inline">\(c_i\)</span> <span class="math display">\[
v_x \overset{!}{=} \sum_i N_{i,d,\tau_i}(u) c_i
\]</span> so we can minimize the error between those two: <span class="math display">\[
\underset{u}{\argmin}\,Err(u,v_x) = \underset{u}{\argmin}\,2 \cdot \|v_x - \sum_i N_{i,d,\tau_i}(u) c_i\|^2_2
\]</span> As this errorterm is quadratic we just derive by <span class="math inline">\(u\)</span> yielding <span class="math display">\[
\begin{array}{rl}
\frac{\partial}{\partial u} &amp; v_x - \sum_i N_{i,d,\tau_i}(u) c_i \\
= &amp; - \sum_i \left( \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u) \right) c_i
\end{array}
\]</span> and do a gradientdescend to approximate the value of <span class="math inline">\(u\)</span> up to an <span class="math inline">\(\epsilon\)</span> of <span class="math inline">\(0.0001\)</span>.</p>
<p>For this we employ the GaussNewton algorithm, which converges into the leastsquares solution. An exact solution of this problem is impossible most of the time, because we usually have way more vertices than controlpoints (<span class="math inline">\(\#v~\gg~\#c\)</span>).</p>
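<p>To make the iteration concrete, a minimal sketch (Python; <code>basis</code> is the recursion from before and <code>dbasis</code> stands for its derivative following the derivative-formula above — both are our illustrative names, not the actual implementation):</p>
<pre><code class="python">def find_u(vx, c, d, tau, eps=1e-4, u=0.5, max_iter=100):
    # 1D Gauss-Newton: minimize the residual vx - sum_i N_{i,d,tau}(u) * c_i
    for _ in range(max_iter):
        s = sum(basis(i, d, u, tau) * c[i] for i in range(len(c)))
        ds = sum(dbasis(i, d, u, tau) * c[i] for i in range(len(c)))  # derivative
        err = vx - s
        if abs(err) &lt; eps or ds == 0.0:
            break
        u += err / ds  # Newton-step on the residual
    return u</code></pre>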
</section>
</section>
<section id="adaption-of-for-a-3dmesh" class="level2"> <section id="definition-b-splines" class="slide level1">
<h2>Adaption of for a 3DMesh</h2> <h1>Definition B-Splines</h1>
<ul>
<p>This is a straightforward extension of the 1Dmethod presented in the last chapter. But this time things get a bit more complicated. As we have a 3dimensional grid we may have a different amount of controlpoints in each direction.</p> <li>The coefficient functions <span class="math inline">\(a_i(u)\)</span> in <span class="math inline">\(s(\vec{u}) = \sum_i a_i(\vec{u}) \vec{p_i}\)</span> are different for each control-point</li>
<p>Given <span class="math inline">\(n,m,o\)</span> controlpoints in <span class="math inline">\(x,y,z\)</span>direction each Point on the curve is defined by <span class="math display">\[V(u,v,w) = \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot C_{ijk}.\]</span></p> <li>Given a degree <span class="math inline">\(d\)</span> and position <span class="math inline">\(\tau_i\)</span> for the <span class="math inline">\(i\)</span>th control-point <span class="math inline">\(p_i\)</span> we define <span class="math display">\[\begin{equation}
<p>In this case we have three different BSplines (one for each dimension) and also 3 variables <span class="math inline">\(u,v,w\)</span> for each vertex we want to approximate.</p> N_{i,0,\tau}(u) = \begin{cases} 1, &amp; u \in [\tau_i, \tau_{i+1}[ \\ 0, &amp; \mbox{otherwise} \end{cases}
<p>Given a target vertex <span class="math inline">\(\vec{p}^*\)</span> and an initial guess <span class="math inline">\(\vec{p}=V(u,v,w)\)</span> we define the errorfunction for the gradientdescent as:</p> \end{equation}\]</span> and <span class="math display">\[\begin{equation} \label{eqn:ffd1d2}
<p><span class="math display">\[Err(u,v,w,\vec{p}^{*}) = \vec{p}^{*} - V(u,v,w)\]</span></p> N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}} N_{i,d-1,\tau}(u) + \frac{\tau_{i+d+1} - u}{\tau_{i+d+1}-\tau_{i+1}} N_{i+1,d-1,\tau}(u)
<p>And the partial version for just one direction as</p> \end{equation}\]</span></li>
<p><span class="math display">\[Err_x(u,v,w,\vec{p}^{*}) = p^{*}_x - \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x \]</span></p> <li>The derivatives of these coefficients are also easy to compute: <span class="math display">\[\frac{\partial}{\partial u} N_{i,d,r}(u) = \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u)\]</span></li>
<p>To solve this we derive partially, like before:</p> </ul>
<p><span class="math display">\[ </section>
<section id="properties-of-b-splines" class="slide level1">
<h1>Properties of B-Splines</h1>
<ul>
<li>Coefficients vanish after <span class="math inline">\(d\)</span> differentiations</li>
<li>Coefficients are continuous with respect to <span class="math inline">\(u\)</span></li>
<li>A change in prototypes only deforms the mapping locally<br />
(between <span class="math inline">\(p_i\)</span> to <span class="math inline">\(p_{i+d+1}\)</span>)</li>
</ul>
<p><figure class="" style=""><img src="../arbeit/img/unity.png" style=""></img><figcaption>Example of Basis-Functions for degree <span class="math inline">\(2\)</span>. [Brunet, 2010]<br /> Note, that Brunet starts his index at <span class="math inline">\(-d\)</span> opposed to our definition, where we start at <span class="math inline">\(0\)</span>.</figcaption></figure></p>
</section>
<section id="definition-ffd" class="slide level1">
<h1>Definition FFD</h1>
<ul>
<li>FFD is a space-deformation resulting based on the underlying B-Splines</li>
<li>Coefficients of space-mapping <span class="math inline">\(s(u) = \sum_j a_j(u) p_j\)</span> for an initial vertex <span class="math inline">\(v_i\)</span> are constant</li>
<li>Set <span class="math inline">\(u_{i,j}~:=~N_{j,d,\tau}\)</span> for each <span class="math inline">\(v_i\)</span> and <span class="math inline">\(p_j\)</span> to get the projection: <span class="math display">\[
v_i = \sum_j u_{i,j} \cdot p_j = \vec{u}_i^{T} \vec{p}
\]</span> or written with matrices: <span class="math display">\[
\vec{v} = \vec{U} \vec{p}
\]</span></li>
<li><span class="math inline">\(\vec{U}\)</span> is called <strong>deformation matrix</strong></li>
</ul>
</section>
<section id="implementation-of-ffd" class="slide level1">
<h1>Implementation of FFD</h1>
<ul>
<li>As we deal with 3D-Models we have to extend the introduced 1D-version</li>
<li>We get one parameter for each dimension: <span class="math inline">\(u,v,w\)</span> instead of <span class="math inline">\(u\)</span></li>
<li>Task: Find correct <span class="math inline">\(u,v,w\)</span> for each vertex in our model
<ul>
<li>We used a gradient-descent (via the gauss-newton algorithm)</li>
</ul></li>
</ul>
</section>
<section id="implementation-of-ffd-1" class="slide level1">
<h1>Implementation of FFD</h1>
<ul>
<li>Given <span class="math inline">\(n,m,o\)</span> control-points in <span class="math inline">\(x,y,z\)</span>direction each Point inside the convex hull is defined by <span class="math display">\[V(u,v,w) = \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot C_{ijk}.\]</span></li>
<li>Given a target vertex <span class="math inline">\(\vec{p}^*\)</span> and an initial guess <span class="math inline">\(\vec{p}=V(u,v,w)\)</span> we define the errorfunction for the gradientdescent as: <span class="math display">\[Err(u,v,w,\vec{p}^{*}) = \vec{p}^{*} - V(u,v,w)\]</span></li>
</ul>
</section>
<section id="implementation-of-ffd-2" class="slide level1">
<h1>Implementation of FFD</h1>
<ul>
<li>Derivation is straightforward <span class="math display">\[
\scriptsize
\begin{array}{rl} \begin{array}{rl}
\displaystyle \frac{\partial Err_x}{\partial u} &amp; p^{*}_x - \displaystyle \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x \\ \displaystyle \frac{\partial Err_x}{\partial u} &amp; p^{*}_x - \displaystyle \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x \\
= &amp; \displaystyle - \sum_i \sum_j \sum_k N&#39;_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x = &amp; \displaystyle - \sum_i \sum_j \sum_k N&#39;_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot {c_{ijk}}_x
\end{array} \end{array}
\]</span></p> \]</span> yielding a Jacobian:</li>
<p>The other partial derivatives follow the same pattern yielding the Jacobian:</p> </ul>
<p><span class="math display">\[ <p><span class="math display">\[
\scriptsize
J(Err(u,v,w)) = J(Err(u,v,w)) =
\left( \left(
\begin{array}{ccc} \begin{array}{ccc}
@ -276,226 +260,290 @@ J(Err(u,v,w)) =
\frac{\partial Err_z}{\partial u} &amp; \frac{\partial Err_z}{\partial v} &amp; \frac{\partial Err_z}{\partial w} \frac{\partial Err_z}{\partial u} &amp; \frac{\partial Err_z}{\partial v} &amp; \frac{\partial Err_z}{\partial w}
\end{array} \end{array}
\right) \right)
\]</span> <span class="math display">\[
\scriptsize
=
\left(
\begin{array}{ccc}
- \displaystyle \sum_{i,j,k} N&#39;_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_x &amp;- \displaystyle \sum_{i,j,k} N_{i}(u) N&#39;_{j}(v) N_{k}(w) \cdot {c_{ijk}}_x &amp; - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N&#39;_{k}(w) \cdot {c_{ijk}}_x \\
- \displaystyle \sum_{i,j,k} N&#39;_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_y &amp;- \displaystyle \sum_{i,j,k} N_{i}(u) N&#39;_{j}(v) N_{k}(w) \cdot {c_{ijk}}_y &amp; - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N&#39;_{k}(w) \cdot {c_{ijk}}_y \\
- \displaystyle \sum_{i,j,k} N&#39;_{i}(u) N_{j}(v) N_{k}(w) \cdot {c_{ijk}}_z &amp;- \displaystyle \sum_{i,j,k} N_{i}(u) N&#39;_{j}(v) N_{k}(w) \cdot {c_{ijk}}_z &amp; - \displaystyle \sum_{i,j,k} N_{i}(u) N_{j}(v) N&#39;_{k}(w) \cdot {c_{ijk}}_z
\end{array}
\right)
\]</span></p> \]</span></p>
<p>With the GaussNewton algorithm we iterate via the formula <span class="math display">\[J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)\]</span> and use Cramers rule for inverting the small Jacobian and solving this system of linear equations.</p> </section>
<p>As there is no strict upper bound of the number of iterations for this algorithm, we just iterate it long enough to be within the given <span class="math inline">\(\epsilon\)</span>error above. This takes — depending on the shape of the object and the grid — about <span class="math inline">\(3\)</span> to <span class="math inline">\(5\)</span> iterations that we observed in practice.</p> <section id="implementation-of-ffd-3" class="slide level1">
<p>Another issue that we observed in our implementation is, that multiple local optima may exist on selfintersecting grids. We solve this problem by defining selfintersecting grids to be <em>invalid</em> and do not test any of them.</p> <h1>Implementation of FFD</h1>
<p>This is not such a big problem as it sounds at first, as selfintersections mean, that controlpoints being further away from a given vertex have more influence over the deformation than controlpoints closer to this vertex. Also this contradicts the notion of locality that we want to achieve and deemed beneficial for a good behaviour of the evolutionary algorithm.</p> <ul>
<div id="deformation-grid"> <li>Armed with this we iterate the formula <span class="math display">\[J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)\]</span> using Cramers rule for inverting the small Jacobian.</li>
<li>Usually terminates after <span class="math inline">\(3\)</span> to <span class="math inline">\(5\)</span> iteration with an <span class="math inline">\(\epsilon := \vec{p^*} - V(u,v,w) &lt; 10^{-4}\)</span></li>
<li>self-intersecting grids can invalidate the results
<ul>
<li>no problem, as these get not generated and contradict some properties we want (like locality)</li>
</ul></li>
</ul>
</section>
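<p>A sketch of one such iteration-step using Cramer&#39;s rule (Python with NumPy; this is an illustration, not the actual implementation):</p>
<pre><code class="python">import numpy as np

def newton_step(J, err):
    # solve J @ delta = -err for the 3x3 Jacobian via Cramer's rule
    det = np.linalg.det(J)
    delta = np.empty(3)
    for k in range(3):
        Jk = J.copy()
        Jk[:, k] = -err  # replace the k-th column by the right-hand side
        delta[k] = np.linalg.det(Jk) / det
    return delta  # update: (u, v, w) += delta</code></pre>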
<section id="outline-1" class="slide level1">
<h1>Outline</h1>
<ul>
<li>What is FFD?</li>
<li><strong>What is evolutionary optimization?</strong></li>
<li>How to measure evolvability?</li>
<li>Scenarios</li>
<li>Results</li>
</ul>
</section>
<section id="what-is-evolutionary-optimization" class="slide level1">
<h1>What is evolutionary optimization?</h1>
<div id="section">
<div style="width:50%;float:left"> <div style="width:50%;float:left">
<pre><code data-noescape data-trim class="" style="">$t := 0$;
initialize $P(0) := \{\vec{a}_1(0),\dots,\vec{a}_\mu(0)\} \in I^\mu$;
evaluate $F(0) : \{\Phi(x) | x \in P(0)\}$;
while($c(F(t)) \neq$ true) {
recombine: $P'(t) := r(P(t))$;
mutate: $P''(t) := m(P'(t))$;
evaluate $F(t) : \{\Phi(x) | x \in P''(t)\}$
select: $P(t + 1) := s(P''(t) \cup Q,\Phi)$;
$t := t + 1$;
}</code></pre>
</div>
<p>As mentioned in chapter , the way of choosing the representation to map the general problem (mesh-fitting/optimization in our case) into a parameter-space is very important for the quality and runtime of evolutionary algorithms.</p>
<div style="width:50%;float:left">
<pre><code data-noescape data-trim class="" style="">$t$: Iteration-step
$I$: Set of possible Individuals
$P$: Population of Individuals
$F$: Fitness of Individuals
$Q$: Either set of parents or $\emptyset$
$r(..) : I^\mu \mapsto I^\lambda$
$m(..) : I^\lambda \mapsto I^\lambda$
$s(..) : I^{\lambda + \mu} \mapsto I^\mu$</code></pre>
</div>
<div style="clear: both">
</div>
</div>
<p>Because our control-points are arranged in a grid, we can accurately represent each vertex-point inside the grid's volume with proper B-Spline-coefficients between <span class="math inline">\([0,1[\)</span> and — as a consequence — we have to embed our object into it (or create constant "dummy"-points outside).</p>
<p>The great advantage of B-Splines is the local, direct impact of each control-point without having a <span class="math inline">\(1:1\)</span>-correlation, and a smooth deformation. While the advantages are great, the issues arise from the problem of deciding where to place the control-points and how many to place at all.</p>
<p>One would normally think that the more control-points you add, the better the result will be, but this is not the case for our B-Splines. Given any point <span class="math inline">\(\vec{p}\)</span>, only <span class="math inline">\(2 \cdot (d-1)\)</span> control-points contribute to the parametrization of that point<a href="#/fn5" class="footnote-ref" id="fnref5"><sup>5</sup></a>. This means that a high resolution can have many control-points that are not contributing to any point on the surface and are thus completely irrelevant to the solution.</p>
<p>We illustrate this phenomenon in figure , where the red central points are not relevant for the parametrization of the circle. This leads to artefacts in the deformation-matrix <span class="math inline">\(\vec{U}\)</span>, as the columns corresponding to those control-points are <span class="math inline">\(0\)</span>.</p>
<p>This also leads to uselessly increased complexity, as the parameters corresponding to those points will never have any effect, but a naive algorithm will still try to optimize them, yielding numeric artefacts in the best case and non-terminating or ill-defined solutions<a href="#/fn6" class="footnote-ref" id="fnref6"><sup>6</sup></a> at worst.</p>
<p>One can of course neglect those columns and their corresponding control-points, but this raises the question why they were introduced in the first place. We will address this in a special scenario in .</p>
<p>For our tests we chose different uniformly sized grids and added noise onto each control-point<a href="#/fn7" class="footnote-ref" id="fnref7"><sup>7</sup></a> to simulate different starting-conditions.</p>
</section>
<ul>
<li>Algorithm to model simple inheritance</li>
<li>Consists of three main steps
<ul>
<li>recombination</li>
<li>mutation</li>
<li>selection</li>
</ul></li>
<li>An “individual” in our case is the displacement of control-points</li>
</ul>
</section>
<section id="evolutional-loop" class="slide level1">
<h1>Evolutional loop</h1>
<ul>
<li><strong>Recombination</strong> generates <span class="math inline">\(\lambda\)</span> new individuals based on the characteristics of the <span class="math inline">\(\mu\)</span> parents.
<ul>
<li>This makes sure that the next guess is close to the old guess.</li>
</ul></li>
<li><strong>Mutation</strong> introduces new effects that cannot be produced by mere recombination of the parents.
<ul>
<li>Typically these are minor defects to individual members of the population, e.g. through added noise</li>
</ul></li>
<li><strong>Selection</strong> selects <span class="math inline">\(\mu\)</span> individuals from the children (and optionally the parents) using a <em>fitness-function</em> <span class="math inline">\(\Phi\)</span>.
<ul>
<li>Fitness could mean low error, good improvement, etc.</li>
<li>Fitness does not solely determine who survives; there are many possibilities</li>
</ul></li>
</ul>
</section>
</section>
<section id="scenarios-for-testing-evolvabilitycriteria-using" class="slide level1"> <section id="outline-2" class="slide level1">
<h1>Scenarios for testing evolvabilitycriteria using </h1> <h1>Outline</h1>
<ul>
<p>In our experiments we use the same two testingscenarios, that were also used by Richter et al. The first scenario deforms a plane into a shape originally defined by Giannelli et al., where we setup controlpoints in a 2dimensional manner and merely deform in the heightcoordinate to get the resulting shape.</p> <li>What is FFD?</li>
<p>In the second scenario we increase the degrees of freedom significantly by using a 3dimensional controlgrid to deform a sphere into a face, so each control point has three degrees of freedom in contrast to first scenario.</p> <li>What is evolutionary optimization?</li>
<section id="test-scenario-1d-function-approximation" class="level2"> <li><strong>How to measure evolvability?</strong></li>
<h2>Test Scenario: 1D Function Approximation</h2> <li>Scenarios</li>
<p>In this scenario we used the shape defined by Giannelli et al., which is also used by Richter et al. using the same discretization to <span class="math inline">\(150 \times 150\)</span> points for a total of <span class="math inline">\(n = 22\,500\)</span> vertices. The shape is given by the following definition <span class="math display">\[\begin{equation} <li>Results</li>
t(x,y) = </ul>
\begin{cases}
0.5 \cos(4\pi \cdot q^{0.5}) + 0.5 &amp; q(x,y) &lt; \frac{1}{16},\\
2(y-x) &amp; 0 &lt; y-x &lt; 0.5,\\
1 &amp; 0.5 &lt; y - x
\end{cases}
\end{equation}\]</span> with <span class="math inline">\((x,y) \in [0,2] \times [0,1]\)</span> and <span class="math inline">\(q(x,y)=(x-1.5)^2 + (y-0.5)^2\)</span>, which we have visualized in figure .</p>
<p>As the starting-plane we used the same shape, but set all <span class="math inline">\(z\)</span>-coordinates to <span class="math inline">\(0\)</span>, yielding a flat plane, which is partially already correct.</p>
<p>Regarding the <em>fitness-function</em> <span class="math inline">\(\mathrm{f}(\vec{p})\)</span>, we use the very simple approach of calculating the squared distances for each corresponding vertex <span class="math display">\[\begin{equation}
\mathrm{f}(\vec{p}) = \sum_{i=1}^{n} \|(\vec{Up})_i - t_i\|_2^2 = \|\vec{Up} - \vec{t}\|^2 \rightarrow \min
\end{equation}\]</span> where <span class="math inline">\(t_i\)</span> are the respective target-vertices to the parametrized source-vertices<a href="#/fn8" class="footnote-ref" id="fnref8"><sup>8</sup></a> with the current deformation-parameters <span class="math inline">\(\vec{p} = (p_1,\dots, p_m)\)</span>. We can use this one-to-one-correspondence because we have exactly the same number of source- and target-vertices due to our setup of just flattening the object.</p>
<p>This formula is also the least-squares approximation error, for which we can compute the analytic solution <span class="math inline">\(\vec{p^{*}} = \vec{U^+}\vec{t}\)</span>, yielding the correct gradient in which the evolutionary optimizer should move.</p>
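<p>Both the fitness and its analytic optimum can be sketched directly (Python with NumPy; names are illustrative):</p>
<pre><code class="python">import numpy as np

def fitness(U, p, t):
    return np.linalg.norm(U @ p - t) ** 2  # squared distance to the target-vertices

def analytic_solution(U, t):
    return np.linalg.pinv(U) @ t  # least-squares optimum p* = U^+ t</code></pre>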
</section>
<section id="test-scenario-3d-function-approximation" class="level2"> <section id="how-to-measure-evolvability" class="slide level1">
<h2>Test Scenario: 3D Function Approximation</h2> <h1>How to measure evolvability?</h1>
<p> Opposed to the 1dimensional scenario before, the 3dimensional scenario is much more complex — not only because we have more degrees of freedom on each control point, but also, because the <em>fitnessfunction</em> we will use has no known analytic solution and multiple local minima.</p> <ul>
<li>Different (conflicting) optimization targets
<p>First of all we introduce the set up: We have given a triangulated model of a sphere consisting of <span class="math inline">\(10\,807\)</span> vertices, that we want to deform into a the targetmodel of a face with a total of <span class="math inline">\(12\,024\)</span> vertices. Both of these Models can be seen in figure .</p> <ul>
<p>Opposed to the 1Dcase we cannot map the source and targetvertices in a onetoonecorrespondence, which we especially need for the approximation of the fittingerror. Hence we state that the error of one vertex is the distance to the closest vertex of the respective other model and sum up the error from the source and target.</p> <li>convergence speed?</li>
<p>We therefore define the <em>fitnessfunction</em> to be:</p> <li>convergence quality?</li>
<span class="math display">\[\begin{equation} </ul></li>
\mathrm{f}(\vec{P}) = \frac{1}{n} \underbrace{\sum_{i=1}^n \|\vec{c_T(s_i)} - <li>As <span class="math inline">\(\vec{v} = \vec{U}\vec{p}\)</span> is linear, we can also look at <span class="math inline">\(\Delta \vec{v} = \vec{U}\, \Delta \vec{p}\)</span>
\vec{s_i}\|_2^2}_{\textrm{source--to--target--distance}} <ul>
+ \frac{1}{m} \underbrace{\sum_{i=1}^m \|\vec{c_S(t_i)} - <li>We only change <span class="math inline">\(\Delta \vec{p}\)</span>, so evolvability should only use <span class="math inline">\(\vec{U}\)</span> for predictions</li>
\vec{t_i}\|_2^2}_{\textrm{target--to--source--distance}} </ul></li>
+ \lambda \cdot \textrm{regularization}(\vec{P}) </ul>
\label{eq:fit3d}
\end{equation}\]</span>
<p>where <span class="math inline">\(\vec{c_T(s_i)}\)</span> denotes the targetvertex that is corresponding to the sourcevertex <span class="math inline">\(\vec{s_i}\)</span> and <span class="math inline">\(\vec{c_S(t_i)}\)</span> denotes the sourcevertex that corresponds to the targetvertex <span class="math inline">\(\vec{t_i}\)</span>. Note that the targetvertices are given and fixed by the targetmodel of the face we want to deform into, whereas the sourcevertices vary depending on the chosen parameters <span class="math inline">\(\vec{P}\)</span>, as those get calculated by the previously introduces formula <span class="math inline">\(\vec{S} = \vec{UP}\)</span> with <span class="math inline">\(\vec{S}\)</span> being the <span class="math inline">\(n \times 3\)</span>matrix of sourcevertices, <span class="math inline">\(\vec{U}\)</span> the <span class="math inline">\(n \times m\)</span>matrix of calculated coefficients for the — analog to the 1D case — and finally <span class="math inline">\(\vec{P}\)</span> being the <span class="math inline">\(m \times 3\)</span>matrix of the controlgrid defining the whole deformation.</p>
<p>As regularizationterm we add a weighted Laplacian of the deformation that has been used before by Aschenbach et al. on similar models and was shown to lead to a more precise fit. The Laplacian <span class="math display">\[\begin{equation}
\mathrm{regularization}(\vec{P}) = \frac{1}{\sum_i A_i} \sum_{i=1}^n A_i \cdot \left( \sum_{\vec{s}_j \in \mathcal{N}(\vec{s}_i)} w_j \cdot \|\Delta \vec{s}_j - \Delta \vec{s}_i\|^2 \right)
\label{eq:reg3d}
\end{equation}\]</span> is determined by the cotangent weighted displacement <span class="math inline">\(w_j\)</span> of the to <span class="math inline">\(s_i\)</span> connected vertices <span class="math inline">\(\mathcal{N}(s_i)\)</span> and <span class="math inline">\(A_i\)</span> is the Voronoiarea of the corresponding vertex <span class="math inline">\(\vec{s_i}\)</span>. We leave out the <span class="math inline">\(\vec{R}_i\)</span>term from the original paper as our deformation is merely linear.</p>
<p>This regularizationweight gives us a measure of stiffness for the material that we will influence via the <span class="math inline">\(\lambda\)</span>coefficient to start out with a stiff material that will get more flexible per iteration. As a sideeffect this also limits the effects of overagressive movement of the controlpoints in the beginning of the fitting process and thus should limit the generation of illdefined grids mentioned in section .</p>
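<p>The two distance-sums could be sketched with a spatial search-structure (Python with SciPy; the regularization-term is omitted here and all names are illustrative):</p>
<pre><code class="python">import numpy as np
from scipy.spatial import cKDTree

def matching_error(S, T):
    # S: n x 3 source-vertices, T: m x 3 target-vertices
    d_st, _ = cKDTree(T).query(S)  # each source-vertex to its closest target-vertex
    d_ts, _ = cKDTree(S).query(T)  # each target-vertex to its closest source-vertex
    return np.mean(d_st ** 2) + np.mean(d_ts ** 2)</code></pre>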
</section>
<section id="evolvability-criteria" class="slide level1">
<h1>Evolvability criteria</h1>
<ul>
<li><strong>Variability</strong>
<ul>
<li>roughly: “How many actual Degrees of Freedom exist?”</li>
<li>Defined by <span class="math display">\[\mathrm{variability}(\vec{U}) := \frac{\mathrm{rank}(\vec{U})}{n} \in [0..1]\]</span></li>
<li>in FFD this is <span class="math inline">\(1/\#\textrm{CP}\)</span> for the number of control-points used for parametrization</li>
</ul></li>
</ul>
</section>
</section>
<section id="evaluation-of-scenarios" class="slide level1"> <section id="evolvability-criteria-1" class="slide level1">
<h1>Evaluation of Scenarios</h1> <h1>Evolvability criteria</h1>
<ul>
<p>To compare our results to the ones given by Richter et al., we also use Spearmans rank correlation coefficient. Opposed to other popular coefficients, like the Pearson correlation coefficient, which measures a linear relationship between variables, the Spearmans coefficient assesses how well an arbitrary monotonic function can describe the relationship between two variables, without making any assumptions about the frequency distribution of the variables.</p> <li><strong>Regularity</strong>
<p>As we dont have any prior knowledge if any of the criteria is linear and we are just interested in a monotonic relation between the criteria and their predictive power, the Spearmans coefficient seems to fit out scenario best and was also used before by Richter et al.</p> <ul>
<p>For the interpretation of these values we follow the same scheme used in , based on : the coefficient intervals <span class="math inline">\(r_S \in [0,0.2[\)</span>, <span class="math inline">\([0.2,0.4[\)</span>, <span class="math inline">\([0.4,0.6[\)</span>, <span class="math inline">\([0.6,0.8[\)</span>, and <span class="math inline">\([0.8,1]\)</span> are classified as <em>very weak</em>, <em>weak</em>, <em>moderate</em>, <em>strong</em> and <em>very strong</em>. We interpret p-values smaller than <span class="math inline">\(0.01\)</span> as <em>significant</em> and cut off the precision of p-values after four decimal digits (thus often giving a p-value of <span class="math inline">\(0\)</span> for p-values <span class="math inline">\(&lt; 10^{-4}\)</span>).</p> <li>roughly: “How numerically stable is the optimization?”</li>
<p>As we are looking for anti-correlation (i.e. our criterion should be maximized, indicating a minimal result in — for example — the reconstruction-error) instead of correlation, we flip the sign of the correlation-coefficient for readability and to keep the correlation-coefficients within the classification-range given above.</p> <li>Defined by <span class="math display">\[\mathrm{regularity}(\vec{U}) := \frac{1}{\kappa(\vec{U})} = \frac{\sigma_{min}}{\sigma_{max}} \in [0..1]\]</span> with <span class="math inline">\(\sigma_{min/max}\)</span> being the least/greatest right singular value.</li>
<p>For the evolutionary optimization we employ the optimizer of the shark3.1 library, as this algorithm was used by Richter et al. as well. We leave the parameters at their sensible defaults as further explained in .</p> <li>high, when <span class="math inline">\(\|\vec{Up}\| \propto \|\vec{p}\|\)</span></li>
<section id="procedure-1d-function-approximation" class="level2"> </ul></li>
<h2>Procedure: 1D Function Approximation</h2> </ul>
<p>For our setup we first compute the coefficients of the deformation-matrix and use the formulas for <em>variability</em> and <em>regularity</em> to get our predictions. Afterwards we solve the problem analytically to get the (normalized) correct gradient, which we use as a guess for the <em>improvement potential</em>. To further test the <em>improvement potential</em> we also consider a distorted gradient <span class="math inline">\(\vec{g}_{\mathrm{d}}\)</span>: <span class="math display">\[
\vec{g}_{\mathrm{d}} = \frac{\mu \vec{g}_{\mathrm{c}} + (1-\mu)\mathbb{1}}{\|\mu \vec{g}_{\mathrm{c}} + (1-\mu) \mathbb{1}\|}
\]</span> where <span class="math inline">\(\mathbb{1}\)</span> is the vector consisting of <span class="math inline">\(1\)</span> in every dimension, <span class="math inline">\(\vec{g}_\mathrm{c} = \vec{p^{*}} - \vec{p}\)</span> is the calculated correct gradient, and <span class="math inline">\(\mu\)</span> is used to blend between <span class="math inline">\(\vec{g}_\mathrm{c}\)</span> and <span class="math inline">\(\mathbb{1}\)</span>. As we always start at <span class="math inline">\(\vec{p} = \mathbb{0}\)</span>, the definition of <span class="math inline">\(\vec{g}_\mathrm{c}\)</span> shortens to <span class="math inline">\(\vec{g}_\mathrm{c} = \vec{p^{*}}\)</span>.</p>
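<p>A small sketch of this blending (our naming; <code>g_c</code> is the calculated correct gradient):</p>
<pre><code class="python">import numpy as np

def distorted_gradient(g_c, mu):
    # blend the correct gradient with the all-ones vector and renormalize
    g = mu * g_c + (1.0 - mu) * np.ones_like(g_c)
    return g / np.linalg.norm(g)
</code></pre>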
<p>We then set up a regular 2-dimensional grid around the object with the desired grid resolution. To generate a test-case we move the grid-vertices randomly inside the xy-plane. As self-intersecting grids get tricky to solve with our implemented Newton's method (see section ) we avoid generating such self-intersecting grids for our test-cases.</p>
<p>To achieve this we generated a Gaussian-distributed number with <span class="math inline">\(\mu = 0, \sigma=0.25\)</span> and clamped it to the range <span class="math inline">\([-0.25,0.25]\)</span>. We chose such an <span class="math inline">\(r \in [-0.25,0.25]\)</span> per dimension and moved the control-points by that factor towards their respective neighbours<a href="#/fn9" class="footnote-ref" id="fnref9"><sup>9</sup></a>.</p>
<p>In other words we set <span class="math display">\[\begin{equation*}
p_i =
\begin{cases}
p_i + (p_i - p_{i-1}) \cdot r, &amp; \textrm{if } r \textrm{ negative} \\
p_i + (p_{i+1} - p_i) \cdot r, &amp; \textrm{if } r \textrm{ positive}
\end{cases}
\end{equation*}\]</span> in each dimension separately.</p>
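<p>A minimal sketch of this displacement rule for one dimension (our own naming; the sign-flipping at the grid-edges from the footnote is omitted for brevity):</p>
<pre><code class="python">import numpy as np

def distort(points, sigma=0.25, rng=None):
    # move each 1D control-point coordinate towards one of its neighbours
    # by a clamped, Gaussian-distributed factor r
    rng = rng or np.random.default_rng()
    p = points.copy()
    for i in range(len(points)):
        r = float(np.clip(rng.normal(0.0, sigma), -0.25, 0.25))
        if r &lt; 0 and i &gt; 0:
            p[i] += (points[i] - points[i - 1]) * r
        elif r &gt; 0 and i + 1 &lt; len(points):
            p[i] += (points[i + 1] - points[i]) * r
    return p
</code></pre>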
<p>An example of such a test-case for a <span class="math inline">\(7 \times 4\)</span>-grid can be seen in figure .</p>
</section> </section>
<section id="results-of-1d-function-approximation" class="level2"> <section id="evolvability-criteria-2" class="slide level1">
<h2>Results of 1D Function Approximation</h2> <h1>Evolvability criteria</h1>
<p>In the case of our 1D-optimization-problem we have the luxury of knowing the analytical solution to the given problem-set. We use this to experimentally evaluate the quality criteria we introduced before. As an evolutionary optimization is partially a random process, we use the analytical solution as a stopping-criterion. We measure the convergence speed as the number of iterations the evolutionary algorithm needs to get within <span class="math inline">\(1.05 \times\)</span> of the optimal solution.</p>
<p>We used different regular grids that we manipulated as explained in Section with a different number of control-points. As our grid-dimensions have to be a product of two integers, we compared a <span class="math inline">\(5 \times 5\)</span>-grid with <span class="math inline">\(25\)</span> control-points to a <span class="math inline">\(4 \times 7\)</span>- and a <span class="math inline">\(7 \times 4\)</span>-grid with <span class="math inline">\(28\)</span> control-points. This was done to measure the impact an improper grid-setup could have and how well this is reflected in the criteria we are examining.</p>
<p>Additionally we measured the effect of increasing the total resolution of the grid by taking a closer look at <span class="math inline">\(5 \times 5\)</span>, <span class="math inline">\(7 \times 7\)</span> and <span class="math inline">\(10 \times 10\)</span> grids.</p>
<section id="variability-1" class="level3"> <li>roughly: “How good can the best fit become?”</li>
<h3>Variability</h3> <li>Defined by <span class="math display">\[\mathrm{potential}(\vec{U}) := 1 - \|(\vec{1} - \vec{UU}^+)\vec{G}\|^2_F\]</span> with a unit-normed guessed gradient <span class="math inline">\(\vec{G}\)</span></li>
</ul></li>
<p><em>Variability</em> should characterize the potential for design space exploration and is defined in terms of the normalized rank of the deformation matrix <span class="math inline">\(\vec{U}\)</span>: <span class="math inline">\(V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n}\)</span>, whereby <span class="math inline">\(n\)</span> is the number of vertices. As all our tested matrices had a constant rank (being <span class="math inline">\(m = x \cdot y\)</span> for an <span class="math inline">\(x \times y\)</span> grid), we merely plotted the errors in the box plot in figure .</p>
<p>It is also noticeable that although the <span class="math inline">\(7 \times 4\)</span> and <span class="math inline">\(4 \times 7\)</span> grids have a higher <em>variability</em>, they do not perform better than the <span class="math inline">\(5 \times 5\)</span> grid. The <span class="math inline">\(7 \times 4\)</span> and <span class="math inline">\(4 \times 7\)</span> grids also differ distinctly from each other, with a mean<span class="math inline">\(\pm\)</span>sigma of <span class="math inline">\(233.09 \pm 12.32\)</span> for the former and <span class="math inline">\(286.32 \pm 22.36\)</span> for the latter, although they have the same number of control-points. This is an indication of the impact a proper or improper grid-setup can have. We do not draw scientific conclusions from these findings, as more research on non-square grids seems necessary.</p>
<p>Leaving the issue of the grid-layout aside, we focused on grids having the same number of control-points in every dimension. For the <span class="math inline">\(5 \times 5\)</span>, <span class="math inline">\(7 \times 7\)</span> and <span class="math inline">\(10 \times 10\)</span> grids we found a <em>very strong</em> correlation (<span class="math inline">\(-r_S = 0.94, p = 0\)</span>) between the <em>variability</em> and the evolutionary error.</p>
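<p>A sketch of how such a rank correlation could be computed and classified — using scipy and placeholder data arrays is our assumption, the text does not prescribe any tooling:</p>
<pre><code class="python">from scipy.stats import spearmanr

def classify(r_s):
    # intervals [0,0.2[, [0.2,0.4[, ... map to the labels defined above
    labels = ["very weak", "weak", "moderate", "strong", "very strong"]
    return labels[min(int(abs(r_s) / 0.2), 4)]

# variability_values / evolutionary_errors: hypothetical measurement arrays
rho, p = spearmanr(variability_values, evolutionary_errors)
print(classify(-rho), "significant" if p &lt; 0.01 else "not significant")
</code></pre>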
</section> </section>
<section id="regularity-1" class="level3"> <section id="outline-3" class="slide level1">
<h3>Regularity</h3> <h1>Outline</h1>
<ul>
<li>What is FFD?</li>
<p><em>Regularity</em> should correspond to the convergence speed (measured in iteration-steps of the evolutionary algorithm) and is computed as the inverse condition number <span class="math inline">\(\kappa(\vec{U})\)</span> of the deformation-matrix; a short sketch follows below.</p> <li>What is evolutionary optimization?</li>
<p>As can be seen from table , we could only show a <em>weak</em> correlation in the case of a <span class="math inline">\(5 \times 5\)</span> grid. As we increase the number of control-points the correlation gets worse, until it is completely random within a single dataset. Taking all presented datasets into account we even get a <em>strong</em> correlation of <span class="math inline">\(- r_S = -0.72, p = 0\)</span>, which is opposed to our expectations.</p> <li>How to measure evolvability?</li>
<p>To explain this discrepancy we took a closer look at what caused this high number of iterations. In figure we also plotted the <em>improvement potential</em> against the steps next to the <em>regularity</em>-plot. Our theory is that the <em>very strong</em> correlation (<span class="math inline">\(-r_S = -0.82, p=0\)</span>) between <em>improvement potential</em> and number of iterations hints that the employed algorithm simply takes longer to converge on a better solution (as seen in figure and ), offsetting any gain the regularity-measurement could achieve.</p> <li><strong>Scenarios</strong></li>
<li>Results</li>
</ul>
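<p>The <em>regularity</em>-criterion as a minimal sketch (assuming the deformation-matrix <code>U</code> as a numpy array):</p>
<pre><code class="python">import numpy as np

def regularity(U):
    # inverse condition number: least / greatest right singular value
    s = np.linalg.svd(U, compute_uv=False)  # sorted in descending order
    return s[-1] / s[0]
</code></pre>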
</section> </section>
<section id="improvement-potential-1" class="level3"> <section id="scenarios" class="slide level1">
<h3>Improvement Potential</h3> <h1>Scenarios</h1>
<ul>
<p>The <em>improvement potential</em> should correlate with the quality of the fitting-result. We plotted the results for the tested grid-sizes <span class="math inline">\(5 \times 5\)</span>, <span class="math inline">\(7 \times 7\)</span> and <span class="math inline">\(10 \times 10\)</span> in figure . We tested the <span class="math inline">\(4 \times 7\)</span> and <span class="math inline">\(7 \times 4\)</span> grids as well, but omitted them from the plot.</p> <li>2 Testing Scenarios</li>
<p>Additionally we tested the results for a distorted gradient described in with <span class="math inline">\(\mu\)</span>-values of <span class="math inline">\(0.25\)</span>, <span class="math inline">\(0.5\)</span>, <span class="math inline">\(0.75\)</span>, and <span class="math inline">\(1.0\)</span> for the <span class="math inline">\(5 \times 5\)</span> grid and with a <span class="math inline">\(\mu\)</span>-value of <span class="math inline">\(0.5\)</span> for all other cases.</p> <li>1-dimensional fit
<p>All results show an identical <em>very strong</em> and <em>significant</em> correlation with a Spearman-coefficient of <span class="math inline">\(- r_S = 1.0\)</span> and a p-value of <span class="math inline">\(0\)</span>.</p> <ul>
<p>These results indicate that <span class="math inline">\(\|\mathbb{1} - \vec{U}\vec{U}^{+}\|_F\)</span> is close to <span class="math inline">\(0\)</span>, reducing the impact of any kind of gradient. Nevertheless the improvement potential seems well suited to make educated guesses about the quality of a fit, even lacking an exact gradient; see the sketch below.</p> <li><span class="math inline">\(xy\)</span>-plane to <span class="math inline">\(xyz\)</span>-model, where only the <span class="math inline">\(z\)</span>-coordinate changes</li>
<li>can be solved analytically with known global optimum</li>
</ul></li>
<li>3-dimensional fit
<ul>
<li>fit a parametrized sphere into a face</li>
<li>cannot be solved analytically</li>
<li>number of vertices differ between models</li>
</ul></li>
</ul>
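<p>A minimal sketch of the <em>improvement potential</em> computation referenced above (our naming; <code>G</code> is the guessed gradient, unit-normed inside the function):</p>
<pre><code class="python">import numpy as np

def improvement_potential(U, G):
    # 1 - ||(I - U U^+) G||_F^2 with a unit-normed guessed gradient G
    G = G / np.linalg.norm(G)
    residual = G - U @ (np.linalg.pinv(U) @ G)  # (I - U U^+) G
    return 1.0 - np.linalg.norm(residual) ** 2
</code></pre>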
</section> </section>
<section id="d-scenario" class="slide level1">
<h1>1D-Scenario</h1>
<p><figure class="" style=""><img src="../arbeit/img/example1d_grid.png" style=""></img><figcaption>Left: A regular <span class="math inline">\(7 \times 4\)</span>grid<br />Right: The same grid after a random distortion to generate a testcase.</figcaption></figure></p>
<p><figure class="" style="width:70%;"><img src="../arbeit/img/1dtarget.png" style="width:70%;"></img><figcaption>The targetshape for our 1dimensional optimizationscenario including a wireframeoverlay of the vertices.</figcaption></figure></p>
</section> </section>
<section id="procedure-3d-function-approximation" class="level2"> <section id="d-scenarios" class="slide level1">
<h2>Procedure: 3D Function Approximation</h2> <h1>3D-Scenarios</h1>
<p><figure class="" style=""><img src="../arbeit/img/3dtarget.png" style=""></img><figcaption>Left: The sphere we start from with 10 807 vertices<br />Right: The face we want to deform the sphere into with 12 024 vertices.</figcaption></figure></p>
<p>As explained in detail in section , we do not know the analytical solution to the global optimum. Additionally we have the problem of finding the right correspondences between the original sphere-model and the target-model, as they consist of <span class="math inline">\(10\,807\)</span> and <span class="math inline">\(12\,024\)</span> vertices respectively, so we cannot make a one-to-one correspondence between them as we did in the one-dimensional case.</p>
<p>Initially we set up the correspondences <span class="math inline">\(\vec{c_T(\dots)}\)</span> and <span class="math inline">\(\vec{c_S(\dots)}\)</span> to be the respective closest vertices of the other model. We then calculate the analytical solution given these correspondences via <span class="math inline">\(\vec{P^{*}} = \vec{U^+}\vec{T}\)</span>, and also use this first solution as the guessed gradient for the calculation of the <em>improvement potential</em>, as the optimal solution is not known. We then let the evolutionary algorithm run until it is within <span class="math inline">\(1.05\)</span> times the error of this solution and afterwards recalculate the correspondences <span class="math inline">\(\vec{c_T(\dots)}\)</span> and <span class="math inline">\(\vec{c_S(\dots)}\)</span>.</p>
<p>For the next step we halve the regularization-impact <span class="math inline">\(\lambda\)</span> (starting at <span class="math inline">\(1\)</span>) of our <em>fitness-function</em> () and calculate the next incremental solution <span class="math inline">\(\vec{P^{*}} = \vec{U^+}\vec{T}\)</span> with the updated correspondences (again mapping each vertex to its closest neighbour in the respective other model) to get our next target-error. We repeat this process as long as the target-error keeps decreasing and use the number of these iterations as a measure of the convergence speed. As the resulting evolutionary error without regularization is in the numeric range of <span class="math inline">\(\approx 100\)</span>, whereas the regularization is numerically <span class="math inline">\(\approx 7000\)</span>, we need at least <span class="math inline">\(10\)</span> to <span class="math inline">\(15\)</span> iterations until the regularization-effect wears off.</p>
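<p>A rough sketch of this iteration scheme; the helpers <code>closest_correspondences</code> and <code>run_evolution</code> are hypothetical stand-ins for the steps described above:</p>
<pre><code class="python">import numpy as np

def fit_sphere_to_face(U, source, target, lam=1.0):
    # iterate: correspondences -&gt; analytical solution -&gt; EA -&gt; halve lambda,
    # as long as the target-error keeps decreasing
    iterations, best_error = 0, np.inf
    while True:
        T = closest_correspondences(source, target)  # per-vertex targets
        P_star = np.linalg.pinv(U) @ T               # analytical solution
        error = np.linalg.norm(U @ P_star - T) ** 2  # next target-error
        if error &gt;= best_error:
            break
        best_error = error
        # run the EA until within 1.05x of the analytical error
        source = run_evolution(U, T, lam, stop_at=1.05 * error)
        lam /= 2.0                                   # halve regularization
        iterations += 1
    return iterations                                # convergence measure
</code></pre>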
<p>The grid we use for our experiments is very coarse due to computational limitations. We are not interested in a good reconstruction, but in an estimate of whether the mentioned evolvability-criteria are sound.</p>
<p>In figure we show an example setup of the scene with a <span class="math inline">\(4\times 4\times 4\)</span>-grid. Identically to the 1-dimensional scenario before, we create a regular grid and move the control-points in the exact same random manner between their neighbours as described in section , but in three instead of two dimensions<a href="#/fn10" class="footnote-ref" id="fnref10"><sup>10</sup></a>.</p>
<p>As is clearly visible from figure , the target-model has many vertices in the facial area, at the ears and in the neck-region. Therefore we chose to increase the grid-resolution for our tests in two different dimensions and to see how well the criteria predict a suboptimal placement of these control-points.</p>
</section> </section>
<section id="results-of-3d-function-approximation" class="level2"> <section id="outline-4" class="slide level1">
<h2>Results of 3D Function Approximation</h2> <h1>Outline</h1>
<p>In the 3D-approximation we tried to further evaluate the impact of the grid-layout on the overall criteria. As the target-model has many vertices concentrated in the facial area, we start from a <span class="math inline">\(4 \times 4 \times 4\)</span> grid and only increase the number of control-points in one dimension, yielding resolutions of <span class="math inline">\(7 \times 4 \times 4\)</span> and <span class="math inline">\(4 \times 4 \times 7\)</span> respectively. We visualized those two grids in figure .</p> <ul>
<p>To evaluate the performance of the evolvability-criteria we also tested more neutral resolutions of <span class="math inline">\(4 \times 4 \times 4\)</span>, <span class="math inline">\(5 \times 5 \times 5\)</span>, and <span class="math inline">\(6 \times 6 \times 6\)</span> — similar to the 1D-setup.</p> <li>What is FFD?</li>
<li>What is evolutionary optimization?</li>
<section id="variability-2" class="level3"> <li>How to measure evolvability?</li>
<h3>Variability</h3> <li>Scenarios</li>
<li><strong>Results</strong></li>
</ul>
<p>Similar to the 1D case, all our tested matrices had a constant rank (being <span class="math inline">\(m = x \cdot y \cdot z\)</span> for an <span class="math inline">\(x \times y \times z\)</span> grid), so we again merely plotted the errors in the box plot in figure .</p>
<p>As expected the <span class="math inline">\(\mathrm{X} \times 4 \times 4\)</span> grids performed slightly better than their <span class="math inline">\(4 \times 4 \times \mathrm{X}\)</span> counterparts with a mean<span class="math inline">\(\pm\)</span>sigma of <span class="math inline">\(101.25 \pm 7.45\)</span> to <span class="math inline">\(102.89 \pm 6.74\)</span> for <span class="math inline">\(\mathrm{X} = 5\)</span> and <span class="math inline">\(85.37 \pm 7.12\)</span> to <span class="math inline">\(89.22 \pm 6.49\)</span> for <span class="math inline">\(\mathrm{X} = 7\)</span>.</p>
<p>Interestingly both variants end up closer in terms of fitting error than we anticipated, which shows that the evolutionary algorithm we employed is capable of correcting a purposefully created bad grid. This also confirms that in our cases the number of control-points is more important for quality than their placement, which is captured by the <em>variability</em> via the rank of the deformation-matrix.</p>
<p>Overall the correlation between <em>variability</em> and fitness-error was <em>significant</em> and <em>very strong</em> in all our tests. The detailed correlation-coefficients are given in table alongside their p-values.</p>
<p>As introduced in section and visualized in figure , we know that not all control-points have to necessarily contribute to the parametrization of our 3D-model. Because we are starting from a sphere, some control-points are too far away from the surface to contribute to the deformation at all.</p>
<p>One can already see in 2D in figure that this effect starts with a regular <span class="math inline">\(9 \times 9\)</span> grid on a perfect circle. To quantify this, we evaluated the <em>variability</em> for 100 randomly moved <span class="math inline">\(10 \times 10 \times 10\)</span> grids on the sphere we start out with.</p>
<p>As the <em>variability</em> is defined by <span class="math inline">\(\frac{\mathrm{rank}(\vec{U})}{n}\)</span> we can easily recover the rank of the deformation-matrix <span class="math inline">\(\vec{U}\)</span>. The results are shown in the histogram in figure . Especially in the centre of the sphere and in the corners of our grid we effectively lose control-points for our parametrization.</p>
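<p>The underlying rank statistics are cheap to reproduce in principle — a sketch, with <code>deformation_matrix</code> and <code>distorted_grid</code> as hypothetical helpers for the setup described above:</p>
<pre><code class="python">import numpy as np

# effective number of control-points for 100 randomly distorted grids
ranks = [np.linalg.matrix_rank(deformation_matrix(distorted_grid(10, 10, 10)))
         for _ in range(100)]
values, counts = np.unique(ranks, return_counts=True)  # histogram data
</code></pre>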
<p>This of course yields a worse error than when those control-points are put to use, and one should expect a loss in quality, evident in a higher reconstruction-error, compared to a grid where they are used. Sadly we could not run an in-depth test on this due to computational limitations.</p>
<p>Nevertheless this hints at the notion that <em>variability</em> is a good measure for the overall quality of a fit.</p>
</section> </section>
<section id="regularity-2" class="level3"> <section id="variability-1d" class="slide level1">
<h3>Regularity</h3> <h1>Variability 1D</h1>
<ul>
<p>As opposed to the predictions of <em>variability</em>, our test on <em>regularity</em> gave a mixed result — similar to the 1D-case.</p> <li>Should measure Degrees of Freedom and thus quality</li>
<p>In roughly half of the scenarios we have a <em>significant</em>, but <em>weak</em> to <em>moderate</em> correlation between <em>regularity</em> and number of iterations. On the other hand, in the scenarios where we increased the number of control-points, namely <span class="math inline">\(125\)</span> for the <span class="math inline">\(5 \times 5 \times 5\)</span> grid and <span class="math inline">\(216\)</span> for the <span class="math inline">\(6 \times 6 \times 6\)</span> grid, we found a <em>significant</em>, but <em>weak</em> <strong>anti</strong>-correlation when taking all three tests into account<a href="#/fn11" class="footnote-ref" id="fnref11"><sup>11</sup></a>, which seems to contradict the findings/trends for the sets with <span class="math inline">\(64\)</span>, <span class="math inline">\(80\)</span>, and <span class="math inline">\(112\)</span> control-points (first two rows of table ).</p> </ul>
<p>Taking all results together we only find a <em>very weak</em>, but <em>significant</em> link between <em>regularity</em> and the number of iterations needed for the algorithm to converge.</p> <p><figure class="" style=""><img src="../arbeit/img/evolution1d/variability_boxplot.png" style=""></img><figcaption>The squared error for the various grids we examined.<br /> Note that <span class="math inline">\(7 \times 4\)</span> and <span class="math inline">\(4 \times 7\)</span> have the same number of control-points.</figcaption></figure></p>
<ul>
<p>As can be seen from figure , increasing the number of control-points helps the convergence-speed. The regularity-criterion first behaves as we would like it to, but then switches to behaving exactly opposite to our expectations, as can be seen in the first three plots. While the number of control-points increases from red to green to blue and the number of iterations decreases, the <em>regularity</em> seems to increase at first, but then decreases again at higher grid-resolutions.</p> <li><span class="math inline">\(5 \times 5\)</span>, <span class="math inline">\(7 \times 7\)</span> and <span class="math inline">\(10 \times 10\)</span> have <em>very strong</em> correlation (<span class="math inline">\(-r_S = 0.94, p = 0\)</span>) between the <em>variability</em> and the evolutionary error.</li>
<p>This can be an artefact of the definition of <em>regularity</em>, as it is defined by the inverse condition-number of the deformation-matrix <span class="math inline">\(\vec{U}\)</span>, being the fraction <span class="math inline">\(\frac{\sigma_{\mathrm{min}}}{\sigma_{\mathrm{max}}}\)</span> between the least and greatest right singular value.</p> </ul>
<p>As we observed in the previous section, we cannot guarantee that each control-point has an effect (see figure ), so a small minimal right singular value occurring at higher grid-resolutions seems likely to be the problem.</p>
<p>Adding to this, we also noted that in the case of the <span class="math inline">\(10 \times 10 \times 10\)</span>-grid the <em>regularity</em> was always <span class="math inline">\(0\)</span>, as a non-contributing control-point yields a <span class="math inline">\(0\)</span>-column in the deformation-matrix, thus letting <span class="math inline">\(\sigma_\mathrm{min} = 0\)</span>. A better definition of <em>regularity</em> (i.e. using the smallest non-zero right singular value) could solve this particular issue, but would not fix the trend we noticed above.</p>
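<p>Such an alternative definition could look like this (a sketch; the zero-tolerance is our assumption):</p>
<pre><code class="python">import numpy as np

def regularity_nonzero(U, tol=1e-12):
    # use the smallest *non-zero* right singular value, so that
    # non-contributing control-points (0-columns) do not force sigma_min = 0
    s = np.linalg.svd(U, compute_uv=False)
    s = s[s &gt; tol]
    return s.min() / s.max()
</code></pre>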
</section> </section>
<section id="improvement-potential-2" class="level3"> <section id="variability-3d" class="slide level1">
<h3>Improvement Potential</h3> <h1>Variability 3D</h1>
<ul>
<p>Compared to the 1D-scenario, we do not know the optimal solution to the given problem, and for the calculation we only use the initial gradient produced by the initial correspondence between both objects. This gradient changes with every iteration and will quickly be off from our first guess. This is the reason we are not trying to create artificially bad gradients, as we have a broad range in quality of such gradients anyway.</p> <li>Should measure Degrees of Freedom and thus quality</li>
</ul>
<p>We plotted our findings on the <em>improvement potential</em> in a similar way as we did before with the <em>regularity</em>. In figure one can clearly see the correlation and the spread within each setup, as well as the behaviour when we increase the number of control-points.</p> <p><figure class="" style=""><img src="../arbeit/img/evolution3d/variability_boxplot.png" style=""></img><figcaption>The fitting error for the various grids we examined.<br />Note that the number of control-points is a product of the resolution, so <span class="math inline">\(X \times 4 \times 4\)</span> and <span class="math inline">\(4 \times 4 \times X\)</span> have the same number of control-points.</figcaption></figure></p>
<p>Along with this we also give the Spearman-coefficients and their p-values in table . Within one scenario we only find a <em>weak</em> to <em>moderate</em> correlation between the <em>improvement potential</em> and the fitting error, but all findings (except for <span class="math inline">\(7 \times 4 \times 4\)</span> and <span class="math inline">\(6 \times 6 \times 6\)</span>) are significant.</p> <ul>
<p>If we take multiple datasets into account, the correlation is <em>very strong</em> and <em>significant</em>, which is good, as this functions as a litmus-test: the quality is naturally tied to the number of control-points.</p> <li><span class="math inline">\(4 \times 4 \times 4\)</span>, <span class="math inline">\(5 \times 5 \times 5\)</span> and <span class="math inline">\(6 \times 6 \times 6\)</span> have <em>very strong</em> correlation (<span class="math inline">\(-r_S = 0.91, p = 0\)</span>) between the <em>variability</em> and the evolutionary error.</li>
<p>All in all the <em>improvement potential</em> seems to be a good and sensible measure of quality, even given gradients of varying quality.</p> </ul>
<p>Lastly, a small note on the behaviour of <em>improvement potential</em> and convergence speed, as we used this in the 1D case to argue why the <em>regularity</em> defied our expectations. As a contrast we wanted to show that <em>improvement potential</em> cannot serve as a good predictor of the convergence speed. In figure we show <em>improvement potential</em> against number of iterations for both scenarios. As one can see, in the 1D scenario we have a <em>strong</em> and <em>significant</em> correlation (with <span class="math inline">\(-r_S = -0.72\)</span>, <span class="math inline">\(p = 0\)</span>), whereas in the 3D scenario we have the opposite <em>significant</em> and <em>strong</em> effect (with <span class="math inline">\(-r_S = 0.69\)</span>, <span class="math inline">\(p=0\)</span>), so these correlations clearly depend on the scenario and are not suited for generalization.</p>
</section> </section>
</section> <section id="varying-variability" class="slide level1">
</section> <h1>Varying Variability</h1>
<section id="discussion-and-outlook" class="slide level1"> <div id="section-1">
<h1>Discussion and outlook</h1> <div style="width:50%;float:left">
<p><figure class="" style=""><img src="../arbeit/img/enoughCP.png" style=""></img><figcaption>A high resolution (<span class="math inline">\(10 \times 10\)</span>) of controlpoints over a circle. Yellow/green points contribute to the parametrization, red points dont.<br />An Examplepoint (blue) is solely determined by the position of the green controlpoints.</figcaption></figure></p>
</div>
<div style="width:50%;float:left">
<p><figure class="" style=""><img src="../arbeit/img/evolution3d/variability2_boxplot.png" style=""></img><figcaption>Histogram of ranks of various <span class="math inline">\(10 \times 10 \times 10\)</span> grids with <span class="math inline">\(1000\)</span> controlpoints each showing in this case how many controlpoints are actually used in the calculations.</figcaption></figure></p>
</div>
<div style="clear: both">
<p>In this thesis we took a look at the different criteria for <em>evolvability</em> as introduced by Richter et al., namely <em>variability</em>, <em>regularity</em> and <em>improvement potential</em>, under different setup-conditions. Where Richter et al. used RBF, we employed FFD to set up a low-complexity parametrization of a more complex vertex-mesh.</p> </div>
<p>In our findings we could show in the 1D-scenario that there were statistically <em>significant</em> <em>very strong</em> correlations between <em>variability and fitting error</em> (<span class="math inline">\(0.94\)</span>) and <em>improvement potential and fitting error</em> (<span class="math inline">\(1.0\)</span>), with results comparable to Richter et al. (<span class="math inline">\(0.31\)</span> to <span class="math inline">\(0.88\)</span> for the former and <span class="math inline">\(0.75\)</span> to <span class="math inline">\(0.99\)</span> for the latter), whereas we found only <em>weak</em> correlations for <em>regularity and convergence-speed</em> (<span class="math inline">\(0.28\)</span>), as opposed to Richter et al. with <span class="math inline">\(0.39\)</span> to <span class="math inline">\(0.91\)</span>.<a href="#/fn12" class="footnote-ref" id="fnref12"><sup>12</sup></a></p> </div>
<p>For the 3D-scenario our results show a <em>very strong</em>, <em>significant</em> correlation between <em>variability and fitting error</em> with <span class="math inline">\(0.89\)</span> to <span class="math inline">\(0.94\)</span>, which is pretty much in line with the findings of Richter et al. (<span class="math inline">\(0.65\)</span> to <span class="math inline">\(0.95\)</span>). The correlation between <em>improvement potential and fitting error</em> behaves similarly, with our findings having a significant coefficient of <span class="math inline">\(0.3\)</span> to <span class="math inline">\(0.95\)</span> depending on the grid-resolution, compared to the <span class="math inline">\(0.61\)</span> to <span class="math inline">\(0.93\)</span> from Richter et al. In the case of the correlation of <em>regularity and convergence speed</em> we found very different (and often not significant) correlations and anti-correlations ranging from <span class="math inline">\(-0.25\)</span> to <span class="math inline">\(0.46\)</span>, whereas Richter et al. reported correlations between <span class="math inline">\(0.34\)</span> and <span class="math inline">\(0.87\)</span>.</p>
<p>Taking these results into consideration, one can say that <em>variability</em> and <em>improvement potential</em> are very good estimates for the quality of a fit using FFD as a deformation function, while we could not reproduce similarly compelling results as Richter et al. did for <em>regularity and convergence speed</em>.</p>
<p>One reason for the bad or erratic behaviour of the <em>regularity</em>-criterion could be that in an FFD-setting there is a likelihood of having control-points that only contribute to the whole parametrization in negligible amounts, resulting in very small right singular values of the deformation-matrix <span class="math inline">\(\vec{U}\)</span> that influence the condition-number and thus the <em>regularity</em> in a significant way. Further research is needed to refine <em>regularity</em> so that these problems get addressed, for example by taking all singular values into account when capturing the notion of <em>regularity</em>.</p>
<p>Richter et al. also compared the behaviour of direct and indirect manipulation in , whereas we merely used an indirect approach. As direct manipulation tends to perform better than indirect manipulation, the usage of DM-FFD could also work better with the criteria we examined. This could also solve the problem of bad singular values for the <em>regularity</em>, as the incorporation of the parametrization of the points on the surface — which is the essential part of a direct manipulation — could cancel out a bad control-grid, as the bad control-points are never or only negligibly used to parametrize those surface-points.</p>
</section> </section>
<section class="footnotes"> <section id="regularity-1d" class="slide level1">
<hr /> <h1>Regularity 1D</h1>
<ol> <ul>
<li id="fn1"><p>one more for each recursive step.<a href="#/fnref1" class="footnote-back"></a></p></li> <li>Should measure convergence speed</li>
<li id="fn2"><p><em>Warning:</em> in the case of <span class="math inline">\(d=1\)</span> the recursionformula yields a <span class="math inline">\(0\)</span> denominator, but <span class="math inline">\(N\)</span> is also <span class="math inline">\(0\)</span>. The right solution for this case is a derivative of <span class="math inline">\(0\)</span><a href="#/fnref2" class="footnote-back"></a></p></li> </ul>
<li id="fn3"><p>Some examples of this are explained in detail in <a href="#/fnref3" class="footnote-back"></a></p></li> <p><figure class="" style="width:70%;"><img src="../arbeit/img/evolution1d/55_to_1010_steps.png" style="width:70%;"></img><figcaption>Left: <em>Improvement potential</em> against number of iterations until convergence<br />Right: <em>Regularity</em> against number of iterations until convergence<br />Coloured by their gridresolution, both with a linear fit over the whole dataset.</figcaption></figure></p>
<li id="fn4"><p>We use <span class="math inline">\(\vec{S}\)</span> in this notation, as we will use this parametrization of a sourcemesh to manipulate <span class="math inline">\(\vec{S}\)</span> into a targetmesh <span class="math inline">\(\vec{T}\)</span> via <span class="math inline">\(\vec{P}\)</span><a href="#/fnref4" class="footnote-back"></a></p></li> <ul>
<li id="fn5"><p>Normally these are <span class="math inline">\(d-1\)</span> to each side, but at the boundaries border points get used multiple times to meet the number of points required<a href="#/fnref5" class="footnote-back"></a></p></li> <li>Not in our scenarios - maybe due to the fact that a better solution simply takes longer to converge, thus dominating.</li>
<li id="fn6"><p>One example would be, when parts of an algorithm depend on the inverse of the minimal right singular value leading to a division by <span class="math inline">\(0\)</span>.<a href="#/fnref6" class="footnote-back"></a></p></li> </ul>
<li id="fn7"><p>For the special case of the outer layer we only applied noise away from the object, so the object is still confined in the convex hull of the controlpoints.<a href="#/fnref7" class="footnote-back"></a></p></li> </section>
<li id="fn8"><p>The parametrization is encoded in <span class="math inline">\(\vec{U}\)</span> and the initial position of the controlpoints. See <a href="#/fnref8" class="footnote-back"></a></p></li> <section id="regularity-3d" class="slide level1">
<li id="fn9"><p>Note: On the Edges this displacement is only applied outwards by flipping the sign of <span class="math inline">\(r\)</span>, if appropriate.<a href="#/fnref9" class="footnote-back"></a></p></li> <h1>Regularity 3D</h1>
<li id="fn10"><p>Again, we flip the signs for the edges, if necessary to have the object still in the convex hull.<a href="#/fnref10" class="footnote-back"></a></p></li> <ul>
<li id="fn11"><p>Displayed as <span class="math inline">\(Y \times Y \times Y\)</span><a href="#/fnref11" class="footnote-back"></a></p></li> <li>Should measure convergence speed</li>
<li id="fn12"><p>We only took statistically <em>significant</em> results into consideration when compiling these numbers. Details are given in the respective chapters.<a href="#/fnref12" class="footnote-back"></a></p></li> </ul>
</ol> <p><figure class="" style="width:70%;"><img src="../arbeit/img/evolution3d/regularity_montage.png" style="width:70%;"></img><figcaption>Plots of <em>regularity</em> against number of iterations for various scenarios together with a linear fit to indicate trends.</figcaption></figure></p>
<ul>
<li>Only <em>very weak</em> correlation</li>
<li>The worst-contributing point dominates regularity by lowering the least right singular value towards 0.</li>
</ul>
</section>
<section id="improvement-potential-in-1d" class="slide level1">
<h1>Improvement Potential in 1D</h1>
<ul>
<li>Should measure expected quality given a gradient</li>
</ul>
<p><figure class="" style="width:70%;"><img src="../arbeit/img/evolution1d/55_to_1010_improvement-vs-evo-error.png" style="width:70%;"></img><figcaption><em>Improvement potential</em> plotted against the error yielded by the evolutionary optimization for different gridresolutions</figcaption></figure></p>
<ul>
<li><em>very strong</em> correlation of <span class="math inline">\(- r_S = 1.0, p = 0\)</span>.</li>
<li>Even with a distorted gradient</li>
</ul>
</section>
<section id="improvement-potential-in-3d" class="slide level1">
<h1>Improvement Potential in 3D</h1>
<ul>
<li>Should measure expected quality given a gradient</li>
</ul>
<p><figure class="" style="width:70%;"><img src="../arbeit/img/evolution3d/improvement_montage.png" style="width:70%;"></img><figcaption>Plots of <em>improvement potential</em> against error given by our <em>fitnessfunction</em> after convergence together with a linear fit of each of the plotted data to indicate trends.</figcaption></figure></p>
<ul>
<li><em>weak</em> to <em>moderate</em> correlation within each group.</li>
</ul>
</section>
<section id="summary" class="slide level1">
<h1>Summary</h1>
<ul>
<li><em>Variability</em> and <em>Improvement Potential</em> are good measurements in our cases</li>
<li><em>Regularity</em> does not work well because of small singular right values
<ul>
<li>But optimizing for regularity <em>could</em> still lead to a better grid-setup (not shown, but likely)</li>
<li>Effect can be dominated by other factors (i.e. better solutions just take longer)</li>
</ul></li>
</ul>
</section>
<section id="outlook-further-research" class="slide level1">
<h1>Outlook / Further research</h1>
<ul>
<li>Only focused on FFD, but will DM-FFD perform better?
<ul>
<li>for RBF the indirect manipulation also performed worse than the direct one</li>
</ul></li>
<li>Do grids with high regularity indeed perform better?</li>
</ul>
</section>
<section id="thank-you" class="slide level1">
<h1>Thank you</h1>
<p>Any questions?</p>
</section> </section>
@ -521,14 +569,17 @@ p_i =
center: false, center: false,
transition: 'none', transition: 'none',
viewDistance: 2, // otherwise videos start early viewDistance: 2, // otherwise videos start early
width: 1024, width: 1280,
height: 768, height: 1024,
minScale: 0.2, minScale: 0.2,
maxScale: 5, // if this threshold is reached, the chalkboard drawing will be wrongly positioned. hence large threshold! maxScale: 5, // if this threshold is reached, the chalkboard drawing will be wrongly positioned. hence large threshold!
// use local mathjax installation // use local mathjax installation
math: { mathjax: './template/mathjax/MathJax.js', config: 'TeX-AMS_HTML-full' }, math: { mathjax: './template/mathjax/MathJax.js',
config: 'TeX-AMS_HTML-full',
extensions: ["content-mathml.js"]
},
// setup chalkboard // setup chalkboard
@ -568,7 +619,6 @@ p_i =
8: function() { RevealChalkboard.clear() }, // BACKSPACE: clear chalkboard 8: function() { RevealChalkboard.clear() }, // BACKSPACE: clear chalkboard
46: function() { RevealChalkboard.reset() }, // DELETE: reset chalkboard 46: function() { RevealChalkboard.reset() }, // DELETE: reset chalkboard
68: function() { RevealChalkboard.download() }, // d: download chalkboard drawing 68: function() { RevealChalkboard.download() }, // d: download chalkboard drawing
66: function() { RevealQuiz.toggleChart() } , // q: show quiz results
}, },
@ -576,14 +626,13 @@ p_i =
dependencies: [ dependencies: [
{ src: './template/revealjs/plugin/math/math.js' }, { src: './template/revealjs/plugin/math/math.js' },
{ src: './template/revealjs/plugin/notes/notes.js', async: true }, { src: './template/revealjs/plugin/notes/notes.js', async: true },
{ src: './template/revealjs/plugin/highlight/highlight.js', async: true, callback: function() { /*{ src: './template/revealjs/plugin/highlight/highlight.js', async: true, callback: function() {
var code_blocks = document.querySelectorAll('code'); var code_blocks = document.querySelectorAll('code');
for( var i = 0, len = code_blocks.length; i < len; i++ ) hljs.highlightBlock(code_blocks[i]); for( var i = 0, len = code_blocks.length; i < len; i++ ) hljs.highlightBlock(code_blocks[i]);
}}, }},*/
{ src: './template/revealjs/plugin/menu/menu.js' }, { src: './template/revealjs/plugin/menu/menu.js' },
{ src: './template/my-chalkboard/chalkboard.js' }, // do not load this async ('ready' event is missing, print won't work) { src: './template/my-chalkboard/chalkboard.js' }, // do not load this async ('ready' event is missing, print won't work)
{ src: './template/my-zoom/zoom.js', async: true }, { src: './template/my-zoom/zoom.js', async: true },
{ src: './template/quiz/quiz.js', async: true }
] ]
}); });

File diff suppressed because it is too large Load Diff

View File

@ -65,6 +65,9 @@
of: ['\\mkern{-2mu}\\left( #1 \\right\)', 1] of: ['\\mkern{-2mu}\\left( #1 \\right\)', 1]
} }
}, },
tex2jax: {
skipTags: ["script","noscript","style","textarea"],
},
"HTML-CSS": { "HTML-CSS": {
styles: { ".reveal section .MathJax_Display": { margin: "0.5em 0em" } }, styles: { ".reveal section .MathJax_Display": { margin: "0.5em 0em" } },
styles: { ".reveal table .MathJax_Display": { margin: "0em" } }, styles: { ".reveal table .MathJax_Display": { margin: "0em" } },
@ -137,14 +140,17 @@ $body$
center: false, center: false,
transition: 'none', transition: 'none',
viewDistance: 2, // otherwise videos start early viewDistance: 2, // otherwise videos start early
width: 1024, width: 1280,
height: 768, height: 1024,
minScale: 0.2, minScale: 0.2,
maxScale: 5, // if this threshold is reached, the chalkboard drawing will be wrongly positioned. hence large threshold! maxScale: 5, // if this threshold is reached, the chalkboard drawing will be wrongly positioned. hence large threshold!
// use local mathjax installation // use local mathjax installation
math: { mathjax: '$template$/mathjax/MathJax.js', config: 'TeX-AMS_HTML-full' }, math: { mathjax: '$template$/mathjax/MathJax.js',
config: 'TeX-AMS_HTML-full',
extensions: ["content-mathml.js"]
},
// setup chalkboard // setup chalkboard
@ -191,10 +197,10 @@ $body$
dependencies: [ dependencies: [
{ src: '$template$/revealjs/plugin/math/math.js' }, { src: '$template$/revealjs/plugin/math/math.js' },
{ src: '$template$/revealjs/plugin/notes/notes.js', async: true }, { src: '$template$/revealjs/plugin/notes/notes.js', async: true },
{ src: '$template$/revealjs/plugin/highlight/highlight.js', async: true, callback: function() { /*{ src: '$template$/revealjs/plugin/highlight/highlight.js', async: true, callback: function() {
var code_blocks = document.querySelectorAll('code'); var code_blocks = document.querySelectorAll('code');
for( var i = 0, len = code_blocks.length; i < len; i++ ) hljs.highlightBlock(code_blocks[i]); for( var i = 0, len = code_blocks.length; i < len; i++ ) hljs.highlightBlock(code_blocks[i]);
}}, }},*/
{ src: '$template$/revealjs/plugin/menu/menu.js' }, { src: '$template$/revealjs/plugin/menu/menu.js' },
{ src: '$template$/my-chalkboard/chalkboard.js' }, // do not load this async ('ready' event is missing, print won't work) { src: '$template$/my-chalkboard/chalkboard.js' }, // do not load this async ('ready' event is missing, print won't work)
{ src: '$template$/my-zoom/zoom.js', async: true }, { src: '$template$/my-zoom/zoom.js', async: true },