<li>Many modern industrial design processes require advanced optimization methods due to increased complexity</li>
<li>Examples are
<ul>
<li>physical domains
<ul>
<li>aerodynamics (e.g. drag)</li>
<li>fluid dynamics (e.g. throughput of liquid)</li>
</ul></li>
<li>NP-hard problems
<ul>
<li>layout of circuit boards</li>
<li>stacking of 3D–objects</li>
</ul></li>
</ul></li>
</ul>
</section>
<section id="motivation" class="slide level1">
<h1>Motivation</h1>
<ul>
<li>Evolutionary algorithms cope especially well with these problem domains <figure class="" style=""><img src="../arbeit/img/Evo_overview.png" style=""></img><figcaption>Example of the use of evolutionary algorithms in automotive design</figcaption></figure></li>
<li>But formulation can be tricky</li>
</ul>
</section>
<section id="motivation-1" class="slide level1">
<h1>Motivation</h1>
<ul>
<li>Problems tend to be very complex
<ul>
<li>e.g. a surface with <span class="math inline">\(n\)</span> vertices has <span class="math inline">\(3\cdot n\)</span> Degrees of Freedom (DoF).</li>
</ul></li>
<li>Need for a low-dimensional representation that manipulates the high-dimensional problem-space.</li>
<li>We concentrate on smooth deformations (<span class="math inline">\(C^3\)</span>-continuous)</li>
</ul>
<p><figure class="" style=""><img src="../arbeit/img/deformations.png" style=""></img><figcaption>Example of RBF–based deformation and FFD targeting the same mesh.</figcaption></figure></p>
</section>
<section id="rbf-and-ffd-1" class="slide level1">
<h1>RBF and FFD</h1>
<ul>
<li>My master thesis transferred this idea to Freeform-Deformation (FFD)
<ul>
<li>same setup</li>
<li>same measurements</li>
<li>same results?</li>
</ul></li>
</ul>
<p><figure class="" style=""><img src="../arbeit/img/deformations.png" style=""></img><figcaption>Example of RBF–based deformation and FFD targeting the same mesh.</figcaption></figure></p>
</section>
<section id="outline" class="slide level1">
<h1>Outline</h1>
<ul>
<li><strong>What is FFD?</strong></li>
<li>What is evolutionary optimization?</li>
<li>How to measure evolvability?</li>
<li>Scenarios</li>
<li>Results</li>
</ul>
</section>
<section id="what-is-ffd" class="slide level1">
<h1>What is FFD?</h1>
<ul>
<li>Create a function <span class="math inline">\(s : [0,1[^d \mapsto \mathbb{R}^d\)</span> that is parametrized by some special control–points <span class="math inline">\(p_i\)</span> with coefficient functions <span class="math inline">\(a_i(u)\)</span>: <span class="math display">\[
s(\vec{u}) = \sum_i a_i(\vec{u}) \vec{p_i}
\]</span></li>
<li>All points inside the convex hull of <span class="math inline">\(\vec{p_i}\)</span> can be reached by a suitable <span class="math inline">\(u \in [0,1[^d\)</span>.</li>
</ul>
<p><figure class="" style=""><img src="../arbeit/img/B-Splines.png" style=""></img><figcaption>Example of a parametrization of a line with corresponding deformation to generate a deformed object</figcaption></figure></p>
<ul>
<li>The coefficient functions <span class="math inline">\(a_i(u)\)</span> in <span class="math inline">\(s(\vec{u}) = \sum_i a_i(\vec{u}) \vec{p_i}\)</span> are different for each control-point</li>
<li>Given a degree <span class="math inline">\(d\)</span> and position <span class="math inline">\(\tau_i\)</span> for the <span class="math inline">\(i\)</span>th control-point <span class="math inline">\(p_i\)</span> we define <span class="math display">\[\begin{equation}
N_{i,0,\tau}(u) = \begin{cases} 1, &amp; u \in [\tau_i, \tau_{i+1}) \\ 0, &amp; \text{otherwise} \end{cases}
\end{equation}\]</span> and recursively <span class="math display">\[\begin{equation}
N_{i,d,\tau}(u) = \frac{u-\tau_i}{\tau_{i+d}-\tau_i} N_{i,d-1,\tau}(u) + \frac{\tau_{i+d+1}-u}{\tau_{i+d+1}-\tau_{i+1}} N_{i+1,d-1,\tau}(u)
\end{equation}\]</span></li>
<li>The derivatives of these coefficients are also easy to compute: <span class="math display">\[\frac{\partial}{\partial u} N_{i,d,\tau}(u) = \frac{d}{\tau_{i+d} - \tau_i} N_{i,d-1,\tau}(u) - \frac{d}{\tau_{i+d+1} - \tau_{i+1}} N_{i+1,d-1,\tau}(u)\]</span></li>
<li>Coefficients vanish after <span class="math inline">\(d\)</span> differentiations</li>
<li>Coefficients are continuous with respect to <span class="math inline">\(u\)</span></li>
<li>A change in a control–point only deforms the mapping locally<br/>
(between <span class="math inline">\(p_i\)</span> and <span class="math inline">\(p_{i+d+1}\)</span>)</li>
</ul>
<p><figure class="" style=""><img src="../arbeit/img/unity.png" style=""></img><figcaption>Example of basis-functions for degree <span class="math inline">\(2\)</span>. [Brunet, 2010]<br/> Note that Brunet starts his index at <span class="math inline">\(-d\)</span>, whereas our definition starts at <span class="math inline">\(0\)</span>.</figcaption></figure></p>
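<p>The recursive definition above can be sketched in a few lines of Python. This is an illustrative implementation, not the thesis code; it also checks the partition-of-unity property, which is what keeps <span class="math inline">\(s(\vec{u})\)</span> inside the convex hull of the control–points:</p>

```python
# Illustrative sketch (not the thesis code) of the Cox-de Boor
# recursion defining the B-spline basis functions N_{i,d,tau}(u).

def basis(i, d, tau, u):
    """Evaluate the B-spline basis function N_{i,d,tau}(u)."""
    if d == 0:
        return 1.0 if tau[i] <= u < tau[i + 1] else 0.0
    left = right = 0.0
    if tau[i + d] != tau[i]:
        left = (u - tau[i]) / (tau[i + d] - tau[i]) * basis(i, d - 1, tau, u)
    if tau[i + d + 1] != tau[i + 1]:
        right = ((tau[i + d + 1] - u) / (tau[i + d + 1] - tau[i + 1])
                 * basis(i + 1, d - 1, tau, u))
    return left + right

# Partition of unity: inside the valid parameter range the basis
# functions sum to 1, so s(u) stays in the convex hull of the p_i.
tau = [0, 0, 0, 1, 2, 3, 3, 3]  # clamped knot vector for degree 2
d = 2
weights = [basis(i, d, tau, 1.5) for i in range(len(tau) - d - 1)]
print(abs(sum(weights) - 1.0) < 1e-9)  # True
```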
</section>
<section id="definition-ffd" class="slide level1">
<h1>Definition FFD</h1>
<ul>
<li>FFD is a space-deformation based on the underlying B-Splines</li>
<li>Coefficients of the space-mapping <span class="math inline">\(s(u) = \sum_j a_j(u) p_j\)</span> for an initial vertex <span class="math inline">\(v_i\)</span> are constant</li>
<li>Set <span class="math inline">\(u_{i,j}~:=~N_{j,d,\tau}(u_i)\)</span> for each <span class="math inline">\(v_i\)</span> and <span class="math inline">\(p_j\)</span> to get the projection: <span class="math display">\[
\vec{v}_i = \sum_j u_{i,j} \cdot \vec{p}_j \qquad\text{or, in matrix form,}\qquad \vec{v} = \vec{U}\vec{p}
\]</span></li>
<li>Given <span class="math inline">\(n,m,o\)</span> control-points in <span class="math inline">\(x,y,z\)</span>–direction, each point inside the convex hull is defined by <span class="math display">\[V(u,v,w) = \sum_i \sum_j \sum_k N_{i,d,\tau_i}(u) N_{j,d,\tau_j}(v) N_{k,d,\tau_k}(w) \cdot C_{ijk}.\]</span></li>
<li>Given a target vertex <span class="math inline">\(\vec{p}^*\)</span> and an initial guess <span class="math inline">\(\vec{p}=V(u,v,w)\)</span> we define the error–function for the gradient–descent as: <span class="math display">\[Err(u,v,w,\vec{p}^{*}) = \vec{p}^{*} - V(u,v,w)\]</span></li>
<li>Armed with this we iterate the formula <span class="math display">\[J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \right) = -Err(u,v,w)\]</span> using Cramer’s rule for inverting the small Jacobian.</li>
<li>Usually terminates after <span class="math inline">\(3\)</span> to <span class="math inline">\(5\)</span> iterations with an error of <span class="math inline">\(\epsilon := \|\vec{p}^{*} - V(u,v,w)\| &lt; 10^{-4}\)</span></li>
<li>Self-intersecting grids can invalidate the results
<ul>
<li>not a problem in practice, as such grids are never generated and would contradict properties we want (like locality)</li>
</ul></li>
</ul>
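<p>The Newton step above can be sketched as follows: solve the small <span class="math inline">\(3 \times 3\)</span> system with Cramer’s rule. This is a minimal sketch; the Jacobian and error values are made up for illustration and not taken from an actual mesh:</p>

```python
import numpy as np

def cramer_solve(J, rhs):
    """Solve the 3x3 system J @ delta = rhs via Cramer's rule."""
    det = np.linalg.det(J)
    delta = np.empty(3)
    for k in range(3):
        Jk = J.copy()
        Jk[:, k] = rhs                 # replace k-th column by rhs
        delta[k] = np.linalg.det(Jk) / det
    return delta

# One Newton update for (u, v, w); J and err are illustrative values.
J = np.array([[2.0, 0.5, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.8]])
err = np.array([0.05, -0.02, 0.01])
delta = cramer_solve(J, -err)
print(np.allclose(J @ delta, -err))  # True
```

For a fixed 3x3 system Cramer’s rule avoids a general LU factorization and is cheap enough to run per vertex.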
</section>
<section id="outline-1" class="slide level1">
<h1>Outline</h1>
<ul>
<li>What is FFD?</li>
<li><strong>What is evolutionary optimization?</strong></li>
<li><strong>Recombination</strong> generates <span class="math inline">\(\lambda\)</span> new individuals based on the characteristics of the <span class="math inline">\(\mu\)</span> parents.
<ul>
<li>This makes sure that the next guess is close to the old guess.</li>
</ul></li>
<li><strong>Mutation</strong> introduces new effects that cannot be produced by mere recombination of the parents.
<ul>
<li>Typically these are small perturbations of individual members of the population, e.g. through added noise</li>
</ul></li>
<li><strong>Selection</strong> selects <span class="math inline">\(\mu\)</span> individuals from the children (and optionally the parents) using a <em>fitness–function</em> <span class="math inline">\(\Phi\)</span>.
<ul>
<li>Fitness could mean low error, good improvement, etc.</li>
<li>Fitness alone does not determine who survives; there are many possibilities</li>
<li>As <span class="math inline">\(\vec{v} = \vec{U}\vec{p}\)</span> is linear, we can also look at <span class="math inline">\(\Delta \vec{v} = \vec{U}\, \Delta \vec{p}\)</span>
<ul>
<li>We only change <span class="math inline">\(\Delta \vec{p}\)</span>, so evolvability should only use <span class="math inline">\(\vec{U}\)</span> for predictions</li>
<li>roughly: “How numerically stable is the optimization?”</li>
<li>Defined by <span class="math display">\[\mathrm{regularity}(\vec{U}) := \frac{1}{\kappa(\vec{U})} = \frac{\sigma_{min}}{\sigma_{max}} \in [0..1]\]</span> with <span class="math inline">\(\sigma_{min/max}\)</span> being the least/greatest right singular value.</li>
<li>high, when <span class="math inline">\(\|\vec{Up}\| \propto \|\vec{p}\|\)</span></li>
</ul></li>
<li>1-dimensional fit
<ul>
<li><span class="math inline">\(xy\)</span>-plane to <span class="math inline">\(xyz\)</span>-model, where only the <span class="math inline">\(z\)</span>-coordinate changes</li>
<li>can be solved analytically with known global optimum</li>
</ul></li>
<li>3-dimensional fit
<ul>
<li>fit a parametrized sphere into a face</li>
<li>cannot be solved analytically</li>
<li>the number of vertices differs between models</li>
</ul></li>
</ul></li>
</ul>
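<p>The <em>regularity</em> defined above is just the inverse condition number of <span class="math inline">\(\vec{U}\)</span> and can be computed directly from a singular value decomposition. A minimal sketch with random stand-in data, not a real FFD matrix:</p>

```python
import numpy as np

def regularity(U):
    """regularity(U) = 1/kappa(U) = sigma_min / sigma_max, in [0, 1]."""
    sigma = np.linalg.svd(U, compute_uv=False)
    return float(sigma.min() / sigma.max())

# Random stand-in for the deformation matrix U (not real FFD data):
# e.g. 100 vertices against a 5x5 grid of control-points.
rng = np.random.default_rng(0)
U = rng.random((100, 25))
print(0.0 <= regularity(U) <= 1.0)   # True
print(regularity(np.eye(4)))         # 1.0: perfectly conditioned
```

A value near 1 means <span class="math inline">\(\|\vec{U}\vec{p}\|\)</span> scales nearly proportionally to <span class="math inline">\(\|\vec{p}\|\)</span>, i.e. the optimization is numerically well-behaved.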
</section>
<section id="d-scenario" class="slide level1">
<h1>1D-Scenario</h1>
<p><figure class="" style=""><img src="../arbeit/img/example1d_grid.png" style=""></img><figcaption>Left: A regular <span class="math inline">\(7 \times 4\)</span>–grid<br/>Right: The same grid after a random distortion to generate a testcase.</figcaption></figure></p>
<p><figure class="" style="width:70%;"><img src="../arbeit/img/1dtarget.png" style="width:70%;"></img><figcaption>The target–shape for our 1–dimensional optimization–scenario including a wireframe–overlay of the vertices.</figcaption></figure></p>
</section>
<section id="d-scenarios" class="slide level1">
<h1>3D-Scenarios</h1>
<p><figure class="" style=""><img src="../arbeit/img/3dtarget.png" style=""></img><figcaption>Left: The sphere we start from with 10807 vertices<br/>Right: The face we want to deform the sphere into with 12024 vertices.</figcaption></figure></p>
</section>
<section id="outline-4" class="slide level1">
<h1>Outline</h1>
<ul>
<li>What is FFD?</li>
<li>What is evolutionary optimization?</li>
<li>How to measure evolvability?</li>
<li>Scenarios</li>
<li><strong>Results</strong></li>
</ul>
</section>
<section id="variability-1d" class="slide level1">
<h1>Variability 1D</h1>
<ul>
<li>Should measure Degrees of Freedom and thus quality</li>
</ul>
<p><figure class="" style=""><img src="../arbeit/img/evolution1d/variability_boxplot.png" style=""></img><figcaption>The squared error for the various grids we examined.<br/> Note that <span class="math inline">\(7 \times 4\)</span> and <span class="math inline">\(4 \times 7\)</span> have the same number of control–points.</figcaption></figure></p>
<ul>
<li><span class="math inline">\(5 \times 5\)</span>, <span class="math inline">\(7 \times 7\)</span> and <span class="math inline">\(10 \times 10\)</span> have <em>very strong</em> correlation (<span class="math inline">\(-r_S = 0.94, p = 0\)</span>) between the <em>variability</em> and the evolutionary error.</li>
</ul>
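<p>The reported <span class="math inline">\(r_S\)</span> values are Spearman rank correlations. A small self-contained sketch of how such a coefficient is computed for distinct values; the data below is synthetic, not the thesis measurements:</p>

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for samples with distinct values:
    the Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Synthetic, perfectly anti-monotone data: higher variability,
# lower evolutionary error -> r_S = -1.
variability = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
evo_error = np.array([9.0, 7.5, 6.0, 4.0, 3.5, 1.0])
print(spearman(variability, evo_error))  # -1.0
```

Being rank-based, <span class="math inline">\(r_S\)</span> only asks whether the relation is monotone, which is why it suits noisy evolutionary runs better than a linear (Pearson) correlation would.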
</section>
<section id="variability-3d" class="slide level1">
<h1>Variability 3D</h1>
<ul>
<li>Should measure Degrees of Freedom and thus quality</li>
</ul>
<p><figure class="" style=""><img src="../arbeit/img/evolution3d/variability_boxplot.png" style=""></img><figcaption>The fitting error for the various grids we examined.<br/>Note that the number of control–points is a product of the resolution, so <span class="math inline">\(X \times 4 \times 4\)</span> and <span class="math inline">\(4 \times 4 \times X\)</span> have the same number of control–points.</figcaption></figure></p>
<ul>
<li><span class="math inline">\(4 \times 4 \times 4\)</span>, <span class="math inline">\(5 \times 5 \times 5\)</span> and <span class="math inline">\(6 \times 6 \times 6\)</span> have <em>very strong</em> correlation (<span class="math inline">\(-r_S = 0.91, p = 0\)</span>) between the <em>variability</em> and the evolutionary error.</li>
</ul>
<p><figure class="" style=""><img src="../arbeit/img/enoughCP.png" style=""></img><figcaption>A high resolution (<span class="math inline">\(10 \times 10\)</span>) grid of control–points over a circle. Yellow/green points contribute to the parametrization, red points do not.<br/>An example point (blue) is solely determined by the position of the green control–points.</figcaption></figure></p>
</div>
<div style="width:50%;float:left">
<p><figure class="" style=""><img src="../arbeit/img/evolution3d/variability2_boxplot.png" style=""></img><figcaption>Histogram of ranks of various <span class="math inline">\(10 \times 10 \times 10\)</span> grids with <span class="math inline">\(1000\)</span> control–points each, showing how many control–points are actually used in the calculations.</figcaption></figure></p>
<p><figure class="" style="width:70%;"><img src="../arbeit/img/evolution1d/55_to_1010_steps.png" style="width:70%;"></img><figcaption>Left: <em>Improvement potential</em> against number of iterations until convergence<br/>Right: <em>Regularity</em> against number of iterations until convergence<br/>Coloured by their grid–resolution, both with a linear fit over the whole dataset.</figcaption></figure></p>
<ul>
<li>Not the case in our scenarios, possibly because a better solution simply takes longer to converge and this effect dominates.</li>
</ul>
</section>
<section id="regularity-3d" class="slide level1">
<h1>Regularity 3D</h1>
<ul>
<li>Should measure convergence speed</li>
</ul>
<p><figure class="" style="width:70%;"><img src="../arbeit/img/evolution3d/regularity_montage.png" style="width:70%;"></img><figcaption>Plots of <em>regularity</em> against number of iterations for various scenarios together with a linear fit to indicate trends.</figcaption></figure></p>
<ul>
<li>Only <em>very weak</em> correlation</li>
<li>The worst-contributing point dominates the regularity by pushing the smallest right singular value towards 0.</li>
<li>Should measure expected quality given a gradient</li>
</ul>
<p><figure class="" style="width:70%;"><img src="../arbeit/img/evolution1d/55_to_1010_improvement-vs-evo-error.png" style="width:70%;"></img><figcaption><em>Improvement potential</em> plotted against the error yielded by the evolutionary optimization for different grid–resolutions</figcaption></figure></p>
<ul>
<li><em>very strong</em> correlation of <span class="math inline">\(- r_S = 1.0, p = 0\)</span>.</li>
<li>Should measure expected quality given a gradient</li>
</ul>
<p><figure class="" style="width:70%;"><img src="../arbeit/img/evolution3d/improvement_montage.png" style="width:70%;"></img><figcaption>Plots of <em>improvement potential</em> against error given by our <em>fitness–function</em> after convergence together with a linear fit of each of the plotted data to indicate trends.</figcaption></figure></p>
<ul>
<li><em>weak</em> to <em>moderate</em> correlation within each group.</li>
</ul>
</section>
<section id="summary" class="slide level1">
<h1>Summary</h1>
<ul>
<li><em>Variability</em> and <em>Improvement Potential</em> are good measurements in our cases</li>
<li><em>Regularity</em> does not work well because of small right singular values
<ul>
<li>But optimizing for regularity <em>could</em> still lead to a better grid-setup (not shown, but likely)</li>
<li>Effect can be dominated by other factors (e.g. better solutions just take longer to converge)</li>