savepoint

This commit is contained in:
Stefan Dresselhaus
2017-10-24 13:22:20 +02:00
parent 4b92377ec9
commit 88f30c3d87
264 changed files with 42924 additions and 389 deletions

View File

@ -202,3 +202,37 @@
url = {http://graphics.uni-bielefeld.de/publications/disclaimer.php?dlurl=vmv15.pdf},
ISBN = {978-3-905674-95-8},
}
@article{hauke2011comparison,
title={Comparison of values of Pearson's and Spearman's correlation coefficients on the same sets of data},
author={Hauke, Jan and Kossowski, Tomasz},
journal={Quaestiones geographicae},
volume={30},
number={2},
pages={87},
year={2011},
publisher={De Gruyter Open Sp. z o.o.},
url={https://www.degruyter.com/downloadpdf/j/quageo.2011.30.issue-2/v10117-011-0021-1/v10117-011-0021-1.pdf},
}
@article{weir2015spearman,
title={Spearman's correlation},
author={Weir, I},
journal={Retrieved from statstutor},
year={2015},
url={http://www.statstutor.ac.uk/resources/uploaded/spearmans.pdf},
}
@article{shark08,
title={Shark},
author={Igel, Christian and Heidrich-Meisner, Verena and Glasmachers, Tobias},
journal={Journal of Machine Learning Research},
volume={9},
pages={993--996},
year={2008},
url={http://image.diku.dk/shark/index.html},
}
@article{hansen2016cma,
title={The CMA evolution strategy: A tutorial},
author={Hansen, Nikolaus},
journal={arXiv preprint arXiv:1604.00772},
year={2016},
url={https://arxiv.org/abs/1604.00772}
}

View File

@ -660,7 +660,8 @@ can compute the analytic solution $\vec{p^{*}} = \vec{U^+}\vec{t}$, yielding us
the correct gradient in which the evolutionary optimizer should move.
## Procedure: 1D Function Approximation
\label{sec:proc:1d}
For our setup we first compute the coefficients of the deformation--matrix and
then use the formulas for *variability* and *regularity* to obtain our predictions.
Afterwards we solve the problem analytically to get the (normalized) correct
@ -696,6 +697,7 @@ dimension and shrink the distance to the neighbours (the smaller neighbour for
$r < 0$, the larger for $r > 0$) by the factor $r$^[Note: on the edges this
displacement is only applied outwards by flipping the sign of $r$, if
appropriate.].
\improvement[inline]{update!! gaussian, not uniform!!}
An example of such a test case can be seen for a $7 \times 4$--grid in figure
\ref{fig:example1d_grid}.
@ -806,20 +808,148 @@ control-points.
# Evaluation of Scenarios
\label{sec:res}
## Spearman/Pearson Metrics
To compare our results to the ones given by Richter et al.\cite{anrichterEvol},
we also use Spearman's rank correlation coefficient. As opposed to other popular
coefficients, like the Pearson correlation coefficient, which measures a linear
relationship between variables, Spearman's coefficient assesses \glqq how
well an arbitrary monotonic function can describe the relationship between two
variables, without making any assumptions about the frequency distribution of
the variables\grqq\cite{hauke2011comparison}.
- What is this?
- Why should we care about it?
- Why is monotonicity sufficient?
- Have we shown this?
- Statistics, figures, etc.!
As we have no prior knowledge of whether any of the criteria is linear and we are
just interested in a monotonic relation between the criteria and their
predictive power, Spearman's coefficient seems to fit our scenario best.
For the interpretation of these values we follow the classification used in
\cite{anrichterEvol}, which is based on \cite{weir2015spearman}: The coefficient
intervals $r_S \in [0,0.2[$, $[0.2,0.4[$, $[0.4,0.6[$, $[0.6,0.8[$, and $[0.8,1]$ are
classified as *very weak*, *weak*, *moderate*, *strong* and *very strong*. We
interpret p--values smaller than $0.1$ as *significant* and cut off the
precision of p--values after four decimal digits (thus a p--value
of $0$ is reported for p--values $< 10^{-4}$).
As we are looking for anti--correlation (i.e. our criterion should be maximized,
indicating a minimal result in, for example, the reconstruction--error)
instead of correlation, we flip the sign of the correlation--coefficient for
readability and so that the correlation--coefficients fall into the
classification--range given above.
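As an illustration of this procedure (not part of our evaluation pipeline; the
data and variable names below are made up), the coefficient, its sign flip and
the classification can be sketched as follows:

```python
# Sketch only: Spearman's rank correlation with sign flip and the
# classification intervals of Weir (2015). The data below is made up.
from scipy.stats import spearmanr

criterion = [0.91, 0.85, 0.77, 0.60, 0.55, 0.42]  # e.g. variability per grid
error     = [0.02, 0.04, 0.05, 0.09, 0.11, 0.15]  # e.g. evolutionary error

r_s, p = spearmanr(criterion, error)
r_s = -r_s          # flip the sign: we look for anti-correlation
p = round(p, 4)     # report the p-value to four decimal digits

def classify(r):
    """Map |r| onto very weak .. very strong."""
    bounds = [(0.2, "very weak"), (0.4, "weak"), (0.6, "moderate"),
              (0.8, "strong"), (1.01, "very strong")]
    return next(label for bound, label in bounds if abs(r) < bound)

print(r_s, p, classify(r_s), "significant" if p < 0.1 else "not significant")
```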
For the evolutionary optimization we employ the CMA--ES (covariance matrix
adaptation evolution strategy) of the shark3.1 library \cite{shark08}, as this
algorithm was also used by \cite{anrichterEvol}. We leave the parameters at
their sensible defaults, as further explained in
\cite[Appendix~A: Table~1]{hansen2016cma}.
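For illustration only, the following sketch shows such an ask/tell optimization
loop with default parameters using Hansen's pycma package; our experiments use
the C++ implementation of shark3.1, and the objective function below is a
hypothetical stand-in:

```python
# Illustration only: our experiments use the CMA-ES of the shark3.1 C++ library;
# this sketch shows an equivalent default setup with Hansen's pycma package.
import numpy as np
import cma

def fitness(p):
    # Hypothetical stand-in for the reconstruction error of a candidate p.
    return float(np.sum((p - 1.0) ** 2))

x0 = np.zeros(28)                       # e.g. one parameter per control point
es = cma.CMAEvolutionStrategy(x0, 0.5)  # sigma0 = 0.5, everything else default
while not es.stop():
    candidates = es.ask()               # sample a new population
    es.tell(candidates, [fitness(c) for c in candidates])
print(es.result.xbest, es.result.fbest)
```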
## Results of 1D Function Approximation
\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/evolution1d/20171005-all_appended.png}
\caption{Results 1D}
\end{figure}
In the case of our 1D--optimization--problem, we have the luxury of knowing the
analytical solution to the given problem--set. We use this to experimentally
evaluate the quality criteria we introduced before. As an evolutionary
optimization is partially a random process, we use the analytical solution as a
stopping criterion. We measure the convergence speed as the number of iterations
the evolutionary algorithm needed to get within $1.05\%$ of the optimal solution.
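A minimal sketch of how such a convergence count can be obtained from a
per-iteration error trace (assuming the $1.05\%$ is taken relative to the
analytic optimum; names and data are hypothetical):

```python
# Sketch: count the iterations until the evolutionary error is within 1.05%
# of the analytic optimum; 'errors' is a hypothetical per-iteration error trace.
def steps_until_converged(errors, analytic_optimum, rel_tol=0.0105):
    for step, err in enumerate(errors, start=1):
        if abs(err - analytic_optimum) <= rel_tol * abs(analytic_optimum):
            return step
    return None  # tolerance never reached

print(steps_until_converged([3.0, 1.2, 0.7, 0.505, 0.5004], analytic_optimum=0.5))
```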
We used different regular grids that we manipulated as explained in Section
\ref{sec:proc:1d} with different numbers of control points. As the size of our
grids has to be the product of two integers, we compared a $5 \times 5$--grid
with $25$ control--points to a $4 \times 7$ and a $7 \times 4$--grid with $28$
control--points. This was done to measure the impact an \glqq improper\glqq
setup could have and how well this is reflected in the criteria we are
examining.
Additionally, we measured the effect of increasing the total resolution of
the grid by taking a closer look at $5 \times 5$, $7 \times 7$ and $10 \times 10$ grids.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{img/evolution1d/variability_boxplot.png}
\caption[1D Fitting Errors for various grids]{The squared error for the various
grids we examined.\newline
Note that $7 \times 4$ and $4 \times 7$ have the same number of control--points.}
\label{fig:1dvar}
\end{figure}
### Variability
Variability should characterize the potential for design space exploration and
is defined in terms of the normalized rank of the deformation matrix $\vec{U}$:
$V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n}$, where $n$ is the number of
vertices.
As all our tested matrices had a constant rank (being $m = x \cdot y$ for an
$x \times y$ grid), we have merely plotted the errors in the boxplot in figure
\ref{fig:1dvar}.
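A minimal sketch of the variability computation (with a random placeholder
instead of the real deformation matrix $\vec{U}$):

```python
# Sketch of V(U) = rank(U) / n with a random placeholder for U.
import numpy as np

n, m = 200, 25                   # n vertices, m = 5 * 5 control points
U = np.random.rand(n, m)         # placeholder for the FFD deformation matrix

variability = np.linalg.matrix_rank(U) / n
print(variability)               # rank(U) = m almost surely, so m / n = 0.125
```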
It is also noticeable that although the $7 \times 4$ and $4 \times 7$ grids
have a higher variability, they do not perform better than the $5 \times 5$ grid.
Also, the $7 \times 4$ and $4 \times 7$ grids differ distinctly from each other,
although they have the same number of control--points. This is an indication of
the impact a proper or improper grid--setup can have. We do not draw scientific
conclusions from these findings, as more research on non-square grids seems
necessary.\todo{will we still do that? :D}
Leaving the issue of the grid--layout aside, we focused on grids having the same
number of control--points in every dimension. For the $5 \times 5$, $7 \times 7$ and
$10 \times 10$ grids we found a *very strong* correlation ($-r_S = 0.94, p = 0$)
between the variability and the evolutionary error.
### Regularity
\begin{table}[bht]
\centering
\begin{tabular}{c|c|c|c|c}
$5 \times 5$ & $7 \times 4$ & $4 \times 7$ & $7 \times 7$ & $10 \times 10$\\
\hline
$0.28$ ($0.0045$) & \textcolor{red}{$0.21$} ($0.0396$) & \textcolor{red}{$0.1$} ($0.3019$) & \textcolor{red}{$0.01$} ($0.9216$) & \textcolor{red}{$0.01$} ($0.9185$)
\end{tabular}
\caption[Correlation 1D Regularity/Steps]{Spearman's correlation (and p-values)
between regularity and convergence speed for the 1D function approximation
problem.\newline
Non-significant entries are marked in red.
}
\label{tab:1dreg}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{img/evolution1d/55_to_1010_steps.png}
\caption[Improvement potential and regularity vs. steps]{\newline
Left: Improvement potential against steps until convergence\newline
Right: Regularity against steps until convergence\newline
Coloured by their grid--resolution, both with a linear fit over the whole
dataset.}
\label{fig:1dreg}
\end{figure}
Regularity should correspond to the convergence speed (measured in
iteration--steps of the evolutionary algorithm) and is computed as the inverse
of the condition number $\kappa(\vec{U})$ of the deformation--matrix.
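A minimal sketch of the regularity computation (again with a random placeholder
for $\vec{U}$):

```python
# Sketch of the regularity 1 / kappa(U), the inverse 2-norm condition number,
# again with a random placeholder for the deformation matrix U.
import numpy as np

U = np.random.rand(200, 25)
sigma = np.linalg.svd(U, compute_uv=False)      # singular values of U
regularity = sigma.min() / sigma.max()          # = 1 / kappa(U)

print(regularity, 1.0 / np.linalg.cond(U))      # both values agree
```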
As can be seen from table \ref{tab:1dreg}, we could only show a *weak* correlation
in the case of a $5 \times 5$ grid. As we increase the number of
control--points, the correlation gets worse until it appears completely random
within a single dataset. Taking all presented datasets into account we even get a *strong*
correlation of $-r_S = -0.72$, $p = 0$, which is opposed to our expectations.
To explain this discrepancy we took a closer look at what caused these high
numbers of iterations. In figure \ref{fig:1dreg} we also plotted the
improvement--potential against the steps next to the regularity--plot. Our theory
is that the *very strong* correlation ($-r_S = -0.82, p=0$) between
improvement--potential and number of iterations hints that the employed
algorithm simply takes longer to converge on a better solution (as seen in
figures \ref{fig:1dvar} and \ref{fig:1dimp}), offsetting any gain the
regularity--measurement could achieve.
### Improvement Potential
- All Spearman coefficients are $1$ with a p--value of $0$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{img/evolution1d/55_to_1010_improvement-vs-evo-error.png}
\caption[Correlation 1D Improvement vs. Error]{Improvement potential plotted
against the error yielded by the evolutionary optimization for different
grid--resolutions}
\label{fig:1dimp}
\end{figure}
<!-- ![Improvement potential vs steps](img/evolution1d/20170830-evolution1D_5x5_100Times-all_improvement-vs-steps.png) -->
@ -841,6 +971,11 @@ control-points.
\caption{Results 3D for Xx4x4}
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/evolution3d/YxYxY_montage.png}
\caption{Results 3D for $Y \times Y \times Y$ for $Y \in \{4,5,6\}$}
\end{figure}
<!-- ![Improvement potential vs steps](img/evolution3d/20170926_3dFit_both_improvement-vs-steps.png) -->
<!-- -->
<!-- ![Improvement potential vs evolutional -->
@ -851,7 +986,7 @@ control-points.
# Conclusion
\label{sec:dis}
HAHA .. as if -.-
- Regularity is crap for our setup. Better suggestions? Eigenvalues/eigenvectors (EW/EV)?
\improvement[inline]{Adjust the links in the bibliography. The DOI overrides the
author's direct links.}

Binary file not shown.

View File

@ -3,7 +3,7 @@
\documentclass[
a4paper, % default
12pt, % default = 11pt
BCOR6mm, % binding correction: 6mm for glue binding, BCOR8.25mm if punching holes
BCOR10mm, % binding correction: 6mm for glue binding, BCOR8.25mm if punching holes
twoside, % default, two-sided
titlepage,
% pagesize=auto
@ -31,10 +31,10 @@ xcolor=dvipsnames,
%%%%%%%%%%%%%%% Globale Einstellungen %%%%%%%%%%%%%%%
\input{settings/commands}
\input{settings/environments}
%\setlength{\parindent}{0pt} % no indentation at paragraph starts
%\setlength{\lineskip}{1ex plus0.5ex minus0.5ex} % instead, spacing between paragraphs (not working yet)
\setlength{\parindent}{0pt} % no indentation at paragraph starts
\setlength{\parskip}{12pt plus6pt minus2pt} % instead, spacing between paragraphs
% \renewcommand{\familydefault}{\sfdefault}
\setstretch{1.44} % 1.5x line spacing
\setstretch{1.5} % 1.5x line spacing
%%%%%%%%%%%%%%% Header - Footer %%%%%%%%%%%%%%%
% ### For two-sided layout (option twopage):
@ -850,6 +850,8 @@ should move.
\section{Procedure: 1D Function
Approximation}\label{procedure-1d-function-approximation}
\label{sec:proc:1d}
For our setup we first compute the coefficients of the
deformation--matrix and use then the formulas for \emph{variability} and
\emph{regularity} to get our predictions. Afterwards we solve the
@ -886,6 +888,7 @@ neighbours (the smaller neighbour for \(r < 0\), the larger for
\(r > 0\)) by the factor \(r\)\footnote{Note: on the edges this
displacement is only applied outwards by flipping the sign of \(r\),
if appropriate.}.
\improvement[inline]{update!! gaussian, not uniform!!}
An example of such a test case can be seen for a \(7 \times 4\)--grid in
figure \ref{fig:example1d_grid}.
@ -1004,29 +1007,162 @@ predict a suboptimal placement of these control-points.
\label{sec:res}
\section{Spearman/Pearson Metrics}\label{spearmanpearsonmetriken}
To compare our results to the ones given by Richter et
al.\cite{anrichterEvol}, we also use Spearman's rank correlation
coefficient. As opposed to other popular coefficients, like the Pearson
correlation coefficient, which measures a linear relationship between
variables, Spearman's coefficient assesses \glqq how well an
arbitrary monotonic function can describe the relationship between two
variables, without making any assumptions about the frequency
distribution of the variables\grqq\cite{hauke2011comparison}.
\begin{itemize}
\tightlist
\item
What is this?
\item
Why should we care about it?
\item
Why is monotonicity sufficient?
\item
Have we shown this?
\item
Statistics, figures, etc.!
\end{itemize}
As we have no prior knowledge of whether any of the criteria is linear
and we are just interested in a monotonic relation between the criteria
and their predictive power, Spearman's coefficient seems to fit our
scenario best.
For the interpretation of these values we follow the classification used
in \cite{anrichterEvol}, which is based on \cite{weir2015spearman}: The
coefficient intervals \(r_S \in [0,0.2[\), \([0.2,0.4[\), \([0.4,0.6[\),
\([0.6,0.8[\), and \([0.8,1]\) are classified as \emph{very weak},
\emph{weak}, \emph{moderate}, \emph{strong} and \emph{very strong}. We
interpret p--values smaller than \(0.1\) as \emph{significant} and cut
off the precision of p--values after four decimal digits (thus a
p--value of \(0\) is reported for p--values \(< 10^{-4}\)).
As we are looking for anti--correlation (i.e.~our criterion should be
maximized, indicating a minimal result in, for example, the
reconstruction--error) instead of correlation, we flip the sign of the
correlation--coefficient for readability and so that the
correlation--coefficients fall into the classification--range given above.
For the evolutionary optimization we employ the CMA--ES (covariance
matrix adaptation evolution strategy) of the shark3.1 library
\cite{shark08}, as this algorithm was also used by \cite{anrichterEvol}.
We leave the parameters at their sensible defaults, as further
explained in \cite[Appendix~A: Table~1]{hansen2016cma}.
\section{Results of 1D Function
Approximation}\label{results-of-1d-function-approximation}
\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/evolution1d/20171005-all_appended.png}
\caption{Results 1D}
\end{figure}
In the case of our 1D--optimization--problem, we have the luxury of
knowing the analytical solution to the given problem--set. We use this
to experimentally evaluate the quality criteria we introduced before. As
an evolutionary optimization is partially a random process, we use the
analytical solution as a stopping criterion. We measure the convergence
speed as the number of iterations the evolutionary algorithm needed to
get within \(1.05\%\) of the optimal solution.
We used different regular grids that we manipulated as explained in
Section \ref{sec:proc:1d} with different numbers of control points. As
the size of our grids has to be the product of two integers, we compared a
\(5 \times 5\)--grid with \(25\) control--points to a \(4 \times 7\) and
a \(7 \times 4\)--grid with \(28\) control--points. This was done to
measure the impact an \glqq improper\grqq
setup could have and how well this is reflected in the criteria we are
examining.
Additionally, we measured the effect of increasing the total
resolution of the grid by taking a closer look at \(5 \times 5\),
\(7 \times 7\) and \(10 \times 10\) grids.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{img/evolution1d/variability_boxplot.png}
\caption[1D Fitting Errors for various grids]{The squared error for the various
grids we examined.\newline
Note that $7 \times 4$ and $4 \times 7$ have the same number of control--points.}
\label{fig:1dfiterr}
\end{figure}
\subsection{Variability}\label{variability-1}
Variability should characterize the potential for design space
exploration and is defined in terms of the normalized rank of the
deformation matrix \(\vec{U}\):
\(V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n}\), where \(n\) is the
number of vertices. As all our tested matrices had a constant rank
(being \(m = x \cdot y\) for an \(x \times y\) grid), we have merely
plotted the errors in the boxplot in figure \ref{fig:1dfiterr}.
It is also noticeable that although the \(7 \times 4\) and
\(4 \times 7\) grids have a higher variability, they do not perform better
than the \(5 \times 5\) grid. Also, the \(7 \times 4\) and \(4 \times 7\)
grids differ distinctly from each other, although they have the same
number of control--points. This is an indication of the impact a proper or
improper grid--setup can have. We do not draw scientific conclusions
from these findings, as more research on non-square grids seems
necessary.\todo{will we still do that? :D}
Leaving the issue of the grid--layout aside, we focused on grids having
the same number of control--points in every dimension. For the
\(5 \times 5\), \(7 \times 7\) and \(10 \times 10\) grids we found a
\emph{very strong} correlation (\(-r_S = 0.94, p = 0\)) between the
variability and the evolutionary error.
\subsection{Regularity}\label{regularity-1}
\begin{table}[bht]
\centering
\begin{tabular}{c|c|c|c|c}
$5 \times 5$ & $7 \times 4$ & $4 \times 7$ & $7 \times 7$ & $10 \times 10$\\
\hline
$0.28$ ($0.0045$) & \textcolor{red}{$0.21$} ($0.0396$) & \textcolor{red}{$0.1$} ($0.3019$) & \textcolor{red}{$0.01$} ($0.9216$) & \textcolor{red}{$0.01$} ($0.9185$)
\end{tabular}
\caption[Correlation 1D Regularity/Steps]{Spearman's correlation (and p-values)
between regularity and convergence speed for the 1D function approximation
problem.\newline
Non-significant entries are marked in red.
}
\label{tab:1dreg}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{img/evolution1d/55_to_1010_steps.png}
\caption[Improvement potential and regularity vs. steps]{\newline
Left: Improvement potential against steps until convergence\newline
Right: Regularity against steps until convergence\newline
Coloured by their grid--resolution, both with a linear fit over the whole
dataset.}
\label{fig:1dreg}
\end{figure}
Regularity should correspond to the convergence speed (measured in
iteration--steps of the evolutionary algorithm) and is computed as the
inverse of the condition number \(\kappa(\vec{U})\) of the deformation--matrix.
As can be seen from table \ref{tab:1dreg}, we could only show a
\emph{weak} correlation in the case of a \(5 \times 5\) grid. As we
increase the number of control--points, the correlation gets worse until
it appears completely random within a single dataset. Taking all presented
datasets into account we even get a \emph{strong} correlation of
\(-r_S = -0.72\), \(p = 0\), which is opposed to our expectations.
To explain this discrepancy we took a closer look at what caused these
high numbers of iterations. In figure \ref{fig:1dreg} we also plotted the
improvement--potential against the steps next to the regularity--plot.
Our theory is that the \emph{very strong} correlation
(\(-r_S = -0.82, p=0\)) between improvement--potential and number of
iterations hints that the employed algorithm simply takes longer to
converge on a better solution (as seen in figure \ref{fig:1dimp}),
offsetting any gain the regularity--measurement could achieve.
\subsection{Improvement Potential}\label{improvement-potential-1}
\begin{itemize}
\tightlist
\item
All Spearman coefficients are \(1\) with a p--value of \(0\).
\end{itemize}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{img/evolution1d/55_to_1010_improvement-vs-evo-error.png}
\caption[Correlation 1D Improvement vs. Error]{Improvement potential plotted
against the error yielded by the evolutionary optimization for different
grid--resolutions}
\label{fig:1dimp}
\end{figure}
\section{Results of 3D Function
@ -1042,11 +1178,20 @@ Approximation}\label{results-of-3d-function-approximation}
\caption{Results 3D for Xx4x4}
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/evolution3d/YxYxY_montage.png}
\caption{Results 3D for $Y \times Y \times Y$ for $Y \in \{4,5,6\}$}
\end{figure}
\chapter{Conclusion}\label{schluss}
\label{sec:dis}
HAHA .. as if -.-
\begin{itemize}
\tightlist
\item
Regularity is crap for our setup. Better suggestions? Eigenvalues/eigenvectors (EW/EV)?
\end{itemize}
\improvement[inline]{Adjust the links in the bibliography. The DOI overrides
the author's direct links.}

View File

@ -3,7 +3,7 @@
\documentclass[
a4paper, % default
$if(fontsize)$$fontsize$,$endif$ % default = 11pt
BCOR6mm, % binding correction: 6mm for glue binding, BCOR8.25mm if punching holes
BCOR10mm, % binding correction: 6mm for glue binding, BCOR8.25mm if punching holes
twoside, % default, two-sided
titlepage,
% pagesize=auto
@ -31,10 +31,10 @@ xcolor=dvipsnames,
%%%%%%%%%%%%%%% Globale Einstellungen %%%%%%%%%%%%%%%
\input{settings/commands}
\input{settings/environments}
%\setlength{\parindent}{0pt} % no indentation at paragraph starts
%\setlength{\lineskip}{1ex plus0.5ex minus0.5ex} % instead, spacing between paragraphs (not working yet)
\setlength{\parindent}{0pt} % no indentation at paragraph starts
\setlength{\parskip}{12pt plus6pt minus2pt} % instead, spacing between paragraphs
% \renewcommand{\familydefault}{\sfdefault}
\setstretch{1.44} % 1.5x line spacing
\setstretch{1.5} % 1.5x line spacing
%%%%%%%%%%%%%%% Header - Footer %%%%%%%%%%%%%%%
% ### For two-sided layout (option twopage):