savepoint

This commit is contained in:
Nicole Dresselhaus 2017-10-27 14:31:55 +02:00
parent 5b22c181be
commit f901716f60
Signed by: Drezil
GPG Key ID: 057D94F356F41E25
104 changed files with 2941 additions and 866 deletions


@@ -4,7 +4,7 @@ all: ma.md bibma.bib template.tex settings/abkuerzungen.tex settings/commands.te
xelatex -interaction batchmode ma.tex || true
bibtexu ma
xelatex -interaction batchmode ma.tex || true
while test `cat ma.log | grep -e "Rerun to get \(citations correct\|cross-references right\)" | wc -l` -gt 0 ; do \
rm ma.log && (xelatex -interaction batchmode ma.tex || true) ; \
done
rm -f ma.aux ma.idx ma.lof ma.lot ma.out ma.tdo ma.toc ma.bbl ma.blg ma.loa


@@ -61,17 +61,18 @@ strongly tied to the notion of *evolvability*\cite{wagner1996complex}, as the
parametrization of the problem has serious implications on the convergence speed
and the quality of the solution\cite{Rothlauf2006}.
However, there is no consensus on how *evolvability* is defined and the meaning
varies from context to context\cite{richter2015evolvability}. As a consequence
there is a need for measurable criteria, so that we are able to compare different
representations and learn from and improve upon them.

One example of such a general representation of an object is to generate random
points and represent the vertices of an object as distances to these points ---
for example via \acf{RBF}. If one (or the algorithm) moves such a point, the
object gets deformed only locally (due to the \ac{RBF}). As this results in
a simple mapping from the parameter--space onto the object, one can try out
different representations of the same object and evaluate which criteria may be
suited to describe this notion of *evolvability*. This is exactly what Richter
et al.\cite{anrichterEvol} have done.

As we transfer the results of Richter et al.\cite{anrichterEvol} from using
\acf{RBF} as a representation to manipulate geometric objects to the use of
@@ -94,17 +95,16 @@ take an abstract look at the definition of \ac{FFD} for a one--dimensional line
(in \ref{sec:back:ffdgood}).
Then we establish some background--knowledge of evolutionary algorithms (in
\ref{sec:back:evo}) and why this is useful in our domain (in
\ref{sec:back:evogood}), followed by the definition of the different evolvability
criteria established in \cite{anrichterEvol} (in \ref{sec:back:rvi}).

In Chapter \ref{sec:impl} we take a look at our implementation of \ac{FFD} and
the adaptation for 3D--meshes that was used. Next, in Chapter \ref{sec:eval},
we describe the different scenarios we use to evaluate the different
evolvability--criteria, incorporating all aspects introduced in Chapter
\ref{sec:back}. Following that, we evaluate the results in
Chapter \ref{sec:res}, with further discussion, summary and outlook in
Chapter \ref{sec:dis}.
# Background
@@ -124,10 +124,10 @@ The main idea of \ac{FFD} is to create a function $s : [0,1[^d \mapsto
parametrized by some special control--points $\vec{p_i}$ and a constant
attribution--function $a_i(\vec{u})$, so
$$
s(\vec{u}) = \sum_i a_i(\vec{u}) \vec{p_i}
$$
can be thought of as a representation of the inside of the convex hull generated by
the control--points, where each point can be accessed by the right $\vec{u} \in [0,1[^d$.
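The weighted sum above can be sketched in code. The following is not the thesis' implementation but a minimal 1D sketch, assuming the attribution--functions $a_i(u)$ are cubic B--spline basis functions evaluated via the Cox--de Boor recursion; the concrete control--point values and knot vector are arbitrary choices for illustration:

```python
import numpy as np

def bspline_basis(i, k, u, knots):
    """Cox-de Boor recursion: basis function N_{i,k} evaluated at u."""
    if k == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = ((u - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, u, knots))
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - u) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, u, knots))
    return left + right

k = 3                                                # cubic
ctrl = np.array([0.0, 1.0, 2.0, 3.0, 4.0])           # p_i, here scalar
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], float)  # clamped knot vector

def s(u):
    """s(u) = sum_i a_i(u) * p_i with B-spline basis functions as a_i."""
    a = np.array([bspline_basis(i, k, u, knots) for i in range(len(ctrl))])
    return a @ ctrl, a.sum()

val, partition = s(0.25)
```

Inside $[0,1[$ the basis functions form a partition of unity, so $s(u)$ always stays inside the convex hull of the control--points.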
\begin{figure}[!ht]
\begin{center}
@@ -138,9 +138,9 @@ corresponding deformation to generate a deformed object}
\label{fig:bspline}
\end{figure}
In the 1--dimensional example in figure \ref{fig:bspline}, the control--points
are indicated as red dots and the color--gradient should hint at the $u$--values
ranging from $0$ to $1$.

We now define a \acf{FFD} by the following:
Given an arbitrary number of points $p_i$ alongside a line, we map a scalar
@@ -299,6 +299,7 @@ that terminates the optimization.
Biologically speaking the set $I$ corresponds to the set of possible *Genotypes*
while $M$ represents the possible observable *Phenotypes*.

\improvement[inline]{Explain what this is. Add sources!}

The main algorithm just repeats the following steps:
@@ -318,20 +319,20 @@ The main algorithm just repeats the following steps:
of $\mu$ individuals.

All these functions can (and mostly do) have a lot of hidden parameters that
can be changed over time.
\improvement[inline]{More precisely: which ones? Where? Why? ...}
<!--One can for example start off with a high
mutation rate that cools off over time (i.e. by lowering the variance of a
gaussian noise).-->
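The repeated steps above (recombination, mutation, selection over $\mu$ individuals) can be sketched as a minimal $(\mu+\lambda)$--evolution strategy on a toy fitness--function; all parameter values here ($\mu$, $\lambda$, $\sigma$, the hidden optimum) are arbitrary choices for the sketch, not the settings used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # toy fitness: squared distance to a hidden optimum at (1.5, 1.5, 1.5)
    return np.sum((x - 1.5) ** 2)

mu, lam, dim, sigma = 5, 20, 3, 0.5
pop = rng.normal(size=(mu, dim))                 # initial parent population

for generation in range(200):
    # recombination: average two randomly chosen parents
    parents = pop[rng.integers(mu, size=(lam, 2))]
    children = parents.mean(axis=1)
    # mutation: additive gaussian noise
    children += rng.normal(scale=sigma, size=children.shape)
    # (mu + lambda)-selection: keep the best mu of parents and children
    merged = np.vstack([pop, children])
    pop = merged[np.argsort([fitness(x) for x in merged])[:mu]]

best = pop[0]
```

The $(\mu+\lambda)$--selection is elitist, so the best fitness found can never get worse from one generation to the next.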
## Advantages of evolutionary algorithms
\label{sec:back:evogood}

The main advantage of evolutionary algorithms is the ability to find optima of
general functions just with the help of a given fitness--function. Components
and techniques for evolutionary algorithms are specifically known to
help with different problems arising in the domain of
optimization\cite{weise2012evolutionary}. An overview of the typical problems
is shown in figure \ref{fig:probhard}.
@@ -345,11 +346,14 @@ is shown in figure \ref{fig:probhard}.
Most of the advantages stem from the fact that a gradient--based procedure has
only one point of observation from where it evaluates the next steps, whereas an
evolutionary strategy starts with a population of guessed solutions. Because an
evolutionary strategy modifies the solution randomly, keeping the best solutions
and purging the worst, it can also target multiple different hypotheses at the
same time, where the local optima die out in the face of other, better
candidates.

\improvement[inline]{Reference MO--CMA etc. Maybe also in a bit more detail.}

If an analytic best solution exists and is easily computable (i.e. because the
error--function is convex) an evolutionary algorithm is not the right choice.
Although both converge to the same solution, the analytic one is usually faster.
@@ -357,8 +361,9 @@ Although both converge to the same solution, the analytic one is usually faster.
But in reality many problems have no analytic solution, because the problem is
either not convex or there are so many parameters that an analytic solution
(mostly meaning the equivalence to an exhaustive search) is computationally not
feasible. Here evolutionary optimization has one more advantage, as one can at
least get suboptimal solutions fast, which then refine over time and still
converge to the same solution.
## Criteria for the evolvability of linear deformations
\label{sec:intro:rvi}
@@ -366,26 +371,26 @@ least get suboptimal solutions fast, which then refine over time.
As we have established in chapter \ref{sec:back:ffd}, we can describe a
deformation by the formula
$$
\vec{V} = \vec{U}\vec{P}
$$
where $\vec{V}$ is a $n \times d$ matrix of vertices, $\vec{U}$ are the (during
parametrization) calculated deformation--coefficients and $\vec{P}$ is a
$m \times d$ matrix of control--points that we interact with during deformation.

We can also think of the deformation in terms of differences from the original
coordinates
$$
\Delta \vec{V} = \vec{U} \cdot \Delta \vec{P}
$$
which is isomorphic to the former due to the linearity of the
deformation. One can see this way that the behaviour of the deformation lies
solely in the entries of $\vec{U}$, which is why the three criteria focus on this matrix.
### Variability
In \cite{anrichterEvol} *variability* is defined as
$$\mathrm{variability}(\vec{U}) := \frac{\mathrm{rank}(\vec{U})}{n},$$
whereby $\vec{U}$ is the $n \times m$ deformation--matrix used to map the $m$
control--points onto the $n$ vertices.

Given $n = m$, an identical number of control--points and vertices, this
@@ -395,10 +400,20 @@ the solution is to trivially move every control--point onto a target--point.
In practice the value of $\mathrm{variability}(\vec{U})$ is typically $\ll 1$,
because there are only few control--points for many vertices, so $m \ll n$.

This criterion should correlate to the degrees of freedom the given
parametrization has. This can be seen from the fact that
$\mathrm{rank}(\vec{U})$ is limited by $\min(m,n)$ and --- as $n$ is constant
--- can never exceed $n$.

The rank itself is also interesting, as control--points could theoretically be
placed on top of each other or be linearly dependent in another way --- both
cases lower the rank below the number of control--points $m$ and are
thus measurable by the *variability*.
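Numerically, the *variability* is just a rank computation. A minimal sketch with a random stand--in for $\vec{U}$ (hypothetical data, not the B--spline coefficients used in the thesis) also shows the effect of coinciding control--points:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 8
U = rng.random((n, m))                      # stand-in deformation matrix

variability = np.linalg.matrix_rank(U) / n  # rank(U)/n, bounded by m/n

# two control-points placed on top of each other duplicate a column of U,
# lowering the rank (and thus the variability) below m/n
U_dup = U.copy()
U_dup[:, 1] = U_dup[:, 0]
variability_dup = np.linalg.matrix_rank(U_dup) / n
```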
### Regularity
*Regularity* is defined\cite{anrichterEvol} as
$$\mathrm{regularity}(\vec{U}) := \frac{1}{\kappa(\vec{U})} = \frac{\sigma_{min}}{\sigma_{max}}$$
where $\sigma_{min}$ and $\sigma_{max}$ are the smallest and greatest right singular
value of the deformation--matrix $\vec{U}$.
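The *regularity* can be computed directly from the singular values, again sketched with a hypothetical stand--in for $\vec{U}$:

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.random((100, 8))                    # stand-in deformation matrix

sigma = np.linalg.svd(U, compute_uv=False)  # singular values, descending
regularity = sigma[-1] / sigma[0]           # sigma_min/sigma_max = 1/kappa(U)
```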
@@ -416,17 +431,19 @@ the notion of locality\cite{weise2012evolutionary,thorhauer2014locality}.
### Improvement Potential
In contrast to the general nature of *variability* and *regularity*, which are
agnostic of the fitness--function at hand, the third criterion should reflect a
notion of the potential for optimization, taking a guess into account.
Most of the time some kind of gradient $g$ is available to suggest a
direction worth pursuing; either from a previous iteration or by educated
guessing. We use this to estimate how much change can be achieved in
the given direction.

The definition of the *improvement potential* $P$ is\cite{anrichterEvol}:
$$
\mathrm{potential}(\vec{U}) := 1 - \|(\vec{1} - \vec{UU}^+)\vec{G}\|^2_F
$$
\unsure[inline]{Is the $^2$ correct?}
given some approximate $n \times d$ fitness--gradient $\vec{G}$, normalized to
$\|\vec{G}\|_F = 1$, whereby $\|\cdot\|_F$ denotes the Frobenius--norm.
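The formula above uses the pseudo--inverse $\vec{U}^+$, so $\vec{1} - \vec{UU}^+$ is the orthogonal projector onto the complement of the column space of $\vec{U}$. A sketch with random stand--ins for $\vec{U}$ and $\vec{G}$ (hypothetical data, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, d = 100, 8, 3
U = rng.random((n, m))                      # stand-in deformation matrix
G = rng.normal(size=(n, d))
G /= np.linalg.norm(G)                      # normalize to ||G||_F = 1

# projector onto the complement of the column space of U
P = np.eye(n) - U @ np.linalg.pinv(U)
potential = 1 - np.linalg.norm(P @ G) ** 2  # squared Frobenius norm

# a gradient lying entirely in the span of U is fully realizable: potential 1
G_span = U @ rng.normal(size=(m, d))
G_span /= np.linalg.norm(G_span)
potential_span = 1 - np.linalg.norm(P @ G_span) ** 2
```

Since $\vec{1} - \vec{UU}^+$ is an orthogonal projector and $\|\vec{G}\|_F = 1$, the value always lies in $[0,1]$.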
@@ -482,7 +499,9 @@ $$
$$
and do a gradient--descent to approximate the value of $u$ up to an $\epsilon$ of $0.0001$.

For this we use the Gauss--Newton algorithm\cite{gaussNewton}
\todo[inline]{Rewrite. Wrong and wishy--washy. Least squares?}
as the solution to
this problem may not be deterministic, because we usually have way more vertices
than control points ($\#v~\gg~\#c$).
@@ -548,12 +567,27 @@ $$J(Err(u,v,w)) \cdot \Delta \left( \begin{array}{c} u \\ v \\ w \end{array} \ri
and use Cramer's rule for inverting the small Jacobian and solving this system of
linear equations.

As there is no strict upper bound on the number of iterations for this
algorithm, we just iterate it long enough to be within the given
$\epsilon$--error above. This takes --- depending on the shape of the object and
the grid --- about $3$ to $5$ iterations, as we observed in practice.

Another issue we observed in our implementation is that multiple local
optima may exist on self--intersecting grids. We solve this problem by defining
self--intersecting grids to be *invalid* and not testing any of them.
This is less of a restriction than it sounds at first, as self--intersections
mean that control--points further away from a given vertex have more
influence over the deformation than control--points closer to this vertex.
This contradicts the notion of locality that we want to achieve and that we deemed
beneficial for a good behaviour of the evolutionary algorithm.
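Solving the small $3 \times 3$ system with Cramer's rule can be sketched as follows; the Jacobian and right--hand side here are made--up numbers, not values from the thesis' solver:

```python
import numpy as np

def solve_cramer_3x3(J, rhs):
    """Solve J @ x = rhs for a 3x3 Jacobian via Cramer's rule."""
    det = np.linalg.det(J)
    x = np.empty(3)
    for i in range(3):
        Ji = J.copy()
        Ji[:, i] = rhs          # replace column i by the right-hand side
        x[i] = np.linalg.det(Ji) / det
    return x

J = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])
rhs = np.array([4.0, 6.0, 5.0])
delta = solve_cramer_3x3(J, rhs)  # one Newton update for (u, v, w)
# delta == [1.0, 2.0, 2.0]
```

For such a tiny fixed--size system Cramer's rule is a reasonable choice; for anything larger a factorization--based solver would be preferred.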
## Deformation Grid
\label{sec:impl:grid}

As mentioned in chapter \ref{sec:back:evo}, the way of choosing the
representation to map the general problem (mesh--fitting/optimization in our
case) into a parameter--space is very important for the quality and runtime of
evolutionary algorithms\cite{Rothlauf2006}.

Because our control--points are arranged in a grid, we can accurately represent
@@ -561,10 +595,10 @@ each vertex--point inside the grid's volume with proper B--Spline--coefficients
between $[0,1[$ and --- as a consequence --- we have to embed our object into it
(or create constant "dummy"--points outside).

The great advantage of B--Splines is the local, direct impact of each
control--point without having a $1:1$--correlation, and a smooth deformation.
While the advantages are great, the issues arise from the problem of deciding
where to place the control--points and how many to place at all.
\begin{figure}[!tbh]
\centering
@@ -578,8 +612,8 @@ control--points.}
\end{figure}

One would normally think that the more control--points you add, the better the
result will be, but this is not the case for our B--Splines. Given any point
$\vec{p}$, only $2 \cdot (d-1)$ control--points contribute to the parametrization of
that point^[Normally these are $d-1$ to each side, but at the boundaries the
number gets increased to the inside to meet the required smoothness].
This means that a high resolution can have many control--points that are not
@@ -587,29 +621,37 @@ contributing to any point on the surface and are thus completely irrelevant to
the solution.
We illustrate this phenomenon in figure \ref{fig:enoughCP}, where the four red
central points are not relevant for the parametrization of the circle. This
leads to artefacts in the deformation--matrix $\vec{U}$, as the columns
corresponding to those control--points are $0$.

This leads to uselessly increased complexity, as the parameters corresponding to
those points will never have any effect, but a naive algorithm will still try to
optimize them, yielding numeric artefacts in the best case and non--terminating or
ill--defined solutions^[One example would be when parts of an algorithm depend
on the inverse of the minimal right singular value, leading to a division by $0$.]
at worst.

One can of course neglect those columns and their corresponding control--points,
but this raises the question why they were introduced in the first place. We
will address this in a special scenario in \ref{sec:res:3d:var}.

For our tests we chose different uniformly sized grids and added noise
onto each control--point^[For the special case of the outer layer we only applied
noise away from the object, so the object is still confined in the convex hull
of the control--points.] to simulate different starting--conditions.
# Scenarios for testing evolvability criteria using \ac{FFD}
\label{sec:eval}

In our experiments we use the same two testing--scenarios that were also used
by \cite{anrichterEvol}. The first scenario deforms a plane into a shape
originally defined in \cite{giannelli2012thb}, where we set up control--points in
a 2--dimensional manner and merely deform in the height--coordinate to get the
resulting shape.

In the second scenario we increase the degrees of freedom significantly by using
a 3--dimensional control--grid to deform a sphere into a face, so each control
point has three degrees of freedom in contrast to the first scenario.
## Test Scenario: 1D Function Approximation
@@ -642,10 +684,10 @@ As the starting--plane we used the same shape, but set all
$z$--coordinates to $0$, yielding a flat plane, which is partially already
correct.

Regarding the *fitness--function* $\mathrm{f}(\vec{p})$, we use the very simple approach
of calculating the squared distances for each corresponding vertex
\begin{equation}
\mathrm{f}(\vec{p}) = \sum_{i=1}^{n} \|(\vec{Up})_i - t_i\|_2^2 = \|\vec{Up} - \vec{t}\|^2 \rightarrow \min
\end{equation}
where $t_i$ are the respective target--vertices to the parametrized
source--vertices^[The parametrization is encoded in $\vec{U}$ and the initial
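This fitness evaluation can be sketched directly; $\vec{U}$ and $\vec{t}$ below are random stand--ins (the real $\vec{U}$ holds the B--spline coefficients), and the least--squares solution plays the role of the analytic optimum:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 100, 8
U = rng.random((n, m))               # stand-in deformation matrix
U /= U.sum(axis=1, keepdims=True)    # rows sum to 1, like B-spline coefficients
t = rng.normal(size=n)               # target heights (z-coordinates)
p = np.zeros(m)                      # control-point heights, flat start

def fitness(p):
    return np.sum((U @ p - t) ** 2)  # ||Up - t||^2

# the analytic least-squares optimum, usable as a baseline
p_star, *_ = np.linalg.lstsq(U, t, rcond=None)
```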
@@ -662,7 +704,7 @@ the correct gradient in which the evolutionary optimizer should move.
\label{sec:test:3dfa}

Opposed to the 1--dimensional scenario before, the 3--dimensional scenario is
much more complex --- not only because we have more degrees of freedom on each
control point, but also because the *fitness--function* we will use has no known
analytic solution and multiple local minima.
\begin{figure}[ht]
@@ -683,12 +725,13 @@ these models can be seen in figure \ref{fig:3dtarget}.
Opposed to the 1D--case we cannot map the source and target--vertices in a
one--to--one--correspondence, which we especially need for the approximation of
the fitting--error. Hence we state that the error of one vertex is the distance
to the closest vertex of the other model and sum up the error from the
respective source and target.

We therefore define the *fitness--function* to be:
\begin{equation}
\mathrm{f}(\vec{P}) = \frac{1}{n} \underbrace{\sum_{i=1}^n \|\vec{c_T(s_i)} -
\vec{s_i}\|_2^2}_{\textrm{source-to-target--distance}}
+ \frac{1}{m} \underbrace{\sum_{i=1}^m \|\vec{c_S(t_i)} -
\vec{t_i}\|_2^2}_{\textrm{target-to-source--distance}}
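The two underbraced sums are symmetric closest--point distances. A brute--force sketch on random point clouds (an actual implementation would use a spatial data structure such as a k--d--tree instead of the quadratic all--pairs computation):

```python
import numpy as np

rng = np.random.default_rng(5)
S = rng.normal(size=(40, 3))    # source vertices s_i
T = rng.normal(size=(60, 3))    # target vertices t_i

def closest_sq_dists(A, B):
    """For each point in A, the squared distance to its closest point in B."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1)

# mean source-to-target distance plus mean target-to-source distance
fitness = closest_sq_dists(S, T).mean() + closest_sq_dists(T, S).mean()
```

For identical point clouds the fitness is exactly $0$, which makes the measure a sensible error term.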
@@ -711,9 +754,10 @@ As regularization--term we add a weighted Laplacian of the deformation that has
been used before by Aschenbach et al.\cite[Section 3.2]{aschenbach2015} on
similar models and was shown to lead to a more precise fit. The Laplacian
\begin{equation}
\mathrm{regularization}(\vec{P}) = \frac{1}{\sum_i A_i} \sum_{i=1}^n A_i \cdot \left( \sum_{\vec{s_j} \in \mathcal{N}(\vec{s_i})} w_j \cdot \|\Delta \vec{s_j} - \Delta \vec{\overline{s}_j}\|^2 \right)
\label{eq:reg3d}
\end{equation}
\unsure[inline]{What is $\vec{\overline{s}_j}$? The centre? Actually $s_i$?}
is determined by the cotangent--weighted displacement $w_j$ of the vertices
$\mathcal{N}(s_i)$ connected to $s_i$, and $A_i$ is the Voronoi--area of the corresponding vertex
$\vec{s_i}$. We leave out the $\vec{R}_i$--term from the original paper as our
@@ -731,19 +775,20 @@ To compare our results to the ones given by Richter et al.\cite{anrichterEvol},
we also use Spearman's rank correlation coefficient. Opposed to other popular
coefficients, like the Pearson correlation coefficient, which measures a linear
relationship between variables, Spearman's coefficient assesses \glqq how
well an arbitrary monotonic function can describe the relationship between two
variables, without making any assumptions about the frequency distribution of
the variables\grqq\cite{hauke2011comparison}.

As we don't have any prior knowledge whether any of the criteria is linear and we are
just interested in a monotonic relation between the criteria and their
predictive power, Spearman's coefficient seems to fit our scenario best and
was also used before by Richter et al.\cite{anrichterEvol}.

For interpretation of these values we follow the same interpretation used in
\cite{anrichterEvol}, based on \cite{weir2015spearman}: The coefficient
intervals $r_S \in [0,0.2[$, $[0.2,0.4[$, $[0.4,0.6[$, $[0.6,0.8[$, and $[0.8,1]$ are
classified as *very weak*, *weak*, *moderate*, *strong* and *very strong*. We
interpret p--values smaller than $0.01$ as *significant* and cut off the
precision of p--values after four decimal digits (thus often having a p--value
of $0$ given for p--values $< 10^{-4}$).
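Spearman's coefficient is simply the Pearson correlation of the ranks. A minimal sketch without tie handling (library routines such as `scipy.stats.spearmanr` handle ties and also report the p--value):

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    No tie handling -- assumes all values are distinct."""
    rx = np.argsort(np.argsort(x))   # 0-based ranks
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# a monotonic but non-linear relation still yields a perfect score
x = np.array([0.1, 0.5, 1.0, 2.0, 3.0])
r = spearman(x, np.exp(x))
# r == 1.0
```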
<!-- </> -->
@@ -772,7 +817,9 @@
$$
\vec{g}_{\textrm{d}} = \frac{\vec{g}_{\textrm{c}} + \mathbb{1}}{\|\vec{g}_{\textrm{c}} + \mathbb{1}\|}
$$
where $\mathbb{1}$ is the vector consisting of $1$ in every dimension and
$\vec{g}_\textrm{c} = \vec{p^{*}} - \vec{p}$ the calculated correct gradient. As
we always start with $\vec{p} = \mathbb{0}$, this shortens to
$\vec{g}_\textrm{c} = \vec{p^{*}}$.
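A minimal sketch of this construction (ours, not the thesis code), assuming the optimization starts at $\vec{p} = \mathbb{0}$ as stated above:

```python
# Sketch (not the thesis code): the guessed gradient g_d is the correct
# gradient g_c = p* (since p starts at 0), shifted by the all-ones vector
# and renormalized to unit length.
import numpy as np

def desired_gradient(p_star):
    g_c = np.asarray(p_star, dtype=float)  # correct gradient, as p = 0
    g_d = g_c + np.ones_like(g_c)
    return g_d / np.linalg.norm(g_d)

g_d = desired_gradient([3.0, -1.0])  # (4, 0) normalized -> (1, 0)
```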
\begin{figure}[ht]
\begin{center}
@@ -787,11 +834,7 @@
We then set up a regular 2--dimensional grid around the object with the desired
grid resolutions. To generate a testcase we then move the grid--vertices
randomly inside the x--y--plane. As self--intersecting grids get tricky to solve
with our implemented Newton's method we avoid the generation of such
self--intersecting grids for our testcases (see section \ref{3dffd}).

To achieve that we select a uniformly distributed number $r \in [-0.25,0.25]$ per
dimension and shrink the distance to the neighbours (the smaller neighbour for
@@ -810,20 +853,22 @@
analytical solution to the given problem--set. We use this to experimentally
evaluate the quality criteria we introduced before. As an evolutionary
optimization is partially a random process, we use the analytical solution as a
stopping--criterion. We measure the convergence speed as the number of iterations
the evolutionary algorithm needed to get within a factor of $1.05$ of the optimal
solution.

We used different regular grids that we manipulated as explained in Section
\ref{sec:proc:1d} with a different number of control points. As our grids have
to be the product of two integers, we compared a $5 \times 5$--grid with $25$
control--points to a $4 \times 7$ and $7 \times 4$--grid with $28$
control--points. This was done to measure the impact an \glqq improper\grqq \
setup could have and how well this is displayed in the criteria we are
examining.

Additionally we also measured the effect of increasing the total resolution of
the grid by taking a closer look at $5 \times 5$, $7 \times 7$ and $10 \times 10$ grids.
### Variability

\begin{figure}[tbh]
\centering
\includegraphics[width=0.7\textwidth]{img/evolution1d/variability_boxplot.png}
\caption[1D Fitting Errors for various grids]{The squared error for the various
@@ -832,8 +877,6 @@
Note that $7 \times 4$ and $4 \times 7$ have the same number of control--points.
\label{fig:1dvar}
\end{figure}

Variability should characterize the potential for design space exploration and
is defined in terms of the normalized rank of the deformation matrix $\vec{U}$:
$V(\vec{U}) := \frac{\textrm{rank}(\vec{U})}{n}$, whereby $n$ is the number of
@@ -844,11 +887,12 @@
grid), we have merely plotted the errors in the boxplot in figure
It is also noticeable, that although the $7 \times 4$ and $4 \times 7$ grids
have a higher variability, they do not perform better than the $5 \times 5$ grid.
Also the $7 \times 4$ and $4 \times 7$ grids differ distinctly from each other
with a mean$\pm$sigma of $233.09 \pm 12.32$ for the former and $286.32 \pm 22.36$ for the
latter, although they have the same number of control--points. This is an
indication of the impact a proper or improper grid--setup can have. We do not
draw scientific conclusions from these findings, as more research on non-squared
grids seems necessary.

Leaving the issue of the grid--layout aside we focused on grids having the same
number of prototypes in every dimension. For the $5 \times 5$, $7 \times 7$ and
@@ -857,7 +901,7 @@
between the variability and the evolutionary error.

### Regularity

\begin{figure}[tbh]
\centering
\includegraphics[width=\textwidth]{img/evolution1d/55_to_1010_steps.png}
\caption[Improvement potential and regularity vs. steps]{\newline
@@ -952,12 +996,13 @@
control--points.}
For the next step we then halve the regularization--impact $\lambda$ (starting
at $1$) of our *fitness--function* (\ref{eq:fit3d}) and calculate the next
incremental solution $\vec{P^{*}} = \vec{U^+}\vec{T}$ with the updated
correspondences (again, mapping each vertex to its closest neighbour in the
respective other model) to get our next target--error. We repeat this process as
long as the target--error keeps decreasing and use the number of these
iterations as a measure of the convergence speed. As the resulting evolutionary
error without regularization is in the numeric range of $\approx 100$, whereas
the regularization is numerically $\approx 7000$, we need at least $10$ to $15$
iterations until the regularization--effect wears off.
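The halving schedule can be sketched as follows (a toy stand-in, not the thesis code; the error model and the stopping threshold `eps` are our assumptions, mirroring only the orders of magnitude quoted above):

```python
# Sketch (toy stand-in, not the thesis code) of the lambda-halving schedule:
# halve the regularization weight, recompute the target error, and stop once
# the improvement falls below a threshold; the number of halvings is the
# measured convergence speed.
def convergence_iterations(target_error, lam=1.0, eps=0.01):
    """target_error: callable mapping a lambda-weight to the target error."""
    iterations = 0
    best = target_error(lam)
    while True:
        lam /= 2.0
        err = target_error(lam)
        if best - err < eps:  # no meaningful decrease any more
            break
        best = err
        iterations += 1
    return iterations

# Toy error: fitting error ~100 plus a regularization term ~7000 * lambda,
# mirroring the magnitudes quoted in the text.
steps = convergence_iterations(lambda lam: 100.0 + 7000.0 * lam)
```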
The grid we use for our experiments is very coarse due to computational
limitations. We are not interested in a good reconstruction, but in an estimate
of whether
@@ -965,7 +1010,7 @@
the mentioned evolvability criteria are good.

In figure \ref{fig:setup3d} we show an example setup of the scene with a
$4\times 4\times 4$--grid. Identical to the 1--dimensional scenario before, we create a
regular grid and move the control--points \improvement{Describe how} randomly between their
neighbours, but in three instead of two dimensions^[Again, we flip the signs for
the edges, if necessary to have the object still in the convex hull.].
@@ -1009,6 +1054,7 @@
control--points.}
\end{figure}

### Variability
\label{sec:res:3d:var}

\begin{table}[tbh]
\centering
@@ -1046,13 +1092,14 @@
deformation--matrix.
\centering
\includegraphics[width=0.8\textwidth]{img/evolution3d/variability2_boxplot.png}
\caption[Histogram of ranks of high--resolution deformation--matrices]{
Histogram of ranks of various $10 \times 10 \times 10$ grids with $1000$
control--points each.
}
\label{fig:histrank3d}
\end{figure}

Overall the correlation between variability and fitness--error was
*significant* and *very strong* in all our tests.
The detailed correlation--coefficients are given in table \ref{tab:3dvar}
alongside their p--values.
@@ -1111,21 +1158,20 @@
between regularity and number of iterations for the 3D fitting scenario.
Displayed are the negated Spearman coefficients with the corresponding p--values
in brackets for various given grids ($\mathrm{X} \in [4,5,7], \mathrm{Y} \in [4,5,6]$).
\newline Note: Non--significant results are marked in \textcolor{red}{red}.}
\label{tab:3dreg}
\end{table}

In contrast to the predictions of variability our test on regularity gave a mixed
result --- similar to the 1D--case.
In roughly half of the scenarios we have a *significant*, but *weak* to *moderate*
correlation between regularity and number of iterations. On the other hand in
the scenarios where we increased the number of control--points, namely $125$ for
the $5 \times 5 \times 5$ grid and $216$ for the $6 \times 6 \times 6$ grid, we found
a *significant*, but *weak* **anti**--correlation when taking all three tests into
account^[Displayed as $Y \times Y \times Y$], which seems to contradict the
findings/trends for the sets with $64$, $80$, and $112$ control--points
(first two rows of table \ref{tab:3dreg}).

Taking all results together we only find a *very weak*, but *significant* link
between regularity and the number of iterations needed for the algorithm to
@@ -1135,21 +1181,76 @@
converge.
\centering
\includegraphics[width=\textwidth]{img/evolution3d/regularity_montage.png}
\caption[Regularity for different 3D--grids]{
Plots of regularity against number of iterations for various scenarios together
with a linear fit to indicate trends.}
\label{fig:resreg3d}
\end{figure}

As can be seen from figure \ref{fig:resreg3d}, we can observe that increasing
the number of control--points improves the convergence--speed. The
regularity--criterion first behaves as we would like it to, but then switches to
behave exactly opposite to our expectations, as can be seen in the first three
plots. While the number of control--points increases from red to green to blue
and the number of iterations decreases, the regularity seems to increase at
first, but then decreases again on higher grid--resolutions.

This can be an artefact of the definition of regularity, as it is defined by the
inverse condition--number of the deformation--matrix $\vec{U}$, being the
fraction $\frac{\sigma_{\mathrm{min}}}{\sigma_{\mathrm{max}}}$ between the
smallest and greatest right singular value.
As we observed in the previous section, we cannot
guarantee that each control--point has an effect (see figure \ref{fig:histrank3d}),
so a small minimal right singular value occurring on higher
grid--resolutions is likely the problem.

Adding to this we also noted that in the case of the $10 \times 10 \times
10$--grid the regularity was always $0$, as a non--contributing control--point
yields a $0$--column in the deformation--matrix, thus letting
$\sigma_\mathrm{min} = 0$. A better definition for regularity (i.e. using the
smallest non--zero right singular value) could solve this particular issue, but
would not fix the trend we noticed above.
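This artefact can be reproduced in a few lines (our sketch, not the thesis code): a zero column in $\vec{U}$ forces $\sigma_\mathrm{min} = 0$, while the suggested alternative definition using the smallest non-zero singular value does not collapse to $0$.

```python
# Sketch (not the thesis code): a non-contributing control-point, i.e. a zero
# column in U, forces the smallest singular value -- and with it the
# regularity 1/kappa -- to zero; the proposed alternative (smallest *non-zero*
# singular value) does not.
import numpy as np

def regularity(U, nonzero_only=False, tol=1e-12):
    sigma = np.linalg.svd(U, compute_uv=False)
    if nonzero_only:
        sigma = sigma[sigma > tol]
    return sigma.min() / sigma.max()

U = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])  # third control-point never contributes

reg_plain = regularity(U)                       # sigma = (2, 1, 0) -> 0
reg_nonzero = regularity(U, nonzero_only=True)  # sigma = (2, 1)    -> 1/2
```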
### Improvement Potential
\begin{table}[tbh]
\centering
\begin{tabular}{c|c|c|c}
& $5 \times 4 \times 4$ & $7 \times 4 \times 4$ & $\mathrm{X} \times 4 \times 4$ \\
\cline{2-4}
& 0.3 (0.0023) & \textcolor{red}{0.23} (0.0233) & 0.89 (0) \B \\
\cline{2-4}
\multicolumn{4}{c}{} \\[-1.4em]
\hline
$4 \times 4 \times 4$ & $4 \times 4 \times 5$ & $4 \times 4 \times 7$ & $4 \times 4 \times \mathrm{X}$ \T \\
\hline
0.5 (0) & 0.38 (0) & 0.32 (0.0012) & 0.9 (0) \B \\
\hline
\multicolumn{4}{c}{} \\[-1.4em]
\cline{2-4}
& $5 \times 5 \times 5$ & $6 \times 6 \times 6$ & $\mathrm{Y} \times \mathrm{Y} \times \mathrm{Y}$ \T \\
\cline{2-4}
& 0.47 (0) & \textcolor{red}{-0.01} (0.8803) & 0.89 (0) \B \\
\cline{2-4}
\multicolumn{4}{c}{} \\[-1.4em]
\cline{2-4}
\multicolumn{3}{c}{} & all: 0.95 (0) \T
\end{tabular}
\caption[Correlation between improvement--potential and fitting--error for 3D]{Correlation
between improvement--potential and fitting--error for the 3D fitting scenario.
Displayed are the negated Spearman coefficients with the corresponding p--values
in brackets for various given grids ($\mathrm{X} \in [4,5,7], \mathrm{Y} \in [4,5,6]$).
\newline Note: Not significant results are marked in \textcolor{red}{red}.}
\label{tab:3dimp}
\end{table}
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{img/evolution3d/improvement_montage.png}
\caption[Improvement potential for different 3D--grids]{
Plots of improvement potential against error given by our fitness--function
after convergence together with a linear fit of each of the plotted data to
indicate trends.}
\label{fig:resimp3d}
\end{figure}
@@ -1160,4 +1261,5 @@ As can be seen from figure \ref{fig:resreg3d}, we can observe\todo{things}.
- Regularity performs poorly for our setup. Better suggestions? Eigenvalues/eigenvectors?

\improvement[inline]{Adjust the bibliography links. The DOI overrides the
author's direct links.\newline
Additionally, url breaks the page layout across page boundaries.}


@@ -210,18 +210,19 @@ evolution is strongly tied to the notion of
the problem has serious implications on the convergence speed and the
quality of the solution\cite{Rothlauf2006}. However, there is no
consensus on how \emph{evolvability} is defined and the meaning varies
from context to context\cite{richter2015evolvability}. As a consequence
there is a need for some criteria we can measure, so that we are able to
compare different representations to learn and improve upon these.

One example of such a general representation of an object is to generate
random points and represent vertices of an object as distances to these
points --- for example via \acf{RBF}. If one (or the algorithm) would
move such a point the object will get deformed only locally (due to the
\ac{RBF}). As this results in a simple mapping from the parameter--space
onto the object one can try out different representations of the same
object and evaluate which criteria may be suited to describe this notion
of \emph{evolvability}. This is exactly what Richter et
al.\cite{anrichterEvol} have done.

As we transfer the results of Richter et al.\cite{anrichterEvol} from
using \acf{RBF} as a representation to manipulate geometric objects to
@@ -244,18 +245,17 @@
for a one--dimensional line (in \ref{sec:back:ffd}) and discuss why this
is a sensible deformation function (in \ref{sec:back:ffdgood}). Then we
establish some background--knowledge of evolutionary algorithms (in
\ref{sec:back:evo}) and why this is useful in our domain (in
\ref{sec:back:evogood}) followed by the definition of the different
evolvability criteria established in \cite{anrichterEvol} (in
\ref{sec:back:rvi}).

In Chapter \ref{sec:impl} we take a look at our implementation of
\ac{FFD} and the adaptation for 3D--meshes that were used. Next, in
Chapter \ref{sec:eval}, we describe the different scenarios we use to
evaluate the different evolvability--criteria incorporating all aspects
introduced in Chapter \ref{sec:back}. Following that, we evaluate the
results in Chapter \ref{sec:res}, followed by discussion, summary and
outlook in Chapter \ref{sec:dis}.
\chapter{Background}\label{background}

@@ -275,10 +275,10 @@
The main idea of \ac{FFD} is to create a function
\(s : [0,1[^d \mapsto \mathbb{R}^d\) that spans a certain part of a
vector--space and is only linearly parametrized by some special control
points \(p_i\) and a constant attribution--function \(a_i(u)\), so \[
s(\vec{u}) = \sum_i a_i(\vec{u}) \vec{p_i}
\] can be thought of as a representation of the inside of the convex hull
generated by the control points where each point can be accessed by the
right \(\vec{u} \in [0,1[^d\).
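A minimal 1D sketch of such a function (ours, not the thesis implementation, which uses B-spline basis functions): simple piecewise-linear hat functions stand in for the $a_i$, but the key properties are already visible, namely that $s$ is linear in the control points and that $\sum_i a_i(u) = 1$ keeps $s(u)$ inside their convex hull.

```python
# Sketch (not the thesis implementation): s(u) = sum_i a_i(u) p_i with
# piecewise-linear hat functions standing in for the B-spline basis.
def hat(i, m, u):
    """Hat function centred on knot i of m evenly spaced knots over [0,1[."""
    t = u * (m - 1) - i
    return max(0.0, 1.0 - abs(t))

def s(u, points):
    m = len(points)
    return sum(hat(i, m, u) * p for i, p in enumerate(points))

# Halfway between control points 1 and 2 of four points: their average.
value = s(0.5, [0.0, 1.0, 3.0, 0.0])
```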
\begin{figure}[!ht]
\begin{center}
@@ -289,9 +289,9 @@ corresponding deformation to generate a deformed object}
\label{fig:bspline}
\end{figure}

In the 1--dimensional example in figure~\ref{fig:bspline}, the
control--points are indicated as red dots and the color--gradient should
hint at the \(u\)--values ranging from \(0\) to \(1\).

We now define a \acf{FFD} by the following:\\
Given an arbitrary number of points \(p_i\) along a line, we map a
@@ -458,7 +458,7 @@
space \(M\) (usually \(M = \mathbb{R}\)) along a convergence--function
Biologically speaking the set \(I\) corresponds to the set of possible
\emph{Genotypes} while \(M\) represents the possible observable
\emph{Phenotypes}. \improvement[inline]{Explain what these are. Sources!}

The main algorithm just repeats the following steps:
@@ -486,9 +486,9 @@
\end{itemize}

All these functions can (and mostly do) have a lot of hidden parameters
that can be changed over time.
\improvement[inline]{More precisely: which? Where? Why? \ldots}
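The repeated steps above can be sketched generically (an illustration under our own assumptions, not the algorithm used in the thesis): mutate the population, evaluate the fitness, select the best, repeat.

```python
# Sketch (not the thesis algorithm): a minimal evolutionary loop minimizing
# f(x) = x^2 as a stand-in fitness. sigma is one of the "hidden parameters"
# mentioned in the text (the mutation strength).
import random

def evolve(fitness, population, generations=200, sigma=0.5, seed=42):
    rng = random.Random(seed)
    for _ in range(generations):
        # mutation: gaussian noise on each parent
        offspring = [x + rng.gauss(0.0, sigma) for x in population]
        # selection: keep the best half of parents + offspring (elitist)
        pool = sorted(population + offspring, key=fitness)
        population = pool[: len(population)]
    return population[0]

best = evolve(lambda x: x * x, population=[10.0, -8.0, 3.0, 7.0])
```

Because selection is elitist here, the best candidate never worsens and the population drifts towards the optimum at $0$.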
\section{Advantages of evolutionary
algorithms}\label{advantages-of-evolutionary-algorithms}

@@ -497,15 +497,10 @@
The main advantage of evolutionary algorithms is the ability to find
optima of general functions just with the help of a given
fitness--function. Components and techniques for evolutionary algorithms
are specifically known to help with different problems arising in the
domain of optimization\cite{weise2012evolutionary}. An overview of the
typical problems is shown in figure \ref{fig:probhard}.

\begin{figure}[!ht]
\includegraphics[width=\textwidth]{img/weise_fig3.png}
@@ -517,10 +512,13 @@
Most of the advantages stem from the fact that a gradient--based
procedure has only one point of observation from where it evaluates the
next steps, whereas an evolutionary strategy starts with a population of
guessed solutions. Because an evolutionary strategy modifies the
solution randomly, keeping the best solutions and purging the worst, it
can also target multiple different hypotheses at the same time where the
local optima die out in the face of other, better candidates.
\improvement[inline]{Reference MO-CMA etc. Maybe also in somewhat more
detail.}

If an analytic best solution exists and is easily computable
(i.e.~because the error--function is convex) an evolutionary algorithm
is not the right choice. Although both converge to the same solution,
@@ -530,8 +528,9 @@
But in reality many problems have no analytic solution, because the
problem is either not convex or there are so many parameters that an
analytic solution (mostly meaning the equivalence to an exhaustive
search) is computationally not feasible. Here evolutionary optimization
has one more advantage as one can at least get suboptimal solutions
fast, which then refine over time and still converge to the same
solution.
\section{Criteria for the evolvability of linear
deformations}\label{criteria-for-the-evolvability-of-linear-deformations}

@@ -540,27 +539,26 @@
As we have established in chapter \ref{sec:back:ffd}, we can describe a
deformation by the formula \[
\vec{V} = \vec{U}\vec{P}
\] where \(\vec{V}\) is a \(n \times d\) matrix of vertices, \(\vec{U}\)
are the (during parametrization) calculated deformation--coefficients
and \(\vec{P}\) is a \(m \times d\) matrix of control--points that we
interact with during deformation.

We can also think of the deformation in terms of differences from the
original coordinates \[
\Delta \vec{V} = \vec{U} \cdot \Delta \vec{P}
\] which is isomorphic to the former due to the linear correlation in
the deformation. One can see in this way, that the way the deformation
behaves lies solely in the entries of \(\vec{U}\), which is why the
three criteria focus on this.
\subsection{Variability}\label{variability}

In \cite{anrichterEvol} \emph{variability} is defined as
\[\mathrm{variability}(\vec{U}) := \frac{\mathrm{rank}(\vec{U})}{n},\]
whereby \(\vec{U}\) is the \(n \times m\) deformation--matrix used to
map the \(m\) control points onto the \(n\) vertices.

Given \(n = m\), an identical number of control--points and vertices,
this quotient will be \(=1\) if all control points are independent of
@@ -570,10 +568,21 @@
onto a target--point.

In practice the value of \(V(\vec{U})\) is typically \(\ll 1\), because
there are only few control--points for many vertices, so \(m \ll n\).

This criterion should correlate to the degrees of freedom the given
parametrization has. This can be seen from the fact that
\(\mathrm{rank}(\vec{U})\) is limited by \(\min(m,n)\) and --- as \(n\)
is constant --- can never exceed \(n\).

The rank itself is also interesting, as control--points could
theoretically be placed on top of each other or be linearly dependent in
another way --- but both cases lower the rank below the number of
control--points \(m\) and are thus measurable by the
\emph{variability}.
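The effect of linearly dependent control points on the criterion can be demonstrated in a short sketch (ours, not the thesis code):

```python
# Sketch (not the thesis code): variability(U) = rank(U)/n; a duplicated
# control-point (two identical columns in U) visibly lowers it.
import numpy as np

def variability(U):
    n = U.shape[0]  # number of vertices
    return np.linalg.matrix_rank(U) / n

U = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.2, 0.8, 0.2]])  # third column duplicates the first

v = variability(U)  # rank 2 over n = 4 vertices -> 0.5
```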
\subsection{Regularity}\label{regularity}

\emph{Regularity} is defined\cite{anrichterEvol} as
\[\mathrm{regularity}(\vec{U}) := \frac{1}{\kappa(\vec{U})} = \frac{\sigma_{min}}{\sigma_{max}}\]
where \(\sigma_{min}\) and \(\sigma_{max}\) are the smallest and
greatest right singular value of the deformation--matrix \(\vec{U}\).
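A sketch of this definition (ours, not the thesis code), showing the agreement between $\sigma_{min}/\sigma_{max}$ and the inverse of the 2-norm condition number as computed by numpy:

```python
# Sketch (not the thesis code): regularity as the inverse condition number,
# sigma_min / sigma_max, cross-checked against numpy's cond().
import numpy as np

def regularity(U):
    sigma = np.linalg.svd(U, compute_uv=False)
    return sigma.min() / sigma.max()

U = np.array([[3.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

r = regularity(U)  # sigma = (3, 1) -> 1/3
same = np.isclose(r, 1.0 / np.linalg.cond(U))
```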
@ -593,18 +602,21 @@ locality\cite{weise2012evolutionary,thorhauer2014locality}.
\subsection{Improvement Potential}\label{improvement-potential} \subsection{Improvement Potential}\label{improvement-potential}
In contrast to the general nature of \emph{variability} and
\emph{regularity}, which are agnostic of the fitness--function at hand,
the third criterion should reflect a notion of the potential for
optimization, taking a guess into account.
Most of the time some kind of gradient \(g\) is available to suggest a
direction worth pursuing; either from a previous iteration or by
educated guessing. We use this to guess how much change can be achieved
in the given direction.
The definition for an \emph{improvement potential} \(P\)
is\cite{anrichterEvol}: \[
\mathrm{potential}(\vec{U}) := 1 - \|(\vec{1} - \vec{UU}^+)\vec{G}\|^2_F
\] \unsure[inline]{is the $^2$ correct?} given some approximate
\(n \times d\) fitness--gradient \(\vec{G}\), normalized to
\(\|\vec{G}\|_F = 1\), whereby \(\|\cdot\|_F\) denotes the
Frobenius--Norm.
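Spelled out as a sketch (function name ours), \(\vec{UU}^+\) is the orthogonal projector onto the column space of \(\vec{U}\), so the criterion measures how much of the (normalized) gradient the parametrization can actually express:

```python
import numpy as np

def improvement_potential(U: np.ndarray, G: np.ndarray) -> float:
    """1 - ||(I - U U^+) G||_F^2 with G normalized to unit Frobenius norm."""
    G = G / np.linalg.norm(G)                   # ||G||_F = 1
    residual = G - U @ (np.linalg.pinv(U) @ G)  # (I - U U^+) G
    return 1.0 - np.linalg.norm(residual) ** 2

# A gradient lying entirely in the column space of U loses nothing:
rng = np.random.default_rng(1)
U = rng.standard_normal((6, 3))
G_inside = U @ rng.standard_normal((3, 2))
print(improvement_potential(U, G_inside))  # -> 1.0 (up to rounding)
```

For an arbitrary gradient the projection discards the part outside the reachable subspace, so the value stays at most \(1\).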
\chapter{\texorpdfstring{Implementation of
v_x \overset{!}{=} \sum_i N_{i,d,\tau_i}(u) c_i
\] and do a gradient--descent to approximate the value of \(u\) up to an
\(\epsilon\) of \(0.0001\).
For this we use the Gauss--Newton algorithm\cite{gaussNewton}
\todo[inline]{rewrite. wrong and wishy--washy. Least squares?} as the
solution to this problem may not be deterministic, because we usually
have way more vertices than control points (\(\#v~\gg~\#c\)).
With the Gauss--Newton algorithm we iterate via the formula
and use Cramer's rule for inverting the small Jacobian and solving this
system of linear equations.
As there is no strict upper bound on the number of iterations for this
algorithm, we just iterate long enough to be within the given
\(\epsilon\)--error above. Depending on the shape of the object and the
grid, this takes about \(3\) to \(5\) iterations, as we observed in
practice.
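The inner parameter search can be sketched as a scalar Newton iteration (the one--dimensional special case of Gauss--Newton); here SciPy's `BSpline` stands in for our own spline code, and `invert_bspline` is a hypothetical helper:

```python
import numpy as np
from scipy.interpolate import BSpline

def invert_bspline(spl, x_target, u0=0.5, eps=1e-4, max_iter=50):
    """Solve spl(u) = x_target for u via Newton steps, clamped to [0, 1]."""
    dspl = spl.derivative()
    u = u0
    for _ in range(max_iter):
        r = float(spl(u)) - x_target  # residual of the current guess
        if abs(r) < eps:
            break
        u = float(np.clip(u - r / float(dspl(u)), 0.0, 1.0))
    return u

# Quadratic B-spline on [0, 1]; recover the parameter of a known point.
knots = [0, 0, 0, 0.5, 1, 1, 1]
coeffs = [0.0, 0.2, 0.8, 1.0]
spl = BSpline(knots, coeffs, 2)
u = invert_bspline(spl, float(spl(0.3)))
```

With a smooth residual this converges in the handful of iterations mentioned above; the clamp to \([0,1]\) keeps the iterate inside the valid parameter range.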
Another issue that we observed in our implementation is that multiple
local optima may exist on self--intersecting grids. We solve this
problem by defining self--intersecting grids to be \emph{invalid} and do
not test any of them.
This is not such a big problem as it sounds at first, as
self--intersections mean that control--points being further away from a
given vertex have more influence over the deformation than
control--points closer to this vertex. This also contradicts the notion
of locality that we want to achieve and deem beneficial for a good
behaviour of the evolutionary algorithm.
\section{Deformation Grid}\label{deformation-grid}
\label{sec:impl:grid}
As mentioned in chapter \ref{sec:back:evo}, the way of choosing the
representation to map the general problem (mesh--fitting/optimization in
our case) into a parameter-space is very important for the quality and
runtime of evolutionary algorithms\cite{Rothlauf2006}.
Because our control--points are arranged in a grid, we can accurately
B--Spline--coefficients between \([0,1[\) and --- as a consequence ---
we have to embed our object into it (or create constant ``dummy''-points
outside).
The great advantage of B--Splines is the local, direct impact of each
control point without having a \(1:1\)--correlation, and a smooth
deformation. While the advantages are great, the issues arise from the
problem of deciding where to place the control--points and how many to
place at all.
\begin{figure}[!tbh]
\centering
One would normally think that the more control--points you add, the
better the result will be, but this is not the case for our B--Splines.
Given any point \(\vec{p}\) only the \(2 \cdot (d-1)\) control--points
contribute to the parametrization of that point\footnote{Normally these
are \(d-1\) to each side, but at the boundaries the number gets
increased to the inside to meet the required smoothness}. This means,
irrelevant to the solution.
We illustrate this phenomenon in figure \ref{fig:enoughCP}, where the
four red central points are not relevant for the parametrization of the
circle. This leads to artefacts in the deformation--matrix \(\vec{U}\),
as the columns corresponding to those control--points are \(0\).

This leads to uselessly increased complexity, as the parameters
corresponding to those points will never have any effect, but a naive
algorithm will still try to optimize them, yielding numeric artefacts at
best and non--terminating or ill--defined solutions\footnote{One
example would be when parts of an algorithm depend on the inverse of
the minimal right singular value, leading to a division by \(0\).} at
worst.
One can of course neglect those columns and their corresponding
control--points, but this raises the question why they were introduced
in the first place. We will address this in a special scenario in
\ref{sec:res:3d:var}.
For our tests we chose different uniformly sized grids and added noise
onto each control-point\footnote{For the special case of the outer layer
confined in the convex hull of the control--points.} to simulate
different starting-conditions.
\unsure[inline]{reference to DM--FFD?}
\chapter{\texorpdfstring{Scenarios for testing evolvability criteria
using
\ac{FFD}}{Scenarios for testing evolvability criteria using }}\label{scenarios-for-testing-evolvability-criteria-using}
\label{sec:eval}
In our experiments we use the same two testing--scenarios that were
also used by \cite{anrichterEvol}. The first scenario deforms a plane
into a shape originally defined in \cite{giannelli2012thb}, where we
set up control-points in a 2--dimensional manner and merely deform in the
height--coordinate to get the resulting shape.
In the second scenario we increase the degrees of freedom significantly
by using a 3--dimensional control--grid to deform a sphere into a face,
so each control point has three degrees of freedom in contrast to the
first scenario.
\section{Test Scenario: 1D Function
As the starting-plane we used the same shape, but set all
\(z\)--coordinates to \(0\), yielding a flat plane, which is partially
already correct.
Regarding the \emph{fitness--function} \(\mathrm{f}(\vec{p})\), we use
the very simple approach of calculating the squared distances for each
corresponding vertex
\begin{equation}
\mathrm{f}(\vec{p}) = \sum_{i=1}^{n} \|(\vec{Up})_i - t_i\|_2^2 = \|\vec{Up} - \vec{t}\|^2 \rightarrow \min
\end{equation}
where \(t_i\) are the respective target--vertices to the parametrized
Approximation}\label{test-scenario-3d-function-approximation}
\label{sec:test:3dfa} Opposed to the 1--dimensional scenario before, the
3--dimensional scenario is much more complex --- not only because we
have more degrees of freedom on each control point, but also because
the \emph{fitness--function} we will use has no known analytic solution
and multiple local minima.
\begin{figure}[ht]
\begin{center}
Both of these Models can be seen in figure \ref{fig:3dtarget}.
Opposed to the 1D--case we cannot map the source and target--vertices in
a one--to--one--correspondence, which we especially need for the
approximation of the fitting--error. Hence we state that the error of
one vertex is the distance to the closest vertex of the other model and
sum up the error from the respective source and target.
We therefore define the \emph{fitness--function} to be:
\begin{equation}
\mathrm{f}(\vec{P}) = \frac{1}{n} \underbrace{\sum_{i=1}^n \|\vec{c_T(s_i)} -
\vec{s_i}\|_2^2}_{\textrm{source-to-target--distance}}
+ \frac{1}{m} \underbrace{\sum_{i=1}^m \|\vec{c_S(t_i)} -
\vec{t_i}\|_2^2}_{\textrm{target-to-source--distance}}
al.\cite[Section 3.2]{aschenbach2015} on similar models and was shown to
lead to a more precise fit. The Laplacian
\begin{equation}
\mathrm{regularization}(\vec{P}) = \frac{1}{\sum_i A_i} \sum_{i=1}^n A_i \cdot \left( \sum_{\vec{s_j} \in \mathcal{N}(\vec{s_i})} w_j \cdot \|\Delta \vec{s_j} - \Delta \vec{\overline{s}_j}\|^2 \right)
\label{eq:reg3d}
\end{equation}
\unsure[inline]{what is $\vec{\overline{s}_j}$? the centre? actually $s_i$?}
is determined by the cotangent weighted displacement \(w_j\) of the
vertices \(\mathcal{N}(s_i)\) connected to \(s_i\), and \(A_i\) is the
Voronoi--area of the corresponding vertex \(\vec{s_i}\). We leave out
al.\cite{anrichterEvol}, we also use Spearman's rank correlation
coefficient. Opposed to other popular coefficients, like the Pearson
correlation coefficient, which measures a linear relationship between
variables, Spearman's coefficient assesses \glqq how well an
arbitrary monotonic function can describe the relationship between two
variables, without making any assumptions about the frequency
distribution of the variables\grqq\cite{hauke2011comparison}.
As we don't have any prior knowledge whether any of the criteria is
linear and we are just interested in a monotonic relation between the
criteria and their predictive power, Spearman's coefficient seems to fit
our scenario best and was also used before by Richter et
al.\cite{anrichterEvol}.
For interpretation of these values we follow the same interpretation
used in \cite{anrichterEvol}, based on \cite{weir2015spearman}: The
coefficient intervals \(r_S \in [0,0.2[\), \([0.2,0.4[\), \([0.4,0.6[\),
\([0.6,0.8[\), and \([0.8,1]\) are classified as \emph{very weak},
\emph{weak}, \emph{moderate}, \emph{strong} and \emph{very strong}. We
interpret p--values smaller than \(0.01\) as \emph{significant} and cut
off the precision of p--values after four decimal digits (thus often
having a p--value of \(0\) given for p--values \(< 10^{-4}\)).
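Both the coefficient and its p--value are available off the shelf; the small classification helper below (ours, not part of the evaluation code) mirrors the intervals just given:

```python
from scipy.stats import spearmanr

def classify(r_s: float) -> str:
    """Map |r_s| onto the verbal scale used above."""
    for upper, label in [(0.2, "very weak"), (0.4, "weak"), (0.6, "moderate"),
                         (0.8, "strong"), (1.01, "very strong")]:
        if abs(r_s) < upper:
            return label

# A perfectly monotonic (but non-linear) relation still gives r_S = 1:
x = [1, 2, 3, 4, 5]
y = [v ** 3 for v in x]
rho, p = spearmanr(x, y)
print(rho, classify(rho))
```

The cubic example shows the property we rely on: Pearson would report a value below \(1\) here, while the rank--based coefficient does not.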
use as guess for the \emph{improvement potential}. To check we also
consider a distorted gradient \(\vec{g}_{\textrm{d}}\) \[
\vec{g}_{\textrm{d}} = \frac{\vec{g}_{\textrm{c}} + \mathbb{1}}{\|\vec{g}_{\textrm{c}} + \mathbb{1}\|}
\
\] where \(\mathbb{1}\) is the vector consisting of \(1\) in every
dimension and \(\vec{g}_\textrm{c} = \vec{p^{*}} - \vec{p}\) the
calculated correct gradient. As we always start with \(\vec{p} = \mathbb{0}\),
this shortens to \(\vec{g}_\textrm{c} = \vec{p^{*}}\).
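As a one-line NumPy sketch (function name ours):

```python
import numpy as np

def distorted_gradient(g_c: np.ndarray) -> np.ndarray:
    """g_d = (g_c + 1) / ||g_c + 1||: the correct gradient shifted by the
    all-ones vector and renormalized to unit length."""
    shifted = g_c + np.ones_like(g_c)
    return shifted / np.linalg.norm(shifted)

g_d = distorted_gradient(np.array([0.5, -0.2, 0.1]))
```

The renormalization keeps the distorted guess comparable in magnitude to the correct one; only its direction is biased.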
\begin{figure}[ht]
\begin{center}
We then set up a regular 2--dimensional grid around the object with the
desired grid resolutions. To generate a testcase we then move the
grid--vertices randomly inside the x--y--plane. As self-intersecting
grids get tricky to solve with our implemented Newton--method we avoid
the generation of such self--intersecting grids for our testcases (see
section \ref{3dffd}).
To achieve that we select a uniformly distributed number
\(r \in [-0.25,0.25]\) per dimension and shrink the distance to the
to experimentally evaluate the quality criteria we introduced before. As
an evolutional optimization is partially a random process, we use the
analytical solution as a stopping--criterion. We measure the convergence
speed as the number of iterations the evolutional algorithm needed to get
within \(1.05 \times\) of the optimal solution.
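The stopping rule amounts to counting iterations until the error first falls within the \(1.05\times\) band around the optimum (hypothetical helper, not the evaluation code itself):

```python
def convergence_speed(errors, optimal):
    """Steps until the error first falls within 1.05x of the optimum."""
    for step, err in enumerate(errors, start=1):
        if err <= 1.05 * optimal:
            return step
    return None  # never converged within the recorded run

steps = convergence_speed([10.0, 4.0, 2.1, 1.02, 1.0], optimal=1.0)
# steps == 4: 1.02 is the first value within the 1.05x band
```

The multiplicative band (rather than an absolute tolerance) makes runs with differently scaled fitness values comparable.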
We used different regular grids that we manipulated as explained in
Section \ref{sec:proc:1d} with a different number of control points. As
our grids have to be the product of two integers, we compared a
\(5 \times 5\)--grid with \(25\) control--points to a \(4 \times 7\) and
\(7 \times 4\)--grid with \(28\) control--points. This was done to
measure the impact an \glqq improper\grqq ~ setup could have and how
well this is displayed in the criteria we are examining.
Additionally we also measured the effect of increasing the total
resolution of the grid by taking a closer look at \(5 \times 5\),
\(7 \times 7\) and \(10 \times 10\) grids.
\subsection{Variability}\label{variability-1}

\begin{figure}[tbh]
\centering
\includegraphics[width=0.7\textwidth]{img/evolution1d/variability_boxplot.png}
\caption[1D Fitting Errors for various grids]{The squared error for the various
grids.
Note that $7 \times 4$ and $4 \times 7$ have the same number of control--points.}
\label{fig:1dvar}
\end{figure}
Variability should characterize the potential for design space
exploration and is defined in terms of the normalized rank of the
deformation matrix \(\vec{U}\):
plotted the errors in the boxplot in figure \ref{fig:1dvar}
It is also noticeable that, although the \(7 \times 4\) and
\(4 \times 7\) grids have a higher variability, they perform no better
than the \(5 \times 5\) grid. Also the \(7 \times 4\) and \(4 \times 7\)
grids differ distinctly from each other with a mean\(\pm\)sigma of
\(233.09 \pm 12.32\) for the former and \(286.32 \pm 22.36\) for the
latter, although they have the same number of control--points. This is
an indication of the impact a proper or improper grid--setup can have. We
do not draw scientific conclusions from these findings, as more research
on non-squared grids seems necessary.
Leaving the issue of the grid--layout aside we focused on grids having
the same number of prototypes in every dimension. For the
variability and the evolutionary error.
\subsection{Regularity}\label{regularity-1}
\begin{figure}[tbh]
\centering
\includegraphics[width=\textwidth]{img/evolution1d/55_to_1010_steps.png}
\caption[Improvement potential and regularity vs. steps]{\newline
For the next step we then halve the regularization--impact \(\lambda\)
(starting at \(1\)) of our \emph{fitness--function} (\ref{eq:fit3d}) and
calculate the next incremental solution
\(\vec{P^{*}} = \vec{U^+}\vec{T}\) with the updated correspondences
(again, mapping each vertex to its closest neighbour in the respective
other model) to get our next target--error. We repeat this process as
long as the target--error keeps decreasing and use the number of these
iterations as a measure of the convergence speed. As the resulting
evolutional error without regularization is in the numeric range of
\(\approx 100\), whereas the regularization is numerically
\(\approx 7000\), we need at least \(10\) to \(15\) iterations until the
regularization--effect wears off.
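Each incremental step is an ordinary least--squares solve via the pseudoinverse; the sketch below (function name ours) elides the correspondence update and the regularization term and simply stops once the error no longer decreases:

```python
import numpy as np

def incremental_fit(U: np.ndarray, T: np.ndarray, max_steps: int = 20):
    """Repeat P* = U^+ T until the target error stops decreasing;
    returns the last parameters and the number of steps taken."""
    best_err = np.inf
    P = np.zeros((U.shape[1], T.shape[1]))
    for step in range(1, max_steps + 1):
        P = np.linalg.pinv(U) @ T              # least-squares solution
        err = np.linalg.norm(U @ P - T) ** 2
        if err >= best_err:                    # no further improvement
            return P, step
        best_err = err
        # (the real pipeline would now rebuild T from the updated
        #  closest-vertex correspondences and halve lambda)
    return P, max_steps

rng = np.random.default_rng(3)
U = rng.standard_normal((10, 3))
T = rng.standard_normal((10, 2))
P, steps = incremental_fit(U, T)
```

Without the correspondence update the error is constant after the first solve, so the loop stops immediately; in the real procedure the changing targets keep it running for the \(10\) to \(15\) iterations mentioned above.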
The grid we use for our experiments is just very coarse due to
computational limitations. We are not interested in a good
are good.
In figure \ref{fig:setup3d} we show an example setup of the scene with a
\(4\times 4\times 4\)--grid. Identical to the 1--dimensional scenario
before, we create a regular grid and move the control-points
\improvement{describe how} randomly between their neighbours, but in
three instead of two dimensions\footnote{Again, we flip the signs for
the edges, if necessary, to have the object still in the convex hull.}.
\begin{figure}[!htb]
\includegraphics[width=\textwidth]{img/3d_grid_resolution.png}
\subsection{Variability}\label{variability-2}
\label{sec:res:3d:var}
\begin{table}[tbh]
\centering
\begin{tabular}{c|c|c|c}
\centering
\includegraphics[width=0.8\textwidth]{img/evolution3d/variability2_boxplot.png}
\caption[Histogram of ranks of high--resolution deformation--matrices]{
Histogram of ranks of various $10 \times 10 \times 10$ grids with $1000$
control--points each.
}
\label{fig:histrank3d}
\end{figure}
Overall the correlation between variability and fitness--error was
\emph{significant} and showed a \emph{very strong} correlation in all
our tests. The detailed correlation--coefficients are given in table
\ref{tab:3dvar} alongside their p--values.
between regularity and number of iterations for the 3D fitting scenario.
Displayed are the negated Spearman coefficients with the corresponding p--values
in brackets for various given grids ($\mathrm{X} \in [4,5,7], \mathrm{Y} \in [4,5,6]$).
\newline Note: Non--significant results are marked in \textcolor{red}{red}.}
\label{tab:3dreg}
\end{table}
Opposed to the predictions of variability our test on regularity gave a
mixed result --- similar to the 1D--case.
In roughly half of the scenarios we have a \emph{significant}, but
\emph{weak} to \emph{moderate} correlation between regularity and number
of iterations. On the other hand, in the scenarios where we increased the
number of control--points, namely \(125\) for the
\(5 \times 5 \times 5\) grid and \(216\) for the \(6 \times 6 \times 6\)
grid, we found a \emph{significant}, but \emph{weak}
\textbf{anti}--correlation when taking all three tests into
account\footnote{Displayed as \(Y \times Y \times Y\)}, which seems to
contradict the findings/trends for the sets with \(64\), \(80\), and
\(112\) control--points (first two rows of table \ref{tab:3dreg}).
Taking all results together we only find a \emph{very weak}, but
\emph{significant} link between regularity and the number of iterations
needed for the algorithm to converge.
\centering
\includegraphics[width=\textwidth]{img/evolution3d/regularity_montage.png}
\caption[Regularity for different 3D--grids]{
Plots of regularity against number of iterations for various scenarios together
with a linear fit to indicate trends.}
\label{fig:resreg3d}
\end{figure}
As can be seen from figure \ref{fig:resreg3d}, we can observe that
increasing the number of control--points helps the convergence--speed.
The regularity--criterion first behaves as we would like it to, but then
switches to behave exactly opposite to our expectations, as can be seen
in the first three plots. While the number of control--points increases
from red to green to blue and the number of iterations decreases, the
regularity seems to increase at first, but then decreases again on
higher grid--resolutions.
This can be an artefact of the definition of regularity, as it is
defined by the inverse condition--number of the deformation--matrix
\(\vec{U}\), being the fraction
\(\frac{\sigma_{\mathrm{min}}}{\sigma_{\mathrm{max}}}\) between the
least and greatest right singular value.
As we observed in the previous section, we cannot guarantee that each
control--point has an effect (see figure \ref{fig:histrank3d}), and so a
small minimal right singular value occurring on higher grid--resolutions
seems likely to be the problem.
Adding to this we also noted that in the case of the
\(10 \times 10 \times 10\)--grid the regularity was always \(0\), as a
non--contributing control-point yields a \(0\)--column in the
deformation--matrix, thus letting \(\sigma_\mathrm{min} = 0\). A better
definition for regularity (i.e.~using the smallest non--zero right
singular value) could solve this particular issue, but not fix the trend
we noticed above.
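The effect, and the alternative definition just mentioned, can be sketched directly (function name ours):

```python
import numpy as np

def regularity_variants(U: np.ndarray):
    """Naive regularity vs. the variant ignoring zero singular values
    caused by non-contributing (all-zero) columns."""
    s = np.linalg.svd(U, compute_uv=False)   # descending singular values
    naive = s[-1] / s[0]
    s_nz = s[s > 1e-12]                      # drop (numerically) zero values
    return naive, s_nz[-1] / s_nz[0]

# The zero column of a non-contributing control point forces sigma_min ~ 0,
# so the naive criterion collapses to 0 regardless of the rest of the grid:
U = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 1.0, 0.0]])
naive, fixed = regularity_variants(U)
```

Equivalently one could drop the zero columns before the decomposition; both variants repair the division--by--zero symptom but, as argued above, not the underlying trend.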
\subsection{Improvement Potential}\label{improvement-potential-2}
\begin{table}[tbh]
\centering
\begin{tabular}{c|c|c|c}
& $5 \times 4 \times 4$ & $7 \times 4 \times 4$ & $\mathrm{X} \times 4 \times 4$ \\
\cline{2-4}
& 0.3 (0.0023) & \textcolor{red}{0.23} (0.0233) & 0.89 (0) \B \\
\cline{2-4}
\multicolumn{4}{c}{} \\[-1.4em]
\hline
$4 \times 4 \times 4$ & $4 \times 4 \times 5$ & $4 \times 4 \times 7$ & $4 \times 4 \times \mathrm{X}$ \T \\
\hline
0.5 (0) & 0.38 (0) & 0.32 (0.0012) & 0.9 (0) \B \\
\hline
\multicolumn{4}{c}{} \\[-1.4em]
\cline{2-4}
& $5 \times 5 \times 5$ & $6 \times 6 \times 6$ & $\mathrm{Y} \times \mathrm{Y} \times \mathrm{Y}$ \T \\
\cline{2-4}
& 0.47 (0) & \textcolor{red}{-0.01} (0.8803) & 0.89 (0) \B \\
\cline{2-4}
\multicolumn{4}{c}{} \\[-1.4em]
\cline{2-4}
\multicolumn{3}{c}{} & all: 0.95 (0) \T
\end{tabular}
\caption[Correlation between improvement--potential and fitting--error for 3D]{Correlation
between improvement--potential and fitting--error for the 3D fitting scenario.
Displayed are the negated Spearman coefficients with the corresponding p--values
in brackets for various given grids ($\mathrm{X} \in [4,5,7], \mathrm{Y} \in [4,5,6]$).
\newline Note: Non-significant results are marked in \textcolor{red}{red}.}
\label{tab:3dimp}
\end{table}
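The (negated) Spearman coefficients in the table can be reproduced from the raw runs by rank-transforming both columns and applying the usual rank-difference formula; a minimal sketch, assuming untied data for simplicity:

```python
# Sketch: Spearman rank correlation, assuming no ties in the data.
def spearman(xs, ys):
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    # With untied ranks, Spearman's rho is 1 - 6*sum(d^2) / (n*(n^2-1)).
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0 (perfectly monotone)
print(spearman([1, 2, 3, 4], [40, 30, 20, 10]))  # -1.0
```

For the table the coefficient is computed between improvement--potential and fitting--error and then negated, so a value near \(1\) means higher potential goes with lower error.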
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{img/evolution3d/improvement_montage.png}
\caption[Improvement potential for different 3D--grids]{Plots of improvement potential against error given by our fitness--function
after convergence together with a linear fit of each of the plotted data to
indicate trends.}
\label{fig:resimp3d}
\end{figure}
@ -1407,7 +1512,8 @@ observe\todo{things}.
\end{itemize}
\improvement[inline]{Adjust the bibliography links. The DOI overrides the
author's direct links.\newline
Moreover, url breaks the page layout across page boundaries.}
% \backmatter
\cleardoublepage


@ -0,0 +1,26 @@
#!/bin/bash
# Usage: $0 <column> <Filename.csv>
# Prints mean, median, standard deviation and range of the given CSV column.
# Expected layout:
# regularity,variability,improvement,"Evolution error",steps
# 6.57581e-05,0.00592209,0.622392,113.016,2368
if [[ -f "$2" ]]; then
R -q --slave --vanilla <<EOF
print("================ Analyzing $2")
#library(Hmisc)
DF=as.matrix(read.csv("$2",header=TRUE))
print("Mean:")
mean(DF[,$1])
print("Median:")
median(DF[,$1])
print("Sigma:")
sd(DF[,$1])
print("Range:")
range(DF[,$1])
EOF
else
echo "Usage: $0 <column> <Filename.csv>"
exit 1
fi
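The statistics the R heredoc computes can be cross-checked with a small Python sketch; the sample rows below are illustrative stand-ins, not data from the actual runs:

```python
import statistics

# Sample rows in the layout the script expects ("steps" is the 5th column).
rows = [
    ["regularity", "variability", "improvement", "Evolution error", "steps"],
    ["6.57581e-05", "0.00592209", "0.622392", "113.016", "2368"],
    ["1.2e-04", "0.006", "0.60", "120.5", "2500"],
]
col = 4  # zero-based index of "steps"
values = [float(r[col]) for r in rows[1:]]  # skip the header line
print("Mean:", statistics.mean(values))
print("Median:", statistics.median(values))
print("Sigma:", statistics.stdev(values))
print("Range:", (min(values), max(values)))
```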


@ -1,4 +1,4 @@
"5x5","4x7","7x4","7x7","10x10"
218.554,280.917,211.096,126.241,15.0742
215.888,315.729,233.828,110.962,19.0281
274.375,264.639,205.276,125.853,11.8948



@ -1,7 +1,7 @@
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20170926_3dFit_4x4x4_100times.csv" every ::1 using 1:5
@ -47,7 +47,7 @@ b -0.992 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:5
@ -93,7 +93,7 @@ bb -1.000 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:4
@ -136,3 +136,49 @@ correlation matrix of the fit parameters:
aaa    bbb
aaa    1.000
bbb   -1.000  1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20170926_3dFit_4x4x4_100times.csv" every ::1 using 2:4
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: i(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 1.39893e+06 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707119
initial set of free parameter values
aaaa = 1
bbbb = 1
After 3 iterations the fit converged.
final sum of squares of residuals : 2447.69
rel. change during last iteration : -3.53005e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 4.99764
variance of residuals (reduced chisquare) = WSSR/ndf : 24.9765
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.69981 +/- 9.656e+16 (5.681e+18%)
bbbb = 119.169 +/- 5.718e+14 (4.798e+14%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000


@ -270,3 +270,69 @@ correlation matrix of the fit parameters:
aaa    bbb
aaa    1.000
bbb   -1.000  1.000
Iteration 0
WSSR : 1.39893e+06 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707119
initial set of free parameter values
aaaa = 1
bbbb = 1
/
Iteration 1
WSSR : 2482.26 delta(WSSR)/WSSR : -562.573
delta(WSSR) : -1.39645e+06 limit for stopping : 1e-05
lambda : 0.0707119
resultant parameter values
aaaa = 1.69632
bbbb = 118.581
/
Iteration 2
WSSR : 2447.69 delta(WSSR)/WSSR : -0.0141217
delta(WSSR) : -34.5656 limit for stopping : 1e-05
lambda : 0.00707119
resultant parameter values
aaaa = 1.69981
bbbb = 119.169
/
Iteration 3
WSSR : 2447.69 delta(WSSR)/WSSR : -3.53005e-11
delta(WSSR) : -8.64047e-08 limit for stopping : 1e-05
lambda : 0.000707119
resultant parameter values
aaaa = 1.69981
bbbb = 119.169
After 3 iterations the fit converged.
final sum of squares of residuals : 2447.69
rel. change during last iteration : -3.53005e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 4.99764
variance of residuals (reduced chisquare) = WSSR/ndf : 24.9765
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.69981 +/- 9.656e+16 (5.681e+18%)
bbbb = 119.169 +/- 5.718e+14 (4.798e+14%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000
Warning: empty x range [0.00592209:0.00592209], adjusting to [0.00586287:0.00598131]


@ -2,19 +2,25 @@ set datafile separator ","
f(x)=a*x+b
fit f(x) "20170926_3dFit_4x4x4_100times.csv" every ::1 using 1:5 via a,b
set terminal png
set xlabel 'Regularity'
set ylabel 'Number of iterations'
set output "20170926_3dFit_4x4x4_100times_regularity-vs-steps.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:5 via aa,bb
set xlabel 'Improvement potential'
set ylabel 'Number of iterations'
set output "20170926_3dFit_4x4x4_100times_improvement-vs-steps.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:4 via aaa,bbb
set xlabel 'Improvement potential'
set ylabel 'error given by fitness-function'
set output "20170926_3dFit_4x4x4_100times_improvement-vs-evo-error.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "20170926_3dFit_4x4x4_100times.csv" every ::1 using 2:4 via aaaa,bbbb
set xlabel 'Variability'
set ylabel 'error given by fitness-function'
set output "20170926_3dFit_4x4x4_100times_variability-vs-evo-error.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"


@ -1,7 +1,7 @@
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20170926_3dFit_5x5x5_100times.csv" every ::1 using 1:5
@ -47,7 +47,7 @@ b -0.970 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:5
@ -93,7 +93,7 @@ bb -1.000 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:4
@ -136,3 +136,49 @@ correlation matrix of the fit parameters:
aaa    bbb
aaa    1.000
bbb   -1.000  1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20170926_3dFit_5x5x5_100times.csv" every ::1 using 2:4
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: i(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 582860 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707154
initial set of free parameter values
aaaa = 1
bbbb = 1
After 3 iterations the fit converged.
final sum of squares of residuals : 4883.49
rel. change during last iteration : -7.32216e-12
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.05915
variance of residuals (reduced chisquare) = WSSR/ndf : 49.8315
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.87923 +/- 6.796e+16 (3.616e+18%)
bbbb = 77.0146 +/- 7.861e+14 (1.021e+15%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000


@ -226,3 +226,69 @@ correlation matrix of the fit parameters:
aaa    bbb
aaa    1.000
bbb   -1.000  1.000
Iteration 0
WSSR : 582860 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707154
initial set of free parameter values
aaaa = 1
bbbb = 1
/
Iteration 1
WSSR : 4897.8 delta(WSSR)/WSSR : -118.005
delta(WSSR) : -577962 limit for stopping : 1e-05
lambda : 0.0707154
resultant parameter values
aaaa = 1.87486
bbbb = 76.6364
/
Iteration 2
WSSR : 4883.49 delta(WSSR)/WSSR : -0.00292946
delta(WSSR) : -14.306 limit for stopping : 1e-05
lambda : 0.00707154
resultant parameter values
aaaa = 1.87923
bbbb = 77.0146
/
Iteration 3
WSSR : 4883.49 delta(WSSR)/WSSR : -7.32216e-12
delta(WSSR) : -3.57577e-08 limit for stopping : 1e-05
lambda : 0.000707154
resultant parameter values
aaaa = 1.87923
bbbb = 77.0146
After 3 iterations the fit converged.
final sum of squares of residuals : 4883.49
rel. change during last iteration : -7.32216e-12
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.05915
variance of residuals (reduced chisquare) = WSSR/ndf : 49.8315
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.87923 +/- 6.796e+16 (3.616e+18%)
bbbb = 77.0146 +/- 7.861e+14 (1.021e+15%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000
Warning: empty x range [0.0115666:0.0115666], adjusting to [0.0114509:0.0116823]


@ -2,19 +2,25 @@ set datafile separator ","
f(x)=a*x+b
fit f(x) "20170926_3dFit_5x5x5_100times.csv" every ::1 using 1:5 via a,b
set terminal png
set xlabel 'Regularity'
set ylabel 'Number of iterations'
set output "20170926_3dFit_5x5x5_100times_regularity-vs-steps.png"
plot "20170926_3dFit_5x5x5_100times.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:5 via aa,bb
set xlabel 'Improvement potential'
set ylabel 'Number of iterations'
set output "20170926_3dFit_5x5x5_100times_improvement-vs-steps.png"
plot "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:4 via aaa,bbb
set xlabel 'Improvement potential'
set ylabel 'error given by fitness-function'
set output "20170926_3dFit_5x5x5_100times_improvement-vs-evo-error.png"
plot "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "20170926_3dFit_5x5x5_100times.csv" every ::1 using 2:4 via aaaa,bbbb
set xlabel 'Variability'
set ylabel 'error given by fitness-function'
set output "20170926_3dFit_5x5x5_100times_variability-vs-evo-error.png"
plot "20170926_3dFit_5x5x5_100times.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"


@ -1,7 +1,7 @@
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171005_3dFit_4x4x5_100times.csv" every ::1 using 1:5
@ -47,7 +47,7 @@ b -0.986 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:5
@ -93,7 +93,7 @@ bb -1.000 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:4
@ -136,3 +136,49 @@ correlation matrix of the fit parameters:
aaa    bbb
aaa    1.000
bbb   -1.000  1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171005_3dFit_4x4x5_100times.csv" every ::1 using 2:4
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: i(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 1.04253e+06 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707126
initial set of free parameter values
aaaa = 1
bbbb = 1
After 3 iterations the fit converged.
final sum of squares of residuals : 4497.91
rel. change during last iteration : -1.42792e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 6.77474
variance of residuals (reduced chisquare) = WSSR/ndf : 45.8971
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.75417 +/- 1.135e+17 (6.469e+18%)
bbbb = 102.878 +/- 8.4e+14 (8.165e+14%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000


@ -270,3 +270,69 @@ correlation matrix of the fit parameters:
aaa    bbb
aaa    1.000
bbb   -1.000  1.000
Iteration 0
WSSR : 1.04253e+06 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707126
initial set of free parameter values
aaaa = 1
bbbb = 1
/
Iteration 1
WSSR : 4523.61 delta(WSSR)/WSSR : -229.464
delta(WSSR) : -1.03801e+06 limit for stopping : 1e-05
lambda : 0.0707126
resultant parameter values
aaaa = 1.75041
bbbb = 102.371
/
Iteration 2
WSSR : 4497.91 delta(WSSR)/WSSR : -0.00571226
delta(WSSR) : -25.6932 limit for stopping : 1e-05
lambda : 0.00707126
resultant parameter values
aaaa = 1.75417
bbbb = 102.878
/
Iteration 3
WSSR : 4497.91 delta(WSSR)/WSSR : -1.42792e-11
delta(WSSR) : -6.42267e-08 limit for stopping : 1e-05
lambda : 0.000707126
resultant parameter values
aaaa = 1.75417
bbbb = 102.878
After 3 iterations the fit converged.
final sum of squares of residuals : 4497.91
rel. change during last iteration : -1.42792e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 6.77474
variance of residuals (reduced chisquare) = WSSR/ndf : 45.8971
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.75417 +/- 1.135e+17 (6.469e+18%)
bbbb = 102.878 +/- 8.4e+14 (8.165e+14%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000
Warning: empty x range [0.00740261:0.00740261], adjusting to [0.00732858:0.00747664]


@ -2,19 +2,25 @@ set datafile separator ","
f(x)=a*x+b
fit f(x) "20171005_3dFit_4x4x5_100times.csv" every ::1 using 1:5 via a,b
set terminal png
set xlabel 'Regularity'
set ylabel 'Number of iterations'
set output "20171005_3dFit_4x4x5_100times_regularity-vs-steps.png"
plot "20171005_3dFit_4x4x5_100times.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:5 via aa,bb
set xlabel 'Improvement potential'
set ylabel 'Number of iterations'
set output "20171005_3dFit_4x4x5_100times_improvement-vs-steps.png"
plot "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:4 via aaa,bbb
set xlabel 'Improvement potential'
set ylabel 'error given by fitness-function'
set output "20171005_3dFit_4x4x5_100times_improvement-vs-evo-error.png"
plot "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "20171005_3dFit_4x4x5_100times.csv" every ::1 using 2:4 via aaaa,bbbb
set xlabel 'Variability'
set ylabel 'error given by fitness-function'
set output "20171005_3dFit_4x4x5_100times_variability-vs-evo-error.png"
plot "20171005_3dFit_4x4x5_100times.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"


@ -1,7 +1,7 @@
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171005_3dFit_7x4x4_100times.csv" every ::1 using 1:5
@ -47,7 +47,7 @@ b -0.972 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:5
@ -93,7 +93,7 @@ bb -1.000 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:4
@ -136,3 +136,49 @@ correlation matrix of the fit parameters:
aaa    bbb
aaa    1.000
bbb   -1.000  1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171005_3dFit_7x4x4_100times.csv" every ::1 using 2:4
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: i(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 716707 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707145
initial set of free parameter values
aaaa = 1
bbbb = 1
After 3 iterations the fit converged.
final sum of squares of residuals : 5014.73
rel. change during last iteration : -8.78131e-12
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.15337
variance of residuals (reduced chisquare) = WSSR/ndf : 51.1707
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.87421 +/- 6.575e+16 (3.508e+18%)
bbbb = 85.3528 +/- 6.814e+14 (7.983e+14%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000


@ -259,3 +259,69 @@ correlation matrix of the fit parameters:
aaa    bbb
aaa    1.000
bbb   -1.000  1.000
Iteration 0
WSSR : 716707 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707145
initial set of free parameter values
aaaa = 1
bbbb = 1
/
Iteration 1
WSSR : 5032.35 delta(WSSR)/WSSR : -141.42
delta(WSSR) : -711675 limit for stopping : 1e-05
lambda : 0.0707145
resultant parameter values
aaaa = 1.86986
bbbb = 84.9331
/
Iteration 2
WSSR : 5014.73 delta(WSSR)/WSSR : -0.00351279
delta(WSSR) : -17.6157 limit for stopping : 1e-05
lambda : 0.00707145
resultant parameter values
aaaa = 1.87421
bbbb = 85.3528
/
Iteration 3
WSSR : 5014.73 delta(WSSR)/WSSR : -8.78131e-12
delta(WSSR) : -4.40359e-08 limit for stopping : 1e-05
lambda : 0.000707145
resultant parameter values
aaaa = 1.87421
bbbb = 85.3528
After 3 iterations the fit converged.
final sum of squares of residuals : 5014.73
rel. change during last iteration : -8.78131e-12
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.15337
variance of residuals (reduced chisquare) = WSSR/ndf : 51.1707
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.87421 +/- 6.575e+16 (3.508e+18%)
bbbb = 85.3528 +/- 6.814e+14 (7.983e+14%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000
Warning: empty x range [0.0103637:0.0103637], adjusting to [0.0102601:0.0104673]


@ -2,19 +2,25 @@ set datafile separator ","
f(x)=a*x+b
fit f(x) "20171005_3dFit_7x4x4_100times.csv" every ::1 using 1:5 via a,b
set terminal png
set xlabel 'Regularity'
set ylabel 'Number of iterations'
set output "20171005_3dFit_7x4x4_100times_regularity-vs-steps.png"
plot "20171005_3dFit_7x4x4_100times.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:5 via aa,bb
set xlabel 'Improvement potential'
set ylabel 'Number of iterations'
set output "20171005_3dFit_7x4x4_100times_improvement-vs-steps.png"
plot "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:4 via aaa,bbb
set xlabel 'Improvement potential'
set ylabel 'error given by fitness-function'
set output "20171005_3dFit_7x4x4_100times_improvement-vs-evo-error.png"
plot "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "20171005_3dFit_7x4x4_100times.csv" every ::1 using 2:4 via aaaa,bbbb
set xlabel 'Variability'
set ylabel 'error given by fitness-function'
set output "20171005_3dFit_7x4x4_100times_variability-vs-evo-error.png"
plot "20171005_3dFit_7x4x4_100times.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"


@ -1,7 +1,7 @@
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171007_3dFit_all.csv" every ::1 using 1:5
@ -47,7 +47,7 @@ b -0.945 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171007_3dFit_all.csv" every ::1 using 3:5
@ -93,7 +93,7 @@ bb -0.997 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171007_3dFit_all.csv" every ::1 using 3:4
@ -139,7 +139,7 @@ bbb -0.997 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171007_3dFit_all.csv" every ::1 using 2:4


@@ -2,25 +2,25 @@ set datafile separator ","
f(x)=a*x+b
fit f(x) "20171007_3dFit_all.csv" every ::1 using 1:5 via a,b
set terminal png
-set xlabel 'regularity'
+set xlabel 'Regularity'
-set ylabel 'steps'
+set ylabel 'Number of iterations'
set output "20171007_3dFit_all_regularity-vs-steps.png"
-plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 1:5 title "20170926_3dFit_4x4x4_100times.csv", "20170926_3dFit_5x5x5_100times.csv" every ::1 using 1:5 title "20170926_3dFit_5x5x5_100times.csv", "20171005_3dFit_4x4x5_100times.csv" every ::1 using 1:5 title "20171005_3dFit_4x4x5_100times.csv", "20171005_3dFit_7x4x4_100times.csv" every ::1 using 1:5 title "20171005_3dFit_7x4x4_100times.csv", f(x) title "lin. fit" lc rgb "black"
+plot "20171007_3dFit_all.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "20171007_3dFit_all.csv" every ::1 using 3:5 via aa,bb
-set xlabel 'improvement potential'
+set xlabel 'Improvement potential'
-set ylabel 'steps'
+set ylabel 'Number of iterations'
set output "20171007_3dFit_all_improvement-vs-steps.png"
-plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:5 title "20170926_3dFit_4x4x4_100times.csv", "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:5 title "20170926_3dFit_5x5x5_100times.csv", "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:5 title "20171005_3dFit_4x4x5_100times.csv", "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:5 title "20171005_3dFit_7x4x4_100times.csv", g(x) title "lin. fit" lc rgb "black"
+plot "20171007_3dFit_all.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "20171007_3dFit_all.csv" every ::1 using 3:4 via aaa,bbb
-set xlabel 'improvement potential'
+set xlabel 'Improvement potential'
-set ylabel 'evolution error'
+set ylabel 'error given by fitness-function'
set output "20171007_3dFit_all_improvement-vs-evo-error.png"
-plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:4 title "20170926_3dFit_4x4x4_100times.csv", "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:4 title "20170926_3dFit_5x5x5_100times.csv", "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:4 title "20171005_3dFit_4x4x5_100times.csv", "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:4 title "20171005_3dFit_7x4x4_100times.csv", h(x) title "lin. fit" lc rgb "black"
+plot "20171007_3dFit_all.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "20171007_3dFit_all.csv" every ::1 using 2:4 via aaaa,bbbb
-set xlabel 'variability'
+set xlabel 'Variability'
-set ylabel 'evolution error'
+set ylabel 'error given by fitness-function'
set output "20171007_3dFit_all_variability-vs-evo-error.png"
-plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 2:4 title "20170926_3dFit_4x4x4_100times.csv", "20170926_3dFit_5x5x5_100times.csv" every ::1 using 2:4 title "20170926_3dFit_5x5x5_100times.csv", "20171005_3dFit_4x4x5_100times.csv" every ::1 using 2:4 title "20171005_3dFit_4x4x5_100times.csv", "20171005_3dFit_7x4x4_100times.csv" every ::1 using 2:4 title "20171005_3dFit_7x4x4_100times.csv", i(x) title "lin. fit" lc rgb "black"
+plot "20171007_3dFit_all.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"
@@ -0,0 +1,184 @@
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171013_3dFit_4x4x7_100times.csv" every ::1 using 1:5
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: f(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 4.17059e+08 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707107
initial set of free parameter values
a = 1
b = 1
After 7 iterations the fit converged.
final sum of squares of residuals : 1.87465e+07
rel. change during last iteration : -3.45833e-09
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 437.368
variance of residuals (reduced chisquare) = WSSR/ndf : 191291
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = -1.04278e+07 +/- 2.15e+06 (20.62%)
b = 2804.5 +/- 174.4 (6.22%)
correlation matrix of the fit parameters:
a b
a 1.000
b -0.968 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171013_3dFit_4x4x7_100times.csv" every ::1 using 3:5
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: g(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 4.16784e+08 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.860178
initial set of free parameter values
aa = 1
bb = 1
After 4 iterations the fit converged.
final sum of squares of residuals : 2.30326e+07
rel. change during last iteration : -2.46431e-06
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 484.795
variance of residuals (reduced chisquare) = WSSR/ndf : 235026
Final set of parameters Asymptotic Standard Error
======================= ==========================
aa = 7203.94 +/- 7544 (104.7%)
bb = -3004.38 +/- 5226 (173.9%)
correlation matrix of the fit parameters:
aa bb
aa 1.000
bb -1.000 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171013_3dFit_4x4x7_100times.csv" every ::1 using 3:4
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: h(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 770224 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.860178
initial set of free parameter values
aaa = 1
bbb = 1
After 5 iterations the fit converged.
final sum of squares of residuals : 3831.77
rel. change during last iteration : -2.81552e-12
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 6.25298
variance of residuals (reduced chisquare) = WSSR/ndf : 39.0997
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaa = -284.393 +/- 97.31 (34.22%)
bbb = 286.203 +/- 67.4 (23.55%)
correlation matrix of the fit parameters:
aaa bbb
aaa 1.000
bbb -1.000 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171013_3dFit_4x4x7_100times.csv" every ::1 using 2:4
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: i(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 782212 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707145
initial set of free parameter values
aaaa = 1
bbbb = 1
After 3 iterations the fit converged.
final sum of squares of residuals : 4165.76
rel. change during last iteration : -1.15562e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 6.51979
variance of residuals (reduced chisquare) = WSSR/ndf : 42.5077
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.91405 +/- 1.245e+17 (6.505e+18%)
bbbb = 89.1974 +/- 1.29e+15 (1.447e+15%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000
@@ -0,0 +1,338 @@
Iteration 0
WSSR : 4.17059e+08 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707107
initial set of free parameter values
a = 1
b = 1
/
Iteration 1
WSSR : 2.32566e+07 delta(WSSR)/WSSR : -16.9329
delta(WSSR) : -3.93802e+08 limit for stopping : 1e-05
lambda : 0.0707107
resultant parameter values
a = 0.291954
b = 1975.6
/
Iteration 2
WSSR : 2.32468e+07 delta(WSSR)/WSSR : -0.000422514
delta(WSSR) : -9822.08 limit for stopping : 1e-05
lambda : 0.00707107
resultant parameter values
a = -86.0202
b = 1985.48
/
Iteration 3
WSSR : 2.32393e+07 delta(WSSR)/WSSR : -0.000320176
delta(WSSR) : -7440.69 limit for stopping : 1e-05
lambda : 0.000707107
resultant parameter values
a = -8710.18
b = 1986.15
/
Iteration 4
WSSR : 2.25787e+07 delta(WSSR)/WSSR : -0.0292598
delta(WSSR) : -660649 limit for stopping : 1e-05
lambda : 7.07107e-05
resultant parameter values
a = -805199
b = 2048.71
/
Iteration 5
WSSR : 1.87911e+07 delta(WSSR)/WSSR : -0.201566
delta(WSSR) : -3.78764e+06 limit for stopping : 1e-05
lambda : 7.07107e-06
resultant parameter values
a = -9.39061e+06
b = 2723.04
/
Iteration 6
WSSR : 1.87465e+07 delta(WSSR)/WSSR : -0.00237512
delta(WSSR) : -44525.2 limit for stopping : 1e-05
lambda : 7.07107e-07
resultant parameter values
a = -1.04266e+07
b = 2804.4
/
Iteration 7
WSSR : 1.87465e+07 delta(WSSR)/WSSR : -3.45833e-09
delta(WSSR) : -0.0648317 limit for stopping : 1e-05
lambda : 7.07107e-08
resultant parameter values
a = -1.04278e+07
b = 2804.5
After 7 iterations the fit converged.
final sum of squares of residuals : 1.87465e+07
rel. change during last iteration : -3.45833e-09
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 437.368
variance of residuals (reduced chisquare) = WSSR/ndf : 191291
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = -1.04278e+07 +/- 2.15e+06 (20.62%)
b = 2804.5 +/- 174.4 (6.22%)
correlation matrix of the fit parameters:
a b
a 1.000
b -0.968 1.000
Iteration 0
WSSR : 4.16784e+08 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.860178
initial set of free parameter values
aa = 1
bb = 1
/
Iteration 1
WSSR : 2.32036e+07 delta(WSSR)/WSSR : -16.962
delta(WSSR) : -3.9358e+08 limit for stopping : 1e-05
lambda : 0.0860178
resultant parameter values
aa = 948.6
bb = 1318.67
/
Iteration 2
WSSR : 2.31176e+07 delta(WSSR)/WSSR : -0.00372074
delta(WSSR) : -86014.6 limit for stopping : 1e-05
lambda : 0.00860178
resultant parameter values
aa = 2665.07
bb = 139.584
/
Iteration 3
WSSR : 2.30326e+07 delta(WSSR)/WSSR : -0.00369116
delta(WSSR) : -85017 limit for stopping : 1e-05
lambda : 0.000860178
resultant parameter values
aa = 7086.74
bb = -2923.19
/
Iteration 4
WSSR : 2.30326e+07 delta(WSSR)/WSSR : -2.46431e-06
delta(WSSR) : -56.7593 limit for stopping : 1e-05
lambda : 8.60178e-05
resultant parameter values
aa = 7203.94
bb = -3004.38
After 4 iterations the fit converged.
final sum of squares of residuals : 2.30326e+07
rel. change during last iteration : -2.46431e-06
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 484.795
variance of residuals (reduced chisquare) = WSSR/ndf : 235026
Final set of parameters Asymptotic Standard Error
======================= ==========================
aa = 7203.94 +/- 7544 (104.7%)
bb = -3004.38 +/- 5226 (173.9%)
correlation matrix of the fit parameters:
aa bb
aa 1.000
bb -1.000 1.000
Iteration 0
WSSR : 770224 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.860178
initial set of free parameter values
aaa = 1
bbb = 1
/
Iteration 1
WSSR : 4287.26 delta(WSSR)/WSSR : -178.654
delta(WSSR) : -765937 limit for stopping : 1e-05
lambda : 0.0860178
resultant parameter values
aaa = 40.5365
bbb = 60.6977
/
Iteration 2
WSSR : 4061.95 delta(WSSR)/WSSR : -0.0554706
delta(WSSR) : -225.318 limit for stopping : 1e-05
lambda : 0.00860178
resultant parameter values
aaa = -48.3014
bbb = 122.669
/
Iteration 3
WSSR : 3831.93 delta(WSSR)/WSSR : -0.0600269
delta(WSSR) : -230.019 limit for stopping : 1e-05
lambda : 0.000860178
resultant parameter values
aaa = -278.294
bbb = 281.979
/
Iteration 4
WSSR : 3831.77 delta(WSSR)/WSSR : -4.00769e-05
delta(WSSR) : -0.153566 limit for stopping : 1e-05
lambda : 8.60178e-05
resultant parameter values
aaa = -284.391
bbb = 286.202
/
Iteration 5
WSSR : 3831.77 delta(WSSR)/WSSR : -2.81552e-12
delta(WSSR) : -1.07884e-08 limit for stopping : 1e-05
lambda : 8.60178e-06
resultant parameter values
aaa = -284.393
bbb = 286.203
After 5 iterations the fit converged.
final sum of squares of residuals : 3831.77
rel. change during last iteration : -2.81552e-12
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 6.25298
variance of residuals (reduced chisquare) = WSSR/ndf : 39.0997
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaa = -284.393 +/- 97.31 (34.22%)
bbb = 286.203 +/- 67.4 (23.55%)
correlation matrix of the fit parameters:
aaa bbb
aaa 1.000
bbb -1.000 1.000
Iteration 0
WSSR : 782212 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707145
initial set of free parameter values
aaaa = 1
bbbb = 1
/
Iteration 1
WSSR : 4185.02 delta(WSSR)/WSSR : -185.908
delta(WSSR) : -778027 limit for stopping : 1e-05
lambda : 0.0707145
resultant parameter values
aaaa = 1.9095
bbbb = 88.7586
/
Iteration 2
WSSR : 4165.76 delta(WSSR)/WSSR : -0.00462295
delta(WSSR) : -19.2581 limit for stopping : 1e-05
lambda : 0.00707145
resultant parameter values
aaaa = 1.91405
bbbb = 89.1974
/
Iteration 3
WSSR : 4165.76 delta(WSSR)/WSSR : -1.15562e-11
delta(WSSR) : -4.81405e-08 limit for stopping : 1e-05
lambda : 0.000707145
resultant parameter values
aaaa = 1.91405
bbbb = 89.1974
After 3 iterations the fit converged.
final sum of squares of residuals : 4165.76
rel. change during last iteration : -1.15562e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 6.51979
variance of residuals (reduced chisquare) = WSSR/ndf : 42.5077
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.91405 +/- 1.245e+17 (6.505e+18%)
bbbb = 89.1974 +/- 1.29e+15 (1.447e+15%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000
Warning: empty x range [0.0103637:0.0103637], adjusting to [0.0102601:0.0104673]
@@ -0,0 +1,26 @@
set datafile separator ","
f(x)=a*x+b
fit f(x) "20171013_3dFit_4x4x7_100times.csv" every ::1 using 1:5 via a,b
set terminal png
set xlabel 'Regularity'
set ylabel 'Number of iterations'
set output "20171013_3dFit_4x4x7_100times_regularity-vs-steps.png"
plot "20171013_3dFit_4x4x7_100times.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "20171013_3dFit_4x4x7_100times.csv" every ::1 using 3:5 via aa,bb
set xlabel 'Improvement potential'
set ylabel 'Number of iterations'
set output "20171013_3dFit_4x4x7_100times_improvement-vs-steps.png"
plot "20171013_3dFit_4x4x7_100times.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "20171013_3dFit_4x4x7_100times.csv" every ::1 using 3:4 via aaa,bbb
set xlabel 'Improvement potential'
set ylabel 'error given by fitness-function'
set output "20171013_3dFit_4x4x7_100times_improvement-vs-evo-error.png"
plot "20171013_3dFit_4x4x7_100times.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "20171013_3dFit_4x4x7_100times.csv" every ::1 using 2:4 via aaaa,bbbb
set xlabel 'Variability'
set ylabel 'error given by fitness-function'
set output "20171013_3dFit_4x4x7_100times_variability-vs-evo-error.png"
plot "20171013_3dFit_4x4x7_100times.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"
@@ -0,0 +1,184 @@
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171013_3dFit_5x4x4_100times.csv" every ::1 using 1:5
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: f(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 8.41899e+07 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707107
initial set of free parameter values
a = 1
b = 1
After 7 iterations the fit converged.
final sum of squares of residuals : 8.72636e+06
rel. change during last iteration : -9.16821e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 298.403
variance of residuals (reduced chisquare) = WSSR/ndf : 89044.5
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = -1.15579e+06 +/- 1.468e+06 (127%)
b = 1020.47 +/- 194.2 (19.03%)
correlation matrix of the fit parameters:
a b
a 1.000
b -0.988 1.000
*******************************************************************************
Fri Oct 27 14:09:07 2017
FIT: data read from "20171013_3dFit_5x4x4_100times.csv" every ::1 using 3:5
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: g(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 8.40737e+07 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.850561
initial set of free parameter values
aa = 1
bb = 1
After 4 iterations the fit converged.
final sum of squares of residuals : 7.83163e+06
rel. change during last iteration : -4.61976e-06
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 282.692
variance of residuals (reduced chisquare) = WSSR/ndf : 79914.6
Final set of parameters Asymptotic Standard Error
======================= ==========================
aa = 10438.1 +/- 3028 (29%)
bb = -6107.9 +/- 2024 (33.14%)
correlation matrix of the fit parameters:
aa bb
aa 1.000
bb -1.000 1.000
*******************************************************************************
Fri Oct 27 14:09:08 2017
FIT: data read from "20171013_3dFit_5x4x4_100times.csv" every ::1 using 3:4
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: h(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 997151 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.850561
initial set of free parameter values
aaa = 1
bbb = 1
After 4 iterations the fit converged.
final sum of squares of residuals : 4984.54
rel. change during last iteration : -5.99309e-06
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.1318
variance of residuals (reduced chisquare) = WSSR/ndf : 50.8626
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaa = -241.373 +/- 76.38 (31.64%)
bbb = 262.595 +/- 51.06 (19.44%)
correlation matrix of the fit parameters:
aaa bbb
aaa 1.000
bbb -1.000 1.000
*******************************************************************************
Fri Oct 27 14:09:08 2017
FIT: data read from "20171013_3dFit_5x4x4_100times.csv" every ::1 using 2:4
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: i(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 1.01036e+06 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707126
initial set of free parameter values
aaaa = 1
bbbb = 1
After 3 iterations the fit converged.
final sum of squares of residuals : 5492.5
rel. change during last iteration : -1.13205e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.48638
variance of residuals (reduced chisquare) = WSSR/ndf : 56.0459
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.74202 +/- 9.406e+16 (5.4e+18%)
bbbb = 101.237 +/- 6.963e+14 (6.878e+14%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000
@@ -0,0 +1,327 @@
Iteration 0
WSSR : 8.41899e+07 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707107
initial set of free parameter values
a = 1
b = 1
/
Iteration 1
WSSR : 8.78343e+06 delta(WSSR)/WSSR : -8.58509
delta(WSSR) : -7.54065e+07 limit for stopping : 1e-05
lambda : 0.0707107
resultant parameter values
a = 1.01743
b = 865.06
/
Iteration 2
WSSR : 8.78156e+06 delta(WSSR)/WSSR : -0.000212651
delta(WSSR) : -1867.41 limit for stopping : 1e-05
lambda : 0.00707107
resultant parameter values
a = -8.53399
b = 869.381
/
Iteration 3
WSSR : 8.78147e+06 delta(WSSR)/WSSR : -1.03771e-05
delta(WSSR) : -91.1264 limit for stopping : 1e-05
lambda : 0.000707107
resultant parameter values
a = -962.938
b = 869.506
/
Iteration 4
WSSR : 8.77338e+06 delta(WSSR)/WSSR : -0.000922382
delta(WSSR) : -8092.41 limit for stopping : 1e-05
lambda : 7.07107e-05
resultant parameter values
a = -89117.8
b = 881.03
/
Iteration 5
WSSR : 8.72691e+06 delta(WSSR)/WSSR : -0.00532471
delta(WSSR) : -46468.3 limit for stopping : 1e-05
lambda : 7.07107e-06
resultant parameter values
a = -1.04065e+06
b = 1005.42
/
Iteration 6
WSSR : 8.72636e+06 delta(WSSR)/WSSR : -6.27725e-05
delta(WSSR) : -547.775 limit for stopping : 1e-05
lambda : 7.07107e-07
resultant parameter values
a = -1.15565e+06
b = 1020.45
/
Iteration 7
WSSR : 8.72636e+06 delta(WSSR)/WSSR : -9.16821e-11
delta(WSSR) : -0.000800051 limit for stopping : 1e-05
lambda : 7.07107e-08
resultant parameter values
a = -1.15579e+06
b = 1020.47
After 7 iterations the fit converged.
final sum of squares of residuals : 8.72636e+06
rel. change during last iteration : -9.16821e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 298.403
variance of residuals (reduced chisquare) = WSSR/ndf : 89044.5
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = -1.15579e+06 +/- 1.468e+06 (127%)
b = 1020.47 +/- 194.2 (19.03%)
correlation matrix of the fit parameters:
a b
a 1.000
b -0.988 1.000
Iteration 0
WSSR : 8.40737e+07 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.850561
initial set of free parameter values
aa = 1
bb = 1
/
Iteration 1
WSSR : 8.69722e+06 delta(WSSR)/WSSR : -8.66674
delta(WSSR) : -7.53765e+07 limit for stopping : 1e-05
lambda : 0.0850561
resultant parameter values
aa = 483.004
bb = 542.6
/
Iteration 2
WSSR : 8.08871e+06 delta(WSSR)/WSSR : -0.0752288
delta(WSSR) : -608504 limit for stopping : 1e-05
lambda : 0.00850561
resultant parameter values
aa = 5007.98
bb = -2477.97
/
Iteration 3
WSSR : 7.83167e+06 delta(WSSR)/WSSR : -0.0328215
delta(WSSR) : -257047 limit for stopping : 1e-05
lambda : 0.000850561
resultant parameter values
aa = 10373.7
bb = -6064.85
/
Iteration 4
WSSR : 7.83163e+06 delta(WSSR)/WSSR : -4.61976e-06
delta(WSSR) : -36.1803 limit for stopping : 1e-05
lambda : 8.50561e-05
resultant parameter values
aa = 10438.1
bb = -6107.9
After 4 iterations the fit converged.
final sum of squares of residuals : 7.83163e+06
rel. change during last iteration : -4.61976e-06
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 282.692
variance of residuals (reduced chisquare) = WSSR/ndf : 79914.6
Final set of parameters Asymptotic Standard Error
======================= ==========================
aa = 10438.1 +/- 3028 (29%)
bb = -6107.9 +/- 2024 (33.14%)
correlation matrix of the fit parameters:
aa bb
aa 1.000
bb -1.000 1.000
Iteration 0
WSSR : 997151 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.850561
initial set of free parameter values
aaa = 1
bbb = 1
/
Iteration 1
WSSR : 5722.22 delta(WSSR)/WSSR : -173.259
delta(WSSR) : -991429 limit for stopping : 1e-05
lambda : 0.0850561
resultant parameter values
aaa = 44.3933
bbb = 71.0688
/
Iteration 2
WSSR : 5196.8 delta(WSSR)/WSSR : -0.101105
delta(WSSR) : -525.422 limit for stopping : 1e-05
lambda : 0.00850561
resultant parameter values
aaa = -85.3429
bbb = 158.291
/
Iteration 3
WSSR : 4984.57 delta(WSSR)/WSSR : -0.0425783
delta(WSSR) : -212.235 limit for stopping : 1e-05
lambda : 0.000850561
resultant parameter values
aaa = -239.522
bbb = 261.358
/
Iteration 4
WSSR : 4984.54 delta(WSSR)/WSSR : -5.99309e-06
delta(WSSR) : -0.0298728 limit for stopping : 1e-05
lambda : 8.50561e-05
resultant parameter values
aaa = -241.373
bbb = 262.595
After 4 iterations the fit converged.
final sum of squares of residuals : 4984.54
rel. change during last iteration : -5.99309e-06
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.1318
variance of residuals (reduced chisquare) = WSSR/ndf : 50.8626
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaa = -241.373 +/- 76.38 (31.64%)
bbb = 262.595 +/- 51.06 (19.44%)
correlation matrix of the fit parameters:
aaa bbb
aaa 1.000
bbb -1.000 1.000
Iteration 0
WSSR : 1.01036e+06 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707126
initial set of free parameter values
aaaa = 1
bbbb = 1
/
Iteration 1
WSSR : 5517.37 delta(WSSR)/WSSR : -182.123
delta(WSSR) : -1.00484e+06 limit for stopping : 1e-05
lambda : 0.0707126
resultant parameter values
aaaa = 1.73833
bbbb = 100.739
/
Iteration 2
WSSR : 5492.5 delta(WSSR)/WSSR : -0.00452841
delta(WSSR) : -24.8723 limit for stopping : 1e-05
lambda : 0.00707126
resultant parameter values
aaaa = 1.74202
bbbb = 101.237
/
Iteration 3
WSSR : 5492.5 delta(WSSR)/WSSR : -1.13205e-11
delta(WSSR) : -6.21776e-08 limit for stopping : 1e-05
lambda : 0.000707126
resultant parameter values
aaaa = 1.74202
bbbb = 101.237
After 3 iterations the fit converged.
final sum of squares of residuals : 5492.5
rel. change during last iteration : -1.13205e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.48638
variance of residuals (reduced chisquare) = WSSR/ndf : 56.0459
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 1.74202 +/- 9.406e+16 (5.4e+18%)
bbbb = 101.237 +/- 6.963e+14 (6.878e+14%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000
Warning: empty x range [0.00740261:0.00740261], adjusting to [0.00732858:0.00747664]
@@ -0,0 +1,26 @@
set datafile separator ","
f(x)=a*x+b
fit f(x) "20171013_3dFit_5x4x4_100times.csv" every ::1 using 1:5 via a,b
set terminal png
set xlabel 'Regularity'
set ylabel 'Number of iterations'
set output "20171013_3dFit_5x4x4_100times_regularity-vs-steps.png"
plot "20171013_3dFit_5x4x4_100times.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "20171013_3dFit_5x4x4_100times.csv" every ::1 using 3:5 via aa,bb
set xlabel 'Improvement potential'
set ylabel 'Number of iterations'
set output "20171013_3dFit_5x4x4_100times_improvement-vs-steps.png"
plot "20171013_3dFit_5x4x4_100times.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "20171013_3dFit_5x4x4_100times.csv" every ::1 using 3:4 via aaa,bbb
set xlabel 'Improvement potential'
set ylabel 'error given by fitness-function'
set output "20171013_3dFit_5x4x4_100times_improvement-vs-evo-error.png"
plot "20171013_3dFit_5x4x4_100times.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "20171013_3dFit_5x4x4_100times.csv" every ::1 using 2:4 via aaaa,bbbb
set xlabel 'Variability'
set ylabel 'error given by fitness-function'
set output "20171013_3dFit_5x4x4_100times_variability-vs-evo-error.png"
plot "20171013_3dFit_5x4x4_100times.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"
@@ -1,7 +1,7 @@
*******************************************************************************
-Mon Oct 23 12:06:26 2017
+Fri Oct 27 14:09:08 2017
FIT: data read from "20171021-evolution3D_6x6_100Times.csv" every ::1 using 1:5
@@ -47,7 +47,7 @@ b -0.995 1.000
*******************************************************************************
-Mon Oct 23 12:06:26 2017
+Fri Oct 27 14:09:08 2017
FIT: data read from "20171021-evolution3D_6x6_100Times.csv" every ::1 using 3:5
@@ -93,7 +93,7 @@ bb -1.000 1.000
*******************************************************************************
-Mon Oct 23 12:06:26 2017
+Fri Oct 27 14:09:08 2017
FIT: data read from "20171021-evolution3D_6x6_100Times.csv" every ::1 using 3:4
@@ -136,3 +136,49 @@ correlation matrix of the fit parameters:
aaa bbb
aaa 1.000
bbb -1.000 1.000
*******************************************************************************
Fri Oct 27 14:09:08 2017
FIT: data read from "20171021-evolution3D_6x6_100Times.csv" every ::1 using 2:4
format = x:z
#datapoints = 110
residuals are weighted equally (unit weight)
function used for fitting: i(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 423824 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707248
initial set of free parameter values
aaaa = 1
bbbb = 1
After 3 iterations the fit converged.
final sum of squares of residuals : 3576.05
rel. change during last iteration : -4.97138e-12
degrees of freedom (FIT_NDF) : 108
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 5.75426
variance of residuals (reduced chisquare) = WSSR/ndf : 33.1115
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 2.2349 +/- 2.531e+16 (1.133e+18%)
bbbb = 62.785 +/- 5.059e+14 (8.058e+14%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000
@@ -226,3 +226,69 @@ correlation matrix of the fit parameters:
aaa bbb
aaa 1.000
bbb -1.000 1.000
Iteration 0
WSSR : 423824 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.707248
initial set of free parameter values
aaaa = 1
bbbb = 1
/
Iteration 1
WSSR : 3584.65 delta(WSSR)/WSSR : -117.233
delta(WSSR) : -420239 limit for stopping : 1e-05
lambda : 0.0707248
resultant parameter values
aaaa = 2.22931
bbbb = 62.5054
/
Iteration 2
WSSR : 3576.05 delta(WSSR)/WSSR : -0.00240612
delta(WSSR) : -8.6044 limit for stopping : 1e-05
lambda : 0.00707248
resultant parameter values
aaaa = 2.2349
bbbb = 62.785
/
Iteration 3
WSSR : 3576.05 delta(WSSR)/WSSR : -4.97138e-12
delta(WSSR) : -1.77779e-08 limit for stopping : 1e-05
lambda : 0.000707248
resultant parameter values
aaaa = 2.2349
bbbb = 62.785
After 3 iterations the fit converged.
final sum of squares of residuals : 3576.05
rel. change during last iteration : -4.97138e-12
degrees of freedom (FIT_NDF) : 108
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 5.75426
variance of residuals (reduced chisquare) = WSSR/ndf : 33.1115
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = 2.2349 +/- 2.531e+16 (1.133e+18%)
bbbb = 62.785 +/- 5.059e+14 (8.058e+14%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -1.000 1.000
Warning: empty x range [0.019987:0.019987], adjusting to [0.0197871:0.0201869]
@@ -2,19 +2,25 @@ set datafile separator ","
f(x)=a*x+b
fit f(x) "20171021-evolution3D_6x6_100Times.csv" every ::1 using 1:5 via a,b
set terminal png
-set xlabel 'regularity'
+set xlabel 'Regularity'
-set ylabel 'steps'
+set ylabel 'Number of iterations'
set output "20171021-evolution3D_6x6_100Times_regularity-vs-steps.png"
plot "20171021-evolution3D_6x6_100Times.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "20171021-evolution3D_6x6_100Times.csv" every ::1 using 3:5 via aa,bb
-set xlabel 'improvement potential'
+set xlabel 'Improvement potential'
-set ylabel 'steps'
+set ylabel 'Number of iterations'
set output "20171021-evolution3D_6x6_100Times_improvement-vs-steps.png"
plot "20171021-evolution3D_6x6_100Times.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "20171021-evolution3D_6x6_100Times.csv" every ::1 using 3:4 via aaa,bbb
-set xlabel 'improvement potential'
+set xlabel 'Improvement potential'
-set ylabel 'evolution error'
+set ylabel 'error given by fitness-function'
set output "20171021-evolution3D_6x6_100Times_improvement-vs-evo-error.png"
plot "20171021-evolution3D_6x6_100Times.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
+i(x)=aaaa*x+bbbb
+fit i(x) "20171021-evolution3D_6x6_100Times.csv" every ::1 using 2:4 via aaaa,bbbb
+set xlabel 'Variability'
+set ylabel 'error given by fitness-function'
+set output "20171021-evolution3D_6x6_100Times_variability-vs-evo-error.png"
+plot "20171021-evolution3D_6x6_100Times.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"

Binary file changed (before: 5.3 KiB, after: 5.7 KiB)
Binary file changed (before: 5.4 KiB, after: 5.7 KiB)
Binary file changed (before: 5.1 KiB, after: 5.4 KiB)
Binary file added (5.0 KiB)

@@ -1,184 +1,10 @@
*******************************************************************************
-Wed Oct 25 16:01:21 2017
+Fri Oct 27 14:09:08 2017
FIT: data read from "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 1:5
format = x:z
-#datapoints = 6
+BREAK: No data to fit
residuals are weighted equally (unit weight)
function used for fitting: f(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 9.03463 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.800174
initial set of free parameter values
a = 1
b = 1
After 4 iterations the fit converged.
final sum of squares of residuals : 0.760112
rel. change during last iteration : -7.81424e-14
degrees of freedom (FIT_NDF) : 4
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 0.435922
variance of residuals (reduced chisquare) = WSSR/ndf : 0.190028
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = -0.500504 +/- 0.4333 (86.58%)
b = 0.50226 +/- 0.2295 (45.7%)
correlation matrix of the fit parameters:
a b
a 1.000
b -0.632 1.000
*******************************************************************************
Wed Oct 25 16:01:21 2017
FIT: data read from "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 3:5
format = x:z
#datapoints = 6
residuals are weighted equally (unit weight)
function used for fitting: g(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 9.042 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.80039
initial set of free parameter values
aa = 1
bb = 1
After 4 iterations the fit converged.
final sum of squares of residuals : 0.760537
rel. change during last iteration : -7.73688e-14
degrees of freedom (FIT_NDF) : 4
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 0.436044
variance of residuals (reduced chisquare) = WSSR/ndf : 0.190134
Final set of parameters Asymptotic Standard Error
======================= ==========================
aa = -0.499395 +/- 0.4329 (86.68%)
bb = 0.502057 +/- 0.2296 (45.72%)
correlation matrix of the fit parameters:
aa bb
aa 1.000
bb -0.631 1.000
*******************************************************************************
Wed Oct 25 16:01:21 2017
FIT: data read from "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 3:4
format = x:z
#datapoints = 6
residuals are weighted equally (unit weight)
function used for fitting: h(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 9.04152 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.80039
initial set of free parameter values
aaa = 1
bbb = 1
After 4 iterations the fit converged.
final sum of squares of residuals : 0.763537
rel. change during last iteration : -7.73556e-14
degrees of freedom (FIT_NDF) : 4
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 0.436903
variance of residuals (reduced chisquare) = WSSR/ndf : 0.190884
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaa = -0.501106 +/- 0.4337 (86.55%)
bbb = 0.503355 +/- 0.23 (45.7%)
correlation matrix of the fit parameters:
aaa bbb
aaa 1.000
bbb -0.631 1.000
*******************************************************************************
Wed Oct 25 16:01:21 2017
FIT: data read from "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 2:4
format = x:z
#datapoints = 6
residuals are weighted equally (unit weight)
function used for fitting: i(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 9.04263 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.800411
initial set of free parameter values
aaaa = 1
bbbb = 1
After 4 iterations the fit converged.
final sum of squares of residuals : 0.763697
rel. change during last iteration : -7.7194e-14
degrees of freedom (FIT_NDF) : 4
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 0.436949
variance of residuals (reduced chisquare) = WSSR/ndf : 0.190924
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = -0.50098 +/- 0.4338 (86.59%)
bbbb = 0.50338 +/- 0.2301 (45.71%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -0.632 1.000

@@ -1,304 +1,3 @@
No data to fit
"20171025-evolution3D_10x10x10_noFit.gnuplot.script", line 3:
Iteration 0
WSSR : 9.03463 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.800174
initial set of free parameter values
a = 1
b = 1
/
Iteration 1
WSSR : 1.04136 delta(WSSR)/WSSR : -7.67579
delta(WSSR) : -7.99327 limit for stopping : 1e-05
lambda : 0.0800174
resultant parameter values
a = 0.00294917
b = 0.398082
/
Iteration 2
WSSR : 0.760123 delta(WSSR)/WSSR : -0.36999
delta(WSSR) : -0.281238 limit for stopping : 1e-05
lambda : 0.00800174
resultant parameter values
a = -0.497122
b = 0.501019
/
Iteration 3
WSSR : 0.760112 delta(WSSR)/WSSR : -1.53218e-05
delta(WSSR) : -1.16463e-05 limit for stopping : 1e-05
lambda : 0.000800174
resultant parameter values
a = -0.500504
b = 0.50226
/
Iteration 4
WSSR : 0.760112 delta(WSSR)/WSSR : -7.81424e-14
delta(WSSR) : -5.93969e-14 limit for stopping : 1e-05
lambda : 8.00174e-05
resultant parameter values
a = -0.500504
b = 0.50226
After 4 iterations the fit converged.
final sum of squares of residuals : 0.760112
rel. change during last iteration : -7.81424e-14
degrees of freedom (FIT_NDF) : 4
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 0.435922
variance of residuals (reduced chisquare) = WSSR/ndf : 0.190028
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = -0.500504 +/- 0.4333 (86.58%)
b = 0.50226 +/- 0.2295 (45.7%)
correlation matrix of the fit parameters:
a b
a 1.000
b -0.632 1.000
Iteration 0
WSSR : 9.042 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.80039
initial set of free parameter values
aa = 1
bb = 1
/
Iteration 1
WSSR : 1.04131 delta(WSSR)/WSSR : -7.68331
delta(WSSR) : -8.0007 limit for stopping : 1e-05
lambda : 0.080039
resultant parameter values
aa = 0.00287365
bb = 0.398135
/
Iteration 2
WSSR : 0.760548 delta(WSSR)/WSSR : -0.369155
delta(WSSR) : -0.28076 limit for stopping : 1e-05
lambda : 0.0080039
resultant parameter values
aa = -0.496029
bb = 0.50082
/
Iteration 3
WSSR : 0.760537 delta(WSSR)/WSSR : -1.52182e-05
delta(WSSR) : -1.1574e-05 limit for stopping : 1e-05
lambda : 0.00080039
resultant parameter values
aa = -0.499395
bb = 0.502057
/
Iteration 4
WSSR : 0.760537 delta(WSSR)/WSSR : -7.73688e-14
delta(WSSR) : -5.88418e-14 limit for stopping : 1e-05
lambda : 8.0039e-05
resultant parameter values
aa = -0.499395
bb = 0.502057
After 4 iterations the fit converged.
final sum of squares of residuals : 0.760537
rel. change during last iteration : -7.73688e-14
degrees of freedom (FIT_NDF) : 4
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 0.436044
variance of residuals (reduced chisquare) = WSSR/ndf : 0.190134
Final set of parameters Asymptotic Standard Error
======================= ==========================
aa = -0.499395 +/- 0.4329 (86.68%)
bb = 0.502057 +/- 0.2296 (45.72%)
correlation matrix of the fit parameters:
aa bb
aa 1.000
bb -0.631 1.000
Iteration 0
WSSR : 9.04152 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.80039
initial set of free parameter values
aaa = 1
bbb = 1
/
Iteration 1
WSSR : 1.04503 delta(WSSR)/WSSR : -7.65191
delta(WSSR) : -7.99648 limit for stopping : 1e-05
lambda : 0.080039
resultant parameter values
aaa = 0.00194603
bbb = 0.399071
/
Iteration 2
WSSR : 0.763548 delta(WSSR)/WSSR : -0.36865
delta(WSSR) : -0.281482 limit for stopping : 1e-05
lambda : 0.0080039
resultant parameter values
aaa = -0.497734
bbb = 0.502116
/
Iteration 3
WSSR : 0.763537 delta(WSSR)/WSSR : -1.52098e-05
delta(WSSR) : -1.16133e-05 limit for stopping : 1e-05
lambda : 0.00080039
resultant parameter values
aaa = -0.501106
bbb = 0.503355
/
Iteration 4
WSSR : 0.763537 delta(WSSR)/WSSR : -7.73556e-14
delta(WSSR) : -5.90639e-14 limit for stopping : 1e-05
lambda : 8.0039e-05
resultant parameter values
aaa = -0.501106
bbb = 0.503355
After 4 iterations the fit converged.
final sum of squares of residuals : 0.763537
rel. change during last iteration : -7.73556e-14
degrees of freedom (FIT_NDF) : 4
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 0.436903
variance of residuals (reduced chisquare) = WSSR/ndf : 0.190884
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaa = -0.501106 +/- 0.4337 (86.55%)
bbb = 0.503355 +/- 0.23 (45.7%)
correlation matrix of the fit parameters:
aaa bbb
aaa 1.000
bbb -0.631 1.000
Iteration 0
WSSR : 9.04263 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 0.800411
initial set of free parameter values
aaaa = 1
bbbb = 1
/
Iteration 1
WSSR : 1.04513 delta(WSSR)/WSSR : -7.65212
delta(WSSR) : -7.99749 limit for stopping : 1e-05
lambda : 0.0800411
resultant parameter values
aaaa = 0.00204362
bbbb = 0.399044
/
Iteration 2
WSSR : 0.763709 delta(WSSR)/WSSR : -0.368499
delta(WSSR) : -0.281426 limit for stopping : 1e-05
lambda : 0.00800411
resultant parameter values
aaaa = -0.497607
bbbb = 0.50214
/
Iteration 3
WSSR : 0.763697 delta(WSSR)/WSSR : -1.52103e-05
delta(WSSR) : -1.16161e-05 limit for stopping : 1e-05
lambda : 0.000800411
resultant parameter values
aaaa = -0.50098
bbbb = 0.50338
/
Iteration 4
WSSR : 0.763697 delta(WSSR)/WSSR : -7.7194e-14
delta(WSSR) : -5.89528e-14 limit for stopping : 1e-05
lambda : 8.00411e-05
resultant parameter values
aaaa = -0.50098
bbbb = 0.50338
After 4 iterations the fit converged.
final sum of squares of residuals : 0.763697
rel. change during last iteration : -7.7194e-14
degrees of freedom (FIT_NDF) : 4
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 0.436949
variance of residuals (reduced chisquare) = WSSR/ndf : 0.190924
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = -0.50098 +/- 0.4338 (86.59%)
bbbb = 0.50338 +/- 0.2301 (45.71%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -0.632 1.000

@@ -2,25 +2,25 @@ set datafile separator ","
f(x)=a*x+b
fit f(x) "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 1:5 via a,b
set terminal png
-set xlabel 'regularity'
+set xlabel 'Regularity'
-set ylabel 'steps'
+set ylabel 'Number of iterations'
set output "20171025-evolution3D_10x10x10_noFit_regularity-vs-steps.png"
plot "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 3:5 via aa,bb
-set xlabel 'improvement potential'
+set xlabel 'Improvement potential'
-set ylabel 'steps'
+set ylabel 'Number of iterations'
set output "20171025-evolution3D_10x10x10_noFit_improvement-vs-steps.png"
plot "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 3:4 via aaa,bbb
-set xlabel 'improvement potential'
+set xlabel 'Improvement potential'
-set ylabel 'evolution error'
+set ylabel 'error given by fitness-function'
set output "20171025-evolution3D_10x10x10_noFit_improvement-vs-evo-error.png"
plot "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 2:4 via aaaa,bbbb
-set xlabel 'variability'
+set xlabel 'Variability'
-set ylabel 'evolution error'
+set ylabel 'error given by fitness-function'
set output "20171025-evolution3D_10x10x10_noFit_variability-vs-evo-error.png"
plot "20171025-evolution3D_10x10x10_noFit.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"

@@ -0,0 +1,10 @@
*******************************************************************************
Fri Oct 27 14:09:08 2017
FIT: data read from "20171025-evolution3D_10x10x10_noFit_100Times.csv" every ::1 using 1:5
format = x:z
BREAK: No data to fit

@@ -0,0 +1,3 @@
No data to fit
"20171025-evolution3D_10x10x10_noFit_100Times.gnuplot.script", line 3:

@@ -0,0 +1,26 @@
set datafile separator ","
f(x)=a*x+b
fit f(x) "20171025-evolution3D_10x10x10_noFit_100Times.csv" every ::1 using 1:5 via a,b
set terminal png
set xlabel 'Regularity'
set ylabel 'Number of iterations'
set output "20171025-evolution3D_10x10x10_noFit_100Times_regularity-vs-steps.png"
plot "20171025-evolution3D_10x10x10_noFit_100Times.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "20171025-evolution3D_10x10x10_noFit_100Times.csv" every ::1 using 3:5 via aa,bb
set xlabel 'Improvement potential'
set ylabel 'Number of iterations'
set output "20171025-evolution3D_10x10x10_noFit_100Times_improvement-vs-steps.png"
plot "20171025-evolution3D_10x10x10_noFit_100Times.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "20171025-evolution3D_10x10x10_noFit_100Times.csv" every ::1 using 3:4 via aaa,bbb
set xlabel 'Improvement potential'
set ylabel 'error given by fitness-function'
set output "20171025-evolution3D_10x10x10_noFit_100Times_improvement-vs-evo-error.png"
plot "20171025-evolution3D_10x10x10_noFit_100Times.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "20171025-evolution3D_10x10x10_noFit_100Times.csv" every ::1 using 2:4 via aaaa,bbbb
set xlabel 'Variability'
set ylabel 'error given by fitness-function'
set output "20171025-evolution3D_10x10x10_noFit_100Times_variability-vs-evo-error.png"
plot "20171025-evolution3D_10x10x10_noFit_100Times.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"

@@ -1,7 +1,7 @@
*******************************************************************************
-Wed Oct 25 19:14:24 2017
+Fri Oct 27 14:11:51 2017
FIT: data read from "4x4xX.csv" every ::1 using 1:5
@@ -47,7 +47,7 @@ b -0.938 1.000
*******************************************************************************
-Wed Oct 25 19:14:24 2017
+Fri Oct 27 14:11:51 2017
FIT: data read from "4x4xX.csv" every ::1 using 3:5
@@ -93,7 +93,7 @@ bb -0.999 1.000
*******************************************************************************
-Wed Oct 25 19:14:24 2017
+Fri Oct 27 14:11:51 2017
FIT: data read from "4x4xX.csv" every ::1 using 3:4
@@ -139,7 +139,7 @@ bbb -0.999 1.000
*******************************************************************************
-Wed Oct 25 19:14:24 2017
+Fri Oct 27 14:11:51 2017
FIT: data read from "4x4xX.csv" every ::1 using 2:4

@@ -2,25 +2,25 @@ set datafile separator ","
f(x)=a*x+b
fit f(x) "4x4xX.csv" every ::1 using 1:5 via a,b
set terminal png
-set xlabel 'regularity'
+set xlabel 'Regularity'
-set ylabel 'steps'
+set ylabel 'Number of iterations'
set output "4x4xX_regularity-vs-steps.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 1:5 title "4x4x4" pt 2, "20171005_3dFit_4x4x5_100times.csv" every ::1 using 1:5 title "4x4x5" pt 2, "20171013_3dFit_4x4x7_100times.csv" every ::1 using 1:5 title "4x4x7" pt 2, f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "4x4xX.csv" every ::1 using 3:5 via aa,bb
-set xlabel 'improvement potential'
+set xlabel 'Improvement potential'
-set ylabel 'steps'
+set ylabel 'Number of iterations'
set output "4x4xX_improvement-vs-steps.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:5 title "4x4x4" pt 2, "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:5 title "4x4x5" pt 2, "20171013_3dFit_4x4x7_100times.csv" every ::1 using 3:5 title "4x4x7" pt 2, g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "4x4xX.csv" every ::1 using 3:4 via aaa,bbb
-set xlabel 'improvement potential'
+set xlabel 'Improvement potential'
-set ylabel 'evolution error'
+set ylabel 'Error given by fitness-function'
set output "4x4xX_improvement-vs-evo-error.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:4 title "4x4x4" pt 2, "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:4 title "4x4x5" pt 2, "20171013_3dFit_4x4x7_100times.csv" every ::1 using 3:4 title "4x4x7" pt 2, h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "4x4xX.csv" every ::1 using 2:4 via aaaa,bbbb
-set xlabel 'variability'
+set xlabel 'Variability'
-set ylabel 'evolution error'
+set ylabel 'Error given by fitness-function'
set output "4x4xX_variability-vs-evo-error.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 2:4 title "4x4x4" pt 2, "20171005_3dFit_4x4x5_100times.csv" every ::1 using 2:4 title "4x4x5" pt 2, "20171013_3dFit_4x4x7_100times.csv" every ::1 using 2:4 title "4x4x7" pt 2, i(x) title "lin. fit" lc rgb "black"

Binary file changed (before: 9.0 KiB, after: 9.4 KiB)
Binary file changed (before: 8.5 KiB, after: 8.8 KiB)
Binary file changed (before: 8.8 KiB, after: 9.1 KiB)
Binary file changed (before: 6.0 KiB, after: 6.4 KiB)

@@ -1,7 +1,7 @@
*******************************************************************************
-Wed Oct 25 19:14:30 2017
+Fri Oct 27 14:12:05 2017
FIT: data read from "Xx4x4.csv" every ::1 using 1:5
@@ -47,7 +47,7 @@ b -0.934 1.000
*******************************************************************************
-Wed Oct 25 19:14:30 2017
+Fri Oct 27 14:12:05 2017
FIT: data read from "Xx4x4.csv" every ::1 using 3:5
@@ -93,7 +93,7 @@ bb -0.999 1.000
*******************************************************************************
-Wed Oct 25 19:14:30 2017
+Fri Oct 27 14:12:05 2017
FIT: data read from "Xx4x4.csv" every ::1 using 3:4
@@ -139,7 +139,7 @@ bbb -0.999 1.000
*******************************************************************************
-Wed Oct 25 19:14:30 2017
+Fri Oct 27 14:12:05 2017
FIT: data read from "Xx4x4.csv" every ::1 using 2:4

@@ -2,25 +2,25 @@ set datafile separator ","
f(x)=a*x+b
fit f(x) "Xx4x4.csv" every ::1 using 1:5 via a,b
set terminal png
-set xlabel 'regularity'
+set xlabel 'Regularity'
-set ylabel 'steps'
+set ylabel 'Number of iterations'
set output "Xx4x4_regularity-vs-steps.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 1:5 title "4x4x4" pt 2, "20171013_3dFit_5x4x4_100times.csv" every ::1 using 1:5 title "5x4x4" pt 2, "20171005_3dFit_7x4x4_100times.csv" every ::1 using 1:5 title "7x4x4" pt 2, f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "Xx4x4.csv" every ::1 using 3:5 via aa,bb
-set xlabel 'improvement potential'
+set xlabel 'Improvement potential'
-set ylabel 'steps'
+set ylabel 'Number of iterations'
set output "Xx4x4_improvement-vs-steps.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:5 title "4x4x4" pt 2, "20171013_3dFit_5x4x4_100times.csv" every ::1 using 3:5 title "5x4x4" pt 2, "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:5 title "7x4x4" pt 2, g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "Xx4x4.csv" every ::1 using 3:4 via aaa,bbb
-set xlabel 'improvement potential'
+set xlabel 'Improvement potential'
-set ylabel 'evolution error'
+set ylabel 'Error given by fitness-function'
set output "Xx4x4_improvement-vs-evo-error.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:4 title "4x4x4" pt 2, "20171013_3dFit_5x4x4_100times.csv" every ::1 using 3:4 title "5x4x4" pt 2, "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:4 title "7x4x4" pt 2, h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "Xx4x4.csv" every ::1 using 2:4 via aaaa,bbbb
-set xlabel 'variability'
+set xlabel 'Variability'
-set ylabel 'evolution error'
+set ylabel 'Error given by fitness-function'
set output "Xx4x4_variability-vs-evo-error.png"
plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 2:4 title "4x4x4" pt 2, "20171013_3dFit_5x4x4_100times.csv" every ::1 using 2:4 title "5x4x4" pt 2, "20171005_3dFit_7x4x4_100times.csv" every ::1 using 2:4 title "7x4x4" pt 2, i(x) title "lin. fit" lc rgb "black"

Binary file changed (before: 9.0 KiB, after: 9.4 KiB)
Binary file changed (before: 8.5 KiB, after: 8.8 KiB)
Binary file changed (before: 8.8 KiB, after: 9.2 KiB)
Binary file changed (before: 5.9 KiB, after: 6.4 KiB)

@@ -1,7 +1,7 @@
*******************************************************************************
-Wed Oct 25 19:14:34 2017
+Fri Oct 27 14:12:17 2017
FIT: data read from "YxYxY.csv" every ::1 using 1:5
@@ -47,7 +47,7 @@ b -0.937 1.000
*******************************************************************************
-Wed Oct 25 19:14:34 2017
+Fri Oct 27 14:12:17 2017
FIT: data read from "YxYxY.csv" every ::1 using 3:5
@@ -93,7 +93,7 @@ bb -0.994 1.000
*******************************************************************************
-Wed Oct 25 19:14:34 2017
+Fri Oct 27 14:12:17 2017
FIT: data read from "YxYxY.csv" every ::1 using 3:4
@@ -139,7 +139,7 @@ bbb -0.994 1.000
*******************************************************************************
-Wed Oct 25 19:14:34 2017
+Fri Oct 27 14:12:17 2017
FIT: data read from "YxYxY.csv" every ::1 using 2:4

@@ -2,25 +2,25 @@ set datafile separator ","
 f(x)=a*x+b
 fit f(x) "YxYxY.csv" every ::1 using 1:5 via a,b
 set terminal png
-set xlabel 'regularity'
-set ylabel 'steps'
+set xlabel 'Regularity'
+set ylabel 'Number of iterations'
 set output "YxYxY_regularity-vs-steps.png"
 plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 1:5 title "4x4x4" pt 2, "20170926_3dFit_5x5x5_100times.csv" every ::1 using 1:5 title "5x5x5" pt 2, "20171021-evolution3D_6x6_100Times.csv" every ::1 using 1:5 title "6x6x6" pt 2, f(x) title "lin. fit" lc rgb "black"
 g(x)=aa*x+bb
 fit g(x) "YxYxY.csv" every ::1 using 3:5 via aa,bb
-set xlabel 'improvement potential'
-set ylabel 'steps'
+set xlabel 'Improvement potential'
+set ylabel 'Number of iterations'
 set output "YxYxY_improvement-vs-steps.png"
 plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:5 title "4x4x4" pt 2, "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:5 title "5x5x5" pt 2, "20171021-evolution3D_6x6_100Times.csv" every ::1 using 3:5 title "6x6x6" pt 2, g(x) title "lin. fit" lc rgb "black"
 h(x)=aaa*x+bbb
 fit h(x) "YxYxY.csv" every ::1 using 3:4 via aaa,bbb
-set xlabel 'improvement potential'
-set ylabel 'evolution error'
+set xlabel 'Improvement potential'
+set ylabel 'Error given by fitness-function'
 set output "YxYxY_improvement-vs-evo-error.png"
 plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:4 title "4x4x4" pt 2, "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:4 title "5x5x5" pt 2, "20171021-evolution3D_6x6_100Times.csv" every ::1 using 3:4 title "6x6x6" pt 2, h(x) title "lin. fit" lc rgb "black"
 i(x)=aaaa*x+bbbb
 fit i(x) "YxYxY.csv" every ::1 using 2:4 via aaaa,bbbb
-set xlabel 'variability'
-set ylabel 'evolution error'
+set xlabel 'Variability'
+set ylabel 'Error given by fitness-function'
 set output "YxYxY_variability-vs-evo-error.png"
 plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 2:4 title "4x4x4" pt 2, "20170926_3dFit_5x5x5_100times.csv" every ::1 using 2:4 title "5x5x5" pt 2, "20171021-evolution3D_6x6_100Times.csv" every ::1 using 2:4 title "6x6x6" pt 2, i(x) title "lin. fit" lc rgb "black"
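Each of these scripts fits a straight line f(x)=a*x+b to two CSV columns by least squares (gnuplot's `fit … via a,b`). For a linear model the result can be cross-checked in closed form; the sketch below (the input values are illustrative, not taken from the experiment data) computes the same slope and intercept that gnuplot's fit would converge to:

```python
# Minimal ordinary-least-squares line fit, equivalent for a linear model
# to gnuplot's `fit f(x) "file.csv" using 1:5 via a,b`.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

# Illustrative data lying exactly on y = 2x + 1:
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)  # 2.0 1.0
```

Gnuplot uses the iterative Marquardt-Levenberg algorithm even for linear models, but for f(x)=a*x+b it converges to exactly these closed-form values.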



@@ -1,7 +1,7 @@
 *******************************************************************************
-Wed Oct 25 19:09:05 2017
+Fri Oct 27 14:12:27 2017
 FIT: data read from "all.csv" every ::1 using 1:5
@@ -47,7 +47,7 @@ b -0.932 1.000
 *******************************************************************************
-Wed Oct 25 19:09:05 2017
+Fri Oct 27 14:12:27 2017
 FIT: data read from "all.csv" every ::1 using 3:5
@@ -93,7 +93,7 @@ bb -0.995 1.000
 *******************************************************************************
-Wed Oct 25 19:09:05 2017
+Fri Oct 27 14:12:27 2017
 FIT: data read from "all.csv" every ::1 using 3:4
@@ -139,7 +139,7 @@ bbb -0.995 1.000
 *******************************************************************************
-Wed Oct 25 19:09:05 2017
+Fri Oct 27 14:12:27 2017
 FIT: data read from "all.csv" every ::1 using 2:4


@@ -2,25 +2,25 @@ set datafile separator ","
 f(x)=a*x+b
 fit f(x) "all.csv" every ::1 using 1:5 via a,b
 set terminal png
-set xlabel 'regularity'
-set ylabel 'steps'
+set xlabel 'Regularity'
+set ylabel 'Number of iterations'
 set output "all_regularity-vs-steps.png"
 plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 1:5 title "4x4x4" pt 2, "20171013_3dFit_5x4x4_100times.csv" every ::1 using 1:5 title "5x4x4" pt 2, "20171005_3dFit_7x4x4_100times.csv" every ::1 using 1:5 title "7x4x4" pt 2, "20171005_3dFit_4x4x5_100times.csv" every ::1 using 1:5 title "4x4x5" pt 2, "20171013_3dFit_4x4x7_100times.csv" every ::1 using 1:5 title "4x4x7" pt 2, "20170926_3dFit_5x5x5_100times.csv" every ::1 using 1:5 title "5x5x5" pt 2, "20171021-evolution3D_6x6_100Times.csv" every ::1 using 1:5 title "6x6x6" pt 2, f(x) title "lin. fit" lc rgb "black"
 g(x)=aa*x+bb
 fit g(x) "all.csv" every ::1 using 3:5 via aa,bb
-set xlabel 'improvement potential'
-set ylabel 'steps'
+set xlabel 'Improvement potential'
+set ylabel 'Number of iterations'
 set output "all_improvement-vs-steps.png"
 plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:5 title "4x4x4" pt 2, "20171013_3dFit_5x4x4_100times.csv" every ::1 using 3:5 title "5x4x4" pt 2, "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:5 title "7x4x4" pt 2, "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:5 title "4x4x5" pt 2, "20171013_3dFit_4x4x7_100times.csv" every ::1 using 3:5 title "4x4x7" pt 2, "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:5 title "5x5x5" pt 2, "20171021-evolution3D_6x6_100Times.csv" every ::1 using 3:5 title "6x6x6" pt 2, g(x) title "lin. fit" lc rgb "black"
 h(x)=aaa*x+bbb
 fit h(x) "all.csv" every ::1 using 3:4 via aaa,bbb
-set xlabel 'improvement potential'
-set ylabel 'evolution error'
+set xlabel 'Improvement potential'
+set ylabel 'Error given by fitness-function'
 set output "all_improvement-vs-evo-error.png"
 plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 3:4 title "4x4x4" pt 2, "20171013_3dFit_5x4x4_100times.csv" every ::1 using 3:4 title "5x4x4" pt 2, "20171005_3dFit_7x4x4_100times.csv" every ::1 using 3:4 title "7x4x4" pt 2, "20171005_3dFit_4x4x5_100times.csv" every ::1 using 3:4 title "4x4x5" pt 2, "20171013_3dFit_4x4x7_100times.csv" every ::1 using 3:4 title "4x4x7" pt 2, "20170926_3dFit_5x5x5_100times.csv" every ::1 using 3:4 title "5x5x5" pt 2, "20171021-evolution3D_6x6_100Times.csv" every ::1 using 3:4 title "6x6x6" pt 2, h(x) title "lin. fit" lc rgb "black"
 i(x)=aaaa*x+bbbb
 fit i(x) "all.csv" every ::1 using 2:4 via aaaa,bbbb
-set xlabel 'variability'
-set ylabel 'evolution error'
+set xlabel 'Variability'
+set ylabel 'Error given by fitness-function'
 set output "all_variability-vs-evo-error.png"
 plot "20170926_3dFit_4x4x4_100times.csv" every ::1 using 2:4 title "4x4x4" pt 2, "20171013_3dFit_5x4x4_100times.csv" every ::1 using 2:4 title "5x4x4" pt 2, "20171005_3dFit_7x4x4_100times.csv" every ::1 using 2:4 title "7x4x4" pt 2, "20171005_3dFit_4x4x5_100times.csv" every ::1 using 2:4 title "4x4x5" pt 2, "20171013_3dFit_4x4x7_100times.csv" every ::1 using 2:4 title "4x4x7" pt 2, "20170926_3dFit_5x5x5_100times.csv" every ::1 using 2:4 title "5x5x5" pt 2, "20171021-evolution3D_6x6_100Times.csv" every ::1 using 2:4 title "6x6x6" pt 2, i(x) title "lin. fit" lc rgb "black"



@@ -10,8 +10,8 @@ set datafile separator ","
 f(x)=a*x+b
 fit f(x) "$data" every ::1 using 1:5 via a,b
 set terminal png
-set xlabel 'regularity'
-set ylabel 'steps'
+set xlabel 'Regularity'
+set ylabel 'Number of iterations'
 set output "${png}_regularity-vs-steps.png"
 plot \
 "$2" every ::1 using 1:5 title "$3" pt 2, \
@@ -20,8 +20,8 @@ plot \
 f(x) title "lin. fit" lc rgb "black"
 g(x)=aa*x+bb
 fit g(x) "$data" every ::1 using 3:5 via aa,bb
-set xlabel 'improvement potential'
-set ylabel 'steps'
+set xlabel 'Improvement potential'
+set ylabel 'Number of iterations'
 set output "${png}_improvement-vs-steps.png"
 plot \
 "$2" every ::1 using 3:5 title "$3" pt 2, \
@@ -30,8 +30,8 @@ plot \
 g(x) title "lin. fit" lc rgb "black"
 h(x)=aaa*x+bbb
 fit h(x) "$data" every ::1 using 3:4 via aaa,bbb
-set xlabel 'improvement potential'
-set ylabel 'evolution error'
+set xlabel 'Improvement potential'
+set ylabel 'Error given by fitness-function'
 set output "${png}_improvement-vs-evo-error.png"
 plot \
 "$2" every ::1 using 3:4 title "$3" pt 2, \
@@ -40,8 +40,8 @@ plot \
 h(x) title "lin. fit" lc rgb "black"
 i(x)=aaaa*x+bbbb
 fit i(x) "$data" every ::1 using 2:4 via aaaa,bbbb
-set xlabel 'variability'
-set ylabel 'evolution error'
+set xlabel 'Variability'
+set ylabel 'Error given by fitness-function'
 set output "${png}_variability-vs-evo-error.png"
 plot \
 "$2" every ::1 using 2:4 title "$3" pt 2, \


@@ -10,8 +10,8 @@ set datafile separator ","
 f(x)=a*x+b
 fit f(x) "$data" every ::1 using 1:5 via a,b
 set terminal png
-set xlabel 'regularity'
-set ylabel 'steps'
+set xlabel 'Regularity'
+set ylabel 'Number of iterations'
 set output "${png}_regularity-vs-steps.png"
 plot \
 "$2" every ::1 using 1:5 title "$3" pt 2, \
@@ -24,8 +24,8 @@ plot \
 f(x) title "lin. fit" lc rgb "black"
 g(x)=aa*x+bb
 fit g(x) "$data" every ::1 using 3:5 via aa,bb
-set xlabel 'improvement potential'
-set ylabel 'steps'
+set xlabel 'Improvement potential'
+set ylabel 'Number of iterations'
 set output "${png}_improvement-vs-steps.png"
 plot \
 "$2" every ::1 using 3:5 title "$3" pt 2, \
@@ -38,8 +38,8 @@ plot \
 g(x) title "lin. fit" lc rgb "black"
 h(x)=aaa*x+bbb
 fit h(x) "$data" every ::1 using 3:4 via aaa,bbb
-set xlabel 'improvement potential'
-set ylabel 'evolution error'
+set xlabel 'Improvement potential'
+set ylabel 'Error given by fitness-function'
 set output "${png}_improvement-vs-evo-error.png"
 plot \
 "$2" every ::1 using 3:4 title "$3" pt 2, \
@@ -52,8 +52,8 @@ plot \
 h(x) title "lin. fit" lc rgb "black"
 i(x)=aaaa*x+bbbb
 fit i(x) "$data" every ::1 using 2:4 via aaaa,bbbb
-set xlabel 'variability'
-set ylabel 'evolution error'
+set xlabel 'Variability'
+set ylabel 'Error given by fitness-function'
 set output "${png}_variability-vs-evo-error.png"
 plot \
 "$2" every ::1 using 2:4 title "$3" pt 2, \


@@ -0,0 +1,184 @@
*******************************************************************************
Fri Oct 27 14:09:08 2017
FIT: data read from "errors.csv" every ::1 using 1:5
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: f(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 129069 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 84.3477
initial set of free parameter values
a = 1
b = 1
After 6 iterations the fit converged.
final sum of squares of residuals : 4993.5
rel. change during last iteration : -5.46363e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.13821
variance of residuals (reduced chisquare) = WSSR/ndf : 50.9541
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = -0.0931363 +/- 0.1443 (154.9%)
b = 96.4721 +/- 17.21 (17.84%)
correlation matrix of the fit parameters:
a b
a 1.000
b -0.999 1.000
*******************************************************************************
Fri Oct 27 14:09:08 2017
FIT: data read from "errors.csv" every ::1 using 3:5
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: g(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 38697.6 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 71.7898
initial set of free parameter values
aa = 1
bb = 1
After 6 iterations the fit converged.
final sum of squares of residuals : 5010.73
rel. change during last iteration : -1.443e-13
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.15052
variance of residuals (reduced chisquare) = WSSR/ndf : 51.1299
Final set of parameters Asymptotic Standard Error
======================= ==========================
aa = 0.0270058 +/- 0.09648 (357.3%)
bb = 82.6379 +/- 9.795 (11.85%)
correlation matrix of the fit parameters:
aa bb
aa 1.000
bb -0.997 1.000
*******************************************************************************
Fri Oct 27 14:09:08 2017
FIT: data read from "errors.csv" every ::1 using 3:4
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: h(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 27023.7 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 71.7898
initial set of free parameter values
aaa = 1
bbb = 1
After 6 iterations the fit converged.
final sum of squares of residuals : 4159.2
rel. change during last iteration : -2.19108e-13
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 6.51466
variance of residuals (reduced chisquare) = WSSR/ndf : 42.4408
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaa = -0.0345469 +/- 0.0879 (254.4%)
bbb = 92.7152 +/- 8.924 (9.625%)
correlation matrix of the fit parameters:
aaa bbb
aaa 1.000
bbb -0.997 1.000
*******************************************************************************
Fri Oct 27 14:09:08 2017
FIT: data read from "errors.csv" every ::1 using 2:4
format = x:z
#datapoints = 100
residuals are weighted equally (unit weight)
function used for fitting: i(x)
fitted parameters initialized with current variable values
Iteration 0
WSSR : 30294.4 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 72.9129
initial set of free parameter values
aaaa = 1
bbbb = 1
After 6 iterations the fit converged.
final sum of squares of residuals : 4165.22
rel. change during last iteration : -6.01785e-13
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 6.51938
variance of residuals (reduced chisquare) = WSSR/ndf : 42.5023
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = -0.0109066 +/- 0.09721 (891.3%)
bbbb = 90.3395 +/- 10.02 (11.09%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -0.998 1.000
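In these summaries the percentage printed next to each asymptotic standard error is simply the error relative to the parameter value, which is why near-zero slopes show errors far above 100%. A quick check against the first fit above:

```python
# Relative asymptotic standard error, as printed in gnuplot's fit summary.
a, std_a = -0.0931363, 0.1443   # slope and its standard error from the log above
rel_err = abs(std_a / a) * 100
print(round(rel_err, 1))  # 154.9, the "(154.9%)" shown for parameter a
```

A relative error above 100% means the slope is statistically indistinguishable from zero at this sample size.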


@@ -0,0 +1,392 @@
Iteration 0
WSSR : 129069 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 84.3477
initial set of free parameter values
a = 1
b = 1
/
Iteration 1
WSSR : 6564.6 delta(WSSR)/WSSR : -18.6613
delta(WSSR) : -122504 limit for stopping : 1e-05
lambda : 8.43477
resultant parameter values
a = 0.708029
b = 0.999863
/
Iteration 2
WSSR : 6554.01 delta(WSSR)/WSSR : -0.00161544
delta(WSSR) : -10.5876 limit for stopping : 1e-05
lambda : 0.843477
resultant parameter values
a = 0.70464
b = 1.23013
/
Iteration 3
WSSR : 6005.48 delta(WSSR)/WSSR : -0.0913382
delta(WSSR) : -548.53 limit for stopping : 1e-05
lambda : 0.0843477
resultant parameter values
a = 0.549306
b = 19.7746
/
Iteration 4
WSSR : 4995.1 delta(WSSR)/WSSR : -0.202276
delta(WSSR) : -1010.39 limit for stopping : 1e-05
lambda : 0.00843477
resultant parameter values
a = -0.067621
b = 93.426
/
Iteration 5
WSSR : 4993.5 delta(WSSR)/WSSR : -0.000319669
delta(WSSR) : -1.59627 limit for stopping : 1e-05
lambda : 0.000843477
resultant parameter values
a = -0.0931258
b = 96.4708
/
Iteration 6
WSSR : 4993.5 delta(WSSR)/WSSR : -5.46363e-11
delta(WSSR) : -2.72827e-07 limit for stopping : 1e-05
lambda : 8.43477e-05
resultant parameter values
a = -0.0931363
b = 96.4721
After 6 iterations the fit converged.
final sum of squares of residuals : 4993.5
rel. change during last iteration : -5.46363e-11
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.13821
variance of residuals (reduced chisquare) = WSSR/ndf : 50.9541
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = -0.0931363 +/- 0.1443 (154.9%)
b = 96.4721 +/- 17.21 (17.84%)
correlation matrix of the fit parameters:
a b
a 1.000
b -0.999 1.000
Iteration 0
WSSR : 38697.6 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 71.7898
initial set of free parameter values
aa = 1
bb = 1
/
Iteration 1
WSSR : 8562.61 delta(WSSR)/WSSR : -3.51936
delta(WSSR) : -30134.9 limit for stopping : 1e-05
lambda : 7.17898
resultant parameter values
aa = 0.829791
bb = 1.00677
/
Iteration 2
WSSR : 8489.56 delta(WSSR)/WSSR : -0.00860526
delta(WSSR) : -73.0548 limit for stopping : 1e-05
lambda : 0.717898
resultant parameter values
aa = 0.820734
bb = 1.84213
/
Iteration 3
WSSR : 5851.66 delta(WSSR)/WSSR : -0.450794
delta(WSSR) : -2637.9 limit for stopping : 1e-05
lambda : 0.0717898
resultant parameter values
aa = 0.41725
bb = 42.9138
/
Iteration 4
WSSR : 5010.8 delta(WSSR)/WSSR : -0.167809
delta(WSSR) : -840.86 limit for stopping : 1e-05
lambda : 0.00717898
resultant parameter values
aa = 0.030744
bb = 82.2573
/
Iteration 5
WSSR : 5010.73 delta(WSSR)/WSSR : -1.54001e-05
delta(WSSR) : -0.0771658 limit for stopping : 1e-05
lambda : 0.000717898
resultant parameter values
aa = 0.0270062
bb = 82.6378
/
Iteration 6
WSSR : 5010.73 delta(WSSR)/WSSR : -1.443e-13
delta(WSSR) : -7.23048e-10 limit for stopping : 1e-05
lambda : 7.17898e-05
resultant parameter values
aa = 0.0270058
bb = 82.6379
After 6 iterations the fit converged.
final sum of squares of residuals : 5010.73
rel. change during last iteration : -1.443e-13
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 7.15052
variance of residuals (reduced chisquare) = WSSR/ndf : 51.1299
Final set of parameters Asymptotic Standard Error
======================= ==========================
aa = 0.0270058 +/- 0.09648 (357.3%)
bb = 82.6379 +/- 9.795 (11.85%)
correlation matrix of the fit parameters:
aa bb
aa 1.000
bb -0.997 1.000
Iteration 0
WSSR : 27023.7 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 71.7898
initial set of free parameter values
aaa = 1
bbb = 1
/
Iteration 1
WSSR : 8641.55 delta(WSSR)/WSSR : -2.12719
delta(WSSR) : -18382.2 limit for stopping : 1e-05
lambda : 7.17898
resultant parameter values
aaa = 0.867036
bbb = 1.00818
/
Iteration 2
WSSR : 8549.83 delta(WSSR)/WSSR : -0.0107272
delta(WSSR) : -91.716 limit for stopping : 1e-05
lambda : 0.717898
resultant parameter values
aaa = 0.857153
bbb = 1.94665
/
Iteration 3
WSSR : 5220.55 delta(WSSR)/WSSR : -0.637727
delta(WSSR) : -3329.29 limit for stopping : 1e-05
lambda : 0.0717898
resultant parameter values
aaa = 0.403866
bbb = 48.0879
/
Iteration 4
WSSR : 4159.3 delta(WSSR)/WSSR : -0.255151
delta(WSSR) : -1061.25 limit for stopping : 1e-05
lambda : 0.00717898
resultant parameter values
aaa = -0.0303472
bbb = 92.2877
/
Iteration 5
WSSR : 4159.2 delta(WSSR)/WSSR : -2.34157e-05
delta(WSSR) : -0.0973908 limit for stopping : 1e-05
lambda : 0.000717898
resultant parameter values
aaa = -0.0345465
bbb = 92.7151
/
Iteration 6
WSSR : 4159.2 delta(WSSR)/WSSR : -2.19108e-13
delta(WSSR) : -9.11314e-10 limit for stopping : 1e-05
lambda : 7.17898e-05
resultant parameter values
aaa = -0.0345469
bbb = 92.7152
After 6 iterations the fit converged.
final sum of squares of residuals : 4159.2
rel. change during last iteration : -2.19108e-13
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 6.51466
variance of residuals (reduced chisquare) = WSSR/ndf : 42.4408
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaa = -0.0345469 +/- 0.0879 (254.4%)
bbb = 92.7152 +/- 8.924 (9.625%)
correlation matrix of the fit parameters:
aaa bbb
aaa 1.000
bbb -0.997 1.000
Iteration 0
WSSR : 30294.4 delta(WSSR)/WSSR : 0
delta(WSSR) : 0 limit for stopping : 1e-05
lambda : 72.9129
initial set of free parameter values
aaaa = 1
bbbb = 1
/
Iteration 1
WSSR : 7542.11 delta(WSSR)/WSSR : -3.0167
delta(WSSR) : -22752.3 limit for stopping : 1e-05
lambda : 7.29129
resultant parameter values
aaaa = 0.854383
bbbb = 1.0057
/
Iteration 2
WSSR : 7488.45 delta(WSSR)/WSSR : -0.00716584
delta(WSSR) : -53.661 limit for stopping : 1e-05
lambda : 0.729129
resultant parameter values
aaaa = 0.84683
bbbb = 1.71093
/
Iteration 3
WSSR : 5195.8 delta(WSSR)/WSSR : -0.44125
delta(WSSR) : -2292.65 limit for stopping : 1e-05
lambda : 0.0729129
resultant parameter values
aaaa = 0.466748
bbbb = 40.9842
/
Iteration 4
WSSR : 4165.38 delta(WSSR)/WSSR : -0.247377
delta(WSSR) : -1030.42 limit for stopping : 1e-05
lambda : 0.00729129
resultant parameter values
aaaa = -0.00497837
bbbb = 89.7269
/
Iteration 5
WSSR : 4165.22 delta(WSSR)/WSSR : -3.81126e-05
delta(WSSR) : -0.158748 limit for stopping : 1e-05
lambda : 0.000729129
resultant parameter values
aaaa = -0.0109059
bbbb = 90.3394
/
Iteration 6
WSSR : 4165.22 delta(WSSR)/WSSR : -6.01785e-13
delta(WSSR) : -2.50657e-09 limit for stopping : 1e-05
lambda : 7.29129e-05
resultant parameter values
aaaa = -0.0109066
bbbb = 90.3395
After 6 iterations the fit converged.
final sum of squares of residuals : 4165.22
rel. change during last iteration : -6.01785e-13
degrees of freedom (FIT_NDF) : 98
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 6.51938
variance of residuals (reduced chisquare) = WSSR/ndf : 42.5023
Final set of parameters Asymptotic Standard Error
======================= ==========================
aaaa = -0.0109066 +/- 0.09721 (891.3%)
bbbb = 90.3395 +/- 10.02 (11.09%)
correlation matrix of the fit parameters:
aaaa bbbb
aaaa 1.000
bbbb -0.998 1.000
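The summary statistics at the end of each fit follow directly from the final WSSR: gnuplot reports the rms of residuals as sqrt(WSSR/ndf), where ndf is the number of data points minus the number of fitted parameters. For the first fit above (100 points, 2 parameters):

```python
import math

wssr = 4993.5   # final sum of squares of residuals (from the log above)
ndf = 100 - 2   # degrees of freedom: 100 data points minus 2 fitted parameters
fit_stdfit = math.sqrt(wssr / ndf)
print(round(fit_stdfit, 5))  # 7.13821, the FIT_STDFIT value in the log
```

The reduced chi-square reported as "variance of residuals" is the same quantity before the square root, WSSR/ndf = 50.9541.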


@@ -0,0 +1,26 @@
set datafile separator ","
f(x)=a*x+b
fit f(x) "errors.csv" every ::1 using 1:5 via a,b
set terminal png
set xlabel 'Regularity'
set ylabel 'Number of iterations'
set output "errors_regularity-vs-steps.png"
plot "errors.csv" every ::1 using 1:5 title "data", f(x) title "lin. fit" lc rgb "black"
g(x)=aa*x+bb
fit g(x) "errors.csv" every ::1 using 3:5 via aa,bb
set xlabel 'Improvement potential'
set ylabel 'Number of iterations'
set output "errors_improvement-vs-steps.png"
plot "errors.csv" every ::1 using 3:5 title "data", g(x) title "lin. fit" lc rgb "black"
h(x)=aaa*x+bbb
fit h(x) "errors.csv" every ::1 using 3:4 via aaa,bbb
set xlabel 'Improvement potential'
set ylabel 'Error given by fitness-function'
set output "errors_improvement-vs-evo-error.png"
plot "errors.csv" every ::1 using 3:4 title "data", h(x) title "lin. fit" lc rgb "black"
i(x)=aaaa*x+bbbb
fit i(x) "errors.csv" every ::1 using 2:4 via aaaa,bbbb
set xlabel 'Variability'
set ylabel 'Error given by fitness-function'
set output "errors_variability-vs-evo-error.png"
plot "errors.csv" every ::1 using 2:4 title "data", i(x) title "lin. fit" lc rgb "black"

