Lyapunov Fractal

Lyapunov fractals are mathematical objects living in the plane, graphing regions of stability and chaos of a certain logistic map whose ever-changing population growth rate alternates between two values.
I implemented a Lyapunov fractal viewer in Java 1.8 which lets you freely move around the plane, set sequences and iteration depths to explore the fractal. Source code can be seen below or downloaded, as can the Java executable (Lyapunov.jar).

Zircon Zity
Articles on the topic of Lyapunov fractals include a post by Earl Glynn from the year 2000, in which they talk about their Lyapunov fractal generator written in Pascal, a rewrite of software they wrote in 1992. Also worth reading is A. K. Dewdney's article about Mario Markus' work (Scientific American, September 1991).

My first encounter with this fractal was whilst browsing Wikipedia and stumbling across its Wikipedia entry. Since it was a fractal and looked fairly intricate and thus explorable, I set out to write my own renderer. I chose to implement it in Java, as it is a compiled language with a nice set of GUI libraries. Since Wikipedia listed an algorithm for generating Lyapunov fractal images, I thought an implementation would be fairly straightforward. However, as always, the devil is in the detail: I quickly noticed that while the short Wikipedia entry looks convincing and well-written at first glance, it fails to answer deeper questions regarding the very topic it is about.

Eerie Tendrils
The following is a description of the fractal generation algorithm. It is a modified version of the algorithm detailed by Wikipedia which addresses certain ambiguities the original had (more on those later).

  • Take as input a point (a,b) \in \mathbb{R}^2 and a sequence S^* consisting of the letters \text{A} and \text{B}; for example S^* = \text{AAAAAABBBBBB}.
  • Construct a function S \colon \mathbb{N}_0 \to \{\text{A},\text{B}\}, n \mapsto S^*_{(n \mod |S^*|)} which returns the zero-indexed n-th sequence entry and wraps around to the sequence’s start once the sequence’s end is reached (see the example after this list).
  • Construct a function r \colon \mathbb{N}_0 \to \{a,b\}, n \mapsto \begin{cases}a&\text{if } S(n) = \text{A}\\b&\text{if } S(n) = \text{B}\end{cases} which selects either a or b.
  • Let x_0=0.5 and define x_n=r(n-1)\cdot x_{n-1}\cdot (1-x_{n-1}).
  • Calculate the Lyapunov exponent \lambda as follows.
    \lambda = \lim\limits_{N \to \infty} \frac{1}{N} \cdot \sum\limits_{n=1}^{N} \log_2{|r(n)\cdot(1-2\cdot x_n)|}
    Since calculating the limit as N goes to infinity is rather difficult in practice, one can simply use a sufficiently large N and pretend to have reached infinity.
  • Output a color value according to \lambda. I chose green for \lambda < 0 and blue for \lambda \geq 0, using arbitrarily defined value ranges to map \lambda to an actual color.
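To illustrate the wrap-around indexing: for S^* = \text{AB} one gets S(0) = \text{A}, S(1) = \text{B}, S(2) = \text{A}, \dots and thus r(0) = a, r(1) = b, r(2) = a, \dots; the first iterate then is x_1 = r(0) \cdot x_0 \cdot (1-x_0) = a \cdot 0.5 \cdot 0.5 = \frac{a}{4}.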

A Spec of Fractal
The following is a Java snippet which implements the algorithm described above.

// choose a or b according to Seq and n
static double r(String Seq, int n, double a, double b) {
	if (Seq.charAt(n%Seq.length()) == 'A') return a; else return b;
}

// calculate a pixel color (0x00rrggbb) for given parameters
static int LyapunovPixel(double a, double b, String Seq, int N) {
	// array of all x_n; initialize x_0 (the starting value) and compute all x_n
	double[] X = new double[N+1]; X[0] = .5;
	for (int n = 1; n <= N; n++) X[n] = r(Seq,n-1,a,b)*X[n-1]*(1-X[n-1]);

	// calculate the Lyapunov exponent (to a certain precision dictated by N)
	double lmb = 0;
	for (int n = 1; n <= N; n++)
		lmb += Math.log(Math.abs(r(Seq,n,a,b)*(1-2*X[n])))/Math.log(2);
	lmb /= N;

	// infinity was hit, use a black pixel
	if (Double.isInfinite(lmb)) return 0x000000;

	// color pixel according to Lyapunov exponent
	double MIN = -1, MAX = 2;
	if (lmb < 0) return ((int)(lmb/MIN*255))<<8;
	else         return ((int)(lmb/MAX*255))<<0;
}
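
To show how LyapunovPixel might be put to use, the following is a minimal sketch of a driver filling an image and saving it to disk; the region bounds, resolution, sequence and file name are arbitrary illustration choices, not the actual viewer's parameters.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// hypothetical driver: render an arbitrary region of the (a,b) plane to a .png
static void renderToFile() throws java.io.IOException {
	int W = 800, H = 800, N = 200; String Seq = "AB";
	double aMin = 2, aMax = 4, bMin = 2, bMax = 4;
	BufferedImage img = new BufferedImage(W, H, BufferedImage.TYPE_INT_RGB);
	for (int py = 0; py < H; py++)
		for (int px = 0; px < W; px++) {
			// map pixel coordinates to plane coordinates
			double a = aMin + (aMax-aMin)*px/(W-1);
			double b = bMin + (bMax-bMin)*py/(H-1);
			img.setRGB(px, py, LyapunovPixel(a, b, Seq, N));
		}
	ImageIO.write(img, "png", new File("lyapunov.png"));
}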

Lyapunov Spike
Coming back to Wikipedia’s algorithm, there were a few things I found irritating at best when attempting my implementation, which I addressed in the algorithm description seen above. A closer look at potentially misleading or ambiguous statements follows, together with my reasoning for resolving them.

It is not clear whether the sequence is zero- or one-indexed. It has to be zero-indexed, as x_0=0.5, x_{n+1}=r_n\cdot\dots evaluates to x_1=r_0\cdot\dots, implying the definition of r_0 and thus S_0.

It is not clear which logarithm is meant: the natural logarithm, a generic \log_b or the dyadic logarithm. The latter is actually used; to find the actual logarithm base, I dug deeper and used Wikipedia’s external links to find a post by Earl Glynn, answering this question. (Since Java’s Math.log computes the natural logarithm, the snippet above obtains the dyadic logarithm via the identity \log_2 x = \frac{\ln x}{\ln 2}.)

It is not clear what is meant by the second half of the sentence beneath the Lyapunov exponent. It reads “… dropping the first summand as r_0(1-2x_0) = r_0\cdot 0 = 0 for x_0 = 0.5.” As the sum starts with n=1 and one would not dare to calculate \log_2{|r(0)\cdot(1-2\cdot x_0)|} = \log_2{0} for x_0 = 0.5, this sentence’s existence bewilders me.

It is not clear how exactly the colors are derived, only that they have something to do with the Lyapunov exponent. I simply chose arbitrary values that looked convincing for mapping \lambda to a pixel color.

Dark Swirl
One of the most frustrating bugs I came across was an unexplainable axis flip. My code generated the fractal just fine except for the fact that every image was flipped along the diagonal crossing the origin with a 45^\circ angle to the horizontal. It was as though the coordinates (a,b) were swapped somewhere in my code and I simply could not figure out where.

Finally, after hours of looking at the same code over and over again, a closer look at Earl Glynn’s post brought an end to my misery. Just below the three welcoming fractal renderings, a screenshot of their software is shown, complete with a blue and a red line indicating the coordinate system’s orientation. Contrary to all coordinate systems involving parameters named after the first two letters of the Latin alphabet, a and b are indeed flipped. Wikipedia’s images must simply have run with this decision, without noting it.
Because of this flip, when one wants to render images specified in the reversed sequence format, one simply has to swap all letters (for example \text{BBBABA} becomes \text{AAABAB}; see the sketch below).
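
A small helper illustrating this letter swap could look as follows; the function name swapLetters is hypothetical and not part of the original viewer code.

// hypothetical helper: translate a sequence from the flipped convention
// by exchanging the letters A and B, e.g. "BBBABA" becomes "AAABAB"
static String swapLetters(String Seq) {
	StringBuilder sb = new StringBuilder();
	for (char c : Seq.toCharArray()) sb.append(c == 'A' ? 'B' : 'A');
	return sb.toString();
}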

As Glynn themself says, “I would have reversed the ‘a’ and ‘b’ labels to be consistent with normal ‘x’ and ‘y’ axes conventions, but conform here with the same convention as used by Markus.” (Referring to Mario Markus, co-author of Lyapunov Exponents of the Logistic Map with Periodic Forcing.)

Slurping Cell
After having eliminated all vagueness regarding the fractal generation, writing the outer Java viewer hull was a fairly easy task. As a template I took my Mandelbrot Set III viewer (the reason why most variable names reference complex numbers), complete with multithreading: while one explores, a pixelated preview is shown; once one stops, the threads catch up and display a higher-resolution image. The same is done for rendering 4K images and saving them to files in the background; 4K renderings are saved as .png files and named according to their parameters. A list of program controls follows, with a hypothetical sketch of their wiring after it.

  • Mouse dragging lets one pan the complex plane.
  • Mouse scrolling lets one zoom the complex plane.
  • Pressing N invokes a dialogue box where one can specify an iteration depth N.
  • Pressing S invokes a dialogue box where one can enter a sequence S^*.
  • Pressing R resets all fractal parameters to the default.
  • Pressing P initiates a 4K fractal rendering. The 4K fractal rendering thread can be killed by exiting the program prematurely, thus losing (!) the rendering.
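
The following is a minimal, stripped-down sketch of how such key controls might be wired using Swing; the class name, dialogue prompts and handler stubs are placeholders for illustration, not the viewer's actual code.

import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;
import javax.swing.JFrame;
import javax.swing.JOptionPane;

// hypothetical wiring of the described key controls; panning, zooming
// and the actual rendering threads are not shown here
public class ControlsSketch {
	public static void main(String[] args) {
		JFrame frame = new JFrame("Lyapunov sketch");
		frame.addKeyListener(new KeyAdapter() {
			@Override public void keyPressed(KeyEvent e) {
				switch (e.getKeyCode()) {
					case KeyEvent.VK_N: // ask for a new iteration depth N
						JOptionPane.showInputDialog(frame, "iteration depth N"); break;
					case KeyEvent.VK_S: // ask for a new sequence S*
						JOptionPane.showInputDialog(frame, "sequence S*"); break;
					case KeyEvent.VK_R: // reset all fractal parameters here
						break;
					case KeyEvent.VK_P: // start a 4K rendering thread here
						break;
				}
			}
		});
		frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
		frame.setSize(800, 800);
		frame.setVisible(true);
	}
}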

// Java 1.8 code; written by Jonathan Frech, 2017



Python Matrix Module

Matrices are an important part of linear algebra. By arranging scalars in a rectangular manner, one can elegantly encode vector transformations like scaling, rotating, shearing and squashing, solve systems of linear equations and represent general vector space homomorphisms.
However, as powerful as matrices are, when actually applying the theoretical tools one has to calculate specific values. Doing so by hand is possible, yet gets cumbersome quite quickly when dealing with matrices which contain more than a few rows and columns.

So, even though there are a lot of other implementations already present, I set out to write a Python matrix module containing a matrix class capable of basic matrix arithmetic (matrix multiplication, transposition, …) together with a set of functions which perform some higher-level matrix manipulation like applying Gaussian elimination, calculating the reduced row echelon form, determinant, inversion and rank of a given matrix.

Module source code can be seen below and downloaded. When saved as matrix.py in the current working directory, one can import the module as follows.

>>> import matrix
>>> A = matrix.Matrix([[13,  1, 20, 18],
...                    [ 9, 24,  0,  9],
...                    [14, 22,  5, 18],
...                    [19,  9, 15, 14]])
>>> print A**-1
        -149/1268  -67/634   83/1268 171/1268
          51/1268 239/1902 -105/1268 -33/1268
Matrix(    73/634 803/4755  -113/634 -87/3170 )
          13/1268  -75/634  197/1268 -83/1268

Matrices are defined over a field, typically \mathbb{F} = \mathbb{R} in theoretical use. For my implementation, though, I chose not to use a floating-point (double) data type, as it lacks the conceptual precision needed for numbers like a third. Since one cannot truly represent a large portion of the reals anyway, I chose \mathbb{F} = \mathbb{Q}, which is also a field, yet can, up to a certain scalar size and precision, be accurately represented using fractional data types (Python’s built-in Fraction is used here).

To simplify working with matrices, the implemented matrix class supports operator overloading such that the expressions A[i,j], A[i,j]=l, A*B, A*l, A+B, A-B, -A, A/l, A**-1, A**"t", ~A, A==B and A!=B all behave in a coherent and intuitive way for matrices A, B, scalars l and indices i, j.

When working with matrices, there are certain rules which must be obeyed, such as proper sizes when adding or multiplying, or invertibility when inverting. To ease tracking down potential bugs, I tried to include a variety of detailed exceptions (ValueErrors) explaining the program’s failure at that point.

Apart from basic matrix arithmetic, a large part of my module centers around Gaussian elimination and the functions that follow from it. At their heart lies the implementation of GaussianElimination, a function which calculates the reduced row echelon form rref(A) of a matrix together with the transformation matrix T such that T*A = rref(A), a list of all matrix pivot coordinates, the number of row transpositions used to achieve row echelon form and a product of all scalars used to achieve reduced row echelon form.
From this function, rref(A) simply returns the first, rrefT(A) the second parameter. Functions rcef(A) (reduced column echelon form) and rcefS(A) (A*S=rcef(A)) follow from repeated transposition.
Determinant calculation uses the determinant’s characteristic properties: it is multilinear, alternating, and the unit hypercube has hypervolume one.

  • \det \begin{pmatrix}  a_{1 1}&a_{1 2}&\dots&a_{1 n}\\  \vdots&\vdots&\ddots&\vdots&\\  \lambda \cdot a_{i 1}&\lambda \cdot a_{i 2}&\dots&\lambda \cdot a_{i n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{n 1}&a_{n 2}&\dots&a_{n n}  \end{pmatrix} = \lambda \cdot \det \begin{pmatrix}  a_{1 1}&a_{1 2}&\dots&a_{1 n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{i 1}&a_{i 2}&\dots&a_{i n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{n 1}&a_{n 2}&\dots&a_{n n}  \end{pmatrix}
  • \det \begin{pmatrix}  a_{1 1}&a_{1 2}&\dots&a_{1 n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{i 1}&a_{i 2}&\dots&a_{i n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{j 1}&a_{j 2}&\dots&a_{j n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{n 1}&a_{n 2}&\dots&a_{n n}  \end{pmatrix} = -\det \begin{pmatrix}  a_{1 1}&a_{1 2}&\dots&a_{1 n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{j 1}&a_{j 2}&\dots&a_{j n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{i 1}&a_{i 2}&\dots&a_{i n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{n 1}&a_{n 2}&\dots&a_{n n}  \end{pmatrix}
  • \det \begin{pmatrix}1&a_{1 2}&a_{1 3}&\dots&a_{1 n}\\0&1&a_{2 3}&\dots&a_{2 n}\\0&0&1&\dots&a_{3 n}\\\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\dots&1\end{pmatrix} = \det \bold{1}_n = 1

Using these properties, the determinant is equal to the product of all scalar factors used in transforming the matrix into reduced row echelon form, multiplied by the permutation parity (minus one to the power of the number of transpositions used).
Questions regarding invertibility and rank can also conveniently be answered using the above described information.
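
As a small worked example of this relationship, take A = \begin{pmatrix}0&2\\3&4\end{pmatrix}: transposing both rows (parity (-1)^1) and factoring out 3 from the first and 2 from the second row yields \begin{pmatrix}1&4/3\\0&1\end{pmatrix}, which further reduces to \bold{1}_2 without any additional factors. Thus \det A = (-1)^1 \cdot 3 \cdot 2 \cdot \det \bold{1}_2 = -6, agreeing with 0 \cdot 4 - 2 \cdot 3 = -6.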

All in all, this Python module implements basic matrix functionality paired with a bit more involved matrix transformations to build a usable environment for manipulating matrices in Python.

# Python 2.7 code; Jonathan Frech; 9th, 10th, 11th, 12th, 14th of December 2017
# 4th of January 2018: added support for matrix inequality, implemented __ne__
