## Zpr'(h

I have designed and implemented a new esoteric programming language called Zpr'(h. It is a language built upon iterated symbolic pattern matching, in which the user defines their own semantics and interprets them as computations.
Whilst developing Zpr'(h, I implemented a rudimentary standard library, defining semantics for natural numbers, mappings, lists and logic. Furthermore, I used these semantics to define a lazy computation of all prime numbers — albeit executing at a rather slow pace.
Having finalized the language’s specification, I began investigating its computational bounds. After all, testing primality is merely a primitive recursive relation, so it is a priori not even clear whether Zpr'(h is Turing complete — a useful feature for a programming language to have.

Pondering this question, I thought about how to show that Zpr'(h is indeed Turing complete — driven by the hope that I had not created a primitively weak language. I briefly considered implementing a Turing machine but quickly opted for a brainfuck interpreter instead — equivalent, since both models can simulate each other.
After having written said brainfuck interpreter (brainfuck.zpr), I proceeded to test it, only to realize that implementing a brainfuck interpreter in a functional manner through byte-based pattern matching does not lead to the most efficient of implementations. Interpreting the brainfuck program ++[->+++<]>. — that is, multiplying two by three — takes a respectable twenty seconds at 4.00 GHz. More excruciatingly, adhering to commutativity and interpreting +++[->++<]>. yields the same correct numerical result, albeit after a steep slowdown to over three minutes.
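
For cross-checking such small programs independently of Zpr'(h, a reference brainfuck interpreter fits in a few lines of C. The following is a sketch of my own (the function name `run` and the omission of I/O are choices made here, not part of brainfuck.zpr); instead of printing on `.`, it returns the cell under the head when the program ends:

```c
#include <stddef.h>

#define TAPE_SIZE 30000

/* Minimal brainfuck interpreter ('.' and ',' are ignored); returns
   the value of the cell under the head after the program ends. */
static unsigned char run(const char *prog) {
    unsigned char tape[TAPE_SIZE] = {0};
    size_t ptr = 0;
    for (const char *ip = prog; *ip; ip++) {
        switch (*ip) {
        case '+': tape[ptr]++; break;
        case '-': tape[ptr]--; break;
        case '>': ptr++; break;
        case '<': ptr--; break;
        case '[':
            if (!tape[ptr])                 /* jump past matching ] */
                for (int depth = 1; depth; ) {
                    ip++;
                    if (*ip == '[') depth++;
                    else if (*ip == ']') depth--;
                }
            break;
        case ']':
            if (tape[ptr])                  /* jump back to matching [ */
                for (int depth = 1; depth; ) {
                    ip--;
                    if (*ip == ']') depth++;
                    else if (*ip == '[') depth--;
                }
            break;
        }
    }
    return tape[ptr];
}
```

Both multiplication programs mentioned above leave 6 in the cell under the head, which this sketch confirms in well under a millisecond.
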
Time constraints are not the only factor: since the current Zpr'(h implementation does not alias duplicated byte sequences, the memory footprint quickly grows unmanageable, easily blowing the 1 GiB provided by default. Increasing the available memory will most likely not make much of a difference given the aforementioned exponential behavior.
Thus, testing larger brainfuck programs appears not to be feasible due to computational resource limitations. Nevertheless, I am now fairly certain of Zpr'(h being Turing complete, even though my brainfuck implementation may not be correct.
To input brainfuck source code into the above interpreter, I used this translator.

Not being satisfied with a nigh-untestable brainfuck implementation, I attempted to fulfil another classical characterization of computability: recursive functions. As seen above, primitive recursive functions can already be modelled, leaving only the existence of µ-recursion open — a one-liner using the standard library:

(µ .p) |> (head (filter p |N0))
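
Reading the one-liner: $\mu.p$ is the head of the lazily filtered naturals, i.e. the least $n\in\mathbb{N}_0$ satisfying $p$. A C sketch of the same unbounded search (function and predicate names are mine):

```c
#include <stdbool.h>
#include <stddef.h>

/* µ-operator sketch: return the least n in ℕ0 with p(n);
   like the Zpr'(h one-liner, it diverges if no such n exists */
static size_t mu(bool (*p)(size_t)) {
    for (size_t n = 0;; n++)
        if (p(n))
            return n;
}

/* example predicate: n² ≥ 10 */
static bool atLeastTenSquared(size_t n) { return n * n >= 10; }
```

Applied to the example predicate, `mu` yields 4, the least natural whose square reaches 10.
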

In conclusion, I am convinced that Zpr'(h is Turing complete, if not very efficient — a common fate of esoteric programming languages.

As a side note, implementing the Ackermann-Peter function is fairly intuitive: ackermann-peter.zpr
I have also golfed in Zpr'(h; it is not the most terse language out there.

## Complete Contact Configurations

Contact is a board game designed by Ken Garland in which players draw square tiles containing colored connections from a pile, attempting to form matches. Whilst the game is enjoyable to play — allowing the odd trading of cards in unfortunate circumstances — it seldom leads to a complete configuration, meaning a valid contacting arrangement using all available cards.

With the power of stochastically driven brute force, however, finding such complete configurations turns out to be feasible — at least when playing with the 140 cards my Contact version contains. Not surprisingly, many solutions are of a rather linear nature, since the game only contains two branching cards, i.e. cards where three sides boast connections.
Thus, the search is further narrowed down by demanding a maximum dimension, that is, the final configuration has to lie within a rectangle of cards of a given area. From my testing, a maximum dimension of 500 is computed moderately quickly (~ 30 sec @ 4.00 GHz), whilst lower maximum dimensions appear to be less likely to admit a solution.


From an implementation point of view, a generalized Contact card (defined as four sides, each with three nodes being blank or colored one of three colors) snugly fits into 24 bits, allowing card rotation, reflection and match determination to be implemented through integer bit fiddling.
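
One way such an encoding could look (my reconstruction for illustration; contact.c may lay the bits out differently): two bits per node — blank or one of three colors — give six bits per side and 24 bits per card, making rotation a six-bit rotate within the low 24 bits:

```c
#include <stdint.h>

/* Hypothetical layout: side k of a card occupies bits 6k..6k+5,
   holding three two-bit nodes (0 = blank, 1-3 = one of three colors). */

/* rotate a card by 90 degrees: cycle its four sides */
static uint32_t rotateCard(uint32_t card) {
    return ((card >> 6) | (card << 18)) & 0xFFFFFF;
}

/* two facing sides contact iff their nodes agree pairwise; the
   node order reverses when sides face each other (an assumption
   of this sketch) */
static int sidesMatch(uint32_t sideA, uint32_t sideB) {
    for (int k = 0; k < 3; k++) {
        uint32_t a = (sideA >> (2 * k)) & 3;
        uint32_t b = (sideB >> (2 * (2 - k))) & 3;
        if (a != b)
            return 0;
    }
    return 1;
}
```

Four rotations compose to the identity, and reflection would be a similar permutation of the side and node bits.
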
The stochastic process is driven by an (ideally assumed) uniformly distributed random number generator, being recursively applied until all cards are consumed. Finally, an image is created as a portable pixmap .ppm and resized to a .png using ImageMagick.
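
The pixmap step needs no library: a binary P6 .ppm is a short ASCII header followed by raw RGB triples. A sketch (function name and dimensions are illustrative; the conversion and resize to .png are left to ImageMagick, as in the original):

```c
#include <stdint.h>
#include <stdio.h>

/* write a width x height image of raw RGB triples as a binary
   portable pixmap (P6); ImageMagick can then convert and
   upscale the result to a .png */
static void writePPM(FILE *f, const uint8_t *rgb, int width, int height) {
    fprintf(f, "P6\n%d %d\n255\n", width, height);
    fwrite(rgb, 3, (size_t)width * (size_t)height, f);
}
```

For a 2×2 image this emits an 11-byte header plus 12 bytes of pixel data.
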

Source code: contact.c

## Non-uniform shuffling

A shuffle of a finite sequence of length $n$ of distinguishable elements refers to an algorithmic process which — modulo pseudo-randomness — can be modeled as a random variable uniformly distributed on the permutations $\mathbb{S}_n$.
However, most pseudo-random entropy sources provide only a pseudo-uniformly distributed realization of $\mathbb{F}_2^\mathbb{N}$, leading to the necessity of finding an algorithmic transformation process if one wishes to achieve a shuffle.
In the following, I will assume that such a transformation to a family of independent random variables, each uniformly distributed on $\{1..n\}$, is already available for any $n\in\mathbb{N}$.

One naive and seemingly correct (it is not) approach is to traverse the given sequence, uniformly swapping the current entry with another one, i.e.

void falseShuffle(uint64_t *arr, size_t len) {
    for (size_t j = 0; j < len; j++)
        swap(arr, j, unif(len));
}

as an exemplary C implementation where $\texttt{unif}(n)$ is independent and uniformly distributed on $\{0..n-1\}$.
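
One standard way to realize such a $\texttt{unif}$ from an (ideally assumed) uniform bit source is rejection sampling — a sketch of my own, with C's `rand` standing in for the entropy source and $n\leq 2^{63}$ assumed:

```c
#include <stdint.h>
#include <stdlib.h>

/* draw k uniformly random bits (sketch: rand() as bit source) */
static uint64_t randomBits(unsigned k) {
    uint64_t r = 0;
    for (unsigned j = 0; j < k; j++)
        r = (r << 1) | (uint64_t)((rand() >> 5) & 1);
    return r;
}

/* unif(n): uniform on {0..n-1} via rejection sampling -- draw
   just enough bits and retry until the value falls in range */
static uint64_t unif(uint64_t n) {
    unsigned k = 0;
    while ((UINT64_C(1) << k) < n)  /* smallest k with 2^k >= n */
        k++;
    uint64_t r;
    do r = randomBits(k); while (r >= n);
    return r;
}
```

Rejection keeps the distribution exactly uniform at the cost of an expected constant number of retries, since at most half of the $2^k$ bit patterns are discarded.
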

Yet, even though sensible at first sight, the above defined random variable is uniformly distributed only in the most trivial cases and — as empirical evidence suggests, see below — horrendously non-uniformly distributed otherwise.
To prove the non-uniformity postulated above, I first present the following number-theoretic result.

Claim. In only three trivial cases does the factorial of a natural number divide its tetration; formally

$\forall\,n\in\mathbb{N}_{>2}:n!\nmid n^n$.

Proof. Let $n\in\mathbb{N}_{>2}$ be a natural number larger than two and suppose $n!\mid n^n$. By the definition of the factorial, $\prod_{p\leq n\ \mathrm{prime}}p\mid n!$ is evident. Adhering to the uniqueness of prime factorizations, $p\mid n^n$ and thus $p\mid n$ for every prime $p\leq n$ follows. Observe that $n-1>1$ possesses a prime divisor $p\leq n-1<n$, implying $p\mid n-(n-1)=1$ — which cannot hold for any prime. QED

Now suppose $\texttt{falseShuffle}$ were indeed non-trivially uniformly distributed. Without loss of generality, all involved probability spaces may be assumed finite. Then there would have to exist a surjection from this algorithm’s entropic state — the $n^n$ equally likely sequences of swap indices — onto $\mathbb{S}_n$ with fibers of the same finite cardinality, implying $n!\mid n^n$. By the above proven claim, $n<3$ follows, making the distribution trivial. QED
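
This obstruction is small enough to verify exhaustively: for $n=3$, the $3^3=27$ equally likely index sequences of $\texttt{falseShuffle}$ cannot spread evenly over the $3!=6$ permutations since $6\nmid 27$. A sketch enumerating them (names mine):

```c
#include <stddef.h>
#include <stdint.h>

static void swap(uint64_t *arr, size_t a, size_t b) {
    uint64_t t = arr[a]; arr[a] = arr[b]; arr[b] = t;
}

/* run falseShuffle on [0,1,2] for one fixed sequence of "random"
   indices (r0,r1,r2); encode the resulting permutation in base 3 */
static int outcome(int r0, int r1, int r2) {
    uint64_t arr[3] = {0, 1, 2};
    int r[3] = {r0, r1, r2};
    for (size_t j = 0; j < 3; j++)
        swap(arr, j, (size_t)r[j]);
    return (int)(9 * arr[0] + 3 * arr[1] + arr[2]);
}
```

Tallying `outcome` over all 27 triples shows every permutation is reached, but with unequal frequencies — the non-uniformity made concrete.
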

One possible reason for the surprising nature of this non-uniformity is the striking source code resemblance to a correct implementation, i.e.

void shuffle(uint64_t *arr, size_t len) {
    for (size_t j = 0; j < len; j++)
        swap(arr, j, j + unif(len - j));
}

as an exemplary C implementation — the classical Fisher–Yates shuffle — which can be shown by induction to be uniformly distributed on $\mathbb{S}_n$, each step sprinkling in exactly the amount of uniform randomness needed.

To see just how non-uniform $\texttt{falseShuffle}$ is, I have calculated its discrete density for $n=4$:

[       |                ]
[   |   |||              ]
[   |   |||              ]
[   |   |||              ]
[   ||  ||||||| ||       ]
[||||| |||||||| |||    ||]
[|||||||||||||||||| || ||]
[||||||||||||||||||||||||]
[||||||||||||||||||||||||]
[||||||||||||||||||||||||]
[||||||||||||||||||||||||]
[||||||||||||||||||||||||]
[||||||||||||||||||||||||]
[||||||||||||||||||||||||]
[||||||||||||||||||||||||]
n = 4

If it were uniformly distributed, the discrete density would look like a rectangle: [||||| ... |||||]. Further plots for $0\leq n\leq 6$ are shown in nonUniformity.txt.

Source code for the analysis and plotting: nonUniformity.hs. Empirical evidence of non-uniformity: nonUniformity.c.