Whilst developing Zpr'(h, I implemented a rudimentary standard library, defining semantics for natural numbers, mappings, lists and logic. Furthermore, I used these semantics to define a lazy computation of all prime numbers — albeit executing at a rather slow pace.

Having finalized the language's specification, I began investigating its computational bounds. After all, testing primality is a primitive recursive relation, so the prime enumeration above exercises no more than primitive recursive power. Thus it is a priori not even clear whether Zpr'(h is Turing complete — a useful feature for a programming language to have.

Pondering this question, I thought about how to show that Zpr'(h is indeed Turing complete — driven by the hope that I had not created a primitively weak language. I briefly considered implementing a Turing machine but quickly opted for a brainfuck interpreter instead — an equivalent task, since both models can simulate each other.

After having written said brainfuck interpreter (brainfuck.zpr), I proceeded to test it, only to realize that using byte-based pattern matching to implement a brainfuck interpreter in a functional manner does not lead to the most efficient implementation. Interpreting the brainfuck program `++[->+++<]>.`

— that is, multiplying two by three — takes a respectable twenty seconds at 4.00 GHz. Yet more excruciatingly, adhering to commutativity and interpreting `+++[->++<]>.`

yields the same correct numerical result, although at a steep slowdown to over three minutes.

Time constraints are not the only factor: since the current Zpr'(h implementation does not alias byte sequences, duplicating long byte sequences causes the memory footprint to rise unmanageably, easily blowing the 1 GiB provided by default. Increasing the available memory would most likely not make much of a difference given the aforementioned exponential behavior.

Thus, testing larger brainfuck programs appears not to be feasible due to computational resource limitations. Nevertheless, I am now fairly certain of Zpr'(h being Turing complete, even though my brainfuck implementation may not be correct.

To input brainfuck source code into the above interpreter, I used this translator.

Not being satisfied with a nigh untestable brainfuck implementation, I attempted to fulfil another classical characterization of computability: recursive functions. As seen above, primitive recursive functions can already be modelled, leaving open only the existence of µ-recursion — a one-liner using the standard library:

(µ .p) |> (head (filter p |N0))

In conclusion, I am convinced that Zpr'(h is Turing complete, if not very efficient — a common fate of esoteric programming languages.

As a side note, implementing the Ackermann-Peter function is fairly intuitive: ackermann-peter.zpr

I have also golfed in Zpr'(h; it is not the most terse language out there.

With the power of stochastically driven brute force, however, finding such complete configurations turns out to be feasible — at least when playing with the 140 cards my Contact version contains. Not surprisingly, many solutions are of a rather linear nature, since the game only contains two *branching cards*, i.e. cards where three sides boast connections.

Thus, the search is further narrowed down by demanding a *maximum dimension*; that is, the final configuration has to lie within a card rectangle of a given area. From my testing, a maximum dimension of 500 is computed moderately quickly (~30 seconds at 4.00 GHz), whilst lower maximum dimensions appear to yield solutions less readily.

From an implementation point of view, a generalized Contact card (defined as four sides, each with three nodes being either blank or colored in one of three colors) snugly fits into 24 bits, allowing card rotation, reflection and match determination to be implemented through integer bit fiddling.

The stochastic process is driven by an (ideally assumed) uniformly distributed random number generator, being recursively applied until all cards are consumed. Finally, an image is created as a portable pixmap `.ppm` and resized to a `.png` using ImageMagick.

Source code: contact.c

However, most pseudo-random entropy sources provide only a pseudo-uniformly distributed realization of randomness, leading to the necessity of finding an algorithmic transformation process if one wishes to achieve a uniform shuffle.

In the following, I will assume that a transforming process to a family of independent random variables, each uniformly distributed on {0, …, n − 1}, is already present for any natural number **n**.

One naive and seemingly correct (it is not) approach is to traverse the given sequence, uniformly swapping the current entry with another one, i.e.

void falseShuffle(uint64_t *arr, size_t len) {
    for (size_t j = 0; j < len; j++)
        swap(arr, j, unif(len));
}

as an exemplary C implementation, where `unif(n)` is independent and uniformly distributed on {0, …, n − 1}.

Yet, even though sensible at first sight, the distribution produced by the above defined procedure is uniform *only* in the most trivial cases and — as empirical evidence suggests, see below — horrendously non-uniform otherwise.

To prove the non-uniformity postulated above, I first present the following number-theoretic result.

**Claim.** In only three trivial cases does the factorial of a natural number divide its tetration; formally, **n! ∣ nⁿ ⟺ n ∈ {0, 1, 2}** for all natural numbers **n**.

**Proof.** Let **n** be a natural number larger than two and suppose **n!** divided **nⁿ**. By the definition of the factorial, **(n − 1) ∣ n!** and thus **(n − 1) ∣ nⁿ** is evident. Adhering to the uniqueness of prime factorizations, any prime **q** dividing **n − 1** — which exists, since **n − 1 > 1** — also has to divide **n**, implying **q ∣ gcd(n − 1, n) = 1**, which cannot hold for a prime. **QED**

Now suppose `falseShuffle` were indeed non-trivially distributed uniformly. Without loss of generality, all involved probability spaces are finite. Then there would have to exist a surjection from this algorithm's entropic state — the **nⁿ** equally likely sequences of swap indices — onto the **n!** permutations with fibers of the same finite cardinality, implying **n! ∣ nⁿ**. By the above proven claim, **n ∈ {0, 1, 2}** would follow, making the distribution trivial. **QED**

One possible reason for the surprising nature of this non-uniformity is the striking source code resemblance to a correct implementation, the Fisher–Yates shuffle, i.e.

void shuffle(uint64_t *arr, size_t len) {
    for (size_t j = 0; j < len; j++)
        swap(arr, j, j + unif(len - j));
}

as an exemplary C implementation, which can be inductively shown to distribute uniformly: each step sprinkles in exactly the uniform randomness needed to place the next entry, leaving the resulting permutation itself uniformly distributed.

To see just how non-uniform `falseShuffle` is, I have calculated its discrete density for **n = 4**:

    [ | ]
    [ | ||| ]
    [ | ||| ]
    [ | ||| ]
    [ || ||||||| || ]
    [||||| |||||||| ||| ||]
    [|||||||||||||||||| || ||]
    [||||||||||||||||||||||||]
    [||||||||||||||||||||||||]
    [||||||||||||||||||||||||]
    [||||||||||||||||||||||||]
    [||||||||||||||||||||||||]
    [||||||||||||||||||||||||]
    [||||||||||||||||||||||||]
    [||||||||||||||||||||||||]
    n = 4

If it were uniformly distributed, the discrete density would look like a rectangle, `[||||| ... |||||]`. Further plots for other values of **n** are shown in nonUniformity.txt.

Source code for the analysis and plotting: nonUniformity.hs. Empirical evidence of non-uniformity: nonUniformity.c.

For any natural number **n**, let **Ω(n)** denote the set of all binary operations on a set of that order. An operation **★** shall be called *commutative* iff **a ★ b = b ★ a** holds for all elements **a** and **b**, and be called *associative* iff **(a ★ b) ★ c = a ★ (b ★ c)** holds for all elements **a**, **b** and **c**.

With the above defined, one may study the operations which are commutative yet not associative. For **n = 2**, this set is nonempty for the first time, containing a manageable two elements, by name NAND and NOR.

However, based on the superexponential nature of **|Ω(n)| = n^(n·n)**, the sequence of counts of such operations likely also grows rather quickly, OEIS only listing its first four members.

Based on this limited numerical evidence, I would suspect the commutative yet non-associative operations to be rather sparse, i.e. their share among all operations in **Ω(n)** to vanish as **n** grows.

Analysis source: operations.hs

(Non-)commutative and (non-)associative operations have also been studied nearly twenty years ago by Christian van den Bosch, author of OEIS sequence A079195. Unfortunately, their site, which hosted *Closed binary operations on small sets*, appears to be down (resource found on web.archive.org).

Let **ε > 0** be an arbitrary distance and define . Then has two local maxima, at **−1** and **1**, whose vertical distance is **ε**.

It holds that .
