Python Matrix Module

Matrices are an important part of linear algebra. By arranging scalars in a rectangular manner, one can elegantly encode vector transformations like scaling, rotating, shearing and squashing, solve systems of linear equations and represent general vector space homomorphisms.
However, as powerful as matrices are, actually applying these theoretical tools requires calculating specific values. Doing so by hand is possible, yet quickly becomes cumbersome for matrices with more than a few rows and columns.

So, even though a lot of other implementations are already present, I set out to write a Python matrix module containing a matrix class capable of basic matrix arithmetic (matrix multiplication, transposition, …) together with a set of functions performing higher-level matrix manipulation, like applying Gaussian elimination and calculating the reduced row echelon form, determinant, inverse and rank of a given matrix.

The module's source code can be seen below and downloaded. When saved as matrix.py in the current working directory, one can import the module as follows.

>>> import matrix
>>> A = matrix.Matrix([[13,  1, 20, 18],
...                    [ 9, 24,  0,  9],
...                    [14, 22,  5, 18],
...                    [19,  9, 15, 14]])
>>> print A**-1
        -149/1268  -67/634   83/1268 171/1268
          51/1268 239/1902 -105/1268 -33/1268
Matrix(    73/634 803/4755  -113/634 -87/3170 )
          13/1268  -75/634  197/1268 -83/1268

Matrices are defined over a field, typically \mathbb{F} = \mathbb{R} in theoretical use. For my implementation, however, I chose not to use a floating-point data type, as it cannot exactly represent numbers like a third. Since one cannot truly represent a large portion of the reals anyway, I chose \mathbb{F} = \mathbb{Q}, which is also a field and whose elements can be accurately represented using fractional data types (Python's built-in Fraction is used here).
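The difference between binary floating-point numbers and exact rationals can be seen in a few lines; the following snippet only uses Python's standard Fraction type and is independent of the matrix module itself.

```python
from fractions import Fraction

# Binary doubles cannot represent a tenth (or a third) exactly,
# so seemingly simple identities fail:
assert 0.1 + 0.2 != 0.3

# Exact rationals keep arithmetic precise:
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
assert Fraction(1, 3) * 3 == 1
```

Since Fraction normalizes automatically and grows its numerator and denominator as needed, no precision is lost during elimination, at the cost of potentially large intermediate numbers.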

To simplify working with matrices, the implemented matrix class supports operator overloading such that the expressions A[i,j], A[i,j]=l, A*B, A*l, A+B, A-B, -A, A/l, A**-1, A**"t", ~A, A==B and A!=B all behave in a coherent and intuitive way for matrices A, B, scalars l and indices i, j.
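The general overloading idea can be sketched in a few special methods. The following MiniMatrix class is a hypothetical illustration, not the module's actual class; it only covers indexing, addition, multiplication and equality, and raises a ValueError on mismatched dimensions in the spirit described below.

```python
from fractions import Fraction

class MiniMatrix(object):
    """Illustrative sketch of operator overloading for matrices."""
    def __init__(self, rows):
        self.rows = [[Fraction(x) for x in row] for row in rows]

    def __getitem__(self, ij):          # A[i, j]
        i, j = ij
        return self.rows[i][j]

    def __setitem__(self, ij, value):   # A[i, j] = l
        i, j = ij
        self.rows[i][j] = Fraction(value)

    def __add__(self, other):           # A + B
        if (len(self.rows) != len(other.rows)
                or len(self.rows[0]) != len(other.rows[0])):
            raise ValueError("Matrix dimensions do not match for addition.")
        return MiniMatrix([[a + b for a, b in zip(r, s)]
                           for r, s in zip(self.rows, other.rows)])

    def __mul__(self, other):           # A * B (matrix) or A * l (scalar)
        if isinstance(other, MiniMatrix):
            if len(self.rows[0]) != len(other.rows):
                raise ValueError(
                    "Matrix dimensions do not match for multiplication.")
            return MiniMatrix([[sum(a * b for a, b in zip(row, col))
                                for col in zip(*other.rows)]
                               for row in self.rows])
        return MiniMatrix([[a * other for a in row] for row in self.rows])

    def __eq__(self, other):            # A == B
        return isinstance(other, MiniMatrix) and self.rows == other.rows
```

Tuple indices make A[i,j] work because Python passes the pair (i, j) as a single argument to __getitem__; the remaining operators (negation, division, powers, …) follow the same pattern.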

When working with matrices, certain rules must be obeyed, like matching sizes when adding or multiplying, invertibility when inverting, and others. To ease tracking down potential bugs, I tried to include a variety of detailed exceptions (ValueErrors) explaining the program's failure at that point.

Apart from basic matrix arithmetic, a large part of my module centers around Gaussian elimination and the functions that follow from it. At their heart lies the implementation of GaussianElimination, a function which calculates the reduced row echelon form rref(A) of a matrix together with the transformation matrix T such that T*A = rref(A), a list of all matrix pivot coordinates, the number of row transpositions used to achieve row echelon form and a product of all scalars used to achieve reduced row echelon form.
From this function, rref(A) simply returns the first and rrefT(A) the second return value. The functions rcef(A) (reduced column echelon form) and rcefS(A) (A*S = rcef(A)) follow from repeated transposition.
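One possible shape of such an elimination routine is sketched below. The function name, tuple layout and conventions are illustrative, not the module's actual API; it eliminates over Fractions and records the transformation matrix, pivot coordinates, row transpositions and the product of the normalization scalars.

```python
from fractions import Fraction

def gaussian_elimination(rows):
    """Return (rref, T, pivots, swaps, scale) such that T * A = rref(A)."""
    m = [[Fraction(x) for x in row] for row in rows]
    n, cols = len(m), len(m[0])
    # T starts as the identity and records every row operation applied to m.
    t = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    pivots, swaps, scale = [], 0, Fraction(1)
    r = 0
    for c in range(cols):
        # Find a row at or below r with a nonzero entry in column c.
        pivot_row = next((i for i in range(r, n) if m[i][c] != 0), None)
        if pivot_row is None:
            continue
        if pivot_row != r:
            m[r], m[pivot_row] = m[pivot_row], m[r]
            t[r], t[pivot_row] = t[pivot_row], t[r]
            swaps += 1
        # Normalize the pivot to one, remembering the scalar divided out.
        f = m[r][c]
        scale *= f
        m[r] = [x / f for x in m[r]]
        t[r] = [x / f for x in t[r]]
        # Eliminate column c from all other rows.
        for i in range(n):
            if i != r and m[i][c] != 0:
                g = m[i][c]
                m[i] = [x - g * y for x, y in zip(m[i], m[r])]
                t[i] = [x - g * y for x, y in zip(t[i], t[r])]
        pivots.append((r, c))
        r += 1
    return m, t, pivots, swaps, scale
```

For a square full-rank input, rref(A) is the identity, so T is exactly A's inverse; the recorded swaps and scale then feed directly into the determinant calculation described next.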
Determinant calculation uses the characteristic determinant properties: the determinant is multilinear, alternating and assigns the unit hypercube hypervolume one.

  • \det \begin{pmatrix}  a_{1 1}&a_{1 2}&\dots&a_{1 n}\\  \vdots&\vdots&\ddots&\vdots\\  \lambda \cdot a_{i 1}&\lambda \cdot a_{i 2}&\dots&\lambda \cdot a_{i n}\\  \vdots&\vdots&\ddots&\vdots\\  a_{n 1}&a_{n 2}&\dots&a_{n n}  \end{pmatrix} = \lambda \cdot \det \begin{pmatrix}  a_{1 1}&a_{1 2}&\dots&a_{1 n}\\  \vdots&\vdots&\ddots&\vdots\\  a_{i 1}&a_{i 2}&\dots&a_{i n}\\  \vdots&\vdots&\ddots&\vdots\\  a_{n 1}&a_{n 2}&\dots&a_{n n}  \end{pmatrix}
  • \det \begin{pmatrix}  a_{1 1}&a_{1 2}&\dots&a_{1 n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{i 1}&a_{i 2}&\dots&a_{i n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{j 1}&a_{j 2}&\dots&a_{j n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{n 1}&a_{n 2}&\dots&a_{n n}  \end{pmatrix} = -\det \begin{pmatrix}  a_{1 1}&a_{1 2}&\dots&a_{1 n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{j 1}&a_{j 2}&\dots&a_{j n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{i 1}&a_{i 2}&\dots&a_{i n}\\  \vdots&\vdots&\ddots&\vdots&\\  a_{n 1}&a_{n 2}&\dots&a_{n n}  \end{pmatrix}
  • \det \begin{pmatrix}1&0&\dots&0\\0&1&\dots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\dots&1\end{pmatrix} = \det \mathbf{1}_n = 1

Using these properties, the determinant equals the product of all scalar factors used in transforming the matrix into reduced row echelon form, multiplied by the permutation parity (minus one raised to the number of row transpositions used).
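The parity-times-scalars idea can be condensed into a standalone determinant sketch. The function below is illustrative (whether one accumulates the divided-out pivots or their reciprocals depends on the chosen convention; here the pivots themselves are accumulated, so the determinant is parity times their product).

```python
from fractions import Fraction

def determinant(rows):
    """Determinant via elimination: parity times normalization scalars."""
    m = [[Fraction(x) for x in row] for row in rows]
    n = len(m)
    swaps, factors = 0, Fraction(1)
    for c in range(n):
        pivot = next((i for i in range(c, n) if m[i][c] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot in this column: singular
        if pivot != c:
            m[c], m[pivot] = m[pivot], m[c]
            swaps += 1                  # each transposition flips the sign
        f = m[c][c]
        factors *= f                    # scalar divided out of this row
        m[c] = [x / f for x in m[c]]
        for i in range(c + 1, n):
            g = m[i][c]
            m[i] = [x - g * y for x, y in zip(m[i], m[c])]
    return (-1) ** swaps * factors
```

Compared with cofactor expansion's factorial growth, this elimination-based approach needs only cubically many field operations.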
Questions regarding invertibility and rank can also be answered conveniently using the information described above.

All in all, this Python module implements basic matrix functionality paired with some more involved matrix transformations, building a usable environment for manipulating matrices in Python.

# Python 2.7 code; Jonathan Frech; 9th, 10th, 11th, 12th, 14th of December 2017
# 4th of January 2018: added support for matrix inequality, implemented __ne__




Recreating the famous falling-bit effect from The Matrix using Python and curses.
The individual bit strips are separate entities, each falling to the bottom and then being moved back to the top with a different x value. Each strip also gets a random speed, pausing between one and five deciseconds per step.
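The per-strip bookkeeping can be sketched without the curses drawing layer; the class and attribute names below are illustrative, not the program's actual ones.

```python
import random

class BitStrip(object):
    """One falling strip: a column position, a row and a step delay."""
    def __init__(self, screen_width, screen_height):
        self.width, self.height = screen_width, screen_height
        self.x = random.randrange(screen_width)
        self.y = 0
        # random pause between one and five deciseconds per step
        self.delay = random.randint(1, 5) / 10.0

    def step(self):
        """Fall one row; past the bottom, respawn at the top with a
        fresh column and a fresh speed."""
        self.y += 1
        if self.y >= self.height:
            self.y = 0
            self.x = random.randrange(self.width)
            self.delay = random.randint(1, 5) / 10.0
```

In the actual effect, a main loop would draw each strip's bits at (x, y) with curses and sleep according to each strip's delay before calling step again.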

20 J-Trix frames

# Python 2.7.7 Code
# Jonathan Frech 7th of September, 2016
#         edited 8th of September, 2016
#         edited 9th of September, 2016
