I had a maths-related nightmare that (initially) concerned counting solutions of a linear equation modulo $M$ with integral coefficients. Though the full details of the dream cannot be written down for privacy reasons, I can at least write a simple introduction to solving this specific problem here.

Update on 26 December 2018: The original solution is wrong. See the correction. Another update the same day: the original method is in fact correct; see the correction to the correction.

In the dream, someone modelled the possible versions of wicked nursery rhymes as solutions to linear equations modulo $M$, and asked how to count the solutions. Write the equations as a matrix equation $Ax\equiv b\quad\left({\mathrm{mod}\ M}\right),$ where $A\in\mathbb{Z}^{m\times n},x\in\mathbb{Z}_M^n,b\in\mathbb{Z}^m$. The objective is to count the solutions (and possibly represent them in an easy-to-enumerate way).

## Smith normal form of matrix over PID

If you are familiar with the structure theorem for finitely generated modules over a principal ideal domain, or with $\lambda$-matrices (used to determine whether two matrices are similar), or simply with the Smith normal form, you can skip this part.

Recall the elementary reductions for matrices over a field. Row reductions represent manipulations of equations. Rarely used in solving linear equations are column reductions, which represent change of variables. The implication of elementary reductions is that any invertible matrix is a product of several elementary matrices. Moreover, for any matrix $A$ (over a field), there exist two invertible matrices $P,Q$ and a natural number $r$, such that $A=P\begin{pmatrix}I_r&0\\0&0\end{pmatrix}Q,$ where $I_r$ is the $r\times r$ identity matrix. As we all know, $r$ is the rank of the matrix.

The analogue of the second part holds for matrices over a PID (or more generally, a principal ideal ring or a Bézout domain). For any matrix $A$, there exist invertible matrices $P,Q$, a natural number $r$ and $r$ non-zero ring elements $\alpha_1,\dots,\alpha_r$ such that $\alpha_i\mid\alpha_{i+1}$ for $i=1,\dots,r-1$ and $A=P\left(\begin{array}{ccc|c}\alpha_1&&&\\&\ddots&&0\\&&\alpha_r&\\\hline&0&&0\end{array}\right)Q.$ The counterpart of the first part, that invertible matrices are products of ‘elementary matrices’, holds for Euclidean domains. It holds for a general Bézout domain if we allow a new kind of elementary matrix, a ‘coprime combination’, which essentially means multiplying two selected rows (resp. columns) by $\begin{pmatrix}u&r\\s&v\end{pmatrix}$ with $uv-rs=1$. Elementary matrices of this kind, together with the usual ones, generate the general linear group. Read the Wikipedia article on Smith normal form for an example.

The $\alpha_i$’s appearing on the diagonal are called *invariant factors*, which are unique up to associatedness (multiplication by a unit). Their prime factorisations correspond to the *elementary divisors*. The invariant factors can be determined as the quotients of successive *determinant divisors*, the $k$-th of which is a greatest common divisor of all $k\times k$ minors.
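As a sanity check of the determinant-divisor characterisation, here is a minimal Python sketch (the function names are mine, and the brute-force enumeration of minors is only practical for small matrices):

```python
from functools import reduce
from itertools import combinations
from math import gcd

def det(m):
    """Integer determinant by cofactor expansion (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def determinant_divisors(a):
    """d_k = gcd of all k-by-k minors of a."""
    ds = []
    for k in range(1, min(len(a), len(a[0])) + 1):
        minors = (det([[a[i][j] for j in cols] for i in rows])
                  for rows in combinations(range(len(a)), k)
                  for cols in combinations(range(len(a[0])), k))
        ds.append(reduce(gcd, (abs(m) for m in minors)))
    return ds

def invariant_factors(a):
    """alpha_k = d_k / d_{k-1} (assuming full rank, so no d_k vanishes)."""
    ds = determinant_divisors(a)
    return [d // prev for prev, d in zip([1] + ds[:-1], ds)]

A = [[2, 4, 4], [-6, 6, 12], [10, -4, -16]]
print(determinant_divisors(A), invariant_factors(A))  # [2, 12, 144] [2, 6, 12]
```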

## Solving the equation

Smith normal form answers the question in the nightmare. Recall that we want to count the solutions (and write them in an easy-to-enumerate way) of $Ax\equiv b\quad\left({\mathrm{mod}\ M}\right).$ First find the Smith normal form of $A$ as $A=PDQ,\quad D=\left(\begin{array}{ccc|c}\alpha_1&&&\\&\ddots&&0\\&&\alpha_r&\\\hline&0&&0\end{array}\right).$ You can use the Smith normal form of $A$ as a matrix over either $\mathbb{Z}$ or $\mathbb{Z}_M$; in the latter case, it is possible to write (the integer representatives of) the invariant factors as positive divisors of $M$, which happen to be the greatest common divisors of $M$ and the corresponding factors over $\mathbb{Z}$ (see here). Note that in the former case, $P,Q$ are invertible over $\mathbb{Z}$, hence also invertible over $\mathbb{Z}_M$ (when mapped naturally via the quotient map).

The equation can be rewritten as $DQx\equiv P^{-1}b\triangleq c\quad\left({\mathrm{mod}\ M}\right).$ Since $Q$ is invertible, $x\mapsto Qx\triangleq y$ is a bijection on $\mathbb{Z}_M^n$, so the problem reduces to counting the solutions of $Dy\equiv c\quad\left({\mathrm{mod}\ M}\right)$. This is a diagonal system of equations, whose number of solutions is a piece of cake — the product of the numbers of solutions to the single equations.

Consider the equation $\alpha_i y_i\equiv c_i\quad\left({\mathrm{mod}\ M}\right),$ where $\alpha_i$ is a positive divisor of $M$. There are $\alpha_i$ solutions if $\alpha_i\mid c_i$, namely $y_{i,t}\equiv\frac{c_i+tM}{\alpha_i}\quad\left({\mathrm{mod}\ M}\right),\quad t=1,\dots,\alpha_i.$ Otherwise, there is no solution. Finally, any solution of the original equation is just $x=Q^{-1}y$, where $y$ is a solution to the diagonal system.
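The last step can be sketched in Python (a minimal sketch assuming each $\alpha_i$ has already been reduced to a positive divisor of $M$; the function names are mine):

```python
def solutions_single(alpha, c, M):
    """All y in {0, ..., M-1} with alpha*y ≡ c (mod M),
    assuming alpha is a positive divisor of M."""
    if c % alpha != 0:
        return []                       # no solution unless alpha divides c
    # the alpha solutions y ≡ (c + t*M)/alpha (mod M), t = 1, ..., alpha
    return sorted(((c + t * M) // alpha) % M for t in range(1, alpha + 1))

def count_diagonal_system(alphas, cs, M, n):
    """Count solutions in Z_M^n of the diagonal system D y ≡ c (mod M),
    where the m-by-n matrix D carries alphas on its diagonal.  Zero rows
    force c_i ≡ 0 (mod M); each of the n - r free columns contributes
    a factor of M."""
    r = len(alphas)
    if any(c % M != 0 for c in cs[r:]):
        return 0
    count = M ** (n - r)
    for alpha, c in zip(alphas, cs):
        count *= len(solutions_single(alpha, c, M))
    return count

print(solutions_single(2, 4, 12))                    # [2, 8]
print(count_diagonal_system([2, 3], [4, 6], 12, 3))  # 2 * 3 * 12 = 72
```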

### Correction

~~The strikethrough texts above are wrong.~~ Each equation
$\alpha_i y_i\equiv c_i\quad\left({\mathrm{mod}\ M}\right),$
where $\alpha_i$ might not be ~~(representable by)~~ a factor of $M$, is, by the Chinese remainder theorem (CRT), equivalent to the system
$\alpha_i y_i\equiv c_i\quad\left({\mathrm{mod}\ p_j^{s_j}}\right),$
where $p_j^{s_j}$ runs through the prime powers such that $s_j\geq 1$, $p_j^{s_j}\mid M$ and $p_j^{s_j+1}\nmid M$. Multiplied by appropriate invertible coefficients, these can be rewritten as
$p_j^{t_{ij}}y_i\equiv v_{ij}\quad\left({\mathrm{mod}\ p_j^{s_j}}\right),$
where $p_j^{t_{ij}}$ is the largest power of $p_j$ dividing $\alpha_i$, capped at $p_j^{s_j}$. The solutions to each equation are, this time for real, easy. If $p_j^{t_{ij}}\mid v_{ij}$, there are $p_j^{t_{ij}}$ solutions, namely
$y_i\equiv\frac{v_{ij}}{p_j^{t_{ij}}}+kp_j^{s_j-t_{ij}}\quad\left({\mathrm{mod}\ p_j^{s_j}}\right),\quad k=1,\dots,p_j^{t_{ij}}.$
Otherwise, there is none. The solutions of each component of the diagonal system can be recombined from the corresponding subset of the expanded system by CRT, which gives a formula to enumerate the solutions. If we just want to count, we simply multiply the numbers of solutions of the modulo-prime-power equations.
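The counting part of this correction can be sketched in Python (the trial-division factorisation and the function names are mine; a brute-force cross-check is included at the end):

```python
def prime_power_parts(M):
    """Factor M > 1 into its prime-power parts (p, s) by trial division."""
    parts, p = [], 2
    while p * p <= M:
        if M % p == 0:
            s = 0
            while M % p == 0:
                M //= p
                s += 1
            parts.append((p, s))
        p += 1
    if M > 1:
        parts.append((M, 1))
    return parts

def count_solutions(alpha, c, M):
    """Count y in Z_M with alpha*y ≡ c (mod M), one prime power at a time."""
    total = 1
    for p, s in prime_power_parts(M):
        # t = min(v_p(alpha), s), i.e. the exponent t_ij in the text
        t, a = 0, alpha
        while t < s and a % p == 0:
            a //= p
            t += 1
        # p^t solutions modulo p^s iff p^t divides c (unit factors don't matter)
        total *= p ** t if c % p ** t == 0 else 0
    return total

# brute-force cross-check for M = 12
assert all(count_solutions(a, c, 12) ==
           sum((a * y - c) % 12 == 0 for y in range(12))
           for a in range(13) for c in range(13))
```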

**Remarks.** CRT can be invoked before or after the invariant factorisation. Doing it at the later stage allows reusing the factorisation over $\mathbb{Z}$ or $\mathbb{Z}_M$.

### Correction to Correction

I thought the part about choosing $\alpha_i$ as a divisor of $M$ was wrong, but it turns out to be correct. Moreover, we can pretend that $M$ equals *those invariant factors* that follow the non-zero entries (as $M$ is just zero in $\mathbb{Z}_M$), which unifies the way we write the equations. The rewriting isn’t quite immediate: if you read carefully, $\mathbb{Z}_M$ is not a principal ideal domain, though matrices over it still admit an (even very nice) invariant factorisation. Let’s slow down and look carefully at how we can reach the factorisation we need.

Let $M=p_1^{s_1}\cdots p_z^{s_z}$ be the standard factorisation, where $p_1,\dots,p_z$ are distinct primes. For any $d\in\mathbb{Z}_{+}$, factorise $d=p_1^{s_1'}\cdots p_z^{s_z'}q$, where $\gcd\left({q,M}\right)=1$. Let $t_i=\min\left({s_i,s_i'}\right)$ and $g=\gcd\left({d,M}\right)$; then $g=p_1^{t_1}\cdots p_z^{t_z}$. Now CRT guarantees some $h\in\mathbb{Z}$ such that $h\equiv\begin{cases}d/g,&s_i'<s_i;\\1,&s_i'\geq s_i;\end{cases}\quad\left({\mathrm{mod}\ p_i^{s_i}}\right),\quad i=1,\dots,z.$ This $h$ is invertible modulo $M$ (because it is invertible in each congruence of the CRT system); let $k\in\mathbb{Z}$ be an inverse. Note that $kd\equiv\left\{\begin{array}{rl}g,&s_i'<s_i;\\0\equiv g,&s_i'\geq s_i;\end{array}\right\}=g\quad\left({\mathrm{mod}\ p_i^{s_i}}\right),\quad i=1,\dots,z,$ which, again by CRT, yields $kd\equiv g\ \left({\mathrm{mod}\ M}\right)$. This means any non-zero number $d$ is associated with (a unit factor away from) $\gcd\left({d,M}\right)$ in $\mathbb{Z}_M$.
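Here is a Python sketch of this CRT construction (the function names are mine; `pow(x, -1, m)` computes a modular inverse and needs Python 3.8+):

```python
from math import gcd

def prime_power_parts(M):
    """Factor M > 1 into its prime-power parts (p, s) by trial division."""
    parts, p = [], 2
    while p * p <= M:
        if M % p == 0:
            s = 0
            while M % p == 0:
                M //= p
                s += 1
            parts.append((p, s))
        p += 1
    if M > 1:
        parts.append((M, 1))
    return parts

def unit_multiplier(d, M):
    """A unit k modulo M with k*d ≡ gcd(d, M) (mod M), for d > 0."""
    g = gcd(d, M)
    h, mod = 0, 1
    for p, s in prime_power_parts(M):
        ps = p ** s
        # h ≡ d/g (mod p^s) when s_i' < s_i, i.e. when p^s does not divide d,
        # and h ≡ 1 otherwise -- exactly the case split in the text
        target = (d // g) % ps if d % ps != 0 else 1
        # incremental CRT: extend h from modulus `mod` to `mod * ps`
        h += mod * ((target - h) * pow(mod, -1, ps) % ps)
        mod *= ps
    return pow(h % M, -1, M)  # k is an inverse of h modulo M

k = unit_multiplier(4, 6)
print(k, k * 4 % 6)  # k = 5 is a unit mod 6, and 5 * 4 ≡ 2 = gcd(4, 6)
```

Note how $d=4$, $M=6$ yields $k=5$ rather than the Bézout coefficient $2$, which is not a unit modulo $6$.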

Now we turn to the invariant factorisation of matrices over $\mathbb{Z}_M$. First, find the invariant factorisation of $A$ over $\mathbb{Z}$ as $A=PDQ$. For each diagonal entry $d$, make it positive by adding a multiple of $M$ if necessary, then invoke the above result and do one more row reduction (multiplying the row by a unit) to transform $D$ so that each diagonal entry is a factor of $M$: $d$ becomes $\gcd\left({d,M}\right)$.

**Remarks.** The above method isn’t more desirable than the one in the first correction (for humans; the time to find the invariant factors dominates the total running time), because it still has to go through CRT to find $k$, which involves breaking the congruence down to prime powers and putting the pieces back together. An obvious attempted fix would be to use the extended Euclidean algorithm to find Bézout’s identity $kd+?M=\gcd\left({d,M}\right)$. However, this method (without extra guarantees on the result returned) might fail, e.g., $2\times 4-6=\gcd\left({4,6}\right)$, but $\gcd\left({2,6}\right)\neq 1$.

**Remarks.** However, the above method is faster (for humans) for counting. Since multiplying by the unit $k$ doesn’t affect the relevant prime powers in $c_i$ (i.e., those of primes whose powers appear in $M$), we can just test whether $\gcd\left({d,M}\right)\mid c_i$ and conclude the count.
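In code, this counting shortcut is just a gcd test (a minimal sketch; the function name is mine):

```python
from math import gcd

def count_single(d, c, M):
    """Solutions y in Z_M of d*y ≡ c (mod M): there are gcd(d, M) of
    them when gcd(d, M) divides c, and none otherwise."""
    g = gcd(d, M)
    return g if c % g == 0 else 0

print(count_single(4, 2, 6))  # gcd(4, 6) = 2 divides 2, so 2 solutions
```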

## Anecdotes

- I dug through my camera roll to find a photo for the hero image so that I don’t have to deal with copyright issues. The photo is a picture from John Steinberger’s lecture on Mathematics in Computer Science (before those fancy artistic effects), and it was chosen for its relevance to the content: the picture demonstrates Bézout’s identity. Just to clarify, the lecture wasn’t a nightmare.
- My blog building system is updated for this entry so that I can draw horizontal lines inside matrices.
- Now that I have had to revise this entry twice after its original publication, it really is a nightmare!
