Lagrange’s four-square theorem is a beautiful result in number theory. However, I haven’t encountered it often in theoretical computer science. Today’s entry discusses two interesting stories related to this theorem: the first about a recent project I did in cryptography, the second about an anecdote, originating from UC Berkeley, of a bug in printing. I would say the second story is more interesting.
Lagrange’s four-square theorem states that any natural number $n$ can be written as the sum of at most four perfect squares. That is, there exist integers $x_1, x_2, x_3, x_4$ such that $n = x_1^2 + x_2^2 + x_3^2 + x_4^2$.
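As a concrete illustration, here is a naive brute-force search for such a decomposition (my own sketch, not the polynomial-time algorithm discussed later in this entry):

```python
import math

def four_squares(n: int) -> tuple:
    """Brute-force search for (x1, x2, x3, x4) with
    n = x1^2 + x2^2 + x3^2 + x4^2, guaranteed to exist by the theorem."""
    for x1 in range(math.isqrt(n), -1, -1):
        r1 = n - x1 * x1
        for x2 in range(math.isqrt(r1), -1, -1):
            r2 = r1 - x2 * x2
            for x3 in range(math.isqrt(r2), -1, -1):
                r3 = r2 - x3 * x3
                x4 = math.isqrt(r3)  # check whether the remainder is a square
                if x4 * x4 == r3:
                    return (x1, x2, x3, x4)
    raise AssertionError("unreachable by Lagrange's theorem")

print(four_squares(23))  # e.g. (3, 3, 2, 1), since 23 = 9 + 9 + 4 + 1
```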
Usage in proving non-negativity
In my recent research in cryptography (another entry, in Chinese), the prototype protocol (different from what we present in the paper) requires the prover to prove the range of a committed value. In simple terms, imagine you have a number $x$, encrypted under a scheme that allows arithmetic operations to be performed on the ciphertexts, and you want to prove that $x$ is at least $0$ and at most some bound $B$. What you can do is create another encryption of $B - x$ and prove that both encryptions contain a non-negative number.
How do you prove a number is non-negative? Well, a sum of squares is always non-negative. Since one can always write $x = x_1^2 + x_2^2 + x_3^2 + x_4^2$, the prover is asked to commit to (encrypt) the $x$ in question, commit to $x_1, \dots, x_4$ and $y_1, \dots, y_4$, prove that these 8 encryptions are in the correct “format” (that is, the $y_i$ are really the squares of the $x_i$ instead of some bogus values), and finally add the commitments to the $y_i$ together, which yields a commitment to (an encryption of) $x$.
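To illustrate the last step, here is a toy sketch using a Pedersen-style commitment, which is multiplicatively homomorphic in the committed value. This is my own illustration, not necessarily the scheme used in the protocol, and the group parameters below are purely illustrative (and insecure):

```python
import random

# Toy Pedersen-style commitment C(m, r) = g^m * h^r mod p.
# WARNING: illustrative parameters only -- far too small for real use,
# and in a real scheme log_g(h) must be unknown to the prover.
p = 2**61 - 1   # a Mersenne prime
g, h = 3, 7     # hypothetical "independent" generators

def commit(m: int, r: int) -> int:
    return pow(g, m, p) * pow(h, r, p) % p

# Prover: decompose x = 23 as 3^2 + 3^2 + 2^2 + 1^2, commit to each square.
squares = [9, 9, 4, 1]
randomness = [random.randrange(p - 1) for _ in squares]
commitments = [commit(s, r) for s, r in zip(squares, randomness)]

# Multiplying the commitments adds the committed values, so the product is
# a commitment to the sum of the squares, i.e. to x itself -- hence x >= 0.
product = 1
for c in commitments:
    product = product * c % p
assert product == commit(sum(squares), sum(randomness))
```

Of course, the real work in the protocol is proving that each committed $y_i$ is indeed a square; the sketch only shows why adding the commitments yields a commitment to $x$.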
When Ivan first told me the construction, I was surprised by this clever trick! Moreover, he mentioned that the four-square decomposition of a given number can be found in polynomial time.
Usage in working around a bug
The anecdote is an announcement from UC Berkeley’s instructional computing pages:
Oct 2: Warning: Due to a known bug, the default Linux document viewer evince prints N*N copies of a PDF file when N copies requested. As a workaround, use Adobe Reader acroread for printing multiple copies of PDF documents, or use the fact that every natural number is a sum of at most four squares.
In my Zhihu answer about this announcement, I replied:
What a feature! Well, it seems to me that “printing” $\lfloor\sqrt{N}\rfloor$ “copies” (which actually prints $\lfloor\sqrt{N}\rfloor^2$ copies) and reducing the problem to the smaller size $N - \lfloor\sqrt{N}\rfloor^2$ is another good solution.
In the reply section, Zhihu user 马云龙 claims that the theorem guarantees the reduction stops within 4 steps, which is wrong: greedily peeling off the largest square does not yield a four-square decomposition. I was having supper with Tao Meng (a classmate and friend of mine) at Taoli Dining Hall (a dining hall of Tsinghua University) when I read the incorrect comment by 马云龙, and decided to work out the complexity of the reduction method, which is exactly why I decided to write this entry.
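A quick sketch shows that the greedy strategy can indeed need more than four squares; $N = 23$ is a counterexample:

```python
import math

def greedy_squares(n: int) -> list:
    """Repeatedly peel off the largest square <= the remainder."""
    terms = []
    while n > 0:
        s = math.isqrt(n)
        terms.append(s * s)
        n -= s * s
    return terms

# Greedy needs five squares for 23, although 23 = 3^2 + 3^2 + 2^2 + 1^2.
print(greedy_squares(23))  # [16, 4, 1, 1, 1] -- five terms, not four
```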
Let’s compare the two methods (reduction vs. decomposition). Clearly, the communication complexity (the number of print jobs) is minimised by the decomposition method. However, the same might not hold for the computational complexity.
For the decomposition method, Randomized Algorithms in Number Theory by Michael O. Rabin and Jeffrey O. Shallit, which appeared in Communications on Pure and Applied Mathematics in 1986, mentions an algorithm that finds the decomposition in expected $O(\log^2 n)$ time (number of arithmetic operations on numbers of length $O(\log n)$) based on the ERH. (This should be the algorithm mentioned earlier by Ivan.)
For the reduction method, suppose we have $N$ copies to print. After the first step of printing, we have no more than $N - \lfloor\sqrt{N}\rfloor^2 \le 2\sqrt{N}$ copies left to print. Therefore, the number of iterations $T$ (printing jobs to submit) satisfies $N_{k+1} \le 2\sqrt{N_k}$, which yields $T = O(\log\log N)$. In each step, we compute the floored square root of an integer. The Babylonian method (a.k.a. Newton’s method for $x^2 - N = 0$) converges quadratically, and the number of significant digits we require is $O(\log N)$; therefore, at most $O(\log\log N)$ arithmetic operations are required before we reach the floored square root. The computational complexity of our reduction algorithm is $O((\log\log N)^2)$, which is fairly good. The communication complexity is $O(\log\log N)$ jobs. The best thing is that this method does not depend on the ERH.
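The whole reduction can be sketched as follows (my own illustration; `isqrt_newton` is the Babylonian iteration mentioned above, and `print_jobs` stands in for submitting the print jobs):

```python
def isqrt_newton(n: int) -> int:
    """Floored integer square root via the Babylonian / Newton iteration."""
    if n == 0:
        return 0
    x = 1 << ((n.bit_length() + 1) // 2)  # initial guess >= sqrt(n)
    while True:
        y = (x + n // x) // 2
        if y >= x:          # iterates decrease until they reach floor(sqrt(n))
            return x
        x = y

def print_jobs(n: int) -> list:
    """Request k 'copies' per job (evince prints k^2); return the job sizes."""
    jobs = []
    while n > 0:
        k = isqrt_newton(n)
        jobs.append(k)
        n -= k * k
    return jobs

jobs = print_jobs(10**18 + 7)
print(jobs)       # [10**9, 2, 1, 1, 1] -- only 5 jobs for N = 10**18 + 7
```

Note how few jobs are needed even for an astronomically large $N$, matching the $O(\log\log N)$ bound.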