# Of Kerckhoffs’ principle

Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi. (It must not require secrecy, and it must be able to fall into the enemy’s hands without inconvenience.)

— Auguste Kerckhoffs

A celebrated dictum in the field of cryptography (cryptology) is Kerckhoffs’ principle. It was originally the second of the six desiderata proposed by Auguste Kerckhoffs for military cryptography [AK83]. In English translation, they are [WIKI]:

1. The system must be practically, if not mathematically, indecipherable;
2. It should not require secrecy, and it should not be a problem if it falls into enemy hands;
3. It must be possible to communicate and remember the key without using written notes, and correspondents must be able to change or modify it at will;
4. It must be applicable to telegraph communications;
5. It must be portable, and should not require several persons to handle or operate;
6. Lastly, given the circumstances in which it is to be used, the system must be easy to use and should not be stressful to use or require its users to know and comply with a long list of rules.

With the development of computers, the last four of them are no longer relevant.

The first desideratum is the central idea of modern theoretical cryptography, now called computational security. A PRG $G:S\to R$ does not have to generate the uniform distribution over $R$ to be practically useful. Instead, most applications only require that $G$ generate a distribution that is computationally indistinguishable from the uniform distribution. Such a PRG is called secure. Oh, and by the way, no PRG has ever been proven secure; a proof would settle the P vs NP problem, since secure PRGs can exist only if P ≠ NP.
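To make “computationally indistinguishable” concrete, here is a minimal sketch (my own toy construction, not from the original text) of a PRG that is clearly *insecure*: it appends the parity of the seed, and a distinguisher that knows this structure achieves a large, non-negligible advantage. Since the input spaces are tiny, we can compute both probabilities exactly by exhaustion.

```python
import itertools

def parity(bits):
    return sum(bits) % 2

def G(s):
    # Toy PRG with one bit of stretch: append the parity of the seed.
    return s + (parity(s),)

def D(w):
    # A distinguisher built with knowledge of G's structure:
    # output 1 iff the last bit equals the parity of the rest.
    return 1 if w[-1] == parity(w[:-1]) else 0

k = 8
seeds = list(itertools.product((0, 1), repeat=k))
pr_g = sum(D(G(s)) for s in seeds) / len(seeds)        # exactly 1.0
outs = itertools.product((0, 1), repeat=k + 1)
pr_u = sum(D(w) for w in outs) / 2 ** (k + 1)          # exactly 0.5
advantage = abs(pr_u - pr_g)                           # 0.5, far from negligible
```

A secure PRG, by contrast, would leave every efficient distinguisher with advantage negligible in $k$.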

The second desideratum is the very dictum that modern cryptographic design follows. When I was first introduced to this law in a course (I think it was Fundamentals of Cryptography, Fall 2016, by John Paul Steinberger), I thought to myself that this principle was only an intuition rather than a formal statement. One could make an encryption scheme that uses a universal Turing machine (UTM) for both encryption and decryption, and use descriptions of Turing machines (TMs) as the keys. Now that the ‘real algorithm’ itself becomes the key, it is part of the secret and need not be made public.

This is a misunderstanding.

Practically or theoretically, the keys must be drawn from a specific distribution over TMs, since some TMs do not halt and there is no ‘default distribution’ over TMs: the set of TMs is countable and thus admits no uniform distribution. The definition of this distribution is part of the algorithm. To be clearer, one should classify everything fixed before an instance of the algorithm is started as part of the algorithm itself, and make that public. I shall argue for this statement from both practical and theoretical points of view. For now, if one accepts this idea, it is easy to see that the drawn TMs (the keys) are secret, while the distribution from which a key is drawn is not, since the distribution itself is fixed in advance.

Let’s look at a concrete example. Say we want to camouflage the schoolbook RSA encryption scheme as a ‘UTM scheme’. The keys for RSA are $\mathrm{sk}=\left({p,q}\right)$ and $\mathrm{pk}=pq$ (in this version, 3 is assumed to be the public exponent). The keys for the UTM scheme are just hardcoded programs that perform fast modular exponentiation and the Euclidean algorithm. It is easy to see that the TM distributions are just the RSA key distributions acted on by appropriate (efficiently computable) bijections. There is no difference between the two schemes, up to efficiently computable bijections.
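For readers who want the schoolbook version spelled out, here is a sketch with hypothetical toy parameters (far too small for real use) and the public exponent fixed at 3, exactly the two public operations mentioned above: fast modular exponentiation, and a Euclidean-style inversion to derive the decryption exponent.

```python
def keygen(p, q, e=3):
    # Schoolbook RSA: pk = n = p*q, sk = (p, q); the exponent e is fixed
    # in advance and therefore public.
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)  # modular inverse via the extended Euclidean algorithm
    return n, d

def encrypt(n, m, e=3):
    return pow(m, e, n)   # fast modular exponentiation

def decrypt(n, d, c):
    return pow(c, d, n)

# Hypothetical toy parameters: p = 11, q = 17, so n = 187.
n, d = keygen(11, 17)
c = encrypt(n, 42)
assert decrypt(n, d, c) == 42
```

The camouflaged ‘UTM scheme’ would simply ship `decrypt` with `d` baked in as the secret program; the bijection between the two key formats is evident.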

Now let’s see why everything fixed before starting an instance of the algorithm should be made and considered public.

On the practical side, every random draw is realised by sampling some random number and mapping it to a key. The algorithm that maps a number to a key is written into the logic of the crypto software or preloaded onto the crypto hardware. It is always possible to extract that logic from the binary file or the circuits. Therefore the mapping is public, which implies that the distribution is public. A similar argument applies to anything fixed in advance.
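The point can be sketched in a few lines (a hypothetical key-derivation mapping of my own choosing, not one prescribed by the text): the mapping from raw randomness to a key is fixed, extractable logic, so it must be treated as public; only the drawn value is secret.

```python
import hashlib
import secrets

def bits_to_key(r):
    # Public, fixed logic: anyone with the binary or the circuit can
    # recover this mapping, and hence the induced key distribution.
    return hashlib.sha256(r).digest()[:16]

r = secrets.token_bytes(32)   # the secret: a single random draw
key = bits_to_key(r)          # the key; secret only because r is
```

Knowing `bits_to_key` tells the attacker the distribution of `key` exactly, yet reveals nothing about which `key` was drawn.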

When it comes to the theoretical world, scientists use advantages in attack games, or more often the distinguishability of distributions, to model security. In this world there is not, in a philosophical sense, any notion of ‘knowing the system’. For example, consider the definition of a secure PRG (a rather confined one):

Let $G:{\left\{{0,1}\right\}}^{\star}\to{\left\{{0,1}\right\}}^{\star}$ be an efficient deterministic algorithm such that the output is always one bit longer than the input.
If for all efficient algorithms $D:{\left\{{0,1}\right\}}^{\star}\to\left\{{0,1}\right\}$, $\left|{\Pr\left[{D\left(xy\right)=1}\right]-\Pr\left[{D\left(G\left(x\right)\right)=1}\right]}\right|$ is negligible in $k$, where $x$ is uniform over ${\left\{{0,1}\right\}}^k$ and $y$ is a uniformly random bit, then the PRG $G$ is said to be secure.

In this definition, the distinguisher (adversary) $D$ is not explicitly said to ‘know’ $G$. Nevertheless, $D$ may know $G$, since it is quantified after $G$. For this reason, if we imagine cryptanalysts and hackers as the designers of $D$, they are free to design it with full knowledge of $G$. To some extent, we could say $D$ knows $G$ (though, as the universal quantifier allows, it need not exploit that knowledge). That $D$, the adversary, knows $G$ coincides with the idea that the algorithm is public.
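The quantifier order has teeth. Here is a sketch (a toy hash-based expander of my own, purely illustrative) of a distinguisher whose very construction bakes in knowledge of $G$: since $D$ is chosen after $G$ is fixed, its designer may precompute $G$’s entire image and test membership. This $D$ takes exponential time, so it does not contradict *computational* security; it only demonstrates that nothing in the definition prevents $D$ from using $G$.

```python
import hashlib
import itertools

def G(s):
    # A toy "random-looking" expander: hash the k-bit seed, keep k+1 bits.
    h = hashlib.sha256(bytes(s)).digest()
    return tuple((h[i // 8] >> (i % 8)) & 1 for i in range(len(s) + 1))

k = 8
seeds = list(itertools.product((0, 1), repeat=k))
# D's designer works *after* G is fixed, so D can hardcode G's entire image.
image = {G(s) for s in seeds}

def D(w):
    return 1 if w in image else 0

pr_g = sum(D(G(s)) for s in seeds) / len(seeds)   # exactly 1.0
pr_u = len(image) / 2 ** (k + 1)                  # at most 0.5
```

Whatever $G$ is, this brute-force $D$ gets advantage at least $1/2$, which is exactly why the definition restricts $D$ to *efficient* algorithms.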

Moreover, commonly used definitions of public-key encryption schemes directly call out the fact that the distribution is part of the algorithm: such a scheme is defined as a tuple $\left({G,E,D}\right)$, where $G$ is the key generator. Letting it generate an RSA key is no different from letting it generate an encryption-decryption algorithm pair.
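As a minimal sketch of this packaging (using a symmetric one-time pad as a stand-in, since the tuple-with-generator shape is the point here, not the public-key machinery), note that the generator itself sits inside the published tuple; only its sampled output is secret.

```python
import secrets

def keygen(k=16):
    # G: the key generator is part of the published scheme;
    # only its output (the sampled key) is secret.
    return secrets.token_bytes(k)

def encrypt(key, m):
    # E: one-time-pad encryption (m must be no longer than the key).
    return bytes(a ^ b for a, b in zip(m, key))

def decrypt(key, c):
    # D: the identical XOR recovers the message.
    return bytes(a ^ b for a, b in zip(c, key))

scheme = (keygen, encrypt, decrypt)   # the public object (G, E, D)

key = keygen()
c = encrypt(key, b"attack at dawn")
```

Publishing `scheme` publishes the key distribution; the adversary still learns nothing about which `key` was drawn.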

To conclude, it is appropriate to paraphrase the dictum as:

Everything fixed before the crypto system is instantiated should be considered, and made, public; and it is exactly those fixed things that constitute the algorithm, or the crypto system.

— GL