I learnt how to write uniform hybrid reductions in my rudimentary cryptography course, which are beasty beauties that wrap an adversary, trying to break each of the many underlying cryptographic assumptions simultaneously. However, such constructions are harder to write and read than invoking the transitivity of computational indistinguishability up to polynomially many times. The latter involves writing out the (perhaps polynomially many) hybrids and arguing that adjacent hybrids are indistinguishable. I also learnt non-uniform reductions, notably the technique of working only with deterministic adversaries. Lately I have been writing a proof using a hybrid argument, and when I asked my advisor whether I should write out the beast or just the list of hybrids, she said she prefers just writing the list of hybrids.

I recall that the first time I saw people using the transitivity of computational indistinguishability up to polynomially many times was when I took Ivan Damgård’s Kryptologisk protokolteori (Cryptologic Protocol Theory) course. I clearly remember Ivan mentioning that the indistinguishability of polynomially many commitments could be proven by ‘replacing the commitments one by one and using the transitivity’. I was surprised because the proofs I had been writing until then were ‘uniform’, which I shall explain below. Of course, I now understand that those are different flavours of reductions.

## (Non-)uniform adversaries

There are two models of computational security in cryptography. Both require some distributions (views, etc.) to be indistinguishable for efficient adversaries. The nuance is the definition of ‘efficient’. In the *uniform* model, efficient adversaries are modelled as probabilistic polynomial-time Turing machines, whereas in the *non-uniform* model, they are modelled as polynomially sized families of circuits. In idealised models, one also considers security against adversaries that make polynomially many oracle queries and have unbounded resources otherwise, but let’s put that aside for now.

Equivalently, non-uniform adversaries can also be modelled as: 1) deterministic (or probabilistic) polynomial-time Turing machines with families of polynomially long advice strings; 2) families of deterministic (or probabilistic) polynomial-time Turing machines with polynomial-sized descriptions.

## Semantic/CPA security for PKE

Let $\lambda$ be the security parameter. Define the CPA security game $\CPA_\PKE^\mathcal{A}\left(b\right)$ for bit $b$, PKE scheme $\PKE=\left({\KeyGen,\Enc,\Dec}\right)$ and adversary $\mathcal{A}$ to be the following procedure:

- Let $\left({\pk,\sk}\right)\leftarrow\KeyGen\left(1^\lambda\right)$.
- Send $\pk$ to $\mathcal{A}$.
- Let $\mathcal{A}$ choose as many pairs of messages $\left({m_i^{\left(0\right)},m_i^{\left(1\right)}}\right)$ as it wants, and for each pair, return $\Enc\left({1^\lambda,\pk,m_i^{\left(b\right)}}\right)$.
- $\mathcal{A}$ outputs a bit $b'$, which becomes the outcome of the game.

We say $\PKE$ is CPA secure if for all efficient $\mathcal{A}$, the distinguishing advantage $\DistCPA_\PKE^\mathcal{A}=\Pr\left[{\CPA_\PKE^\mathcal{A}\left(0\right)=1}\right]-\Pr\left[{\CPA_\PKE^\mathcal{A}\left(1\right)=1}\right]$ is negligible in $\lambda$. Note that the distinguishing advantage might be negative here; this is the convention I followed when I first learnt cryptography. Also following convention, we say the adversary talks to a challenger, which is just the procedure executing the game.

The semantic security game is the same as that for CPA security, except that the adversary must submit exactly one pair of messages. Semantic security of a PKE scheme is defined similarly.

One of the first propositions a learner in public-key cryptography proves is that semantic security implies CPA security (for public-key schemes). The reason is that the distributions of ciphertexts in the two worlds are both efficiently sampleable, since the adversary is given the public key. Having more samples from efficiently sampleable indistinguishable distributions doesn’t make them much easier to distinguish.

Let’s prove it in two different ways.

### Uniform reduction

Let $\mathcal{A}$ be any efficient adversary playing the CPA game against a semantically secure $\PKE$. Since $\mathcal{A}$ is efficient, it makes at most polynomially many queries, say $Q\left(\lambda\right)$ of them. We create the following adversary $\mathcal{A}'$ playing the semantic security game against $\PKE$:

- Pick $t\takerandom\left\{{1,2,\dots,Q\left(\lambda\right)}\right\}$.
- Run $\mathcal{A}$ and act as its challenger, i.e., handle all of its communication.
- Receive $\pk$ and send it to $\mathcal{A}$.
- For the first $t-1$ queries from $\mathcal{A}$, answer them with $\Enc\left({1^\lambda,\pk,m_i^{\left(0\right)}}\right)$.
- For the $t$^{th} query from $\mathcal{A}$, send the query to the challenger and answer $\mathcal{A}$ with whatever the challenger returns.
- For any subsequent queries from $\mathcal{A}$, answer them with $\Enc\left({1^\lambda,\pk,m_i^{\left(1\right)}}\right)$.
- Output whatever $\mathcal{A}$ outputs.
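The off-by-one bookkeeping here is exactly what makes hybrid arguments error-prone, so here is a tiny Python sanity check (the function names and the toy parameter $Q=10$ are mine, purely for illustration; no actual cryptography happens):

```python
def reduction_answer(q, t, challenge_bit):
    """Which message (0 or 1) gets encrypted for A's q-th query,
    as answered by A' with cutoff t. The t-th query is forwarded to
    the semantic-security challenger, which encrypts the message
    selected by challenge_bit."""
    if q < t:
        return 0
    if q == t:
        return challenge_bit
    return 1

def hybrid_answer(q, i):
    """Hybrid H_i: the first i queries are answered with encryptions
    of m^(0), the rest with encryptions of m^(1)."""
    return 0 if q <= i else 1

# With cutoff t, challenge bit 0 makes A' simulate H_t, and bit 1 makes it
# simulate H_{t-1}; the advantage of A' thus telescopes over adjacent hybrids.
Q = 10
for t in range(1, Q + 1):
    for q in range(1, Q + 1):
        assert reduction_answer(q, t, 0) == hybrid_answer(q, t)
        assert reduction_answer(q, t, 1) == hybrid_answer(q, t - 1)
```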

It is fairly easy to show $\DistCPA_\PKE^\mathcal{A}=Q\left(\lambda\right)\DistSS_\PKE^{\mathcal{A}'}.$ Since $\mathcal{A}'$ is efficient, its advantage must be negligible, thus proving the negligibility of $\mathcal{A}$’s advantage.
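For completeness, here is a sketch of the telescoping computation behind this identity. Write $\mathsf{H}_i$ (my notation) for the game that answers the first $i$ queries with encryptions of $m^{\left(0\right)}$ and the rest with encryptions of $m^{\left(1\right)}$, so that $\mathsf{H}_{Q\left(\lambda\right)}$ is the $b=0$ CPA game and $\mathsf{H}_0$ is the $b=1$ one. Conditioned on the sampled cutoff $t$, a challenger bit of $0$ puts $\mathcal{A}$ in $\mathsf{H}_t$ and a bit of $1$ puts it in $\mathsf{H}_{t-1}$, hence
$\DistSS_\PKE^{\mathcal{A}'}=\frac{1}{Q\left(\lambda\right)}\sum_{t=1}^{Q\left(\lambda\right)}{\left({\Pr\left[{\mathsf{H}_t=1}\right]-\Pr\left[{\mathsf{H}_{t-1}=1}\right]}\right)}=\frac{\Pr\left[{\mathsf{H}_{Q\left(\lambda\right)}=1}\right]-\Pr\left[{\mathsf{H}_0=1}\right]}{Q\left(\lambda\right)}=\frac{\DistCPA_\PKE^{\mathcal{A}}}{Q\left(\lambda\right)}.$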

### ‘Listing the hybrids’

Let’s take a two-query CPA adversary for example. Consider the (hybrid) games:

- $\Enc\left({1^\lambda,\pk,m_1^{\left({\color{red}0}\right)}}\right),\Enc\left({1^\lambda,\pk,m_2^{\left({\color{red}0}\right)}}\right)$
- $\Enc\left({1^\lambda,\pk,m_1^{\left({\color{red}1}\right)}}\right),\Enc\left({1^\lambda,\pk,m_2^{\left({\color{red}0}\right)}}\right)$
- $\Enc\left({1^\lambda,\pk,m_1^{\left({\color{red}1}\right)}}\right),\Enc\left({1^\lambda,\pk,m_2^{\left({\color{red}1}\right)}}\right)$

The first pair of games are indistinguishable, because given the public key, the adversary could just create the second ciphertext itself and pretend it came from the challenger. The second pair of games are indistinguishable for a similar reason. Since the sum of two negligible functions is still negligible, the first and the third games are indistinguishable, which is what we want. Note that the indistinguishability of adjacent games still involves a computational reduction.

However, when we apply this technique to adversaries that make $\omega\left(1\right)$ queries, something iffy appears. The first mistake to avoid in elementary calculus is assuming that the sum of infinitely (or rather, indefinitely) many infinitesimals is infinitesimal. For example, $\frac{1}{n+i}\to 0\quad\left({n,i\to\infty,1\leq i\leq n}\right),$ but $\sum_{i=1}^{n}{\frac{1}{n+i}}\to\int_0^1{\frac{\operatorname{d}\!x}{1+x}}=\log 2\quad\left({n\to\infty}\right).$ Similarly, we might have concerns about adding up a bunch of negligible functions that ‘begin to be really negligible’ too slowly, so slowly that the sum becomes non-negligible. This concern is indeed valid in the uniform model, and the problem is known to be superficial only in the non-uniform model (I say ‘known’ because we technically do not know whether $\mathsf{P}\mathrlap{\:\:?}{{}={}}\mathsf{NP}$).
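The calculus example is easy to check numerically; here is a quick Python sketch (the choice $n=10^6$ is just an arbitrary large value for illustration):

```python
from math import log

# Every single term 1/(n+i) tends to 0 as n grows...
n = 10**6
terms = [1 / (n + i) for i in range(1, n + 1)]
assert max(terms) <= 1 / n

# ...yet the sum of all n of them approaches log 2, not 0.
total = sum(terms)
assert abs(total - log(2)) < 1e-4
```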

In the non-uniform model, it is enough to list the hybrids and prove that adjacent hybrids are indistinguishable. For the sake of completeness, let’s formally do the non-uniform reduction. Writing out the reduction circuits explicitly helps one understand where the non-uniformity comes in.

Let $\mathcal{A}_\lambda$ be an adversarial family of circuits playing the CPA game making at most (polynomial) $Q\left(\lambda\right)$ queries. For each $\lambda$, we consider the hybrid games $\mathsf{H}_i$ for $i=0,\dots,Q\left(\lambda\right)$, where $\mathsf{H}_i$ answers the first $i$ queries with ciphertexts of the first messages and the rest with those of the second ones. Let the distinguishing advantage of $\mathcal{A}_\lambda$ between $\mathsf{H}_{i-1},\mathsf{H}_i$ be $d_i$, then
$\left|\DistCPA_\PKE^{\mathcal{A}_\lambda}\right|\leq\sum_{i=1}^{Q\left(\lambda\right)}{\left|d_i\right|}.$
Now let $i^\star_\lambda\in\operatorname{argmax}_{i}{\left|d_i\right|}$; we then create an adversarial family of circuits $\mathcal{A}_\lambda'$, whose $\lambda$^{th} member uses $\mathcal{A}_\lambda$ by putting the challenge in the $i^\star_\lambda$^{th} slot. This new family clearly has polynomial size. By the choice of $i^\star$, we have
$\left|\DistCPA_\PKE^{\mathcal{A}_\lambda}\right|\leq Q\left(\lambda\right)\left|\DistSS_\PKE^{\mathcal{A}_\lambda'}\right|,$
and since $\mathcal{A}_\lambda'$ has negligible advantage, so must $\mathcal{A}_\lambda$.

The non-uniformity is introduced in the choice of $i^\star_\lambda$: the sequence $i^\star$ might not be efficiently computable. Non-uniformity is also introduced by (conveniently) assuming the adversaries are deterministic, since non-uniformity allows us to hardwire the best random tape.

## Exercise

In one discussion with my advisor, I felt I was lacking practice with hybrid arguments, so we agreed that I would do an exercise on reducing matrix DDH to the usual DDH. I did this and somehow recovered my muscle memory.

Try to prove the indistinguishability of $\left({g,g^{a_1},\dots,g^{a_n},g^{b_1},\dots,g^{b_m},g^{a_1b_1},g^{a_1b_2},\dots,g^{a_nb_m}}\right),\quad a_i,b_j\takerandom\mathbb{Z}_{\left|G\right|}$ and $\left({g,g^{a_1},\dots,g^{a_n},g^{b_1},\dots,g^{b_m},g^{c_{11}},g^{c_{12}},\dots,g^{c_{nm}}}\right),\quad a_i,b_j,c_{ij}\takerandom\mathbb{Z}_{\left|G\right|}$ for polynomial $n,m$ by reducing it to the DDH assumption in group $G=\left\langle g\right\rangle$.
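If you want a hint (this is my own sketch, not a full solution, so try the exercise first): enumerate the $nm$ pairs $\left({i,j}\right)$ in some fixed order and consider the hybrids $\mathsf{H}_0,\dots,\mathsf{H}_{nm}$, where $\mathsf{H}_k$ replaces the first $k$ entries $g^{a_ib_j}$ with independent uniform $g^{c_{ij}}$. To reduce distinguishing $\mathsf{H}_{k-1}$ from $\mathsf{H}_k$ to DDH, embed the instance $\left({g,g^a,g^b,g^z}\right)$ by setting $a_i:=a$ and $b_j:=b$ for the $k$^{th} pair $\left({i,j}\right)$ and sampling all other exponents yourself; every remaining ‘real’ entry is then computable as $\left({g^a}\right)^{b_{j'}}$ or $\left({g^b}\right)^{a_{i'}}$, and the $k$^{th} entry is $g^z$ itself.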

## More…

While just writing out the hybrids is easier than producing the gigantic beast, I still appreciate the uniform reduction. It’s like building a Swiss Army knife (the difference being that not all tasks are accomplished). One factor is the inertia from my rudimentary course. The other is that uniform reductions produce stronger results: security of the underlying assumptions against uniform adversaries implies security of the construction against uniform adversaries, and security against non-uniform adversaries implies security against non-uniform ones (whereas a non-uniform reduction only yields security based on assumptions against non-uniform adversaries). Nevertheless, in most cases, having written out the hybrids, it only requires some additional labour to convert the list into the gigantic beast, and I personally ponder proofs using transitivity (of course, I don’t create the beasts in my mind while I’m still trying to come up with a proof [or a disproof]).

I think one can also consider non-uniform cryptographic schemes. While that might be theoretically interesting, all the constructions I know of are uniform schemes. It is also reasonable to assume the adversary is ‘stronger than the scheme’ (and desirable to have systems secure in such a situation), in the sense that the schemes are uniform while the adversaries are non-uniform.

**Updated on 22 June.** There is a very neat paper by Oded Goldreich and Bernd Meyer that separates uniform and non-uniform indistinguishability (in a very perfect sense) via some very pathological ensembles.
