@defuse
Last active April 7, 2016
Are PRFs secure as commitments?

In this document I try to answer the question: Are PRFs secure commitment schemes?

Note: This document is just a sketch of my thoughts. Please interpret the notation and reasoning charitably!

Definition: Strong Computational Hiding

  1. Alice selects a random bit b in {0, 1}.
  2. The adversary sends oracle queries of the form (m_0, m_1), and gets back Commit_r(m_b) for a fresh random r on each query.
  3. The adversary outputs a guess b' for b.

The advantage is Pr[b = b'] - 1/2.
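To pin down the game, here's a minimal Python sketch (my own illustration; `commit` and `adversary` are hypothetical callables, not from any library). Repeating the run and estimating Pr[b = b'] gives the advantage.

```python
import secrets

def hiding_game(commit, adversary, r_bytes=16):
    """One run of the Strong Computational Hiding game; True iff b' = b."""
    b = secrets.randbelow(2)                 # 1. Alice's secret bit b

    def oracle(m0, m1):
        r = secrets.token_bytes(r_bytes)     # fresh random r for each query
        return commit(r, (m0, m1)[b])        # 2. answer with Commit_r(m_b)

    b_prime = adversary(oracle)              # 3. the adversary's guess b'
    return b_prime == b
```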

Definition: PRF Security

A PRF seed K is chosen randomly and is fixed. The adversary sends oracle queries in the form of PRF inputs. If the adversary is in the "real" world and queries m, they receive PRF_K(m). If they are in the "random" world, they receive the result of running a single random function (fixed for the whole game) on m.

The advantage is Pr[Adversary outputs 1 in the real world] - Pr[Adversary outputs 1 in the random world].
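For concreteness, a sketch of the same game, with HMAC-SHA256 standing in for PRF_K (an assumption for illustration only); the random world is a lazily-sampled random function.

```python
import hashlib, hmac, secrets

def prf_game(adversary, out_len=32):
    """One run of the PRF game; returns (adversary's output, actual world)."""
    world = secrets.randbelow(2)             # 0 = "real", 1 = "random"
    K = secrets.token_bytes(32)              # seed fixed for the whole game
    table = {}                               # lazily-sampled random function

    def oracle(m: bytes) -> bytes:
        if world == 0:
            return hmac.new(K, m, hashlib.sha256).digest()   # PRF_K(m)
        if m not in table:                   # same m, same random answer
            table[m] = secrets.token_bytes(out_len)
        return table[m]

    return adversary(oracle), world
```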

Definition: Weird PRF

The same as a standard PRF, except that in the real world the key K is re-chosen randomly for each query m, and in the random world, for each query one of 2^|K| random functions is selected at random and the result of running that selected function on m is returned.
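The per-query re-selection is the only change from the previous sketch (again, HMAC-SHA256 is just a stand-in):

```python
import hashlib, hmac, secrets

def weird_prf_oracle(world, out_len=32):
    """The Weird PRF oracle for a given world (0 = real, 1 = random)."""
    table = {}                               # (function index, m) -> output

    def oracle(m: bytes) -> bytes:
        if world == 0:
            K = secrets.token_bytes(32)      # key re-chosen on every query
            return hmac.new(K, m, hashlib.sha256).digest()
        idx = secrets.token_bytes(32)        # one of 2^|K| random functions
        if (idx, m) not in table:
            table[(idx, m)] = secrets.token_bytes(out_len)
        return table[(idx, m)]

    return oracle
```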

Theorem: Weird PRFs are Strong Computational Hiding

Suppose I have an adversary A in the Strong Computational Hiding game against Commit_r(m) = WeirdPRF_r(m) with advantage ADV. Then here's how I turn it into an adversary for the WeirdPRF game with similar advantage (a code sketch follows the list):

  1. Select a random bit b.
  2. Run A. When A queries (m_0, m_1), return Oracle(m_b).
  3. Let b' be A's guess. Output whether b == b'.
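As a sketch (my own wrapper, reusing the oracle-as-callable convention from above):

```python
import secrets

def to_weirdprf_adversary(A):
    """Wrap hiding-game adversary A into a WeirdPRF distinguisher."""
    def distinguisher(weird_oracle):
        b = secrets.randbelow(2)               # 1. select a random bit b

        def hiding_oracle(m0, m1):
            return weird_oracle((m0, m1)[b])   # 2. answer with Oracle(m_b)

        b_prime = A(hiding_oracle)             # 3. A's guess b'
        return 1 if b_prime == b else 0        #    output whether b == b'
    return distinguisher
```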

In the real world case, Oracle(m) = WeirdPRF_r(m) for a random r, so it is exactly identical to the Strong Computational Hiding game, and thus Pr[b == b'] is the same (and equal to ADV + 1/2).

In the random world case, Oracle(m) is the result of choosing one of 2^|r| random functions and evaluating it on m. If the implicit r happens to never repeat, A is just getting back independent random strings, so no information about b is learned and Pr[b == b'] is 1/2. By a birthday bound, the probability of r repeating in one run of the game is approximately N/2^(|r|/2), where N is the number of oracle queries plus the number of WeirdPRF evaluations the adversary does without using the oracle. If a repeat happens, assume w.l.o.g. that A's guess is correct 100% of the time. So Pr[b == b'] is at most 1/2 + N/2^(|r|/2) in the random world.

Then,

Advantage in this game
= Pr[b == b'] in the real world - Pr[b == b'] in the random world
= ADV + 1/2 - Pr[b == b'] in the random world
>= ADV + 1/2 - 1/2 - N/2^(|r|/2)
= ADV - N/2^(|r|/2).

If ADV is non-negligible then so is ADV - N/2^(|r|/2), as long as N << 2^(|r|/2), which it should be since N is bounded by the resources of the adversary. So an attack on the commitment scheme implies an attack on the WeirdPRF.

Question: Does PRF imply WeirdPRF?

If PRF implies WeirdPRF, then Commit_r(m) = PRF_r(m) would be a Strong Computational Hiding commitment scheme. I don't know how to prove this and can't find a counterexample.

Theorem: WeirdPRF does not imply PRF

We do know that the implication doesn't hold in the opposite direction. If WeirdPRFs exist then we can make a function that is a WeirdPRF but isn't a PRF. For any WeirdPRF, define F_k(m) = WeirdPRF_k(m) if m != 0, or k if m = 0. Clearly this is not a PRF, since querying input 0 gives you k. It is a WeirdPRF because in the real world querying 0 gives you a freshly random k, and in the random world you get the result of running 0 through a randomly-chosen random function. Those two cases are still hard to distinguish.
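The counterexample as code (illustrative; the encoding of the input "0" and the mismatched output lengths are glossed over here):

```python
def make_F(weird_prf):
    """Given WeirdPRF(k, m), build F_k(m): a WeirdPRF that is not a PRF."""
    def F(k: bytes, m: bytes) -> bytes:
        if m == b"\x00":          # the distinguished input "0" (encoding assumed)
            return k              # leaks the key, so F is clearly not a PRF
        return weird_prf(k, m)
    return F
```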

This suggests WeirdPRF is a laxer requirement than PRF, which makes intuitive sense. But it may not be strictly laxer: while we are making the adversary's job harder by randomizing the seed on each query, we are also making it easier by only requiring them to distinguish it from a per-query randomly-chosen random function, not a single fixed random function.

Question: Versus Random Oracle Model

How does assuming something is a WeirdPRF compare to assuming it is a random oracle? You can think of a WeirdPRF as a random oracle, except that when you query it, the first |r| bits of the input are chosen randomly and aren't under your control.
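That picture, as a two-line sketch (SHA-256 standing in for the random oracle, an assumption for illustration):

```python
import hashlib, secrets

def weird_prf_via_ro(m: bytes, r_bytes=16) -> bytes:
    r = secrets.token_bytes(r_bytes)          # the first |r| bits: random, not yours
    return hashlib.sha256(r + m).digest()     # SHA-256 standing in for the oracle
```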

My guess is that PRF doesn't imply WeirdPRF, and that assuming the existence of a WeirdPRF is only a slightly weaker assumption than working in the random oracle model.

How useful is this analysis?

Here I've tried to construct a strong computational hiding function from a PRF. I failed to do that, but does it really matter? In some sense, strong computational hiding is a "weaker" property than PRF security. For example, the function that always returns 0 is strong computational hiding (but isn't a PRF).

The only time you would want to know PRF => Strong Computational Hiding is if you were already assuming your function is a PRF and didn't want to additionally assume it's Strong Computational Hiding.

Is the many-query game necessary, or would a one-query game suffice?

Modify the "Strong Computational Hiding" game defined above by allowing the adversary only one oracle query. Call this the Weak Computational Hiding game. How do the strengths of commitment schemes under these two different notions of security relate?

It's obvious that an adversary that wins the Weak Computational Hiding game can be modified to win the Strong Computational Hiding game: just pass its one query to the oracle, and make no use of the ability to query the oracle more times. The resulting adversary has the same advantage.

Update: I realized later that what I wrote below is a transformation of one arbitrary adversary into another. The reduction does not assume the adversary runs in polynomial time. When I claim it's the "best reduction" you can give for an arbitrary adversary, that's still true assuming my proof sketch is correct; but if you assume the adversary uses polynomially-many queries then, as Matt Green pointed out to me, there's an argument you can use to get a better reduction.

What about the converse? Does an adversary that wins the Strong Computational Hiding game imply one that wins the Weak Computational Hiding game? To make reasoning easier, let's consider the non-adaptive version of the Strong Computational Hiding game: the adversary must provide all of their queries up front, and they all get answered at once.

Let Commit_r(m) be the commitment scheme under study. Suppose we have an adversary A_S which has advantage ADV_S in the (non-adaptive) Strong Computational Hiding game.

Define Flip_i(A_S) to be an adversary as follows:

  1. Run A_S, getting a sequence of tuples (its non-adaptive oracle queries) as output.
  2. Reverse tuple i in the list, i.e. change (u, v) to (v, u).
  3. Send the modified list to the oracle, and return the results to A_S.
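Because the game is non-adaptive, Flip_i only has to rewrite A_S's query list before it is sent; a minimal sketch (my modeling assumption: A_S is split into a query list and a finishing function that consumes the oracle's answers):

```python
def flip(i, queries):
    """Return A_S's query list with tuple i reversed, for Flip_i(A_S)."""
    flipped = list(queries)
    u, v = flipped[i]
    flipped[i] = (v, u)            # 2. change (u, v) to (v, u)
    return flipped                 # 3. this list is what goes to the oracle
```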

Now, let x be an index such that the advantage of Flip_x(A_S) is minimal in the non-adaptive Strong Computational Hiding game. Give Flip_x(A_S) the name A_MIN and denote its advantage by ADV_MIN. Define Δ = ADV_S - ADV_MIN. Note: since we chose ADV_MIN to be minimal, we are choosing Δ to be maximal.

Theorem: Given A_S in the Strong game there exists an adversary A_M with advantage Δ/2 in the Weak game.

Here's how you do it. Given A_S, define A_M as follows (a code sketch follows the list):

  1. Select a random bit b.
  2. Run A_S, getting a list of pairs as output.
  3. Send the pair at index x to the oracle.
  4. For all other indices, simply compute and return Commit_r(m_b) for a random r ourselves.
  5. Let b' be A_S's output bit. If b == b', output b. Otherwise, output the negation of b.
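Using the same query-list-plus-finisher modeling as the Flip sketch (commit and the index x are assumed given):

```python
import secrets

def AM(queries, finish, commit, x, weak_oracle, r_bytes=16):
    """A_M: one real Weak-game query at index x, everything else simulated."""
    b = secrets.randbelow(2)                        # 1. select a random bit b
    answers = []
    for i, (m0, m1) in enumerate(queries):          # 2. A_S's query list
        if i == x:
            answers.append(weak_oracle(m0, m1))     # 3. the single oracle query
        else:
            r = secrets.token_bytes(r_bytes)
            answers.append(commit(r, (m0, m1)[b]))  # 4. simulate Commit_r(m_b)
    b_prime = finish(answers)                       # 5. A_S's output bit b'
    return b if b == b_prime else 1 - b             #    b, or its negation
```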

Intuition: We've found the query whose reversal does the most "damage" to A_S. If our guess of b is correct, we aren't actually doing any damage, since the situation is identical from A_S's perspective. If our guess of b is wrong, then A_S's output bit is most likely to be independent of b, making us more likely to output the negation of b.

More formally, consider two possible cases. In the first case, our guess of b in step 1 is correct; the simulation is then exactly the Strong game, so the probability that b == b' (and we output b, correctly) is ADV_S + 1/2. In the second case, our guess of b in step 1 is wrong; from A_S's perspective the query at index x is the reversed one, so A_S is effectively playing against Flip_x, and the probability we output the negation of b is the same as the probability that A_MIN is wrong, which is 1 - (ADV_MIN + 1/2). Substituting the definition of Δ we get

1 - (ADV_MIN + 1/2)
= 1 - ADV_MIN - 1/2
= 1 - (ADV_S - Δ) - 1/2
= 1/2 + Δ - ADV_S.

The first case happens with probability 1/2 and the second case happens with probability 1/2, so A_M's overall probability of guessing correctly is:

1/2 (ADV_S + 1/2) + 1/2 (1/2 + Δ - ADV_S)
= 1/2 ADV_S - 1/2 ADV_S + 1/4 + 1/4 + Δ/2
= 1/2 + Δ/2.

The advantage is therefore (1/2 + Δ/2) - 1/2 = Δ/2. QED. (Note: this is an existence proof, not a constructive proof, since it assumes you know the advantage-minimizing index x; in reality you'd have to find that or guess it somehow.)

There is good reason to believe this is the best reduction we can find, at least without making more assumptions about A_S. What follows is some evidence for that, not a proof, but I think it can be turned into a proof.

Let R(x) be a random oracle. Define Commit_r(m) = R(r || m). Intuitively, the best strategy to break this scheme in the Weak game is to query (m_0, m_1) where m_0 != m_1 and then use your remaining time to brute-force guess r. If the adversary can run for T steps (assuming T << 2^|r|), this has advantage at most T/2^|r| (I'm claiming this is the best any adversary can do against this scheme in the Weak game). Also, one possible strategy in the Strong game is (sketched in code after the list):

  1. Fix a pair (m_0, m_1) where m_0 != m_1.
  2. Send (m_0, m_1) T/2 times and (m_0, m_0) T/2 times.
  3. If there's a collision between the first half of the responses and the second half, output 0. Otherwise output 1.
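Reusing the earlier oracle convention (the message choices and T are illustrative):

```python
def strong_collision_adversary(oracle, T, m0=b"0", m1=b"1"):
    """The birthday-style Strong-game adversary against Commit_r(m) = R(r || m)."""
    first = {oracle(m0, m1) for _ in range(T // 2)}    # commitments to m_b
    second = {oracle(m0, m0) for _ in range(T // 2)}   # commitments to m_0
    # A cross-half collision means some r repeated and both halves committed
    # to the same message, which only happens when b = 0.
    return 0 if first & second else 1
```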

This adversary wins with ~100% probability if r collides between the first half and the second half of the queries, and 50% probability if not. So the advantage is the probability of r colliding between the two halves. Reversing any single one of the queries removes (T/2) pairs that could have collided, reducing the probability by (T/2)/2^|r|, so Δ = (T/2)/2^|r|. (***)

Applying the proof of the theorem to the above adversary, we get the existence of an adversary with advantage Δ/2 = 1/4 * T/2^|r| in the Weak game. This is only a factor of 4 less than the best possible attack. Therefore, if there were a better reduction, one that improved on this by more than a factor of 4 (i.e. gave an advantage of more than 2Δ), then we would have a contradiction, because applying that reduction would give us an attack on this specific commitment scheme that's better than (what we're assuming can be proven to be) the best possible attack on this specific commitment scheme.

In conclusion, you can turn an adversary for the Strong game into one for the Weak game if and only if its Δ (or rather, Δ/2) is non-negligible.

TODO: This isn't proving what I think it is. See: zcash/zcash#792 (comment)

TODO: Rewrite the last part in a rigorous way (prove security in the random oracle model and show that a better reduction would contradict that proof).

TODO: Fix this up to work with the adaptive game not just the non-adaptive variant.

TODO: At (***): It's actually even better: reversing the query makes the guess wrong with ~100% probability if one of the collisions happened there, so it reduces the probability by T/2^|r|, and so we get 1/2 * T/2^|r| in the result instead of 1/4 * T/2^|r|.

TODO: It should be possible to always just use a random index instead of maximizing Δ over indices. The intuition is that given some adversary, you can modify that adversary so that it knows its "x index" and actively tries to hide it by randomizing which position it puts it in, as though it's trying to prevent itself from being used in this reduction. Any generic reduction must work for such index-hiding adversaries. We can maybe use this to argue that choosing a random index is no worse in general than choosing the Δ-maximizing one. (For any adversary you can make one with the same advantage in the Strong game that does this index-hiding thing... then a miracle occurs... therefore it doesn't matter.)

TODO: Instead of talking about the reduction "making an assumption about the adversary", talk about "non-relativizing" reductions, in the same sense that a proof of P = NP or P != NP must be non-relativizing. Actually, these are called "fully black-box reductions"; see https://eprint.iacr.org/2013/101.pdf.

TODO: What if your "one query" is a quantum state, and you get back a quantum state!?

@daira commented Apr 4, 2016:

> Question: Does PRF imply WeirdPRF?
>
> If WeirdPRF implies PRF, [...]

The last sentence here is the wrong way round.
