ZKHack IV - Supervillain Puzzle Writeup

Objective

Suppose we have $n$ honest validators in a network, and each has their own public key and proof-of-key. This data is given in the file the puzzle reads from. For simplicity, we denote the $i$-th public key by $k_i$ and the $i$-th proof by $p_i$. We would like to create a new_key and new_proof such that:

pok_verify(new_key, new_key_index, new_proof) passes

bls_verify(aggregate_key, aggregate_signature, message) passes

where new_key, new_proof and message are the only variables in our control, and aggregate_key and aggregate_signature are the respective "sums" of all the keys / signatures involved, including our own. One can interpret new_key as a rogue key (denoted by $k_{n+1}$) which has the following traits:

  1. $k_{n+1}$ does not belong to $\{k_1, \dots, k_n\}$.

  2. $k_{n+1}$ may be a function of $\{k_1, \dots, k_n\}$ - in other words, we may be able to use the other validators' public keys to construct our own.

  3. $k_{n+1}$ can be used to create valid signatures even though we are not a validator.

The puzzle's architecture attempts to combat the creation of rogue keys by requiring each validator to supply a proof of knowledge for their public key, bound to their key index in the network. As you saw in the code, each key has a corresponding key index, which is used in the call to pok_verify(...). Let's look at this call in more detail:

public_keys.iter().enumerate().for_each(|(i, (pk, proof))| pok_verify(*pk, i, *proof));

In other words - for each key index, public key, and corresponding proof: compute pok_verify using the key, index, and proof. Let's look at what this iterator evaluates for each public key in public_keys via pok_verify:

fn pok_verify(pk: G1Affine, i: usize, proof: G2Affine) {
    assert!(Bls12_381::multi_pairing(
        &[pk, G1Affine::generator()],
        &[derive_point_for_pok(i).neg(), proof]
    )
    .is_zero());
}

Note that if you search for the trait-specific definition of is_zero on the pairing output, you'll find that it actually calls a function named is_one. So the assertion is not checking that the pairing product equals an additive identity; it's checking that the pairing product is the multiplicative identity of the target group.

pok_verify computes an elliptic curve pairing check using the current public key and corresponding proof. It uses a pairing function $e\colon \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_t$, where $\mathbb{G}_1$ and $\mathbb{G}_2$ are cyclic elliptic curve groups (basically sets of elliptic curve points that are closed under elliptic curve addition, have an identity element, and have an additive inverse for every constituent point) and $\mathbb{G}_t$ is a multiplicative target group (loosely speaking, it behaves like the group of nonzero elements of a finite field under multiplication).

A crucial property of $e$ is its bilinearity. We use bilinear functions all the time; when I do $2 \times 5$, I've just evaluated $f(2, 5)$, where $f$ is the bilinear function $f(x,y) = xy$. If I double $x$ (from 2 to 4) by computing $f(4, 5)$, I get twice the original result (20 instead of 10). The same happens if I double $y$ (from 5 to 10) instead of $x$. Just as I took a pair of integers $(2, 5)$ and mapped them to another number (10) in a way that resembles multiplication, I can do such a thing with pairs of elliptic curve points using $e$.

Consider an elliptic curve point $A \in \mathbb{G}_1$ and a point $B \in \mathbb{G}_2$. $e$ has a useful property (bilinearity), among others we will not discuss here:

Let $a, b \in \mathbb{F}$ where $\mathbb{F}$ is some scalar field. Then for the pair of elliptic curve points $(A, B) \in \mathbb{G}_1 \times \mathbb{G}_2$, we have that

$$ e(aA, B) = e(A, aB) = e(A, B)^a $$

and more generally,

$$ e(aA, bB) = e(aA, B)^b = e(A, bB)^a = e(A, B)^{ab} = e(bA, aB) $$
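
To make this concrete, here is a minimal sketch of the bilinearity property using the arkworks crates the puzzle already depends on (ark-bls12-381, ark-ec, ark-std); the variable names here are illustrative, not part of the puzzle code:

use ark_bls12_381::{Bls12_381, Fr, G1Projective, G2Projective};
use ark_ec::{pairing::Pairing, CurveGroup, Group};
use ark_std::UniformRand;

let mut rng = ark_std::test_rng();
let (a, b) = (Fr::rand(&mut rng), Fr::rand(&mut rng));
let g = G1Projective::generator();
let h = G2Projective::generator();

// e(aG, bH) should equal e(G, H)^{ab}. arkworks writes the target group
// additively, so "raising to the power ab" appears as multiplication by a * b.
assert_eq!(
    Bls12_381::pairing((g * a).into_affine(), (h * b).into_affine()),
    Bls12_381::pairing(g.into_affine(), h.into_affine()) * (a * b)
);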

Going back to pok_verify and using our previous notation, we see that the following pairing checks are computed:

$$ e(k_1, 1H) = e(G, p_1) $$

$$ e(k_2, 2H) = e(G, p_2) $$

$$ \cdots $$

$$ e(k_n, nH) = e(G, p_n) $$

where we assume here $i$ starts at 1 instead of 0 for simplicity (the actual code starts $i$ at zero). We also assume that $G$ and $H$ are generator points of $\mathbb{G}_1$ and $\mathbb{G}_2$, respectively.
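
To connect these equalities back to the code: multi_pairing multiplies the two pairings together, and the key is paired against the negated derive_point_for_pok output, so asserting that the product is the multiplicative identity is exactly asserting the equalities above:

$$ e(k_i, -iH) \cdot e(G, p_i) = e(k_i, iH)^{-1} \cdot e(G, p_i) = 1 \iff e(k_i, iH) = e(G, p_i) $$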

What was the point of this theoretical nonsense? The bilinearity of $e$ tells us that if we scale a parameter of $e$ on the left hand side, it will "scale" the LHS by that amount, so we have to scale one of the parameters of $e$ on the right hand side of the equation by the same amount to maintain the equality. Remember that each $k_i$ is a public key derived from a secret key $s_i$ via $k_i = s_i G$, where $G$ is a generator point of the EC group $\mathbb{G}_1$. Looking at the general case, we see that

$$ e(s_iG, iH) = e(G, p_i) $$

We have $s_i$ and $i$ both in the left pairing, but we can't see either of them in the right pairing. We cannot modify $G$ in the code, so we do not consider that here. It must then follow from the bilinearity of $e$ that $p_i = (s_i \cdot i)H$ for the equality to hold. From this follows a crucial fact: we can compute the hiding of any honest validator's secret key in $\mathbb{G}_2$ - just multiply $p_i$ by $i^{-1}$ to get $s_iH$. So if we want to construct a rogue public key based on the other validators' public keys (which puts their secret keys on the left side of the pairing check), we now have the ability to put the hidden secret keys of the validators on the right hand side of the pairing check equation to maintain equality and pass pok_verify. We will see this in more detail shortly.
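
As a sketch of that fact in code (0-indexed, as in the puzzle, so the factor to strip is $i+1$ rather than $i$; the function name is just for illustration):

use ark_bls12_381::{Fr, G2Affine};
use ark_ec::CurveGroup;
use ark_ff::Field;
use std::ops::Mul;

// Given the 0-indexed position i and the proof p_i from the puzzle's public_keys,
// recover s_i * H in G2: since p_i = s_i * (i + 1) * H in the code's convention,
// multiplying by (i + 1)^{-1} strips the index factor and leaves the hiding of
// the secret key.
fn hidden_secret_in_g2(i: usize, proof: G2Affine) -> G2Affine {
    let inv = Fr::from(i as u64 + 1).inverse().unwrap();
    proof.mul(inv).into_affine()
}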

Despite these advancements, we still have not found a way to pass bls_verify. Let's look at the call to it:

bls_verify(aggregate_key, aggregate_signature, message);

where

fn bls_verify(pk: G1Affine, sig: G2Affine, msg: &[u8]) {
    assert!(Bls12_381::multi_pairing(
        &[pk, G1Affine::generator()],
        &[hasher().hash(msg).unwrap().neg(), sig]
    )
    .is_zero());
}

and

let aggregate_key = public_keys
       .iter()
       .fold(G1Projective::from(new_key), |acc, (pk, _)| acc + pk)
       .into_affine();

and the aggregate signature is left to us.

Another pairing check, but different. This time, we need to add our public key to all the other keys, then we perform a check with this aggregate key and whatever aggregate signature we come up with. Note that before we sign a message $m$, we first hash it to produce $h(m) \in \mathbb{G}_2$. For a signer with secret key $s_i$, the signature $\sigma_i$ is just $s_i \cdot h(m) \in \mathbb{G}_2$.
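
As with pok_verify, the multi_pairing assertion here reduces to an equality of pairings; writing $k_{\text{agg}}$ for the aggregate key and $\sigma_{\text{agg}}$ for the aggregate signature we supply, it checks that

$$ e(k_{\text{agg}}, -h(m)) \cdot e(G, \sigma_{\text{agg}}) = 1 \iff e(k_{\text{agg}}, h(m)) = e(G, \sigma_{\text{agg}}) $$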

There is one glaring issue with this pairing check which is related to the previous one. If we use all the secret keys (via the aggregate public key derived from all the secret keys) on the left hand side of the pairing check equation, then it follows from bilinearity of $e$ that we need to use all the secret keys on the pairing function in the right hand side of the pairing check equation. But we don't have any of the secret keys of the honest validators. We have their hidings, but that's not helpful here because signing a message requires a raw secret key, as shown in the code:

fn bls_sign(sk: Fr, msg: &[u8]) -> G2Affine {
    hasher().hash(msg).unwrap().mul(sk).into_affine()
}

We only have the secret key we decide to make, and thus, the aggregate signature can only include our key. Assuming we have our message $m$, let's pick a secret key and call it $s_r$. It's some scalar field element. Then our aggregate signature would look as follows:

$$ \sigma_r = s_r \cdot h(m) $$

where $h(m) \in \mathbb{G}_2$. The issue is probably clearer now - the aggregate signature on the right hand side of the pairing check equation contains our secret key $s_r$, while the left hand side of the pairing check equation contains the secret keys of all the validators and our own (via the derived public keys). This looks as follows:

$$ e(k_1 + k_2 + \dots + k_n + k_{n+1}, h(m)) = e(G, s_r h(m)) $$

Note that we have $n+1$ keys here because, as mentioned, this pairing check uses the sum of all keys available, including our own; we are the $(n+1)$-th key. However, if we just had $k_r = s_rG$ (the public key for our chosen secret key) here instead of that sum of keys, this pairing check would pass, as we've just signed this message with our own secret key:

$$ e(k_r, h(m)) = e(s_rG, h(m)) = e(G, s_r h(m)) $$

As we control the value of $k_{n+1}$, we need to make it cancel out the other validator public keys for this pairing check (keeping our own). We can do so by simply subtracting the keys of the other validators and adding our own public key $k_r$:

$$ k_{n+1} = k_r - k_1 - k_2 - \dots - k_n = k_r - \sum_{i=1}^n k_i $$
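
Substituting this choice into the aggregate key, the honest validators' keys cancel and only our key survives:

$$ \sum_{i=1}^{n} k_i + k_{n+1} = \sum_{i=1}^{n} k_i + \left(k_r - \sum_{i=1}^{n} k_i\right) = k_r $$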

Substituting this value for $k_{n+1}$ therefore gives exactly the pairing check we concluded would pass. We are not done, however, because we still have not produced a proof that $k_{n+1}$ is a valid key, which we need in order to pass pok_verify (it precedes bls_verify). To do this, we refer back to our previous insights. Recall that we found out how to compute the hiding of any honest validator's secret key in $\mathbb{G}_2$. Again, pok_verify is checking that, for the rogue key $k_{n+1}$ we just constructed to pass bls_verify:

$$ e(k_r - \sum_{i=1}^n k_i, (n+1)H) = e(G, p_{n+1}) $$

where $p_{n+1}$ is the proof for the public key $k_{n+1}$. Again, expressing the public keys in their true form using the secret keys, we get

$$ e((s_r - \sum_{i=1}^n s_i)G, (n+1)H) = e(G, p_{n+1}) $$

Again, because $e$ is bilinear, we would need to have $(s_r - s_1 - \cdots - s_n)$ and $n+1$ appear in the pairing function on the right hand side of the equation so the LHS and RHS get scaled by the same amount. We need to use these ideas and previous ones to compute $p_{n+1}$. Recall how we found that $p_i = i \cdot s_i \cdot H \in \mathbb{G}_2$. Based on the work directly above and the fact that we are forced to use this key combination in pok_verify, we need $p_{n+1} = (n+1)(s_r - s_1 - \cdots - s_n)H$ inside the call to $e$ on the RHS for the equality to be maintained. This is where the fact we found when analyzing pok_verify - namely, that $s_iH = (i^{-1})p_i$ - is helpful, because we can now compute

$$ p_{n+1} = (n+1)(s_rH - s_1H - \dots - s_nH) = (n+1)(s_rH - (1^{-1})p_1 - (2^{-1})p_2 - \dots - (n^{-1})p_n) $$

where $i^{-1}$ denotes the multiplicative inverse of $i$. Note that we can compute $s_rH$ by calling derive_point_for_pok with the right value of i - namely, new_key_index - then multiplying the result by our secret key $s_r$ and by the multiplicative inverse of new_key_index + 1. The + 1 there is because the proof keys are 0-indexed in the code, but derive_point_for_pok adds 1 to its input within the computation:

fn derive_point_for_pok(i: usize) -> G2Affine {
    let rng = &mut ark_std::rand::rngs::StdRng::seed_from_u64(20399u64);
    G2Affine::rand(rng).mul(Fr::from(i as u64 + 1)).into()
}

Note that the generator point used here is not the standard G2Affine::generator() - it's a different but fixed generator point (these groups have prime order, so any non-identity element sampled is a generator point; this follows from Lagrange's theorem if you're familiar).
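
Putting those two notes together, a minimal sketch of the $s_rH$ computation (reusing the puzzle's derive_point_for_pok, secret_key, and new_key_index; the variable name s_r_h is just for illustration):

// derive_point_for_pok(new_key_index) = (new_key_index + 1) * H, so multiplying
// by our secret key and by (new_key_index + 1)^{-1} leaves s_r * H.
let s_r_h = derive_point_for_pok(new_key_index)
    .mul(secret_key)
    .mul(Fr::from(new_key_index as u64 + 1).inverse().unwrap());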

Now we have a rogue key $k_{n+1}$ that will pass verification when combined with other keys in bls_verify, and we have a proof $p_{n+1}$ which will allow us to pass pok_verify. In the code, this looks as follows.

let new_key = public_keys
    .iter()
    .fold(
        G1Projective::generator().mul(secret_key),
        |acc, (pk, _)| acc.sub(pk),
    )
    .into_affine();

In other words: start an accumulator at $s_rG$ and subtract each other validator's public key $k_i$ from it, until we are left with the value of $k_{n+1}$ that we formalized earlier. new_proof is computed in a similar manner:

let new_proof = public_keys
    .iter()
    .enumerate()
    .fold(
        derive_point_for_pok(new_key_index)
            .mul(secret_key)
            .mul(Fr::from(new_key_index as u64 + 1).inverse().unwrap()),
        |acc, (i, (_, proof))| {
            let scaled_proof = proof.mul(Fr::from(i as u64 + 1).inverse().unwrap());
            acc.sub(scaled_proof)
        },
    )
    .mul(Fr::from(new_key_index as u64 + 1))
    .into_affine();

In other words: create an accumulator variable starting with $s_rH$, and subtract each $(i^{-1})p_i = s_iH$ from it. Then multiply the result by $n+1$ to get a proof consistent with the new key index. This gives us the proof we formalized earlier.

And lastly, our aggregate signature - which is just a message of our choice signed with our secret key:

//defined before anything else in the actual code, but I just included it here for reference
let secret_key = Fr::from(64 as u64);
let aggregate_signature = bls_sign(secret_key, message);
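
For completeness, these values should now make the puzzle's own checks (the same calls shown earlier) pass:

pok_verify(new_key, new_key_index, new_proof);
bls_verify(aggregate_key, aggregate_signature, message);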

And that's it! I hope this was helpful.
