Suppose we have `new_key` and `new_proof` such that:

- `pok_verify(new_key, new_key_index, new_proof)` passes
- `bls_verify(aggregate_key, aggregate_signature, message)` passes

where `new_key`, `new_proof` and `message` are the only variables in our control, and `aggregate_key` and `aggregate_signature` are the respective "sums" of all the keys / signatures involved, including our own. One can interpret `new_key` as a rogue key (denoted by $k_{n+1}$), meaning:

- $k_{n+1}$ does not belong to $\{k_1 \dots k_n\}$.
- $k_{n+1}$ may be a function of the $\{k_1 \dots k_n\}$ - in other words, we may be able to use the other validators' public keys to construct our own.
- $k_{n+1}$ can be used to create valid signatures even though we are not a validator.
The puzzle's architecture attempts to combat the creation of rogue keys by requiring that each validator prove that their public key belongs to some validator in the network. As you saw in the code, each key has a corresponding key index, which is used in the call to `pok_verify(...)`. Let's look at this call in more detail:
```rust
public_keys.iter().enumerate().for_each(|(i, (pk, proof))| pok_verify(*pk, i, *proof));
```
In other words - for each key index, public key, and corresponding proof: compute `pok_verify` using the key, index, and proof. Let's look at what this iterator evaluates for each public key in `public_keys` via `pok_verify`:
```rust
fn pok_verify(pk: G1Affine, i: usize, proof: G2Affine) {
    assert!(Bls12_381::multi_pairing(
        &[pk, G1Affine::generator()],
        &[derive_point_for_pok(i).neg(), proof]
    )
    .is_zero());
}
```
Note that if you search for the trait-specific definition of `is_zero`, it actually calls a function called `is_one`. So it is not checking whether the pairing product equals the additive identity; it is checking whether the pairing product is the multiplicative identity of the target group.
`pok_verify` computes an elliptic curve pairing check using the current public key and corresponding proof. It uses a pairing function $e : G_1 \times G_2 \to G_T$, where $G_1$ and $G_2$ are the two groups of BLS12-381 points and $G_T$ is a multiplicative group.

A crucial property of $e$ is bilinearity. Consider an elliptic curve point $P$ and a scalar $a$. Let $[a]P$ denote scalar multiplication of $P$ by $a$. Then for scalars $a, b$ and points $P \in G_1$, $Q \in G_2$:

$$e([a]P, Q) = e(P, Q)^a = e(P, [a]Q)$$

and more generally,

$$e([a]P, [b]Q) = e(P, Q)^{ab}.$$

Going back to `pok_verify` and using our previous notation, we see that the following pairing check is computed:

$$e(pk_i,\, -[i+1]R) \cdot e(g_1,\, \pi_i) = 1$$

where we assume here that $g_1$ is the $G_1$ generator, $\pi_i$ is validator $i$'s proof, and $R$ is the fixed $G_2$ point used inside `derive_point_for_pok` (so that `derive_point_for_pok(i)` returns $[i+1]R$).

What was the point of this theoretical nonsense? The bilinearity of $e$ tells us the check passes exactly when $\pi_i = [(i+1)\,sk_i]R$, where $pk_i = [sk_i]g_1$. We cannot learn $sk_i$ itself, but from the published proof we can recover $[sk_i]R$, a "hiding" of validator $i$'s secret key, by multiplying $\pi_i$ by $(i+1)^{-1}$. We will see this in more detail shortly.
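Bilinearity can be made concrete with a toy model. The sketch below is NOT a real pairing: it fakes $G_1$ and $G_2$ as the additive group $\mathbb{Z}_n$ (a point $[a]P$ is just the scalar $a$) and $G_T$ as an order-$n$ subgroup of $\mathbb{Z}_q^*$, with entirely made-up parameters; it exists only to exercise the identity $e([a]P, [b]Q) = e(P, Q)^{ab}$:

```rust
// Toy demonstration of bilinearity. NOT real cryptography: group
// elements are plain scalars, and the "pairing" e(x, y) = g^(x*y)
// lands in a multiplicative subgroup of order N inside Z_Q^*.
const N: u64 = 101; // toy group order (prime)
const Q: u64 = 607; // field modulus, chosen so that N divides Q - 1
const G: u64 = 64;  // element of order N in Z_Q^* (2^((Q-1)/N) mod Q)

// Square-and-multiply modular exponentiation.
fn pow_mod(mut base: u64, mut exp: u64, m: u64) -> u64 {
    let mut acc = 1;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % m; }
        base = base * base % m;
        exp >>= 1;
    }
    acc
}

// e: Z_N x Z_N -> <G>, e(x, y) = G^(x*y mod N) mod Q. Bilinear by construction.
fn pairing(x: u64, y: u64) -> u64 {
    pow_mod(G, x * y % N, Q)
}

fn main() {
    let (a, b) = (17, 29);
    // Bilinearity: e([a]P, [b]Q) = e(P, Q)^(a*b), with P = Q = 1 here.
    let lhs = pairing(a, b);
    let rhs = pow_mod(pairing(1, 1), a * b % N, Q);
    assert_eq!(lhs, rhs);
    println!("bilinearity holds: {lhs} == {rhs}");
}
```

The point of the toy is only that scalar multiplication in either input slot moves into the exponent of the output, which is exactly the property the attack exploits.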
Despite these advancements, we still have not found a way to pass `bls_verify`. Let's look at the call to it:
```rust
bls_verify(aggregate_key, aggregate_signature, message);
```
where
```rust
fn bls_verify(pk: G1Affine, sig: G2Affine, msg: &[u8]) {
    assert!(Bls12_381::multi_pairing(
        &[pk, G1Affine::generator()],
        &[hasher().hash(msg).unwrap().neg(), sig]
    )
    .is_zero());
}
```
and
```rust
let aggregate_key = public_keys
    .iter()
    .fold(G1Projective::from(new_key), |acc, (pk, _)| acc + pk)
    .into_affine();
```

and `aggregate_signature` is left to us.
Another pairing check, but different. This time, we need to add our public key to all the other keys, then we perform a check with this aggregate key and whatever aggregate signature we come up with. Note that before we sign a message, it is first hashed to a point in $G_2$ via `hasher().hash(msg)`.

There is one glaring issue with this pairing check, which is related to the previous one. If we use all the secret keys (via the aggregate public key derived from all the secret keys) on the left-hand side of the pairing check equation, then it follows from the bilinearity of $e$ that the aggregate signature on the right-hand side must be the message hash multiplied by the sum of all those secret keys. But producing a signature requires knowing the secret key:
```rust
fn bls_sign(sk: Fr, msg: &[u8]) -> G2Affine {
    hasher().hash(msg).unwrap().mul(sk).into_affine()
}
```
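For reference, here is why an honestly produced signature passes the check, writing $H(m)$ for the $G_2$ hash of the message and $sk$ for a secret key. Bilinearity gives

$$e\big([sk]g_1,\ H(m)\big) = e\big(g_1,\ H(m)\big)^{sk} = e\big(g_1,\ [sk]\,H(m)\big),$$

so `bls_verify` succeeds exactly when the signature is the message hash scaled by the secret key behind the (aggregate) public key.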
We only have the secret key we decide to make, and thus the aggregate signature can only include our key. Assuming we have our message $m$, signing it gives

$$\sigma = [sk]\,H(m)$$

where $H$ is the hash-to-$G_2$ function and $sk$ is the secret key we chose. Note that we have control over `new_key`, which is folded into the aggregate key. As we control the value of `new_key`, we can set

$$\mathit{new\_key} = [sk]g_1 - \sum_{i=1}^{n} pk_i$$

so that the aggregate key collapses to $[sk]g_1$. It's pretty clear that substituting this value for `new_key` lets our lone signature $\sigma$ pass `bls_verify`. But we still need to pass `pok_verify`, which precedes `bls_verify`. To do this, we refer back to our previous insights. Recall that we found out how to compute the hiding of any honest validator's secret, $[sk_i]R$, from its proof. `pok_verify` is checking that, given the rogue key we constructed to pass `bls_verify`:
$$e(\mathit{new\_key},\ [n+1]R) = e(g_1,\ \mathit{new\_proof})$$

where $n$ is `new_key_index`. Again, because the secret behind $\mathit{new\_key}$ is $sk - \sum_i sk_i$, bilinearity forces

$$\mathit{new\_proof} = \Big[(n+1)\Big(sk - \sum_i sk_i\Big)\Big]R.$$

We can build exactly this point. Each $[sk_i]R$ comes from scaling validator $i$'s proof by $(i+1)^{-1}$ (using the 0-based index from the code), and $[sk]R$ comes from calling `derive_point_for_pok` with the right value of `i` - namely, `new_key_index` - multiplying by our secret key, and multiplying this result by the multiplicative inverse of `new_key_index + 1`. The `+ 1` there is because the proof keys are 0-indexed in the code, but `derive_point_for_pok` adds 1 to its input within the computation:
```rust
fn derive_point_for_pok(i: usize) -> G2Affine {
    let rng = &mut ark_std::rand::rngs::StdRng::seed_from_u64(20399u64);
    G2Affine::rand(rng).mul(Fr::from(i as u64 + 1)).into()
}
```
Note that the point used here is not the standard `G2Affine::generator()` - it's a different but fixed point $R$. (These groups should have prime order, so any non-identity element is a generator; this follows from Lagrange's theorem, if you're familiar.)
Now we know how to build a rogue key that passes `bls_verify`, and a proof for it that passes `pok_verify`. In the code, this looks as follows.
```rust
let new_key = public_keys
    .iter()
    .fold(
        G1Projective::generator().mul(secret_key),
        |acc, (pk, _)| acc.sub(pk)
    )
    .into_affine();
```
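The cancellation behind this fold can be sanity-checked with plain scalar arithmetic on the hidden exponents (in the real attack we never see the $sk_i$; we subtract the points $pk_i$, which has the same effect on the exponents). The secret keys and modulus below are made-up toy values:

```rust
// Toy check of the rogue-key cancellation, working directly on the
// exponents (secret keys) modulo a made-up group order.
const N: u64 = 101; // toy group order

fn main() {
    let honest_sks: [u64; 3] = [13, 42, 77]; // hypothetical validator secrets
    let sk: u64 = 64;                        // our chosen secret key

    // new_key's hidden exponent: sk - sum(sk_i)  (mod N)
    let new_key_exp = honest_sks
        .iter()
        .fold(sk, |acc, s| (acc + N - s % N) % N);

    // aggregate key's exponent: adding back every sk_i collapses to sk
    let aggregate_exp = honest_sks
        .iter()
        .fold(new_key_exp, |acc, s| (acc + s) % N);

    assert_eq!(aggregate_exp, sk % N);
    println!("aggregate exponent = {aggregate_exp} (our sk alone)");
}
```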
In other words: start an accumulator at $[sk]g_1$, then subtract each validator's public key. `new_proof` is computed in a similar manner:
```rust
let new_proof = public_keys
    .iter()
    .enumerate()
    .fold(
        derive_point_for_pok(new_key_index)
            .mul(secret_key)
            .mul(Fr::from(new_key_index as u64 + 1).inverse().unwrap()),
        |acc, (i, (_, proof))| {
            let scaled_proof = proof.mul(Fr::from(i as u64 + 1).inverse().unwrap());
            acc.sub(scaled_proof)
        }
    )
    .mul(Fr::from(new_key_index as u64 + 1))
    .into_affine();
```
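The same kind of toy exponent check works for the proof: honest proof $i$ (0-indexed) hides the exponent $(i+1)\,sk_i$, and the fold above should end with the exponent $(n+1)(sk - \sum_i sk_i)$. Again, all scalars and the modulus below are made-up:

```rust
// Toy check of the new_proof construction on exponents mod a prime N.
// Honest proof i hides (i + 1) * sk_i; we rescale by the inverse of
// (i + 1), subtract, then scale everything by (n + 1).
const N: u64 = 101; // toy prime group order

fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut acc = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

// Modular inverse via Fermat's little theorem (valid since N is prime).
fn inv(x: u64) -> u64 { pow_mod(x, N - 2, N) }

fn main() {
    let honest_sks: [u64; 3] = [13, 42, 77]; // hypothetical validator secrets
    let sk: u64 = 64;                        // our chosen secret key
    let n = honest_sks.len() as u64;         // plays the role of new_key_index

    // Exponents hidden inside the honest proofs: (i + 1) * sk_i
    let proofs: Vec<u64> = honest_sks
        .iter()
        .enumerate()
        .map(|(i, s)| (i as u64 + 1) * s % N)
        .collect();

    // Mirror the fold: start at sk (= (n + 1) * sk * (n + 1)^-1),
    // subtract each rescaled proof, then multiply by n + 1.
    let forged = proofs
        .iter()
        .enumerate()
        .fold(sk % N, |acc, (i, p)| {
            let scaled = p * inv(i as u64 + 1) % N; // recovers sk_i
            (acc + N - scaled) % N
        })
        * (n + 1) % N;

    // Expected exponent: (n + 1) * (sk - sum sk_i)
    let diff = honest_sks.iter().fold(sk % N, |acc, s| (acc + N - s % N) % N);
    assert_eq!(forged, (n + 1) * diff % N);
    println!("forged proof exponent matches: {forged}");
}
```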
In other words: create an accumulator variable starting with $[sk]R$ (that is, `derive_point_for_pok(new_key_index)` multiplied by our secret key and by the inverse of `new_key_index + 1`), subtract each validator's rescaled proof $[sk_i]R$, and finally multiply the whole result by `new_key_index + 1`.
And lastly, our aggregate signature - which is just a message of our choice signed with our secret key:
```rust
// defined before anything else in the actual code, but included here for reference
let secret_key = Fr::from(64u64);
let aggregate_signature = bls_sign(secret_key, message);
```
And that's it! I hope this was helpful.