@nickfarrow
Last active May 13, 2024 05:03

Modifying FROST Signers and Threshold

FROST's distributed key generation involves N parties each creating a secret polynomial and sharing evaluations of this polynomial with the other parties to create a distributed FROST key.

The final FROST key is described by a joint polynomial, whose value at x=0 is the jointly shared secret s = f(0). Each participant controls a single point on this polynomial, at their participant index.

The degree T-1 of the polynomials determines the threshold T of the multisignature, since T points are required to interpolate the joint polynomial and compute evaluations under the joint secret.

T parties can interact in order to interpolate evaluations under the secret f(0) without ever actually reconstructing this secret in one place (unlike plain Shamir Secret Sharing, where you have to reconstruct the secret).
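As a toy illustration (using a small prime field in place of the actual scalar field; Python for readability, and certainly not the FROST implementation itself), here is the Lagrange interpolation that lets any T points determine f(0). In FROST proper, each signer multiplies their own share by their Lagrange coefficient inside the signing protocol, so f(0) never materializes in one place:

```python
# Toy illustration, not production code: recovering f(0) from T shares
# over a small prime field.
P = 2**31 - 1  # small prime modulus, standing in for the real scalar field

def lagrange_coeff(i, indices, x=0, p=P):
    """Lagrange coefficient for participant index i, evaluated at x."""
    num, den = 1, 1
    for j in indices:
        if j != i:
            num = num * (x - j) % p
            den = den * (i - j) % p
    return num * pow(den, -1, p) % p

def interpolate(shares, x=0, p=P):
    """Evaluate the degree T-1 polynomial at x, given T points {i: f(i)}."""
    indices = list(shares)
    return sum(shares[i] * lagrange_coeff(i, indices, x, p) for i in indices) % p

# Joint polynomial f(x) = 5 + 3x + 7x^2, so T = 3 and s = f(0) = 5.
f = lambda x: (5 + 3 * x + 7 * x * x) % P
shares = {i: f(i) for i in (1, 2, 4)}  # any T of the N participant points
assert interpolate(shares) == 5        # T points pin down the joint secret
```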


We wonder: is it possible to change the number of signers N, and to change the threshold T, after keygen has been completed? And importantly, can these changes be made by a threshold number of signers, as opposed to requiring the consent of all N signers? (Imagine losing a FROST secret keyshare and wanting to reissue another!)

Much of our investigation has led to ideas which have already been fleshed out in the secret sharing literature.

Note that the methods mentioned here are not fully specified; we still need to investigate which are proven secure and most appropriate for our purpose, and each may come with caveats.

Decreasing N: Removing a Signer

We can turn a t of n into a t of (n-1) if we can trust one user to delete their secret keyshare (make sure n>t!).

If we cannot reliably trust the party to delete their secret keyshare, we can go further and render the revoked secret keyshare incompatible with future multisignature participation.

We can do this using some form of proactive secret sharing:

Shares are periodically renewed (without changing the secret) in such a way that information gained by the adversary in one time period is useless for attacking the secret after the shares are renewed

See Proactive Secret Sharing Or: How to Cope With Perpetual Leakage and Proactive Secret Sharing on Wikipedia.

Overview: we create a new joint polynomial with the same joint secret, and then ask all n-1 remaining participants to delete their old secret keyshares (their points on the old joint polynomial). Note that if t-1 of the n-1 remaining parties feign deletion and collude, they could still sign together with the removed party (t old shares in total).

To create a new joint polynomial with the same public key and joint secret, we redo keygen with the n-1 parties. Each participant reuses the first coefficient from their original keygen polynomial and samples the remaining coefficients at random. This produces a joint polynomial with the same joint secret, but with every other point different and incompatible with the previous keyshares.
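A minimal sketch of this refresh, assuming a toy prime field and omitting the commitments and verification a real keygen would include:

```python
# Sketch over a toy prime field: each remaining party re-runs keygen,
# reusing its original constant term a_i0 and sampling fresh random
# higher coefficients. The joint secret s = sum_i a_i0 is unchanged,
# but every share moves to a new joint polynomial.
import secrets

P = 2**31 - 1
T, indices = 3, [1, 2, 3, 4]  # the n-1 remaining participants

def eval_poly(coeffs, x, p=P):
    return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

def interp0(pts, p=P):
    """Interpolate the constant term f(0) from points {i: f(i)}."""
    acc = 0
    for i in pts:
        num = den = 1
        for j in pts:
            if j != i:
                num, den = num * -j % p, den * (i - j) % p
        acc = (acc + pts[i] * num * pow(den, -1, p)) % p
    return acc

consts = {i: secrets.randbelow(P) for i in indices}  # original a_i0 values

def refresh(consts, t, p=P):
    # Fixed constant term, fresh random higher coefficients, per party.
    polys = {i: [consts[i]] + [secrets.randbelow(p) for _ in range(t - 1)]
             for i in consts}
    # Party j's new share is the sum of every party's evaluation at j.
    return {j: sum(eval_poly(polys[i], j) for i in polys) % p for j in consts}

shares_a, shares_b = refresh(consts, T), refresh(consts, T)
s = sum(consts.values()) % P
assert interp0({i: shares_a[i] for i in indices[:T]}) == s  # same secret...
assert interp0({i: shares_b[i] for i in indices[:T]}) == s
assert shares_a != shares_b  # ...but the individual shares are all fresh
```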

Decreasing T: Reducing the Threshold

We can decrease the threshold by revealing a single party's secret keyshare to all other signers, allowing every other party to produce signature shares using that secret keyshare.

This effectively turns a t of n into a (t-1) of (n-1). We can keep n the same if we also know how to increase the number of signers (below), as we can issue a brand new secret keyshare and distribute it to all the other signers, going from a t of n to a (t-1) of n.
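A toy sketch of why revealing one keyshare lowers the threshold: once f(5) is public within the group, t-1 remaining signers hold enough points to interpolate under the joint secret (illustration only; real FROST combines shares inside the signing protocol rather than interpolating f(0) directly):

```python
# Toy sketch: party 5's keyshare f(5) has been revealed to the group,
# so any t-1 = 2 remaining signers reach the original t = 3 points.
P = 2**31 - 1
f = lambda x: (5 + 3 * x + 7 * x * x) % P  # toy joint polynomial, t = 3

def interp0(pts, p=P):
    acc = 0
    for i in pts:
        num = den = 1
        for j in pts:
            if j != i:
                num, den = num * -j % p, den * (i - j) % p
        acc = (acc + pts[i] * num * pow(den, -1, p)) % p
    return acc

public_point = {5: f(5)}          # the revealed keyshare
two_signers = {1: f(1), 2: f(2)}  # only t-1 parties participate
assert interp0({**two_signers, **public_point}) == 5  # recovers s = f(0)
```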

In more adversarial multisig scenarios, steps could be taken to manage some fair exchange of this secret to ensure it reaches all participants.

Increasing N: Adding a Signer

Backing up each individual secret keyshare is advised -- but backups are certainly not the same as issuing an additional party who has the power to contribute an independent signature share towards the threshold. Issuing new signers is slightly more involved.

The idea is that we can securely evaluate the joint polynomial at further indexes for additional participants. We do not want to rely on all n participants being present, since this is useless in the case of a lost signer.

Multi-Party Computation: Enrollment

We can use an enrollment protocol to add a new party without redoing keygen.

See Novel Secret Sharing and Commitment Schemes for Cryptographic Applications and A Survey and Refinement of Repairable Threshold Schemes - Section 4.1.3

Enrollment protocols allow us to repair (recover) or add a new party without redoing keygen. A threshold number of parties collaborate to evaluate the joint polynomial at a new participant index, and securely share this new secret keyshare to the new participant.

Whenever we want to add a new party at some new participant index, T parties each use their secret keyshare point to evaluate the joint polynomial at that index. Each party contributes evaluation shares from their basis polynomial, weighting them by the appropriate Lagrange factor to ensure the sum of these pieces lies on the joint polynomial. By securely sharing these with the new party, who sums them to form a secret keyshare, the new party can now participate in FROST signing.
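A sketch of just the enrollment arithmetic, over a toy field. In an actual protocol each contribution would be additively blinded before being sent (e.g. split into random summands exchanged among the helpers), so that neither the new party nor any helper learns another party's keyshare:

```python
# Sketch of the enrollment arithmetic over a toy field: t helpers each
# multiply their keyshare by the Lagrange factor for the new index, and
# the contributions sum to the joint polynomial evaluated there.
P = 2**31 - 1
f = lambda x: (5 + 3 * x + 7 * x * x) % P  # toy joint polynomial, t = 3
helpers = {1: f(1), 2: f(2), 3: f(3)}      # keyshares of t helping parties
new_index = 6

def lagrange(i, indices, x, p=P):
    num = den = 1
    for j in indices:
        if j != i:
            num, den = num * (x - j) % p, den * (i - j) % p
    return num * pow(den, -1, p) % p

contribs = {i: s_i * lagrange(i, helpers, new_index) % P
            for i, s_i in helpers.items()}
new_share = sum(contribs.values()) % P
assert new_share == f(new_index)  # the new point lies on the joint polynomial
```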

If provided with the original commitment polynomials used during keygen, this new party can also verify that their received point does indeed lie on the joint polynomial (though perhaps there could be some trickery with lying about commitment polynomials).
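A toy sketch of that verification, in the style of a Feldman commitment check: the new party checks g^share against the commitment polynomial evaluated "in the exponent" at their index. The tiny integer group below stands in for the elliptic curve group a real deployment would use:

```python
# Toy Feldman-style check with tiny parameters (p = 11, q = 23); a real
# deployment would use elliptic curve points, not integers mod 23.
p, q, g = 11, 23, 2  # scalar field Z_p; g generates the order-p subgroup of Z_q*
a = [5, 3]           # joint polynomial f(x) = 5 + 3x mod p, so t = 2
C = [pow(g, a_k, q) for a_k in a]  # public coefficient commitments g^{a_k}

new_index = 6
share = (a[0] + a[1] * new_index) % p  # received over a secure channel

# Verify using only public data: g^share == product of C_k^(x^k).
rhs = 1
for k, C_k in enumerate(C):
    rhs = rhs * pow(C_k, pow(new_index, k, p), q) % q
assert pow(g, share, q) == rhs  # the share lies on the committed polynomial
```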

Proof of concept recovery of a signer, and enrollment of a new signer (without MPC communication)

Sharing Fragments: Shamir Secret Sharing Extra Shares

This method may be more complex and less flexible than an enrollment protocol, though perhaps easier to implement and prove secure. Suppose we want the option of later issuing K extra signers to join the multisig.

Following standard keygen, where each P_i evaluates their secret scalar polynomial f_i at indexes 1 to n, we use a modification where each party also evaluates at indexes n+1 to n+K. Each party calculates K extra secret shares which can later be used to issue a new signer.

To add a new signer to the FROST multisignature later on, they must receive these secret shares from every keygen participant -- meaning that we require all N signers to be available and in agreement. This is of no use in the scenario of a lost FROST device or an uncooperative signer!

So why not distribute these secret shares for redundancy? We cannot trivially share these secret shares around, since we would risk prematurely creating additional signers if one party were to obtain the shares at some index from all signers.

Instead, we can Shamir secret share the secret shares -- reintroducing our threshold T! Let's call these Shamir shares "fragments" of secret shares, rather than Shamir shares-of-shares.

The procedure would look like this:

  1. Each party evaluates their scalar polynomial at K extra indexes, creating K extra secret shares.
  2. Each party then takes these K scalars and uses Shamir secret sharing to fragment them into N pieces with recovery threshold T. This gives a K by N array of fragments.
  3. As each party P_i goes to send share j to party j, they additionally send the jth column of their fragment array for storage.

To issue a new signer (party n+1), all we need to do is get T signers to send the fragments they hold which belong to index n+1. We recover at least T x N fragments, allowing us to recreate the N secret shares, which sum to a long-lived secret keyshare -- resulting in a new signer with their own point on the joint polynomial.
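A sketch of the whole fragment scheme over a toy field, with K = 1 extra index for brevity (and without the authenticated channels and commitments a real implementation would need):

```python
# Sketch of the fragment scheme over a toy field, K = 1 extra index.
import secrets

P = 2**31 - 1
n, t, new_index = 4, 3, 5  # a 3-of-4, later issuing a signer at index 5

def eval_poly(coeffs, x, p=P):
    return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

def interp0(pts, p=P):
    acc = 0
    for i in pts:
        num = den = 1
        for j in pts:
            if j != i:
                num, den = num * -j % p, den * (i - j) % p
        acc = (acc + pts[i] * num * pow(den, -1, p)) % p
    return acc

# Keygen: party i holds a secret polynomial f_i of degree t-1.
polys = {i: [secrets.randbelow(P) for _ in range(t)] for i in range(1, n + 1)}
# Step 1: each party's extra secret share, f_i(n+1).
extra = {i: eval_poly(polys[i], new_index) for i in polys}
# Step 2: each party Shamir-shares its extra share with threshold t.
frag_polys = {i: [extra[i]] + [secrets.randbelow(P) for _ in range(t - 1)]
              for i in extra}
# Step 3: party j stores the fragments addressed to index j.
stored = {j: {i: eval_poly(frag_polys[i], j) for i in extra}
          for j in range(1, n + 1)}

# Issuance: any t parties (say 1, 2, 4) reveal their fragments for index 5;
# each f_i(5) is recovered, and their sum is the new signer's keyshare f(5).
helpers = [1, 2, 4]
recovered = {i: interp0({j: stored[j][i] for j in helpers}) for i in extra}
new_share = sum(recovered.values()) % P
joint = lambda x: sum(eval_poly(polys[i], x) for i in polys) % P
assert new_share == joint(new_index)  # lies on the joint polynomial
```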

Exploratory implementation (without Shamir sharing)

Increasing T: Larger Signing Threshold

Increasing the threshold seems more difficult than simply redoing keygen: it would require the group to somehow increase the degree of the joint polynomial, and to trust everyone to delete the old one.


Thanks for reading, hope this makes you as excited about FROST as we are!

Please leave any comments/ideas/whatever below.

Thanks, as always, to @LLFourn for his feedback, ideas, and knowledge of existing literature!

@jesseposner

Interestingly, the "On Proactive Secret Sharing Schemes" paper claims that a proactive VSS exists if and only if 2t < n (Theorem 2, but I can't find the proof of this statement).

It also states in Theorem 1 that a VSS in general exists if and only if 2t < n. I believe this is being driven by a robustness requirement. From the FROST paper on page 8:

Further, because FROST does not provide robustness, FROST is secure so long as the adversary controls fewer than the threshold t participants, an improvement over robust designs, which can at best provide security for t ≤ n/2

Similarly, we can probably remove the 2t < n requirement for both theorem 1 and theorem 2 if we drop robustness.

@jesseposner

Also, in that paper t != k. So it's actually 2k < n and not 2t < n, where t = k + 1:

The simplest access structure Γ is called (k,n)-threshold if all subsets of players P with at least k + 1 participants are qualified to reconstruct the secret and any subset of up to k players are forbidden of doing it.

@conduition

It seems like they assume an honest majority ("Our redistribution protocol can tolerate up to m−1 faulty old shareholders (provided that there are at least m non-faulty old shareholders)").

I think that's a reasonable assumption at least for my use-case, because if $m$ or more shareholders are corrupted and collude, they could reconstruct the secret - might as well give up at that point.

True, W^3's VSR protocol is not robust, because if $2m < n$, then it would be possible for $m$ or more faulty (but not maliciously colluding) shareholders to prevent a VSR procedure with $m$ or more non-faulty participants from succeeding. I suppose it depends on your use-case whether the trade-off is worth it.

It seems to me that the protocol you describe in your blog doesn't work well with offline signers. A malicious signer can put garbage shares (i.e., shares that do not agree with the coefficient commitments) into their message queue and since they're offline, they have no way of complaining. We also don't know if they have received the same coefficient commitments as everyone else.

I believe that can be worked around. In your specific example, the offline shareholder would disregard invalid subshares they received while offline. While processing their message queue, the offline shareholder would only accept a VSR session as valid if they receive valid and consistent subshares from $m$ or more distinct peers. Consistency is achieved either by the assumption of a broadcast channel (which still holds even for offline peers), or by having peers send ACK messages to each other over a mesh topology to confirm commitments are consistent. This can still work even while a peer is offline, as long as they have an async message queue to receive such ACK messages from peers. It only works as long as at least $m$ peers are honest and online to perform VSR, and no more than $m-1$ peers in the VSR session are dishonest/faulty.

Bear in mind I could be very wrong here, as I haven't gotten to the point of actually engineering or designing an async VSR system; it is still theoretical at this point.


I think the real "WTF" moment with VSR happens if you have a fork. Consider the situation where $2m \le n$ and two distinct groups of $m$ or more shareholders decide to VSR at the same time, and for some reason can't reach each other. For instance, in a FROST group of 10 signers, there could be two subgroups of 5 signers each who are somehow firewalled off from each other and think the other subgroup is offline. The two subgroups might execute two independent VSR sessions. How do the two subgroups reconcile their shares once the groups are able to get back into contact?

@conduition

In your blog post you mention [Novel Secret Sharing and Commitment Schemes for Cryptographic Applications by Mehrdad Nojoumian](https://uwspace.uwaterloo.ca/bitstream/handle/10012/6858/nojoumian_mehrdad.pdf?sequence=1#subsection.4.3.1) as the origin of your writeup. This thesis only seems to consider VSR in the context of the "passive adversary" model, which seems weaker than what we would ideally want.

@jonasnick Thanks for the reminder! I actually wrote that blog post before I read Wong/Wing/Wang's paper. I happened to come up with the same verifiability extension that they did. When I have some time, I promise I'll update it to point to the WWW paper and remove my scary disclaimer about homebrewed crypto 👍

@jesseposner

I've added support for refresh, repair, enrollment, disenrollment, threshold increase, and threshold decrease here: https://github.com/jesseposner/FROST-BIP340.

@jesseposner

In "On Dealer-free Dynamic Threshold Schemes", the threshold increase and decrease protocols in the active adversary model use a bivariate polynomial. I assume that a bivariate polynomial is incompatible with FROST, which uses a univariate polynomial. However, the bivariate polynomial is a VSS instantiation, and is used to validate the outputs of each participant and to be able to assign blame to participants who produce a faulty output.

In the “On Dealer-free Dynamic Threshold Schemes” paper it states that for the threshold decrease/increase: “We note that these protocols could be described in terms of other VSS schemes, as well.” This seems to imply that we can use Feldman VSS instead of the bivariate polynomial for our VSS.

For threshold increase, given the public verification shares of each participant (or the coefficient commitments, from which the public verification shares can be derived), the share output (the product of the Lagrange coefficient and a participant's share) can be verified. Similarly, the threshold decrease outputs can be verified by reference to coefficient commitments and public verification shares.
