Modifying FROST Signers and Threshold

FROST's distributed key generation involves N parties each creating a secret polynomial, and sharing evaluations of this polynomial with other parties to create a distributed FROST key.

The final FROST key is described by a joint polynomial, where the x=0 intercept is the jointly shared secret s=f(0). Each participant controls a single point on this polynomial at their participant index.

The degree T-1 of the polynomials determines the threshold T of the multisignature, as this sets the number of points required to interpolate the joint polynomial and compute evaluations under the joint secret.

T parties can interact to interpolate evaluations under the secret f(0) without ever reconstructing the secret itself (unlike plain Shamir Secret Sharing, where the secret must be reconstructed).
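
For intuition, here is a minimal toy sketch of this polynomial picture in Python (a small prime field stands in for a real group order such as secp256k1's, and the secret is interpolated directly just to make the arithmetic concrete; all names and parameters are illustrative):

```python
import secrets

P = 2**31 - 1  # toy prime modulus; real deployments use a large group order

def eval_poly(coeffs, x):
    """Evaluate a polynomial (constant term first) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange_at_zero(xs, i):
    """Lagrange coefficient for index i when interpolating at x = 0."""
    num, den = 1, 1
    for j in xs:
        if j == i:
            continue
        num = num * (-j) % P
        den = den * (i - j) % P
    return num * pow(den, -1, P) % P

T, N = 2, 3
secret = secrets.randbelow(P)
joint_poly = [secret] + [secrets.randbelow(P) for _ in range(T - 1)]
shares = {i: eval_poly(joint_poly, i) for i in range(1, N + 1)}  # one point per signer

# Any T points interpolate f(0). (FROST signers use these same Lagrange weights
# inside the signing protocol rather than ever combining the shares directly.)
subset = [1, 3]
recovered = sum(lagrange_at_zero(subset, i) * shares[i] for i in subset) % P
assert recovered == secret
```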


We wonder: is it possible to change the number of signers N, and to change the threshold T, after keygen has been completed? And importantly, can these changes be made by a threshold number of signers, as opposed to requiring the consent of all N signers? (Imagine losing a FROST secret keyshare and wanting to reissue another!)

Much of our investigation has led to ideas which have already been fleshed out in the secret sharing literature.

Note that the following methods are not explicit; we still need to read into which are proven secure and most appropriate for our purpose, and each may come with caveats.

Decreasing N: Removing a Signer

We can turn a t of n into a t of (n-1) if we can trust one user to delete their secret keyshare (make sure n>t!).

If we cannot reliably trust the party to delete their secret keyshare, we can go further and render the revoked secret keyshare incompatible with future multisignature participants.

We can do this using some form of proactive secret sharing:

Shares are periodically renewed (without changing the secret) in such a way that information gained by the adversary in one time period is useless for attacking the secret after the shares are renewed

See Proactive Secret Sharing Or: How to Cope With Perpetual Leakage and Proactive Secret Sharing on Wikipedia.

Overview: we can create a new joint polynomial with the same joint secret, and then ask all n-1 participants to delete their old secret keyshares (points on that old joint polynomial). If t-1 of the n-1 parties feign deletion and collude, they could still sign with the removed party.

To create a new joint polynomial with the same public key and joint-secret we redo keygen with n-1 parties. Each participant uses the same first coefficient in their original keygen polynomial, and other terms random. This will produce a polynomial with the same joint secret, but all other points different and incompatible with previous keyshares.

Decreasing T: Reducing the Threshold

We can decrease the threshold by sharing one party's secret keyshare with all other signers, allowing every other party to produce signature shares using that keyshare.

This effectively turns a t of n into a (t-1) of (n-1). We can keep n the same if we also know how to increase the number of signers (below), as we can issue a brand new secret keyshare and distribute it to all the other signers; going from a t of n to a (t-1) of n.

In more adversarial multisig scenarios, steps could be taken to manage some fair exchange of this secret to ensure it reaches all participants.
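
For concreteness, a toy sketch of the counting argument (same illustrative field and caveats as the sketch above; real signers would use the extra point inside the signing protocol rather than reconstructing the secret):

```python
import secrets

P = 2**31 - 1  # toy prime modulus

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def interpolate_at_zero(points):
    """Lagrange-interpolate the unique polynomial through `points` and return its value at 0."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj == xi:
                continue
            num = num * (-xj) % P
            den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

T, N = 3, 4
joint_poly = [secrets.randbelow(P) for _ in range(T)]
shares = {i: eval_poly(joint_poly, i) for i in range(1, N + 1)}

published = (4, shares[4])  # signer 4 hands their keyshare to everyone else
# Any t-1 = 2 of the remaining signers now hold t points between them:
assert interpolate_at_zero([(1, shares[1]), (2, shares[2]), published]) == joint_poly[0]
```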

Increasing N: Adding a Signer

Backing up each individual secret keyshare is advised -- but backups are certainly not the same as issuing a keyshare to an additional party who has the power to contribute an independent signature share towards the threshold. Issuing new signers is slightly more involved.

The idea is that we can securely evaluate the joint polynomial at further indexes for additional participants. We do not want to rely on all n participants being present since this is useless in the case of a lost signer.

Multi-Party Computation: Enrollment

We can use an enrollment protocol to add a new party without redoing keygen.

See Novel Secret Sharing and Commitment Schemes for Cryptographic Applications and A Survey and Refinement of Repairable Threshold Schemes - Section 4.1.3

Enrollment protocols allow us to repair (recover) or add a new party without redoing keygen. A threshold number of parties collaborate to evaluate the joint polynomial at a new participant index, and securely share this new secret keyshare to the new participant.

Whenever we want to add a new party at some new participant index, T parties each use their secret keyshare point to evaluate the joint polynomial at that index. Each party contributes an evaluation share from their basis polynomial, weighted by the appropriate Lagrange factor so that the sum of these pieces lies on the joint polynomial. By securely sharing these with the new party, the new party sums them to form a secret keyshare and can now participate in FROST signing.

If provided with the original commitment polynomials used during keygen, this new party can also verify that their received point does indeed lie on the joint polynomial (though perhaps there could be some trickery with lying about commitment polynomials).
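
A rough sketch of the enrollment arithmetic, with a simplified additive-blinding step so the new signer only learns the sum of the contributions (toy prime field, no secure channels or commitment verification; the helper set, blinding layout, and names are illustrative rather than a faithful rendering of any particular paper):

```python
import secrets

P = 2**31 - 1  # toy prime modulus

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange(xs, i, x):
    """Lagrange coefficient for index i when interpolating at x."""
    num, den = 1, 1
    for j in xs:
        if j == i:
            continue
        num = num * (x - j) % P
        den = den * (i - j) % P
    return num * pow(den, -1, P) % P

T, N, new_index = 2, 3, 4
joint_poly = [secrets.randbelow(P) for _ in range(T)]  # stands in for the summed keygen polynomials
shares = {i: eval_poly(joint_poly, i) for i in range(1, N + 1)}

helpers = [1, 2]  # any T existing signers
contributions = {i: lagrange(helpers, i, new_index) * shares[i] % P for i in helpers}

# Each helper splits its contribution into random summands, one per helper,
# so no single message reveals lagrange_i * share_i to anyone.
blinded = {i: [] for i in helpers}
for i, c in contributions.items():
    parts = [secrets.randbelow(P) for _ in helpers[:-1]]
    parts.append((c - sum(parts)) % P)
    for h, part in zip(helpers, parts):
        blinded[h].append(part)

# Each helper forwards only the sum of what it received; the new signer adds those up.
new_share = sum(sum(parts) % P for parts in blinded.values()) % P
assert new_share == eval_poly(joint_poly, new_index)
```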

Proof of concept recovery of a signer, and enrollment of a new signer (without MPC communication)

Sharing Fragments: Shamir Secret Sharing Extra Shares

This method may be more complex and less flexible than an enrollment protocol, though perhaps easier to implement and prove secure. Suppose we want the option of later issuing K extra signers to join the multisig.

Following standard keygen where each P_i evaluates their secret scalar polynomial f_i at indexes 1 to n, we use a modification where each party also evaluates at indexes n+1 to n+K. Each party calculates K extra secret shares which can later be used to issue a new signer.

To add a new signer to the FROST multisignature later on, they must receive these secret shares from every other keygen participant, meaning we require all N signers to be available and in agreement to add the new signer. This is of no use in the scenario of a lost FROST device or an uncooperative signer!

So why not distribute these secret shares for redundancy? We cannot trivially share these secret shares around, since we risk prematurely creating additional signers if one party were to obtain the shares at some index from all signers.

Instead, we can Shamir secret share the secret shares -- reintroducing our threshold T! Let's call these Shamir shares "fragments" of secret shares, rather than Shamir shares-of-shares.

The procedure would look like this:

  1. Each party evaluates their scalar polynomial at K extra indexes, creating K extra secret shares.
  2. Each party then takes these K scalars and uses Shamir secret sharing to fragment each of them into N pieces with recovery threshold T. This gives a K by N array of fragments.
  3. As each party P_i goes to send share j to party j, they additionally send the jth column of their fragment array for storage.

To issue a new signer (party n+1), all we need to do is get T signers to send the fragments they hold belonging to index n+1. We recover at least T x N fragments, allowing us to recreate the N secret shares, which we then combine into a long-lived secret keyshare -- resulting in a new signer with their own point on the joint polynomial.
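
A sketch of the fragment bookkeeping under the same toy assumptions (no secure channels or verification; the data layout and names are illustrative):

```python
import secrets

P = 2**31 - 1  # toy prime modulus

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange_at_zero(xs, i):
    num, den = 1, 1
    for j in xs:
        if j == i:
            continue
        num = num * (-j) % P
        den = den * (i - j) % P
    return num * pow(den, -1, P) % P

T, N, K = 2, 3, 1
polys = {i: [secrets.randbelow(P) for _ in range(T)] for i in range(1, N + 1)}  # keygen polynomials f_i

# Step 1: each party also evaluates f_i at the K extra indexes n+1..n+K.
extra = {i: {N + m: eval_poly(polys[i], N + m) for m in range(1, K + 1)} for i in polys}

# Step 2: each extra share is fragmented with a fresh threshold-T Shamir polynomial,
# and (step 3) the j-th fragment of every one of them is stored by party j.
fragments = {}  # fragments[holder][extra_index][issuer] = fragment value
for issuer, per_index in extra.items():
    for idx, value in per_index.items():
        frag_poly = [value] + [secrets.randbelow(P) for _ in range(T - 1)]
        for holder in range(1, N + 1):
            fragments.setdefault(holder, {}).setdefault(idx, {})[issuer] = eval_poly(frag_poly, holder)

# Later, any T holders reveal their fragments for index n+1; each issuer's extra
# share f_issuer(n+1) is reconstructed, and the sum is the new signer's keyshare.
holders, new_index = [1, 3], N + 1
new_share = sum(lagrange_at_zero(holders, h) * fragments[h][new_index][issuer]
                for issuer in polys for h in holders) % P
assert new_share == sum(eval_poly(polys[i], new_index) for i in polys) % P
```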

Exploratory implementation (without Shamir sharing)

Increasing T: Larger Signing Threshold

Increasing the threshold seems more difficult than redoing keygen: it would require the group to somehow increase the degree of the polynomial, and to trust everyone to delete the old one.


Thanks for reading, hope this makes you as excited about FROST as we are!

Please leave any comments/ideas/whatever below.

Thanks, as always, to @LLFourn for his feedback, ideas, and knowledge of existing literature!

@AllFi commented Aug 7, 2023

Hello! Could you please explain more about how this works?

To create a new joint polynomial with the same public key and joint-secret we redo keygen with n-1 parties. Each participant uses the same first coefficient in their original keygen polynomial, and other terms random. This will produce a polynomial with the same joint secret, but all other points different and incompatible with previous keyshares.

It looks like if we initially have joint secret $s = a_{0,0} + a_{1,0} + ... + a_{n,0}$, then after this procedure we will have $s' = a_{0,0} + a_{1,0} + ... + a_{n-1,0} = s - a_{n,0}$. I thought it shouldn't be possible to redo keygen preserving the same public key without all n participants or reconstructing the joint secret.

@nickfarrow (Author)

Hey @AllFi, you are absolutely correct -- that was a massive oversight and doesn't make sense at all. I need to look into the existing literature some more and will update this gist.

Really appreciate your attention to detail!

@AllFi commented Aug 11, 2023

I am glad to be of help! These are quite interesting topics so I hope you'll find out the solutions :)

@conduition commented Sep 4, 2023

For those not aware, FROST inherits its threshold key ownership properties from Shamir's Secret Sharing, so modifying a set of FROST signers is effectively equivalent to modifying the set of shareholders in a Shamir secret sharing scheme.

I just published an article featuring an alternative protocol for Enrollment (cooperatively issuing a share) which is very similar to the method you've mentioned. My protocol description also details how to verify all multi-party computations, and includes a proof of security against both passive and active adversarial shareholders. You might find this interesting @nickfarrow


To change the threshold, I found this interesting protocol which allows a group of Shamir shareholders (or FROST signers) to compute updated shares which correspond to a new threshold $t'$. It requires that only $t$ signers be online, but all $n$ signers must at least be able to receive messages asynchronously while offline, to update their shares to have threshold $t'$. Of course it wouldn't stop a malicious subset of shareholders from holding onto their old shares which are valid for the original threshold $t$.

Here's a full description, including verifiable commitments.

This approach works to increase or decrease the threshold, or even as a way to invalidate old shares without changing the threshold (for proactive approaches). It can be repeated indefinitely and requires no cooperation from the dealer.

However, as with any post-keygen tightening of security parameters, whether removing a signer (decreasing $n$) or increasing the threshold $t$, it seems as though we always need to rely on an honor system of sorts. Even though participants might not be colluding, they still may choose to keep their old shares. In an adversarial context where FROST signers don't trust each other, they would be right to do so. As Satoshi once said, you should never delete a wallet (or private key).

Contrastingly, loosening of security parameters (enrolling new shares, or decreasing $t$) is very practical and doesn't run into any incentive trust problems like that.

https://en.wikipedia.org/wiki/Proactive_secret_sharing

Woah, that article needs some proofreading 🥲

@yttsen9 commented Sep 7, 2023

Love this great research, and thanks for sharing! Wondering if you may know of any existing FROST implementation that already supports modifying the signers/threshold today? Or is this feature still in an active R&D phase? @nickfarrow

@conduition commented Sep 8, 2023

@yttsen9 The ZCash Foundation's FROST implementation in Rust currently supports share issuance (AKA enrollment). However, they don't yet have support for threshold modification. I recently opened an issue to discuss adding this functionality.

@yttsen9 commented Sep 8, 2023

@yttsen9 The ZCash Foundation's FROST implementation in Rust currently supports share issuance (AKA enrollment). However, they don't yet have support for threshold modification. I recently opened an issue to discuss adding this functionality.

Thank you for sharing this! Will look into it.

@nickfarrow (Author)

Hey @conduition was looking to share your article with someone again and realised I missed responding here.

Awesome article, really fantastic. We'll be revisiting these ideas more once we're ready to start implementing some of these features for @frostsnap

@jonasnick

In the past few weeks, I saw a lot of people referencing this gist. Thanks @nickfarrow for starting this discussion! However, some people got the impression that it would be easy today to modify the FROST signer set and threshold without doing an on-chain transaction. I want to highlight what Nick wrote right in the beginning:

Note that the following methods are not explicit; we still need to read into which are proven secure and most appropriate for our purpose, and each may come with caveats.

One example of the caveats, in addition to the ones Nick already mentions, is that the proactive secret sharing construction for removing a signer requires an honest majority ("Our solution to the proactive secret sharing problem can support up to k = n/2 -1 corrupted parties at any time period.") and a (complex?) accusation protocol to deal with misbehaving parties. Moreover, the enrolment protocol for adding a signer is only proven to be secure against honest but curious adversaries in the "A Survey and Refinement of Repairable Threshold Schemes" paper ("Assume all players act honestly during the protocol." - first sentence in the proof of Theorem 4.1).

You'd ideally also want the operations to be atomic. For example, the proposed method to reduce the threshold requires a signer to give up their secret entirely (resulting in a t-1-of-n-1 setup), but there's no guarantee that this signer will be enroled again into a setup (to result in the desired t-1-of-n).

One major challenge is that the signers must be in agreement about whether a transition happened or not. If some believe it happened and delete their old key shares (as for example in proactive sharing) and others believe it hasn't happened, they become unable to sign (at least until you convince everyone that the transition happened).

Reaching agreement traditionally requires an honest majority, which may be a fine assumption in some scenarios and unacceptable for others. To get around this assumption in the BIP DKG draft, we use a weaker form of agreement ("conditional agreement") and propose a very straightforward protocol to arrive at that notion of agreement (certifying_eq). A result of this weaker notion is that BIP DKG is not robust: a single misbehaving participant can prevent the protocol from completing successfully. This may be okay in practice at key generation time, but for modifying signers & threshold of an existing setup, you probably want a robust protocol.

Regarding the issue pointed out by @AllFi about removing a signer: If I read the paper correctly, the signers (excluding the removed signer) create new polynomials f_i(0) = 0 (constant coefficient is 0), share it and add the received shares on their existing share.
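
For concreteness, a minimal sketch of such a zero-constant-term refresh over a toy prime field (no coefficient commitments, broadcast channel, or complaint handling -- just the share arithmetic; all names and parameters are illustrative):

```python
import secrets

P = 2**31 - 1  # toy prime modulus

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange_at_zero(xs, i):
    num, den = 1, 1
    for j in xs:
        if j == i:
            continue
        num = num * (-j) % P
        den = den * (i - j) % P
    return num * pow(den, -1, P) % P

T, N = 2, 3
joint_poly = [secrets.randbelow(P) for _ in range(T)]
old_shares = {i: eval_poly(joint_poly, i) for i in range(1, N + 1)}

# The remaining signers (say 1 and 2, with signer 3 removed) each deal a random
# update polynomial whose constant term is 0, and everyone adds the evaluations
# they receive onto their existing share.
remaining = [1, 2]
updates = {i: [0] + [secrets.randbelow(P) for _ in range(T - 1)] for i in remaining}
new_shares = {j: (old_shares[j] + sum(eval_poly(updates[i], j) for i in remaining)) % P
              for j in remaining}

# The joint secret f(0) is unchanged, since every update polynomial vanishes at 0...
refreshed = sum(lagrange_at_zero(remaining, i) * new_shares[i] for i in remaining) % P
assert refreshed == eval_poly(joint_poly, 0)

# ...but (except with negligible probability) a refreshed share no longer combines
# with the removed signer's stale share.
stale_mix = {2: new_shares[2], 3: old_shares[3]}
mixed = sum(lagrange_at_zero(list(stale_mix), i) * s for i, s in stale_mix.items()) % P
assert mixed != eval_poly(joint_poly, 0)
```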

@conduition

@jonasnick I've been working on an implementation of a verifiable secret resharing scheme for FROST which accomplishes most of what @nickfarrow has discussed above with weaker assumptions. There's some background on it here:

VSR allows for changing the threshold, adding new signers, and removing old signers (by exclusion), although there certainly are caveats regarding how it can be used safely.

The best academic source I found was this paper by Wong, Wang, and Wing which describes the VSR protocol mathematically.

You'd ideally also want the operations to be atomic. For example, the proposed method to reduce the threshold requires a signer to give up their secret entirely (resulting in a t-1-of-n-1 setup), but there's no guarantee that this signer will be enroled again into a setup (to result in the desired t-1-of-n).

One major challenge is that the signers must be in agreement about whether a transition happened or not. If some believe it happened and delete their old key shares (as for example in proactive sharing) and others believe it hasn't happened, they become unable to sign (at least until you convince everyone that the transition happened).

Atomicity is definitely a challenge here. I haven't come up with a good solution to this.

You have to also consider how to handle offline signers. What if not all participants in the group are online and available during resharing? Then they will miss out on any resharing events which occur while they are offline, which would invalidate their old shares and they wouldn't be able to receive the information they need to update their signing shares.

The best solution I could envision would be to assume that each signer has an async message queue (like email) at which they receive authenticated messages even while offline. As long as those messages can be processed in order, the offline signer can use them to reconstruct the latest honest state and then come back online. However I haven't seen this idea referenced anywhere in the literature so I would be hesitant to depend on it without exploring it more rigorously.

@conduition commented Mar 25, 2024

Reaching agreement traditionally requires an honest majority, which may be a fine assumption in some scenarios and unacceptable for others.

@jonasnick My solution to key agreement was to have each participant sign their VSR commitment in the first round, and verify each commitment upon receipt. If a participant signs more than one commitment (under the same VSR session ID), this can be used as a non-interactive proof of misbehavior. Complaints are accepted if and only if the complainer can provide such a proof.

EDIT: I'm realizing this actually won't work because participants could still send invalid commitment signatures, and there wouldn't be any way to prove who is at fault: accuser or accused. I think Gupta's complaint mechanism is probably the best option for a robust VSR/DKG protocol.

Old Response

In the final round of VSR, there is an acknowledgement round where participants must confirm whether they computed a compatible set of shares. During this round, the participants share the signed commitments they received in the first round, and ideally should sign this message too. Since the set of commitments determines the new set of shares, if any two participants compute incompatible shares, someone will be able to produce a proof of misbehavior and accuse the malicious party: either a signature will be invalid somewhere, or one participant must have signed more than one commitment.

On receiving a valid complaint against a participant, the honest participants exclude the culprit's contributions to the resharing session and rebroadcast the complaint to the remaining honest participants to ensure everyone agrees on who has been naughty. As long as $t$ or more of the participants are honest and consistent, the result is a robust VSR protocol which requires only one execution to complete successfully.


This is something which I created myself and I don't think it exists in the literature, so please don't go implementing this, dear readers. I'm still very much in the prototype stage here, working out bugs/vulns, and I haven't even published the source code yet.

I suppose this technique might be applicable to the DKG process as well.

@jesseposner commented Mar 27, 2024

Regarding the issue pointed out by @AllFi about removing a signer: If I read the paper correctly, the signers (excluding the removed signer) create new polynomials f_i(0) = 0 (constant coefficient is 0), share it and add the received shares on their existing share.

I've recently been researching an interesting security issue with this technique (i.e. Herzberg PSS). Herzberg relies on the adjacent assumption (i.e. if a party is corrupted during an update phase, it is corrupted in both time periods adjacent to that update phase). However, subsequent papers have looked at the consequences of removing this assumption.

When we remove the adjacent assumption, there's an issue when we secret share polynomials with a constant coefficient of 0. The issue is that we are giving all the participants an extra share, namely, the point at (0,0). So an attacker that controls t-1 signers can compute the update polynomial, and with that polynomial, the attacker can update any share from the previous time period to a share in the new time period. This allows an attacker who controls t-1 signers in the update phase, and also has 1 share from another signer in the previous time period, to compute the secret.
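
A toy demonstration of this observation (a Herzberg-style update polynomial over a small prime field; the parameters and names are illustrative):

```python
import secrets

P = 2**31 - 1  # toy prime modulus

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def interpolate(points, x):
    """Evaluate at x the unique polynomial through `points` (Lagrange interpolation)."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj == xi:
                continue
            num = num * (x - xj) % P
            den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

T, N = 3, 5
update = [0] + [secrets.randbelow(P) for _ in range(T - 1)]  # degree t-1, constant term 0
update_shares = {i: eval_poly(update, i) for i in range(1, N + 1)}

# An attacker corrupting t-1 = 2 signers during the update phase sees their update
# shares, and gets the point (0, 0) for free -- t points, so the whole update
# polynomial is determined.
known_points = [(1, update_shares[1]), (2, update_shares[2]), (0, 0)]

# It can now map any share it learned in the *previous* period to the new period:
victim = 4
assert interpolate(known_points, victim) == update_shares[victim]
```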

This issue was first identified by Nikov and Nikova in their paper "On Proactive Secret Sharing Schemes," however, they did not provide a solution. An interesting solution to this issue was published by Xia et al. in their paper "Provably Secure Proactive Secret Sharing Without the Adjacent Assumption."

The Xia solution involves creating a higher degree update polynomial (so that the (0,0) point is no longer sufficient for a t-1 attacker), and then after aggregating the shares, truncating the polynomial back down to t-1. Polynomial truncation might also provide a method for reducing the threshold generally, although that application is not discussed in the paper.

The truncation protocol builds on a scheme by Ben-Or et al. There's a paper called "Dealer-Free Dynamic Secret Sharing Schemes with Unconditional Security" that walks through an example of the Ben-Or scheme (see Example 2, steps 8-11 in Section 5.2).

The Xia paper modifies the scheme such that the projection matrix includes some random vectors, and it demonstrates a security issue if this modification is not made. However, the Xia paper is extremely terse in describing how to generate the random vectors: "generated in a distributed fashion and shared among the parties," citing to Gennaro, R., Jarecki, S., Krawczyk, H., Rabin, T.: Secure distributed key generation for discrete-log based cryptosystems. J. Cryptol. 1, 51–83 (2007), which defines a DKG scheme similar to FROST.

However, I haven't been able to figure out precisely how to use a DKG to produce the random vectors. Unlike the Ben-Or scheme, it seems like the projection matrix should not be fully public, otherwise why use a DKG to produce the random vectors? The parties can compute T = B^-1 * P * B without revealing P, but since B is public, P can be derived from T. It seems like there must be a way to compute R = S * T without revealing T, but I'm not sure if this is the right approach and I haven't been able to figure out how to get that to work.

@jonasnick

Hey @conduition, thanks for the pointers. Interesting ideas!

In your blog post you mention [Novel Secret Sharing and Commitment Schemes for Cryptographic Applications by Mehrdad Nojoumian](https://uwspace.uwaterloo.ca/bitstream/handle/10012/6858/nojoumian_mehrdad.pdf?sequence=1#subsection.4.3.1) as the origin of your writeup. This thesis only seems to consider VSR in the context of the "passive adversary" model which seems weaker than what we would want to have ideally.

The best academic source I found was this paper by Wong, Wang, and Wing which describes the VSR protocol mathematically.

It seems like they assume an honest majority ("Our redistribution protocol can tolerate up to m−1 faulty old shareholders (provided that there are at least m non-faulty old shareholders)").

You have to also consider how to handle offline signers.

It seems to me that the protocol you describe in your blog doesn't work well with offline signers. A malicious signer can put garbage shares (i.e., shares that do not agree with the coefficient commitments) into their message queue and since they're offline, they have no way of complaining. We also don't know if they have received the same coefficient commitments as everyone else.

@jonasnick

Interestingly, the "On Proactive Secret Sharing Schemes" paper claims that a proactive VSS exists if and only if 2t < n (Theorem 2, but I can't find the proof of this statement).

@jesseposner

Interestingly, the "On Proactive Secret Sharing Schemes" paper claims that a proactive VSS exists if and only if 2t < n (Theorem 2, but I can't find the proof of this statement).

It also states in Theorem 1 that a VSS in general exists if and only if 2t < n. I believe this is being driven by a robustness requirement. From the FROST paper on page 8:

Further, because FROST does not provide robustness, FROST is secure so long as the adversary controls fewer than the threshold t participants, an improvement over robust designs, which can at best provide security for t ≤ n/2

Similarly, we can probably remove the 2t < n requirement for both theorem 1 and theorem 2 if we drop robustness.

@jesseposner

Also, in that paper t != k. So it's actually 2k < n and not 2t < n, where t = k + 1:

The simplest access structure Γ is called (k,n)-threshold if all subsets of players P with at least k + 1 participants are qualified to reconstruct the secret and any subset of up to k players are forbidden of doing it.

@conduition

It seems like they assume an honest majority ("Our redistribution protocol can tolerate up to m−1 faulty old shareholders (provided that there are at least m non-faulty old shareholders)").

I think that's a reasonable assumption at least for my use-case, because if $m$ or more shareholders are corrupted and collude, they could reconstruct the secret - might as well give up at that point.

True, W^3's VSR protocol is not robust, because if $2m < n$, then it would be possible for $m$ or more faulty (but not maliciously colluding) shareholders to prevent a VSR procedure with $m$ or more non-faulty participants from succeeding. I suppose it depends on your use-case whether the trade-off is worth it.

It seems to me that the protocol you describe in your blog doesn't work well with offline signers. A malicious signer can put garbage shares (i.e., shares that do not agree with the coefficient commitments) into their message queue and since they're offline, they have no way of complaining. We also don't know if they have received the same coefficient commitments as everyone else.

I believe that can be worked around. In your specific example, the offline shareholder would disregard invalid subshares they received while offline. While processing their message queue, the offline shareholder would only accept a VSR session as valid if they receive valid and consistent subshares from at least $m$ or more distinct peers. Consistency is achieved by either the assumption of a broadcast channel (which still holds even for offline peers), or by having peers send ACK messages to each other over a mesh topology, to confirm commitments are consistent - This can still work even while a peer is offline, as long as they have an async message queue to receive such ACK messages from peers. This only works as long as at least $m$ peers are honest and online to perform VSR, and no more than $m-1$ peers in the VSR session are dishonest/faulty.

Bear in mind I could be very wrong here as I haven't gotten to the point of actually engineering or designing an async VSR system - It is still theoretical at this point.


I think the real "WTF" moment with VSR happens if you have a fork. Consider the situation where $2m \le n$ and two distinct groups of $m$ or more shareholders decide to VSR at the same time, and for some reason can't reach each other. For instance, in a FROST group of 10 signers, there could be two subgroups of 5 signers each who are somehow firewalled off from each other and think the other subgroup is offline. The two subgroups might execute two independent VSR sessions. How do the two subgroups reconcile their shares once the groups are able to get back into contact?

@conduition

In your blog post you mention [Novel Secret Sharing and Commitment Schemes for Cryptographic Applications by Mehrdad Nojoumian](https://uwspace.uwaterloo.ca/bitstream/handle/10012/6858/nojoumian_mehrdad.pdf?sequence=1#subsection.4.3.1) as the origin of your writeup. This thesis only seems to consider VSR in the context of the "passive adversary" model which seems weaker than what we would want to have ideally.

@jonasnick Thanks for the reminder! I actually wrote that blog post before I read Wong/Wing/Wang's paper. I happened to come up with the same verifiability extension which they did. When I have some time, I promise I'll update it to point to the WWW paper and remove my scary disclaimer about homebrewed crypto 👍

@jesseposner

I've added support for refresh, repair, enrollment, disenrollment, threshold increase, and threshold decrease here: https://github.com/jesseposner/FROST-BIP340.

@jesseposner

In "On Dealer-free Dynamic Threshold Schemes", the threshold increase and decrease protocols in the active adversary model use a bivariate polynomial. I assume that a bivariate polynomial is incompatible with FROST, which uses a univariate polynomial. However, the bivariate polynomial is a VSS instantiation, and is used to validate the outputs of each participant and to be able to assign blame to participants who produce a faulty output.

In the “On Dealer-free Dynamic Threshold Schemes” paper it states that for the threshold decrease/increase: “We note that these protocols could be described in terms of other VSS schemes, as well.” This seems to imply that we can use Feldman VSS instead of the bivariate polynomial for our VSS.

For threshold increase, given the public verification shares of each participant (or the coefficient commitments, from which the public verification shares can be derived), the share output (the product of the Lagrange coefficient and a participant's share) can be verified. Similarly, the threshold decrease outputs can be verified by reference to the coefficient commitments and public verification shares.
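
A rough sketch of that Feldman-style check under toy assumptions (a deliberately tiny order-11 subgroup of Z_23^* stands in for secp256k1, and the names and layout are illustrative, not the FROST-BIP340 API):

```python
import secrets

Q = 11      # toy group order; Shamir arithmetic lives in Z_Q
P_MOD = 23  # modulus of the toy group (order-11 subgroup of Z_23^*)
G = 4       # generator of that order-11 subgroup

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % Q
    return acc

def lagrange_at_zero(xs, i):
    num, den = 1, 1
    for j in xs:
        if j == i:
            continue
        num = num * (-j) % Q
        den = den * (i - j) % Q
    return num * pow(den, -1, Q) % Q

T, N = 2, 3
joint_poly = [secrets.randbelow(Q) for _ in range(T)]
commitments = [pow(G, a, P_MOD) for a in joint_poly]  # A_k = g^{a_k}, public
shares = {i: eval_poly(joint_poly, i) for i in range(1, N + 1)}

def verification_share(i):
    """Y_i = prod_k A_k^{i^k} = g^{f(i)}, derived from public data only."""
    y = 1
    for k, A in enumerate(commitments):
        y = y * pow(A, pow(i, k, Q), P_MOD) % P_MOD
    return y

signers = [1, 2]
for i in signers:
    lam = lagrange_at_zero(signers, i)
    output = lam * shares[i] % Q  # the Lagrange-weighted share a signer publishes
    # Anyone can check the output against the commitments without knowing shares[i]:
    assert pow(G, output, P_MOD) == pow(verification_share(i), lam, P_MOD)
```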
