Each bead commits to:

| Bitcoin Block header | Slots Merkle Root |
| Coinbase | timestamp | nonce | TX Merkle Root | prev block |

where the block header is a standard Bitcoin block header, expanded in the second line,
and each bead commits to <Slots Merkle Root>, which should be added to the <metadata>
field as described in:
https://github.com/mcelrath/braidcoin/blob/master/braidpool_spec.md
The Slots data structure is a key-value table containing:

| slot pubkey | data |

We can Merkleize the entries in this table to create a root hash, which is the "Slots
Merkle Root". (We could also just hash the entire table in any deterministically
serialized manner.) Each <slot pubkey> is an identifier that uniquely identifies the
owner of the slot. This data structure needs the property that all nodes can look up
<data> for each relevant <slot pubkey>. At any given time, defined by the current bead
tip (or a bead tip buried a defined number of beads in the past), there is a unique
value of <data> for each pubkey. This forms a linear chain of history for each
<slot pubkey>. By parsing beads, each <slot pubkey> will accumulate over time:

    <data1>      <data2>      <data3>    ... <dataN>
    <bead 345>   <bead 349>   <bead 384>     <bead tip>

It is similar to the UTXO set and can be kept in a similar manner, with each bead only
broadcasting transactions which update the Slots Table. Conflict resolution is not
necessary and conflicts can only be self-inflicted; however, if resolution is needed,
the highest-weight parent can be used for each slot, as usual in Nakamoto consensus.
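The table-to-root step can be sketched as follows. This is illustrative, not normative:
the linked spec governs the actual serialization, and the pairing scheme here is simply
a binary Merkle tree over lexicographically sorted entries so that every node derives
the same root.

```python
import hashlib

def h(b: bytes) -> bytes:
    """Double SHA-256, as used elsewhere in Bitcoin."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def slots_merkle_root(slots: dict[bytes, bytes]) -> bytes:
    """Merkleize the Slots table {slot_pubkey: data} into a single root hash.

    Entries are sorted by pubkey so every node derives the same root
    regardless of insertion order.
    """
    leaves = [h(pk + data) for pk, data in sorted(slots.items())]
    if not leaves:
        return h(b"")
    while len(leaves) > 1:
        if len(leaves) % 2:            # duplicate the last leaf on odd levels,
            leaves.append(leaves[-1])  # as Bitcoin's TX Merkle tree does
        leaves = [h(leaves[i] + leaves[i + 1]) for i in range(0, len(leaves), 2)]
    return leaves[0]
```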
The Slots table is updated by owners of the <slot pubkey> by sending a "transaction"
over the p2p network:

Slot Transaction:
-----------------
| slot pubkey | newdata | signature |

Braidpool nodes accept a slot transaction if it is properly signed (with <data> as the
message) by the privkey corresponding to <slot pubkey>, and it replaces <data> with
<newdata>. The data is expected to be a 32-byte hash, as this data structure is for
commitments, not data broadcast. However, to simplify the logic, one can imagine that
<data> is the actual data itself rather than a commitment.
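The acceptance rule can be sketched as below, with signature verification abstracted
behind a callback; a real node would use Schnorr/secp256k1 verification here, and the
function name and shape are hypothetical.

```python
from typing import Callable

def apply_slot_tx(slots: dict[bytes, bytes],
                  slot_pubkey: bytes, newdata: bytes, signature: bytes,
                  verify: Callable[[bytes, bytes, bytes], bool]) -> bool:
    """Apply a slot transaction to the Slots table if it validates.

    Per the text above, the message signed is the slot's current <data>;
    verify(pubkey, message, signature) stands in for real signature
    verification.
    """
    current = slots.get(slot_pubkey)
    if current is None:
        return False                   # unknown slot
    if len(newdata) != 32:
        return False                   # commitments are 32-byte hashes
    if not verify(slot_pubkey, current, signature):
        return False                   # bad signature
    slots[slot_pubkey] = newdata       # replace <data> with <newdata>
    return True
```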
The value of <data> is intended for use by external consensus protocols. These include
DKG protocols, which must have a minimum of 2 rounds [https://eprint.iacr.org/2023/1094],
FROST and MuSig2 signing, and other protocols requiring low-latency consensus. The
exact contents of <data> are therefore protocol dependent.
In many protocols, consensus moves forward between "epochs" which are defined by all | |
relevant participants receiving all data necessary to move to the next epoch. In such | |
protocols, <data> is a message saying "I have received all data for epoch N". Let us | |
abbreviate this as "epoch N" indicating readiness to move to epoch N. Actual data is | |
not necessarily included in the table beyond the epoch number, and in most protocols | |
a private channel between participants is assumed through which non-public data must | |
be sent. Because the entire Slots Table is visible to all participants, one can define | |
a function given a set {<participant>} of pubkeys of participating parties in the | |
protocol: | |
    MoveToNextEpoch({<participant>}):
        return all(data[p] == "epoch N" for p in {<participant>})
The "all()" in the above pseudo-code may be replaced by a threshold in some protocols | |
if appropriate for that protocol. | |
Since most protocols require private data to be exchanged between participants, where
that is needed, the braidpool p2p network and ECIES using the <slot pubkey> can be
used to broadcast a private message to all participants, though only one participant
can decrypt it. This is not optimal, and it is better for peers to make direct IP
connections to each other for sending this data. However, public broadcast of IP and
authentication information for each peer might be sent this way, with further
protocol-private messages sent directly between peers.
Before starting, all protocols must agree upon a set of {participant} pubkeys. How this
happens is protocol dependent and happens out-of-band with respect to braidpool. For
Braidpool FROST signing, the set of participants is decided by looking at the last N
blocks successfully mined by braidpool, and the <comm_pubkey> published in the
corresponding bead for each of those blocks in the <Braidpool Metadata>. This
<comm_pubkey> will be used as the <slot pubkey> for that miner, as well as for
ECIES-encrypted communication with him by other peers. The value <N> is a bikeshedding
parameter that will be decided in the future based on the speed of FROST signing in the
global network, but it is expected to be in the range 20-2016, where we will decrease N
if FROST is slow and increase it if FROST is fast.
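The selection rule can be sketched as follows; the block-record field name used here is
hypothetical, standing in for the <comm_pubkey> published in each winning block's bead.

```python
def frost_participants(blocks: list[dict], n: int) -> list[bytes]:
    """Select the FROST signing set from the last N braidpool-mined blocks.

    Each entry in `blocks` is assumed (for illustration) to carry the
    <comm_pubkey> that the winning miner published in the corresponding
    bead's <Braidpool Metadata>.
    """
    recent = blocks[-n:]               # last N blocks, oldest first
    seen: list[bytes] = []
    for blk in recent:
        pk = blk["comm_pubkey"]
        if pk not in seen:             # one participant per distinct miner
            seen.append(pk)
    return seen
```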
All consensus protocols have a notion of "failure", which is often set by a timeout in
"partially synchronous" protocols. When failure occurs it is necessary to restart the
protocol with a different set of participants. We suggest using a block-height based
timeout, which is monotonic in time: e.g. if F epoch updates are missing over the last
T_F beads (T_F = "threshold of failure"), a new protocol instance may be kicked off
using the N-T_F responding participants. We can keep 2 (or more) instantiations of the
protocol in-flight simultaneously, and take whichever one finishes first with the
largest number of participants. As such, each participant needs to have more than one
slot allocated to themselves. Therefore we might consider expanding the Slots Table to:

| slot pubkey | purpose | data |

where, as a key-value table, (<slot pubkey>|<purpose>) is the key and <data> is the
value.
It may also be possible to use BIP32 pubkey derivation to derive a new pubkey for the
same participant. In the case of braidpool FROST signing, we can let <comm_pubkey> be
an xpub, from which we can derive a new pubkey following a BIP32 public derivation
path:

    m/<blockheight>/<protocol>/<instance>/

where <blockheight> is the bitcoin block height for which we are signing the coinbase,
<protocol> is an integer (1=DKG, 2=FROST, etc.), and <instance> is an integer n=1...
indicating that this is the <instance>th retry of that protocol. The set of
participants necessary for the <instance>th iteration of the protocol is derivable
from the slots table as committed in beads, in a decentralized manner, by each
participant independently. This allows multiple instances of each protocol to be
in-flight at the same time (a la ROAST), and multiple protocols to also be in-flight,
for multiple blocks, all at the same time. If, for instance, two bitcoin blocks are
mined in quick succession, it is natural to have all of this in flight simultaneously
without additional coordination.
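Constructing the path is straightforward; actually deriving the child pubkey from the
<comm_pubkey> xpub would use a BIP32 library and is omitted here.

```python
def slot_derivation_path(blockheight: int, protocol: int, instance: int) -> str:
    """Build the BIP32 public derivation path for a per-instance slot pubkey.

    Protocol codes follow the text (1=DKG, 2=FROST, ...); instance
    numbering starts at 1 for the first attempt.
    """
    assert instance >= 1, "instance numbering starts at 1"
    return f"m/{blockheight}/{protocol}/{instance}"
```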
How a node knows it is ready to move to the next epoch is protocol dependent, and is
generally decided based on the private communication it has received. If, for instance,
the protocol requires an aggregable Schnorr signature from a threshold number of
participants delivered privately, a node knows whether or not it has received this
data, and knows from the protocol definition that this must be complete before moving
to the next epoch.
As another example, consider re-sharding keyshares used in FROST. There we are changing
the threshold t and/or the participant set n (because, say, a new block was found and
we want to expire the oldest participant). In that case, each node needs to have
received t shares-of-shares of the old (VSS) key distribution. Once it receives t
shares, it can construct (via Shamir reconstruction) its new keyshare for the next
epoch. It then broadcasts its readiness to move to the next epoch in its slot (and
possibly a proof of possession of the relevant data). Once a threshold number of
participants have broadcast their readiness in this way, everyone has distributed
consensus (by looking at the slots and knowing the threshold) that the new set of
keyshares can be used, and new e.g. FROST signing rounds can be kicked off with the set
of participants from the new keyshare distribution. At that point participants delete
the old keyshares, effectively making it impossible to sign with the old keyshare. The
remaining n-t participants that did not broadcast their readiness can continue with the
protocol to receive their shares for this new epoch. By looking at the slots, all
participants can see that (for instance) one participant never broadcast his readiness
for the new epoch. So the key distribution is now (n-1) of t and one miner is not up to
date. There are several ways we could use that information (e.g. to increase
reliability).
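The Shamir reconstruction step in that resharding can be sketched as below. This is a
toy over a small prime field for illustration; real FROST arithmetic is over the
secp256k1 group order, and a real implementation would also verify each sub-share
against the VSS commitments.

```python
P = 2**31 - 1  # toy prime modulus (illustrative only)

def lagrange_coeff_at_zero(j: int, indices: list[int]) -> int:
    """Lagrange coefficient lambda_j evaluated at x=0, mod P."""
    num, den = 1, 1
    for m in indices:
        if m != j:
            num = num * (-m) % P
            den = den * (j - m) % P
    return num * pow(den, P - 2, P) % P   # modular inverse via Fermat

def new_keyshare(subshares: dict[int, int]) -> int:
    """Combine t shares-of-shares {old_index_j: f_j(i)} into new share s'_i.

    Each old participant j sends this node (index i) the point f_j(i) of
    a polynomial with f_j(0) equal to j's old keyshare; the Lagrange
    combination at x=0 yields the node's share of the unchanged secret.
    """
    idx = list(subshares)
    return sum(lagrange_coeff_at_zero(j, idx) * s
               for j, s in subshares.items()) % P
```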
I have had the same question come to me again and again in different contexts. I
created a clone of this gist and added questions there, so they have the context in
which I am asking them:
https://gist.github.com/pool2win/d84c42616b64f04c05f663a4ca0274a9
Copying them here too: