@pool2win
Created September 14, 2023 13:55
Each bead commits to:
| Bitcoin Block header | Slots Merkle Root |
| Coinbase | timestamp | nonce | TX Merkle Root | prev block |
Where the block header is a standard bitcoin block header, as expanded in the second
line, and each bead commits to the <Slots Merkle Root>, which should be added to the
<metadata> field as described in:
https://github.com/mcelrath/braidcoin/blob/master/braidpool_spec.md
The Slots data structure is a key-value table containing:
| slot pubkey | data |
We can Merkleize the entries in this table to create a root hash, which is the "Slots
Merkle Root". (We could also just hash the entire table in any deterministically
serialized form.) Each <slot pubkey> is an identifier that uniquely identifies the
owner of the slot. This data structure needs to have the property that all nodes can
look up <data> for each relevant <slot pubkey>. At any given time, defined by the current
bead tip (or a bead tip buried a defined number of beads in the past), there is a unique
value of <data> for each pubkey. This forms a linear chain of history for each
<slot pubkey>. By parsing beads, each <slot pubkey> will have over time:
<data1>      <data2>      <data3>      ...  <dataN>
<bead 345>   <bead 349>   <bead 384>   ...  <bead tip>
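As a concrete illustration of the "Slots Merkle Root", here is a minimal Python
sketch; the SHA256 leaves and the sorted-by-pubkey leaf ordering are assumptions,
since the exact scheme is deliberately left open above:

    import hashlib

    def slots_merkle_root(slots):
        # leaves: hash each (slot pubkey, data) entry, sorted by pubkey so
        # every node derives the same root from the same table
        layer = [hashlib.sha256(pk + data).digest()
                 for pk, data in sorted(slots.items())]
        if not layer:
            return hashlib.sha256(b"").digest()
        while len(layer) > 1:
            if len(layer) % 2:            # odd layer: duplicate the last hash,
                layer.append(layer[-1])   # as bitcoin's TX merkle tree does
            layer = [hashlib.sha256(a + b).digest()
                     for a, b in zip(layer[::2], layer[1::2])]
        return layer[0]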
It is similar to the UTXO set and can be maintained in a similar manner, with each bead
broadcasting only the transactions that update the Slots Table. Conflicts can only be
self-inflicted, so conflict resolution is not normally necessary; where it is needed, the
highest-weight parent can be used for each slot, as usual in Nakamoto consensus.
The Slots table is updated by owners of the <slot pubkey> by sending a "transaction"
over the p2p network:
Slot Transaction:
-----------------
| slot pubkey | newdata | signature |
Braidpool nodes accept a slot transaction if it is properly signed (with <newdata> as
the message) by the privkey corresponding to <slot pubkey>, and replace <data> with
<newdata>. The data is expected to be a 32-byte hash, as this data structure is for
commitments, not data broadcast. However, to simplify the logic, one can imagine that
<data> is the actual data itself and not a commitment.
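To make the acceptance rule concrete, here is a minimal sketch, assuming ECDSA over
secp256k1 via the third-party coincurve package (braidpool may well use BIP340
Schnorr signatures instead; the function and argument names are illustrative):

    from coincurve import PublicKey

    def accept_slot_tx(slots, slot_pubkey, newdata, signature):
        # the signature must commit to the replacement value <newdata>
        if not PublicKey(slot_pubkey).verify(signature, newdata):
            return False
        slots[slot_pubkey] = newdata   # replace <data> with <newdata>
        return True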
[kp]
This was really handy for understanding the concept of slots. I get it, and it
can actually be really useful.
I am still not sure it can work to define rounds for DKG. I have some
questions about it below.
[/kp]
The value of <data> is intended for use by external consensus protocols. These include
DKG protocols, which must have a minimum of 2 rounds [https://eprint.iacr.org/2023/1094],
FROST and MuSig2 signing, or other protocols requiring low-latency consensus. The
exact contents of <data> are therefore protocol-dependent.
In many protocols, consensus moves forward between "epochs" which are defined by all
relevant participants receiving all data necessary to move to the next epoch. In such
protocols, <data> is a message saying "I have received all data for epoch N".
[kp]
Question: How does a node know it has received all data in an
epoch? If an epoch is defined by all miners having received all data,
we can do that using the concept of message stability from the
DAG. However, that begs the question of group membership: we can't say
everyone has received a certain message without defining who
"everyone" is.
[/kp]
Let us
abbreviate this as "epoch N" indicating readiness to move to epoch N. Actual data is
not necessarily included in the table beyond the epoch number, and in most protocols
a private channel between participants is assumed through which non-public data must
be sent. Because the entire Slots Table is visible to all participants, one can define
a function given a set {<participant>} of pubkeys of participating parties in the
protocol:
    def move_to_next_epoch(participants, slots, epoch):
        # every participant's slot must read "epoch N" for the target epoch N
        return all(slots[p] == f"epoch {epoch}" for p in participants)
[kp]
Question: The problem is that a participant is required to say it is ready to move to
the next epoch. How does that participant know this?
[/kp]
The "all()" in the above pseudo-code may be replaced by a threshold in some protocols
if appropriate for that protocol.
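A hedged sketch of that variant (the quorum rule and the threshold parameter here
are assumptions; each protocol defines its own):

    def move_to_next_epoch_threshold(participants, slots, epoch, threshold):
        # count participants whose slots signal readiness for this epoch
        ready = sum(slots.get(p) == f"epoch {epoch}" for p in participants)
        return ready >= threshold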
[kp]
Question: The rest from here on talks about how we can manage slots and
pubkeys. The question in my head is still: how does a node know it is
ready to move to the next epoch?
[/kp]
Since most protocols require private data to be exchanged between participants, where
that is needed, braidpool's p2p network and ECIES with the <slot pubkey> can be used
to broadcast a private message to all participants, though only one participant can
decrypt it. This is not optimal, and it is better for peers to make direct IP
connections to each other for sending this data. However, the public broadcast of IP
and authentication information for each peer might be sent this way, with further
protocol-private messages sent directly between peers.
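A minimal sketch of such a private broadcast, using the third-party eciespy package
as a stand-in ECIES implementation (the network handle and its broadcast method are
hypothetical):

    from ecies import encrypt

    def private_broadcast(network, recipient_slot_pubkey, plaintext):
        # every node sees the ciphertext; only the slot-key owner can decrypt it
        ciphertext = encrypt(recipient_slot_pubkey, plaintext)
        network.broadcast(ciphertext)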
Before starting, all protocols must agree upon a set of {participant} pubkeys. How this
happens is protocol-dependent and happens out-of-band with respect to braidpool. For
Braidpool FROST signing, the set of participants is decided by looking at the last N
blocks successfully mined by braidpool, and the <comm_pubkey> published in the
corresponding bead for that block in the <Braidpool Metadata>. This <comm_pubkey> will
be used as the <slot pubkey> for that miner, as well as for ECIES-encrypted
communication with that miner by other peers. The value <N> is a bikeshedding parameter
that will be decided in the future based on the speed of FROST signing in the global
network, but it is expected to be in the range 20-2016, where we will decrease N if
FROST is slow and increase it if FROST is fast.
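A hypothetical sketch of deriving that participant set (the chain and bead accessors
are assumed interfaces, not an existing braidpool API):

    def frost_participants(chain, n):
        # collect <comm_pubkey> from the bead behind each of the last N
        # braidpool-mined blocks; a set deduplicates repeat miners
        participants = set()
        for block in chain.last_braidpool_blocks(n):
            bead = chain.winning_bead(block)
            participants.add(bead.metadata.comm_pubkey)
        return participants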
All consensus protocols have a notion of "failure", which is often set by a timeout in
"partially synchronous" protocols. When failure occurs it is necessary to restart the
protocol with a different set of participants. We suggest using a block-height based
timeout, which is monotonic in time; e.g. if F participants have missed epoch updates
for the last T_F beads (T_F = "threshold of failure"), a new protocol instance may be
kicked off using the N-F responding participants. We can keep two (or more)
instantiations of the protocol in flight simultaneously, and take whichever one finishes
first with the largest number of participants. As such, each participant needs to have
more than one slot allocated to themselves. Therefore we might consider expanding the
Slots Table to
| slot pubkey | purpose | data |
where, as a key-value table, (<slot pubkey>|purpose) is the key and <data> is the value.
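A toy illustration of that composite key (the purpose tag values are made up):

    def update_slot(slots, slot_pubkey, purpose, newdata):
        # key the table by (<slot pubkey>, purpose); the value is <data>
        slots[(slot_pubkey, purpose)] = newdata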
It may also be possible to use BIP32 pubkey derivation to derive a new pubkey for the
same participant. In the case of braidpool FROST signing, we can let <comm_pubkey> be
an xpub, from which we can derive a new pubkey following a BIP32 public derivation path:
m/<blockheight>/<protocol>/<instance>/
where <blockheight> is the bitcoin block height for which we are signing the coinbase,
<protocol> is an integer (1=DKG, 2=FROST, etc.), and <instance> is an integer n=1..
indicating that this is the <instance>th retry of that protocol. The set of participants
necessary for the <instance>th iteration of the protocol is derivable from the slots
table as committed in beads, in a decentralized manner, by each participant
independently. This allows multiple instances of each protocol to be in flight at the
same time (a la ROAST), and multiple protocols for multiple blocks to be in flight all
at the same time. If, for instance, two bitcoin blocks are mined in quick succession,
it is natural to have all of this in flight simultaneously without additional
coordination.
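A sketch of that derivation, using the third-party bip32 package (assumed available);
only non-hardened public derivation is possible from an xpub, which works here since
<blockheight>, <protocol>, and <instance> are small integers:

    from bip32 import BIP32

    def instance_pubkey(comm_xpub, blockheight, protocol, instance):
        # m/<blockheight>/<protocol>/<instance>, e.g. protocol 1=DKG, 2=FROST
        node = BIP32.from_xpub(comm_xpub)
        return node.get_pubkey_from_path(f"m/{blockheight}/{protocol}/{instance}")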