The purpose of this note is to describe a modification to proof of stake algorithms, as well as a proof of work variant, that uses a notion of increasing timeouts to ensure convergence in a partially synchronous environment. It builds on work by Andrew Miller et al, and its primary intended contribution is relaxing the network model from synchronous to partially synchronous. It also builds heavily on Sompolinsky and Zohar's work on GHOST.
* A synchronous network model is one where messages between correct nodes are guaranteed to arrive at the recipient within D seconds for some known D. Within that D, however, an attacker has absolute freedom to manipulate latencies of individual messages to attempt to prevent convergence.
* A partially synchronous model is similar, except D is unknown.
* GHOST is a blockchain fork choice rule that determines the "head" of a chain bottom-to-top rather than top-to-bottom: a pointer starts at the genesis block and climbs up the "block tree" emanating from the genesis; if at any step there is a choice of two or more child blocks, it picks the one with the most descendants. This has strong convergence properties, and it also ensures that even blocks that do not make it into the main chain are taken into account; this allows blockchains with very fast block times to avoid being vulnerable to attackers with much less than 50% hashpower who can achieve zero "stale rates" by carrying out their mining in a centralized way.
* GHOST blockchains can also be "inclusive", in the sense that a block can specify, alongside its parent, any number of other blocks that have not yet been included into the chain; transactions from these blocks can be processed in order, with a possibly reduced block reward applied. This increases efficiency, and also reduces centralization risks: blocks that are not quite at the "center" of the chain still get rewarded, so the incentive to centralize is not as high as it otherwise would be.
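The bottom-to-top walk described above can be sketched in a few lines. This is a minimal illustration, not an implementation from the source: the block-tree representation (a `children` map from block to list of child blocks) and the function names are assumptions made for the example.

```python
# Minimal sketch of the GHOST fork choice rule, assuming the block tree
# is stored as a map from each block to its list of children.

def subtree_size(children, block):
    """Number of blocks in the subtree rooted at `block` (itself included)."""
    return 1 + sum(subtree_size(children, c) for c in children.get(block, []))

def ghost_head(children, genesis):
    """Start at genesis and climb the tree; at each fork, follow the
    child whose subtree contains the most blocks."""
    head = genesis
    while children.get(head):
        head = max(children[head], key=lambda c: subtree_size(children, c))
    return head
```

Note that even blocks off the eventual main chain contribute to the walk: in a tree where `A` has two children and `B` has one, the pointer follows `A` even if `B`'s chain is "longer" in the naive sense.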
Proof of stake algorithm:
* Suppose that there exist timeslots, sequentially numbered from 0, during which blocks can be produced. If some mathematical condition is satisfied, then a given validator can produce a block during timeslot `n`; other validators will accept this block only once their local timestamp is greater than or equal to the time corresponding to that timeslot. The probability that a given validator will be able to produce a block during any given timeslot is proportional to the validator's staked balance. (This is how all basic PoS algorithms work.)
* A GHOST rule is used to determine the "main chain", with one exception: when "scoring" a block whose parent is associated with timeslot `n`, only descendants associated with timeslot `n + k^2` for integer `k >= 1` are counted.
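The `n + k^2` filter in the scoring rule can be made concrete as follows. This is a sketch under stated assumptions: blocks are represented only by their timeslot numbers, and the function names are illustrative, not from the source.

```python
# Sketch of the modified GHOST scoring rule: when scoring children of a
# parent at timeslot n, a descendant counts only if its timeslot is
# n + k^2 for some integer k >= 1.

import math

def counts_toward_score(parent_slot, block_slot):
    """True iff block_slot == parent_slot + k^2 for some integer k >= 1."""
    delta = block_slot - parent_slot
    if delta < 1:
        return False
    k = math.isqrt(delta)
    return k * k == delta

def score(parent_slot, subtree_slots):
    """Score of a child: the number of blocks in its subtree whose
    timeslots pass the k^2 filter relative to the decided parent."""
    return sum(1 for s in subtree_slots if counts_toward_score(parent_slot, s))
```

So for a parent at timeslot 10, only blocks at timeslots 11, 14, 19, 26, ... are counted; blocks at the skipped timeslots exist but do not affect this particular decision.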
We can sketch an argument that this converges in a partially synchronous model: define "virtual time" as `sqrt(t - n)`, where `t` is the current timeslot and `n` is the timeslot of the parent block that has already been "decided" as being part of the main chain. Up to one new message relevant to determining the child of that parent is created each virtual tick (zero if the eligible validator is offline or forgets). Furthermore, measured in virtual time, the network latency decreases, and so is eventually permanently less than one tick, at which point already established arguments can be used to show that the system will converge toward selecting one child.
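The shrinking-latency step of the argument can be checked numerically. This is an illustration under the note's own definitions, with an assumed fixed (but unknown) real latency `D`: a real delay of `D` timeslots starting at timeslot `t` spans the virtual interval `sqrt(t + D - n) - sqrt(t - n)`, which tends to zero as `t` grows.

```python
# Numerical check of the virtual-time argument: a fixed real latency D
# covers an ever-smaller interval of virtual time sqrt(t - n), so it is
# eventually below one virtual tick regardless of how large D is.

import math

def virtual_latency(t, n, D):
    """Virtual-time span of a real delay of D timeslots starting at t."""
    return math.sqrt(t + D - n) - math.sqrt(t - n)
```

For example, with `D = 100` and `n = 0`, the virtual latency is above 9 ticks at `t = 1` but below 1 tick by `t = 10000`; since `D` only needs to be finite, not known, this is where the partial synchrony assumption enters.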
(Note: Andrew Miller's arguments were made with respect to proof of work; they have not yet been rigorously adapted to proof of stake, so the claim that similar logic works in a proof of stake blockchain should currently be viewed as conjecture.)
Proof of work algorithm:
* Start with a GHOST blockchain, but when determining which child to "decide" of a given (already decided) parent, count a maximum of one block with mining result less than `t` (where `t` is the target) toward the score of each child, a maximum of two blocks with mining result less than `t / 2`, a maximum of four blocks with mining result less than `t / 4`, etc.
Here, the argument is that if convergence keeps failing, then the lower "slots" will completely fill, and so the "block time" that is relevant to making the given decision will continue doubling until eventually it is sufficiently high that convergence is assured.
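One way to read the capped counting rule is as a set of "slots": tier `i` holds at most `2^i` counted blocks, each of which must have a mining result below `t / 2^i`. The sketch below assumes this slot reading and a greedy assignment (least-work blocks placed first, since they qualify for the fewest tiers); the names are illustrative.

```python
# Sketch of the proof of work variant's capped scoring: tier i has 2**i
# slots, each requiring a mining result below target / 2**i. Once the
# shallow tiers fill, only blocks with roughly double the work can still
# count, so the effective block time for this decision doubles.

def capped_score(results, target, max_tier=32):
    free = {i: 2**i for i in range(max_tier)}  # remaining slots per tier
    counted = 0
    # Largest results (least work) qualify for the fewest tiers, so
    # place them first; smaller results qualify for a superset of tiers.
    for r in sorted(results, reverse=True):
        for i in range(max_tier):
            if free[i] > 0 and r < target / 2**i:
                free[i] -= 1
                counted += 1
                break
    return counted
```

With `target = 100`, two ordinary blocks (results 90 and 80) score only 1, because tier 0 has a single slot; adding blocks with results below 50 fills tier 1's two slots, matching the doubling behavior described above.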