A proposal of a Bitcoin Cash difficulty adjustment algorithm
Title: DAA improvement proposal (Billy the Algorithm)
Author: Karol Trzeszczkowski (Licho)
Status:
Type:
Created: 2020-05-01

Abstract

The current Bitcoin Cash difficulty adjustment algorithm exhibits oscillatory behavior that incentivizes switch mining, causing irregular block times and increasing the average transaction confirmation time. This document describes and specifies a new algorithm that has no oscillatory behavior and possesses all of the good qualities of the current algorithm.

Introduction

The fundamental role of a difficulty adjustment algorithm (DAA) is to estimate the computing power working on block production (the hashrate) in order to keep production steady in a changing environment. It plays a crucial role in a blockchain's life - it is responsible for the currency emission schedule, chain shock tolerance and user experience. Dynamic DAAs, adjusting difficulty on every block, tend to misinterpret some events as changes in the hashrate. The current Bitcoin Cash DAA misinterprets a hashrate spike or ditch event leaving the 144-block averaging window as a change in the current computing power, affecting difficulty when that happens. This results in a change in relative coin profitability and incentivizes miners to switch the coin they are mining. It creates a vicious cycle, where earlier spikes provoke new spikes. To solve this problem, many alternative algorithms have been proposed. However, none of them possesses the other positive qualities of the current DAA.

Requirements

A DAA should:

  1. Accurately estimate the hashrate currently working on the chain,
  2. React symmetrically to hashrate changes,
  3. Be deterministic - the next work is known at the time of block construction,
  4. Be backward compatible,
  5. Be fast,
  6. Be SPV-friendly,
  7. Be simple, easy to understand and evaluate from the security point of view,
  8. Be a low-pass filter,
  9. Use integer math,
  10. Keep the block production steady.

Algorithm Specification

The key feature of the proposed algorithm is performing an exponential moving average on both difficulty and solve time separately. For this purpose an additional quantity, nAvgSolveTime, shall be defined. It can be calculated by iterating over the past blocks only once; it can then be held in memory and used to calculate the next nAvgSolveTime recursively.

The current block's nAvgSolveTime is calculated from the previous block's solve time and the previous nAvgSolveTime by the equation:

current_nAvgSolveTime = α * previous_solve_time + (1-α) * previous_nAvgSolveTime,

where α is a coefficient in the interval (0,1).
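
As a non-normative illustration, the recursion and its one-time initialization could look like the following Python sketch. The function names and the example value α = 0.01 are illustrative choices, not part of the specification:

```python
# Illustrative sketch, not consensus code.
ALPHA = 0.01  # example smoothing coefficient, 0 < ALPHA < 1

def update_avg_solve_time(previous_solve_time, previous_avg_solve_time):
    """One recursive step: current_nAvgSolveTime from the previous block."""
    return ALPHA * previous_solve_time + (1 - ALPHA) * previous_avg_solve_time

def init_avg_solve_time(historical_solve_times, seed=600):
    """One-pass initialization over past blocks; afterwards only the
    recursive update above is needed."""
    avg = seed
    for solve_time in historical_solve_times:
        avg = update_avg_solve_time(solve_time, avg)
    return avg
```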

To establish the n-th previous_solve_time, two suitable timestamps are first chosen: the median of the timestamps at heights n, n-1 and n-2, and the median of the timestamps at heights n-1, n-2 and n-3.

t1 = median(t[n], t[n-1], t[n-2])

t2 = median(t[n-1], t[n-2], t[n-3])

previous_solve_time = t1-t2

The next block's difficulty is calculated from the current block's difficulty and nAvgSolveTime by the equation:

next_difficulty = α * (current_difficulty / current_nAvgSolveTime) * BLOCK_INTERVAL + (1-α) * current_difficulty,

where α is the same coefficient as before and BLOCK_INTERVAL is the target solve time, either equal to 600 seconds or determined by the BLOCK_INTERVAL adjustment mechanism.
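
The whole adjustment step can be summarized in a short Python sketch. This is only an illustration of the equations above, assuming α = 0.01 as an example value and using floating point for brevity; a consensus implementation would use integer math (Req. 9):

```python
from statistics import median

ALPHA = 0.01          # example smoothing coefficient
BLOCK_INTERVAL = 600  # target solve time in seconds

def adjust(t, current_difficulty, previous_avg_solve_time):
    """One adjustment step: t holds block timestamps indexed by height,
    with t[-1] belonging to the current block at height n.
    Returns (next_difficulty, current_avg_solve_time)."""
    t1 = median([t[-1], t[-2], t[-3]])  # heights n, n-1, n-2
    t2 = median([t[-2], t[-3], t[-4]])  # heights n-1, n-2, n-3
    previous_solve_time = t1 - t2

    current_avg_solve_time = (ALPHA * previous_solve_time
                              + (1 - ALPHA) * previous_avg_solve_time)

    next_difficulty = (ALPHA * (current_difficulty / current_avg_solve_time) * BLOCK_INTERVAL
                       + (1 - ALPHA) * current_difficulty)
    return next_difficulty, current_avg_solve_time
```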

Rationale

This is an exponential moving average type algorithm. Normally (e.g. EMA-Z) an exponential moving average DAA uses only one variable and on every iteration adds a term weighted by the recent solve time. That breaks Req. 4 because it won't accept a negative solve time. This differs from what the current BCH algorithm does - the current BCH DAA averages both the time and the work done during that time period separately, with the same weight. The proposed algorithm is similar to this approach. Averaging both solve time and difficulty retains the information of how much work was needed to produce such a solve time (Req. 1). (current_difficulty / current_nAvgSolveTime) is a measure of the current hashrate. (current_difficulty / current_nAvgSolveTime) * BLOCK_INTERVAL can be interpreted as a quantity proportional to the number of hashes done during a single BLOCK_INTERVAL period, if the hashrate is what we measured. The difficulty is then averaged with weights forming a partition of the unit interval. The proposed algorithm also reduces the risk of having a negative weight, allowing for unordered solve times and preserving compatibility with current ASIC circuits (Req. 4). An additional safeguard of the median time of three blocks is added.

Additionally, EMA algorithms do not average difficulty but target, which is inversely proportional to solve probability and thus nonlinear. As a result, the expected time goes down faster than it goes up. Difficulty, being a proxy for the projected work necessary to solve a block, is directly proportional to solve probability. Using it results in a symmetric, linear response of the proposed algorithm (Req. 2), although there is an additional minor nonlinearity in both approaches originating from faster sampling during a hashrate spike and slower sampling during a hashrate ditch. The effect is still a few times smaller than in the current DAA. In a scenario of a 10x hashrate increase lasting 10 days followed by a return to the previous hashrate, the described algorithm made on average a 5x smaller error than the current DAA.

Next, the difficulty can be easily calculated based on the data contained in the headers. It is deterministic (Req. 3), SPV-friendly (Req. 6), fast (Req. 5), can be implemented using integer math (Req. 9) and is indeed simple to understand and evaluate (Req. 7).

Finally, the constant-α nAvgSolveTime recursion is a discrete low-pass filter of the solve time signal with the cutoff frequency of:

fc = α / [(1-α) * 2π * ΔT]

where ΔT is 600 seconds. The filtered data is then used for the difficulty adjustment (Req. 8).

Hashrate estimation

Requirement 1 states that the algorithm should estimate the hashrate currently working on the chain. This may seem circular, because the work is estimated from the very difficulty that we adjust. It is not, because once the target is established, the process that leads to finding another block is known. The moment we set the target we know how many operations it will take, on average, to solve the block; therefore the solve time carries the information about the hashrate. This is also why Req. 3 is important. Requirement 1 is equivalent to preventing so-called "DAA gaming". Gaming means tricking the algorithm into believing that the hashrate working on the chain is different than it actually is. The current BCH DAA is a finite impulse response low-pass filter, and miners reacting to changes are equivalent to an inductance in a circuit, so a narrow band of resonant frequencies may appear near the cutoff frequency, opening an opportunity for gaming. The proposed algorithm has an infinite impulse response, so events fade away smoothly. Since all frequencies are dampened, there should be no amplification and no resonance, provided the smoothing parameter α is large enough.

Timestamp attacks

Reverse timestamp attack:

In case of a single out-of-order timestamp surrounded by steady solve times, the median of three timestamps makes one previous_solve_time equal to zero and the next one two times larger. Following the equation for current_nAvgSolveTime, the zero time decreases the nAvgSolveTime variable by α and the doubled solve time evens out the difference, leaving it α² times higher than when no attack occurs. α is a small parameter, so its square is accordingly smaller. The difficulty is first increased by α²/(1-α) and then evened out, differing by a factor of the third order in α. For α around 0.01 the resulting difficulty deviation is of the order of 0.000001.

Forward Time Limit attack:

The MAX_FUTURE_BLOCK_TIME constant for Bitcoin Cash is 2 hours. A single timestamp like this would be worked around by the median, so the behavior is identical to the reverse timestamp attack. If, however, two subsequent blocks were stamped around such a time in the future, the 20x block solve time would increase the nAvgSolveTime by 21α, a zero solve time would follow as before, and a single negative previous_solve_time would decrease nAvgSolveTime by 18α. For α=0.01 this means multiplying nAvgSolveTime by factors of 1.19, 0.99 and 0.80. The resulting change in the averaged solve time is 0.94. The difficulty is initially decreased by 0.8% and finally increased by 0.06%. Two blocks with a 2-hour future time would affect the current DAA by decreasing the difficulty by 8%. The proposed algorithm performs an order of magnitude better.
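
Both scenarios can be replayed with a small toy simulation. The sketch below is not consensus code: it assumes α = 0.01, a constant honest hashrate and illustrative timestamp offsets, and simply prints how far the final difficulty drifts from the unperturbed chain; the exact figures depend on where the offsets land relative to the median filtering, so they will not exactly match the back-of-the-envelope numbers above:

```python
from statistics import median

ALPHA, T = 0.01, 600  # example smoothing coefficient and block interval in seconds

def final_difficulty(timestamps):
    """Replay the proposed adjustment over a list of block timestamps."""
    difficulty, avg_solve = 1.0, float(T)
    for n in range(3, len(timestamps)):
        t1 = median(timestamps[n - 2:n + 1])  # heights n, n-1, n-2
        t2 = median(timestamps[n - 3:n])      # heights n-1, n-2, n-3
        avg_solve = ALPHA * (t1 - t2) + (1 - ALPHA) * avg_solve
        difficulty = ALPHA * (difficulty / avg_solve) * T + (1 - ALPHA) * difficulty
    return difficulty

honest = [i * T for i in range(60)]

reverse = honest.copy()
reverse[30] -= 700        # one out-of-order timestamp

forward = honest.copy()
forward[30] += 7200       # two blocks stamped two hours in the future
forward[31] += 7200

for name, chain in [("reverse timestamp", reverse), ("forward time limit", forward)]:
    deviation = final_difficulty(chain) / final_difficulty(honest) - 1
    print(f"{name}: relative difficulty deviation {deviation:+.6%}")
```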

Further reading: attacks described by Zawy12.

Trade-offs

The obvious trade-off is adding a new variable to be kept in memory. Some thin wallets do check difficulty, so they would have to be updated to store nAvgSolveTime once it is initially calculated.

Summary

The proposed algorithm fulfills the requirements listed at the beginning, and the trade-offs are not prohibitive to its implementation.

Acknowledgments

I want to thank @huckfinne for financially supporting my continuous work on BCH and Checksum0 for the exhaustive answers to my questions. Thanks to Imaginary username for the initial review of this document and the suggested simplification of this scheme. Also thanks to zawy12 for all the work he published. Thanks to Amaury Séchet for the review and the suggestion of the BLOCK_INTERVAL adjustment mechanism.

@jtoomim

jtoomim commented May 27, 2020

Yes, that makes it much clearer what happened. You applied two approximation steps in the denominator but not the numerator. Those approximations have a larger error margin than you realized, and are the reason why α + β != 1 in most circumstances, and also why your code performs like a first-order EMA rather than a second-order EMA.

@KarolTrzeszczkowski
Author

Changing it makes no impact on the result
