@DavidVorick
Last active October 17, 2017 21:13
Fee Market Mechanics Proposal

Purpose: Suggest a new method for choosing fees on a transaction. The goal is to get a transaction confirmed as cheaply as possible within the provided timeframe, with emphasis both on not missing the time window and on not exceeding the user's maximum fee.

Existing Problems:

  1. Transaction fee setting is currently very 'loose'. A single block will contain transaction fees ranging between 300 sats and 500 sats, which means that, at the very least, everyone who paid 500 sats could have gotten in at 300 sats. That's a huge spread.

  2. Loose fee markets generally drive prices up quite a bit. People see '300' and '500' as the 0-1 block confirmation rate, and that's where they set their expectations. They don't like it, but they know that's what the market is, so that's where they put their fees. Basically, we've ended up with a market where people are trying hard to make sure that their fees are good enough, instead of using the lowest fees that they could reasonably get away with.

  3. The emphasis on getting confirmed immediately is exploitable. All you need is one miner occasionally mining an empty block to really throw off the algorithms that figure out what sort of fee is needed to guarantee you get into the next block. Throwing away a block here and there isn't free for miners, but if by doing so they can drive the fee rates up by 25% or more for the rest of the market, they more than make that money back.

  4. The mechanism for setting the fee rate hardly responds at all to reduced fee pressure for a given block. When a block gets mined, all of the most expensive fees get taken out of the market, and that should be responded to by submitting lower fee transactions. If you have 20 minutes of slump in the number of high fee transactions being created, there should be an immediate response by the fee estimation to reduce the fee required.

----- proposed mechanics -----

Keep a historic record of the 90%-ile fee in each block. I assert that transactions which do not fall right at the 90%-ile mark are really not relevant. You don't need to know what people are over-paying, and you want to ignore the bottom 10% because of weird optimization edge cases that crop up when a miner is trying to solve the knapsack problem - the bottom 10% may be unexpectedly/unsustainably low for some blocks. We ignore empty space in the block, assuming it's due to a soft-fork low block size limit or something similar. If the block is completely empty, we store a '-' and simply don't use it during calculations.
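As a sketch of the bookkeeping above (function names are mine, and the nearest-rank percentile method is one of several reasonable choices):

```python
import math

def record_block_percentile(fee_rates, history, pct=90):
    """Append a block's 90%-ile fee rate to `history`.

    fee_rates: fee rate (e.g. sat/byte) of each transaction in the block.
    A completely empty block is stored as None (the '-' marker) so it can
    be skipped during later calculations.
    """
    if not fee_rates:
        history.append(None)
        return
    rates = sorted(fee_rates)
    # nearest-rank percentile: the tx sitting at the 90% position
    idx = max(0, math.ceil(len(rates) * pct / 100) - 1)
    history.append(rates[idx])
```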

This does mean that an attacker can manipulate us into under-estimating our required fees by adding 10% low-fee transactions to each block, but that's not incentive-compatible - why would a miner want the general public to reduce their fees? If that's an uncomfortable manipulation, change 90%-ile to 75%-ile or 50%-ile to make the malice more difficult, but that's also going to result in fee estimations that are unnecessarily high in the absence of malice.

Keep a historic record of the median 90%-ile value of the past 11 blocks for each block. This is going to help us figure out how dramatically the 90%-ile value tends to fluctuate over time, which is useful for fee estimation. We use the medians to help filter out attacker blocks.
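A sketch of that rolling median, assuming the per-block 90%-ile values are kept in a list with empty blocks stored as None:

```python
from statistics import median

def median_of_last_11(percentile_history, window=11):
    """Median of the 90%-ile fees over the last `window` blocks.

    None entries (empty blocks) are excluded, which is what filters
    out attacker blocks and soft-fork oddities.
    """
    vals = [v for v in percentile_history[-window:] if v is not None]
    return median(vals) if vals else None
```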

Keep a historic record of the amount of your mempool that clears each block. Each time there's a block, record how much (in bytes) of your mempool cleared, and then remember that value. We'll call these values the 'clear values'.

To know what fee is needed to get into the next block, look at your mempool. Set a fee that gets you into the top BLOCK_SIZE * ADJUST_FACTOR bytes, where ADJUST_FACTOR is determined by looking at the median 'clear value' over the past 7 blocks. Then multiply that byte target by 0.9 so that you've got some extra wiggle room for malice and such.
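The next-block estimate might look like the following sketch. BLOCK_SIZE, the (fee_rate, size) tuple layout of the mempool, and the exact placement of the 0.9 wiggle factor are my assumptions for illustration:

```python
from statistics import median

BLOCK_SIZE = 1_000_000  # bytes; assumed maximum block size

def next_block_fee(mempool, clear_values, wiggle=0.9):
    """Fee rate expected to land in the next block.

    mempool: list of (fee_rate, size_bytes) for pending transactions.
    clear_values: bytes of our mempool cleared by each recent block.
    """
    # ADJUST_FACTOR: what fraction of a block's worth of our mempool
    # actually tends to clear, per the median of the last 7 blocks
    adjust_factor = median(clear_values[-7:]) / BLOCK_SIZE
    # shrink the byte target by `wiggle` for margin against malice
    target_bytes = BLOCK_SIZE * adjust_factor * wiggle
    filled = 0
    for rate, size in sorted(mempool, reverse=True):  # highest rate first
        filled += size
        if filled >= target_bytes:
            return rate  # marginal fee rate at the clearing boundary
    return 0  # mempool smaller than the target; any fee should clear
```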

Use rapid RBF to keep updating your fees. As the mempool evolves, RBF your transactions whenever you no longer fit in the top BLOCK_SIZE * ADJUST_FACTOR, to put yourself back near the top of the mempool. If the wallet cannot stay online for some reason, it's probably not all that urgent to get into the next block, and the fee set before doing any RBF is probably going to get in within a few blocks anyway. Maybe multiply ADJUST_FACTOR by 0.5 again to give yourself extra margin.
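The RBF trigger can be as simple as this sketch (names and the mempool layout are assumptions, matching the earlier sketch):

```python
def needs_rbf(my_fee_rate, mempool, target_bytes):
    """True if our transaction has slipped out of the expected-to-clear region.

    mempool: list of (fee_rate, size_bytes) for the other pending transactions.
    target_bytes: BLOCK_SIZE * ADJUST_FACTOR (times 0.5 for extra margin).
    """
    # bytes of transactions currently paying a higher rate than ours
    bytes_above_us = sum(size for rate, size in mempool if rate > my_fee_rate)
    return bytes_above_us >= target_bytes
```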

Use the 90%-ile if predicting for N blocks, or if you don't have a good mempool. The mempool strategy is great to get you into the next block with a minimum fee, but isn't as strong if you are comfortable waiting N blocks, or if you don't have a mempool, or if you just turned your node on and you don't really know what the current state of the mempool is. So if we can't use the mempool strategy, we'll fall back to using 90%-ile block fees that we've been saving.

If we're trying to get confirmed quickly, we need to know the biggest variance from one median to the next. So we'll look at our historic '11-block medians' for the 90%-ile value over the past 144 blocks and find the largest increase from one to the next. Then we'll add that to the most recent 11 block median to get the fee we should use.
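A sketch of that worst-jump calculation, assuming the 11-block medians for the past 144 blocks are kept in a list, oldest first:

```python
def fast_confirm_fee(medians):
    """Fee for urgent confirmation: the most recent 11-block median,
    plus the largest block-to-block jump seen in recent history.
    """
    biggest_jump = max(b - a for a, b in zip(medians, medians[1:]))
    # if fees have only been falling, don't add a negative jump
    return medians[-1] + max(0, biggest_jump)
```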

If we're trying to save money and are willing to wait up to 'N' blocks, we need to estimate how far the fee is likely to fall over the next 'N' blocks. To get the 'swing' value of a range of 11-block medians, we take the highest value of the 11-block medians in that range and subtract the lowest. For example, if N=5 and our 11-block medians look like '5,6,7,4,5', the 'swing' over that range is 7-4 = 3. We'll scroll back through some reasonable amount of history (say 144 blocks) and find the lowest swing value in the medians for ranges of size 'N'. We'll then assume that we're going to be able to swing at least that much in the coming 'N' blocks. We don't know, however, whether we're already at a local minimum. So we'll look at the highest 11-block median value over the previous 'N' blocks and subtract our swing value from it. We can then reasonably assume that in the next 'N' blocks, the fee will likely swing that low again.
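The patient estimate above can be sketched as follows, again assuming the 11-block medians are a list, oldest first:

```python
def patient_fee(medians, n):
    """Fee expected to be reachable within the next n blocks.

    medians: 11-block medians over recent history (e.g. 144 blocks).
    """
    # smallest swing (max - min) observed in any window of n consecutive medians
    min_swing = min(
        max(medians[i:i + n]) - min(medians[i:i + n])
        for i in range(len(medians) - n + 1)
    )
    # assume we can swing at least that far down from the recent high,
    # in case we're currently at a local minimum
    recent_high = max(medians[-n:])
    return recent_high - min_swing
```

Using the document's example, `patient_fee([5, 6, 7, 4, 5], 5)` treats the whole list as one range with a swing of 3, and subtracts it from the high of 7.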


I realize this is mostly words for something that is largely mathematical. But I believe the system above is much better than the current system for fee estimation, and will have the following advantages:

  • Less likely to go 'N' blocks without a confirmation, even if the mempool is starting to fill up because we are approaching a rush hour

  • Much, much less likely to spend a large amount on fees when a lower amount would have gotten accepted in an equal number of confirmations.
