
O(1) Block Propagation

The problem

Bitcoin miners want their newly-found blocks to propagate across the network as quickly as possible, because every millisecond of delay increases the chances that another block, found at about the same time, wins the "block race."

With today's p2p protocol, new block announcements include all of the data for every transaction in the block. This gives miners an incentive to limit the number of transactions they include: a transaction must pay the miner more in fees than the miner statistically expects to lose from the increased chance of losing a block race. This is inefficient (transaction data is transmitted across the network twice, using twice as much bandwidth) and artificially pushes transaction fees much higher than they need to be.

Proposed approach

Each fully-validating node on the Bitcoin network maintains a transaction memory pool, which is a list of valid but unconfirmed transactions. Transactions are added to nodes' memory pools as they are relayed across the network, and are removed from memory pools when new blocks are found. Mining nodes select a subset of memory pool transactions to include in their new blocks.

If memory pools were perfectly synchronized and all miners had exactly the same policy for choosing which transactions to include in their blocks and in what order to include them, then miners could just announce new blocks as the fixed 80-byte block header and the coinbase transaction; there would be no need to include any transaction data at all, since it could be reconstructed from their peers' memory pools.

In the real world, memory pools are not perfectly synchronized. However, they tend to be very similar, and we can take advantage of that similarity to optimize the amount of information that needs to be transmitted when a new block is found.

Set reconciliation and Invertible Bloom Lookup Tables

Set reconciliation means finding the difference between two sets of data. To optimize new block broadcasts, the sets we care about are the set of transactions in a new block (call that set "B") and the set of transactions that a node already knows about (call that set "P").

Ideally we want to transmit just the 80-byte block header and transaction data for B-P. Both B and P will grow as transaction volume and block sizes grow, but if transaction propagation is mostly reliable, B-P will stay roughly constant in size (see the section below on incentives for more discussion).

Invertible Bloom Lookup Tables (IBLTs) are a new (described in 2011) data structure that can be used to solve the set reconciliation problem.

The idea of an IBLT is to squash together (using hashing and XOR) all of the transaction data into a fixed-size data structure. Then transmit that squashed data to a peer, who mostly knows what data you are sending them, but might be missing a few transactions or might be expecting that you put a few transactions in your block that you actually didn't.

The size of the IBLT needed for all of that to work is proportional to the number of differences between what you send and what your peer expects.
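The squash-subtract-peel mechanics can be sketched in Python. This is a toy illustration, not Bitcoin code: the cell layout (count, key XOR, value XOR), the three hash functions, and the "pure cell" peeling test are the standard choices from the Goodrich–Mitzenmacher paper, and the small integer keys/values stand in for the txid48+sequence keys and 8-byte chunks described later.

```python
import hashlib

NUM_HASHES = 3  # distinct cells each key is hashed to (a typical choice)

class IBLT:
    """Minimal invertible Bloom lookup table: each cell holds
    [count, XOR of keys, XOR of values]. A sketch for illustration only."""

    def __init__(self, m):
        self.m = m
        self.cells = [[0, 0, 0] for _ in range(m)]

    def _indices(self, key):
        # Derive NUM_HASHES distinct cell indices deterministically from the key.
        idxs, n = [], 0
        while len(idxs) < NUM_HASHES:
            h = hashlib.sha256(n.to_bytes(2, "big") + key.to_bytes(8, "big")).digest()
            i = int.from_bytes(h[:8], "big") % self.m
            if i not in idxs:
                idxs.append(i)
            n += 1
        return idxs

    def _update(self, key, value, sign):
        for i in self._indices(key):
            cell = self.cells[i]
            cell[0] += sign
            cell[1] ^= key
            cell[2] ^= value

    def insert(self, key, value):
        self._update(key, value, +1)

    def subtract(self, other):
        # Cell-by-cell difference: O(m), independent of how many pairs were inserted.
        diff = IBLT(self.m)
        for i in range(self.m):
            a, b = self.cells[i], other.cells[i]
            diff.cells[i] = [a[0] - b[0], a[1] ^ b[1], a[2] ^ b[2]]
        return diff

    def decode(self):
        """Peel 'pure' cells until empty. Returns (only_in_self, only_in_other)."""
        ours, theirs = [], []
        progress = True
        while progress:
            progress = False
            for i, cell in enumerate(self.cells):
                if cell[0] in (1, -1) and i in self._indices(cell[1]):
                    key, value, sign = cell[1], cell[2], cell[0]
                    (ours if sign == 1 else theirs).append((key, value))
                    self._update(key, value, -sign)
                    progress = True
        if any(c != [0, 0, 0] for c in self.cells):
            raise ValueError("decode failed: difference too large for this table size")
        return ours, theirs
```

Note that the table size here (`m`) only needs to scale with the *difference* between the two sets, not with the sets themselves, which is the whole point of the proposal.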

There are three key ideas to make this work:

  1. canonical ordering of transactions in blocks
  2. similar (but not identical!) policy for selecting which mempool transactions go into blocks
  3. an IBLT(newblock), sent by the peer, large enough that we are very likely to be able to recover any transaction data not yet in our mempool

Canonical ordering of transactions

I propose the following algorithm as the canonical ordering for transactions in a block. Canonical ordering will not be a consensus rule; it is just used to eliminate the need to communicate the order of transactions when IBLTs are used.

  1. Start with the set of transactions in the block (including the coinbase transaction)
  2. Sort them in ascending order, using the smallest (previous-transaction-hash, index) of each transaction's inputs as the sort key. Note that the coinbase transaction will be first, since its single input's prevout/index is all zero.
  3. Add to the block the first transaction on the sorted list that does not depend on a transaction later in the list (and remove it from the sorted list).
  4. Continue adding transactions until the sorted list is empty.
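The steps above can be sketched in Python, with transactions modeled as plain dicts carrying a `txid` and a list of `(prev_txid, index)` inputs -- a toy model for illustration, not the real serialization:

```python
def canonical_order(txs):
    """Order block transactions per the proposed canonical rule (sketch)."""
    # Step 2: sort by the smallest (prev-txid, index) among each tx's inputs;
    # the coinbase's all-zero prevout guarantees it sorts first.
    pending = sorted(txs, key=lambda tx: min(tx["inputs"]))
    ordered = []
    # Steps 3-4: repeatedly take the first tx that does not spend an output
    # of a transaction still later in the sorted list.
    while pending:
        for i, tx in enumerate(pending):
            later = {t["txid"] for t in pending[i + 1:]}
            if not any(prev in later for prev, _ in tx["inputs"]):
                ordered.append(pending.pop(i))
                break
    return ordered
```

The inner loop always finds a candidate (the last list element trivially has no later transactions), so the procedure terminates for any valid block.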

Communicating transaction selection policy

The reference implementation allows miners to specify:

  1. the maximum size (in bytes) of their blocks
  2. the maximum size set aside for "high priority" transactions, which are included regardless of fee

Transactions are selected based on priority and then fee-paid-per-kilobyte.

Communicating the 80-byte block header, the size (in bytes) of the transaction data, and the size (in bytes) of the high-priority transactions is sufficient for the IBLT set reconciliation algorithm to work.
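As a sketch, the whole compact announcement might be packed like this. The field layout, widths, and helper name are my own illustration of the fields listed above, not a specified protocol message:

```python
import struct

def make_iblt_announcement(header80, txdata_bytes, high_prio_bytes, salt, iblt_payload):
    """Pack a hypothetical compact block announcement:
    80-byte header | total tx data size | high-priority size | salt | IBLT."""
    assert len(header80) == 80
    return (header80
            + struct.pack("<II", txdata_bytes, high_prio_bytes)  # two size hints
            + salt                                               # per-block salt
            + iblt_payload)                                      # serialized IBLT
```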

Encoding transaction data in the IBLT

Invertible Bloom Lookup Tables store key-value pairs. So given a set of transactions in a new block, what is the best way to encode them in an IBLT? What should be used as the key, and what the value?

We want the values in the IBLT to be a fixed, not-too-long length. Transmitting an IBLT with values that can hold the maximum possible transaction size (100K bytes) would not save any bandwidth. So I propose that the transaction data be serialized and split into 8-byte (64-bit) chunks. Two bytes of the key can then be used as a sequence number to reassemble the transaction.

We want keys to be unique, but only for the transactions in this block; they don't have to be globally unique. Using 48 bits of the transaction data's hash gives about a 0.1% chance of a collision even if we scale up to blocks with a million transactions in them, and combined with a 16-bit sequence number that gives 64-bit keys. The transaction selection code can simply refuse to select transactions whose hashes collide with a transaction already in the block (they will just have to wait and get confirmed in the next block; a 0.1% chance of collision in million-transaction blocks means one transaction in a billion gets delayed).
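A sketch of that key/chunk layout (a hypothetical helper: the pad-to-8-bytes step is my assumption, since a real encoding would also need to carry each transaction's exact length; the salting discussed below is omitted here for clarity):

```python
import hashlib

def tx_chunks(txid, tx_bytes):
    """Split a serialized transaction into (64-bit key, 8-byte chunk) pairs:
    key = 48 bits of the transaction hash concatenated with a 16-bit
    sequence number used to reassemble the chunks in order."""
    txid48 = int.from_bytes(txid[:6], "big")            # 48 bits of the tx hash
    padded = tx_bytes + b"\x00" * (-len(tx_bytes) % 8)  # pad to 8-byte multiple
    return [((txid48 << 16) | seq, padded[8 * seq:8 * seq + 8])
            for seq in range(len(padded) // 8)]
```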

Using the transaction hash directly opens up a possible denial-of-service attack; an attacker might try to brute-force the first 48 bits of the transaction id to keep somebody else's transaction from getting included. Or they might try to create transaction ids that collide in the hash buckets used internally by the IBLT. Those attacks can be prevented by combining the transaction ID with 48 bits of random "salt" (which an attacker cannot know) before using it as a key in the IBLT.

Pseudo-code algorithm for encoding a new block in an IBLT:

Pick transactions from the memory pool to be in the new block, and
put them in the block in canonical order.

Create a new empty IBLT that is large enough to encode expected
differences between the block we created and the blocks our peers
are creating. Select a random 48-bit salt.

Foreach transaction, including the coinbase:
  Calculate the 48-bit txid+salt
  Serialize the transaction and split it into 64-bit chunks
  Foreach chunk i:
    insert key=txid48+i, value=chunk

Search for a solution to the new block.

When a block solution is found:

Foreach peer running IBLT-capable version of the protocol:
  Transmit block header, sizeof(txdata), sizeof(high_priority_tx), salt, IBLT
Foreach peer running an older version of the protocol:
  Send the whole block in a 'block' message

Securely calculating txid48

To prevent brute-force attacks, we want at least 128 bits of the transaction ID to contribute to the IBLT key. I propose hashing the transaction ID with a 256-bit salt chosen by the miner, and taking 48 bits of the result.
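A minimal sketch of that salted key derivation. The proposal only says "hashing", so the choice of SHA-256 here is my assumption:

```python
import hashlib
import os

def txid48(txid, salt):
    """48-bit IBLT key component: hash the full 256-bit txid together with a
    256-bit per-block salt, then keep 48 bits of the result. An attacker who
    does not know the salt cannot grind transactions into colliding keys."""
    return int.from_bytes(hashlib.sha256(salt + txid).digest()[:6], "big")

salt = os.urandom(32)  # chosen fresh by the miner for each block
```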

Decoding an IBLT new block message

Peers receiving the block header and IBLT could immediately relay it to their peers after checking the proof-of-work in the block header, although it is debatable whether they should do so before decoding the IBLT and validating the transactions.

Pseudo-code for decoding the data in an IBLT:

When an IBLT for a new block ("IBLT_new") is received from a peer:

Create a new, empty IBLT ("IBLT_us") and fill it with transactions from the memory
pool (based on byte-size hints received from peer).

Calculate IBLT_diff = IBLT_new - IBLT_us  (this takes constant time)

Decode IBLT_diff; if decoding succeeds, the result is two lists of
(txid48+sequence, chunk) : transactions that the peer included
that need to be added to the block we created from our memory pool, and
transactions that the peer did not include that we need to remove.

Concatenate the chunks and deserialize the transactions, then insert
them into the new block in canonical order. Then proceed with all the
normal block and transaction validation.

If decoding fails (memory pools or mining policies differ too much, or a bug
in the peer's IBLT creation code leaves out a chunk of transaction data or
something), fall back to getdata/block messages or maybe ask for an extension
of the IBLT that will successfully decode.
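The reassembly step -- grouping recovered (key, chunk) pairs by their 48-bit prefix and concatenating the chunks in sequence order -- might look like this (a hypothetical helper; the trailing-zero strip is a naive stand-in, since a real encoding would carry each transaction's exact length):

```python
from collections import defaultdict

def reassemble(pairs):
    """Group decoded (key, chunk) pairs by 48-bit txid prefix and join the
    8-byte chunks in sequence-number order."""
    by_tx = defaultdict(dict)
    for key, chunk in pairs:
        by_tx[key >> 16][key & 0xFFFF] = chunk   # prefix -> {seq: chunk}
    txs = {}
    for prefix, chunks in by_tx.items():
        data = b"".join(chunks[seq] for seq in sorted(chunks))
        txs[prefix] = data.rstrip(b"\x00")       # naive: assumes txs don't end in 0x00
    return txs
```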

What about blockchain re-orgs?

All of the above assumes we are receiving a new block building on our current idea of the best block chain.

If our current best chain is A->B, and we receive IBLTs for an alternative longer chain with blocks B' and C', how do we reconstruct B' ?

The simplest answer is to just request the full block data for B' -- re-orgs are rare.

But the set of transactions in B and B' should be very similar, so it might be worthwhile to reconstruct B' from its IBLT and an IBLT created from block B.

Alternative or complementary approaches

"I know that you know..."

Matt Corallo has implemented a fast block relayer tool. It keeps track of which transactions have been sent to each peer, and then when a new block is found it replaces transaction data that it knows a peer has with 10 bytes of the transaction ID.

Transmitting just 10 bytes of the transaction ID instead of the full average-250-byte transaction gives a 25-times bandwidth savings.

TODO: measure Matt's approach against an IBLT approach. My guess is at small block sizes the simpler approach works better, but at some point IBLTs win.

THOUGHT EXPERIMENT: IBLT-ify the data Matt's fast relayer sends? Should theoretically get the same constant factor of 25 improvement but be O(1) instead of O(n)....

Relay just headers

Several people have proposed relaying just the 80-byte block header, with miners checking the proof-of-work and then beginning to mine an empty block on top of whichever header has the most proof-of-work (assuming that nobody wastes time creating a valid-proof-of-work block containing invalid transactions). Once the full transaction data was transmitted, miners would switch to mining a block full of transactions.

See bitcoin/bitcoin#3195 for discussion of why this is not a good idea.

Pre-broadcast blocks

Miners could broadcast the blocks they are working on to the network before they have solved any blocks. If they then do find a block, they could then just broadcast the winning nonce and coinbase transaction (and some identifier for "that unsolved block I sent you a while ago").

I think broadcasting "working blocks" is a great idea, for a couple of reasons.

First, it gives merchants an idea of what percentage of hashing power is working on including their transaction in the blockchain, helping make 0-confirmation transactions more secure (if 90% of active miners are working to include your transaction after a few seconds, and none are working on including a double-spend, then it is reasonably likely it will be confirmed in the next block or two).

Second, it gives other miners a view into what the majority of hashing power is working on, and should allow miners to better monitor the health of their network connections.

A proposal for pre-broadcasting unsolved blocks would have to address bandwidth usage and denial-of-service attacks; perhaps such a proposal could use IBLTs to minimize the amount of memory and bandwidth used by the nodes on the network.

Road map / TODO

Instrument some bitcoind nodes running on the production network to find out how much memory pools and transaction selection policies differ today.

Calculate the IBLT size needed for that set size difference to achieve 1% decoding failure.

Create a test bench environment with real-world transactions and blocks, so the IBLT approach can be compared against other approaches or optimizations to the IBLT code can be measured to make sure they actually improve latency.

Write code, work out all the details, benchmark/test along the way.

Once stable and starting to be deployed: write up BIPs describing new p2p protocol changes.

Will this skew incentives?

Making propagation of new block announcements O(1) instead of O(n_transactions_in_block) means miners should include more fee-paying transactions in their blocks.

However, propagation is only O(1) for blocks containing transactions already seen and accepted by a majority of the network. There is an incentive for all miners to include roughly the same set of transactions in their blocks; no miner will want to create a block containing mostly transactions that some of the network refuses to relay or mine, because they will have to include that transaction data in their new block announcement, which slows down block propagation.

The converse is also true: miners that refuse to include transactions that the rest of the network would accept will have to transmit bigger IBLTs.

A miner trying to optimize the propagation of their blocks would only include transactions that are in all of their peers' memory pools; note that this gives power to "relay nodes" that are not mining. If a significant number of relay nodes refuse to remember very-small-fee transactions in their memory pools (perhaps they are dropping them because they run into CPU or bandwidth limits) then miners have an incentive not to include them in their blocks. This will put a floor on how low transaction fees will go.

But I heard that Bitcoin Doesn't Scale...

People claiming that "Bitcoin Doesn't Scale" are theoretically correct: you still need O(n) bandwidth and CPU to fully validate n transactions-per-second.

Someday, when Bitcoin is the number 2 payment network in the world, we might have to start worrying about that. Here are a couple of back-of-the-envelope calculations that show that we should be able to scale up to n=15,000 transactions per second before running into that O(n) bandwidth limit.

For perspective, the number 1 payment network in the world today (Visa) handles about 212 million transactions per day; 2,500 transactions per second on average. Their peak processing capacity, needed on the busiest shopping days, is reported to be 40,000 tps.

My home Internet connection is getting about 90 megabits download bandwidth per second right now. An average Bitcoin transaction is about 2,000 bits, so my current consumer-level Internet connection could download 45,000 transactions per second, over ten times average Visa transaction volume.

While it is nice to know that I could run a full node handling more-than-Visa-scale transaction volume from my house, running a dedicated machine in a data center somewhere makes more sense. 15,000 250-byte transactions per second works out to about 7 terabytes of bandwidth per month. One of my hosting providers charges $20 per month for a virtual private server with 8 TB of bandwidth per month-- or $240 per year to handle MasterCard-level transaction volume today (August 2014).


References

Goodrich, Michael T., and Michael Mitzenmacher. "Invertible Bloom Lookup Tables." 49th Annual Allerton Conference on Communication, Control, and Computing, IEEE, 2011.

Eppstein, David, et al. "What's the Difference? Efficient Set Reconciliation Without Prior Context." ACM SIGCOMM Computer Communication Review 41.4 (2011).


Thanks to Gregory Maxwell and Emin Gün Sirer for pointing out relevant results from the information coding and set reconciliation literature.



stevedekorte commented Aug 10, 2014

Typo: "transactions transactions".



ashleyholman commented Aug 11, 2014

Foreach transaction, including the coinbase:
  Calculate the 48-bit txid^blockhash
  Serialize the transaction and split it into 64-bit chunks
  Foreach chunk i:
    insert key=txid^blockhash+i, value=chunk

Search for a solution to the new block (increment nonce, if nonce space
runs out replace coinbase in the block and IBLT).

If constructing the IBLT uses the block hash, wouldn't that step need to come after the block solution is found?



jl2012 commented Aug 11, 2014

@ashleyholman, I think it should be "last blockhash"



ashleyholman commented Aug 11, 2014

@jl2012 oh, but then wouldn't that open it up to the same DoS attacks that it was trying to prevent?




gavinandresen commented Aug 11, 2014

@ashleyholman : excellent point. I'll switch the proposal to use the merkle root from the block header, which IS known before the nonce is found.



MarkPfennig commented Aug 11, 2014

At the risk of asking a stupid question, what if the approach was simply for blocks to be rejected which did not have a number of transactions greater than xx-percent of the number of transactions in the memory pool?

Generally the problem seems to reduce to 'require a fair amount of transactions in each block'?



kronrod commented Aug 11, 2014

Another alternative approach: when a miner finds a block, it initially just broadcasts its header. That information suffices for others to start temporarily mining a subsequent (empty) block. Once the others receive the whole block, they verify it, stop the temporary empty mining and continue as normal with a block that includes transactions. That way, the delay between the first miner finding a block and the others continuing with the next block is minimal - even smaller than with the IBLT approach as no table needs to be submitted.

The drawback of this approach is that the other miners will temporarily accept a block without having verified it and without knowing its contents. Consequently, they cannot decide what transactions to include in the temporary block, forcing them to keep it empty. Furthermore, there is a risk of the first block being invalid (e.g. containing double-spends). In that case, they would temporarily help an attacker with their computing power. The impact of such an attack can be kept small by setting a time-out for receiving and verifying the block data (e.g. one minute).

Obviously, this approach is not as sophisticated as using IBLT and does not save bandwidth (is this really so costly?), but it is simple and effective.



ashleyholman commented Aug 11, 2014

Using the merkle root would require full IBLT regeneration every time the coinbase is updated, which with 1 Petahash would be at least ~200K times per second. With pooled mining, the clients of the pool can vary the coinbase locally (extra nonce field), so the pool operator wouldn't be able to pre-generate the IBLT in those cases.

What about just generating the IBLT after the block solution is found, using the last 48 bits of the block hash? That guarantees it's impossible for anyone to predict it, including the miner themself.

The downside would be that IBLT generation adds to the propagation delay, but the same O(n) work is required to decode it anyway before it can be validated, so doubling that work wouldn't change the scaling properties.

How long does it take to generate the IBLT for a million transactions?




gavinandresen commented Aug 12, 2014

@MarkPfennig: we can't force miners to upgrade; we have to make the technology better so they WANT to upgrade. Miners that adopt this change will see fewer orphans. Miners that do not will lose out.

@swissbit: see bitcoin/bitcoin#3195 for a discussion on the drawbacks of relay-before-validation.

@ashleyholman : Good points, but using the merkleroot gives miners the OPTION to overlap nonce-finding with IBLT construction if they want to save every teeny-tiny bit of latency. My guess is version 1.0 of this will be implemented as you suggest, and just compute the IBLT just before the new block is announced.

I don't know how long it will take to generate the IBLT for a million transactions, haven't written the code yet. Looks to me like it should parallelize nicely on multi-core machines...



SergioDemianLerner commented Aug 12, 2014

Each node currently maintains a list of INV hashes received from each peer. So a simple header push, followed by the push of the transaction hash list plus the transactions that the node knows are missing from the peer's pool, is enough. Also, as someone said, miners should tend to include transactions not recently received (e.g. 10 seconds old), to make sure every other node has them. The IBLT idea is great and I'd like to see it implemented and benchmarked, but a simpler strategy also works.

Here there was (among other things) a list of improvements to the block-propagation protocol:



MarkPfennig commented Aug 12, 2014

@gavinandresen theoretically (and I ask in the context of another currency) if the minimum percentage approach had always been implemented, would it in your opinion have addressed the problem and solved the issues? After all, the job of the miners is to support the network and be rewarded fairly for it, not to dictate / unfairly profit / manipulate.



jl2012 commented Aug 12, 2014

@ashleyholman, why couldn't a miner use a random salt for this purpose, and broadcast the salt with the IBLT when he finds a block? So the miner could generate IBLT and mine in parallel. A few bytes of salt should be enough for the anti-DoS purpose. Do I miss something?




gavinandresen commented Aug 13, 2014

@SergioDemianLerner: Yes, Matt Corallo has implemented and deployed a fast-block-relayer that does as you suggest. It transmits a 10-byte transaction ID instead of the transaction data, so should give about a 25-times speedup, which is great.

@jl2012 : the miner choosing and transmitting a 48-bit salt sounds like the right way to go. I'll update this gist.




gavinandresen commented Aug 14, 2014

@MarkPfennig : only build on or relay blocks that include some minimum percentage of transactions in the memory pool?

That sounds like a great recipe for an attacker to shatter the blockchain into lots of competing forks (they just submit blocks that are right at that percentage, and rely on memory pool differences -- maybe combined with intentional double-spends submitted to different halves of the network-- to split the network).



leishman commented Aug 15, 2014

@gavinandresen Very interesting proposal. Thank you for all of the work you are doing. It is greatly appreciated!!



MarkPfennig commented Aug 15, 2014

@gavinandresen, great point, that's why I asked you :) Thank you.



luke-jr commented Aug 15, 2014

Optimising one specific policy (even with a few variables) at the expense of others is too biased, although this could probably be made to work without that requirement. The summary in "Relay just headers" is not a fair representation of #3195, which remains a good idea, despite the logic that must be carefully thought through to implement it safely.



gubatron commented Aug 20, 2014

here are some related ideas to the Canonical Ordering and how we do it for sorting ip addresses on the bittorrent network

the sorting comparison function is a simple CRC32-C, maybe something similar can be applied to the transaction data. no clue what i'm talking about, just resonated with that bittorrent bep.



petertodd commented Aug 20, 2014

repost of:

Unfortunately IBLT only works if all the participants are "honest" and run the algorithm as-is, rather than try to maximize revenue. The problem is still large miners: if I'm a big miner like GHash.IO, I can earn more revenue than my competitors by including transactions that they don't yet have, which pushes their orphan rate up much more than it does mine. The miner will take a hit on subsidy profit temporarily, but their competitors take a much bigger hit.(1) In the future if transaction fees dominate revenue even that won't be true: you'll earn more money outright because you get a higher % of the transactions.

One of the more disturbing things to come out of the GHash.IO "51% attack risk" meeting a few weeks ago was that GHash.IO runs their pool at a significant loss. They don't charge any fees, so they get no direct revenue from the pool at all, yet the idea of making their pool any smaller was something they argued strongly against. Where do they make their revenue? Well, they operate an exchange, which the pool drives customers to, and they've hinted at offering "value-added" mining services - things like tx confirmation guarantees that only large pools can offer. In an environment like that it's easy to see why a pool would take a temporary loss to gain market share at the expense of their smaller competitors.

Of course, that's not even getting into the many unsolved problems in Gavin's scalability calculations. For instance, if I'm running flat out on my home connection just to keep up, how am I supposed to catch up and verify history properly when I start my node for the first time? How can I resist government regulation of mining if my pool can't mine over Tor? Low resource requirements are part of Bitcoin's "honeybadger" resilience; sacrificing that resilience isn't something we should take lightly.

  1. And actually, the current Bitcoin protocol is flawed in that you can mine blocks without bothering to validate the previous ones; a profit-driven miner can take advantage of IBLT to quickly figure out what mempool transactions might be spent, create blocks guaranteed to not be invalid on that basis, and never even bother to check what the previous blocks contents actually were. We're going to have to fix this.



gavinandresen commented Aug 20, 2014

@petertodd : I don't understand how propagating new blocks faster makes the supposed problem you describe any better or worse. Maybe you can take another shot at explaining it to me.

RE: catching up if you're on a home network connection: I'll write up a "scalability road map" that paints the entire picture, UTXO commitments solves the "how do I catch up" problem (I know you have opinions on UTXO commitments, please don't "gish gallop" and start an off-topic discussion of them here).



petertodd commented Aug 20, 2014

The problem is not propagating new blocks faster, it's making scalability assumptions based on that speed when largish miners have economic incentives to sabotage it. Remember that if your goal is to find more blocks than other miners, you have no incentive to propagate your blocks to more than 30% of the hashing power.(1)

re: UTXO commitments, perfectly fine so long as you use TXO archiving + TXO commitments to have a fixed upper limit on the UTXO set size. See -dev IRC discussion from earlier today. Doesn't solve the catchup problem directly though; you need to "go back" a significant number of blocks.



instagibbs commented Aug 20, 2014

@petertodd can we separate criticisms of the phrase "Bitcoin can scale" from criticisms of IBLT in general, just for clarity?

Do you think IBLTs make it more likely that pools will withhold transactions beyond 30%? Less? No obvious effect, at least in principle?



petertodd commented Aug 20, 2014

@instagibbs Yeah, removing the discussion about scalability from the document would be a good idea; it's really given the community the wrong impression. Again, to be clear, IBLTs by themselves are either good or ineffective for the most part - they're not worse than the status quo (modulo concerns about forcing policy uniformity).

@gavinandresen "Pre-broadcast blocks" may want to reference my "Near-block broadcasts for proof of tx propagation" writeup:



sattath commented Sep 3, 2014

What about the following DOS attack?

The attack builds on the following assumption:
"However, they [memory pools] tend to be very similar, and we can take advantage of that similarity to optimize the amount of information that needs to be transmitted when a new block is found."

The attacker would try to break the similarity of the mempools and by that get a much slower effective propagation time. This can be achieved easily: the attacker creates transactions tx_{i,j} for i = 1..m and j = 1..n, where tx_{i,j} and tx_{k,l} are conflicting if and only if i = k (for example, tx_{1,5} and tx_{1,7} are conflicting). The attacker sends conflicting transactions to different random nodes simultaneously. Choosing m big enough (larger than the IBLT's ability to reconstruct) will make this proposal inefficient.

In fact, a miner may be able to take advantage of it: the attacker will know which transactions are not in the "consensus" and hence the attacker's blocks will propagate faster than the others.

To conclude, in order for this proposal to work, I think more attention is needed on how to reach consensus on the transactions inside the mempool.




gavinandresen commented Sep 11, 2014

@petertodd: cool, I'll add a link when I update this.

@sattath: Good point. If you detect a double-spend, the best miner policy would be to drop both versions of the spend from their memory pools, since they would have no way of knowing which one their peers were going to include (or a more complicated policy like keep highest fee-per-kilobyte, drop both if they pay the same ... but I don't like complicated).

Near-block broadcasts should quickly tell you that an attacker is broadcasting a double-spend, and assuming the attacker has a limited number of unspent transaction outputs to play with, black-listing those double-spent outputs for some period of time should mitigate any attacks.



pennyfx commented Oct 14, 2014

If bloom filters allow miners to include as many transactions as possible with no repercussions like propagation time, then won't this allow malicious miners to spam dust transactions? This, in combination with the recent improvement to dynamically increase the blocksize, may create a problem... right? The miners' incentives will be reversed and the hard limit on block size is essentially removed. Sounds like a perfect opportunity for block chain bloat.



martinus commented Oct 22, 2014

Hi Gavin, you might be interested in this paper that was just released: "Cuckoo Filter: Practically Better Than Bloom" [1]. It supports dynamic removals as well, while still outperforming bloom filters. They even have a open source implementation here [2].




shelby3 commented Oct 23, 2014



Your math appears to be incorrect.

Q + (1-Q)*Q is the probability of the adversary winning 2 blocks, because the adversary finds a second block on his private chain with probability Q before the rest of the honest network finds a block, plus with probability (1-Q)*Q before the rest of the honest network finds two blocks. (1-Q)^2 is the probability of the rest of the honest network winning those 2 blocks instead. But the initial condition is that the adversary has already won 1 block and is risking losing it to try to win 2 blocks.

At the Q ≈ 29.2% break-even point you calculated, both of those probabilities are 50%. Thus the adversary forsakes 1 + Q blocks to win only 0.5*2 blocks, because if he had published the block he would have 1 block for sure plus probability Q of winning the next block. And the rest of the honest network wins 2*(1-Q)^2 blocks instead of only 1-Q blocks.

Thus the correct equation is (Q + (1-Q)*Q)*2 = 1 + Q, i.e. Q = 0.5.
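Both break-even equations can be checked numerically. This sketch only verifies the algebra appearing in the comments, not the underlying mining model:

```python
from math import sqrt, isclose

# Break-even where the attacker's chance of winning the next two blocks
# is 1/2:  Q + (1-Q)*Q = 0.5  =>  Q^2 - 2Q + 0.5 = 0  =>  Q = 1 - sqrt(1/2)
q_prob = 1 - sqrt(0.5)
assert isclose(q_prob + (1 - q_prob) * q_prob, 0.5)
print(round(q_prob, 4))  # 0.2929, the ~29.2% figure

# Expected-blocks break-even proposed in the comment:
# (Q + (1-Q)*Q) * 2 = 1 + Q  =>  2Q^2 - 3Q + 1 = 0  =>  Q = 0.5
# (the other root, Q = 1, is degenerate)
q_blocks = 0.5
assert isclose((q_blocks + (1 - q_blocks) * q_blocks) * 2, 1 + q_blocks)
```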

Since your linked post apparently appeared after the selfish-mining attack was published [1], I can only assume you drew inspiration from it but understood it incompletely. The reason selfish mining gains an advantage is that the adversary sometimes gets further ahead than 1 block, and in those cases doesn't have to mine on the honest network's public chain except in the rarer case (as you pointed out) where that chain catches up to within 1 block of the private chain.

Thus it appears you were incorrect to assert (or were you just positing?) that an adversary with less than 25% of the hashrate can attack by sending his blocks to only some of the hashrate, because the selfish miner needs to be in control of the hashrate employed in the attack algorithm [1]. And with the proposed fix [1], that percentage rises to 33%.

Also the selfish mining attack can be entirely defeated by distributing block rewards to all forks without enabling double-spends from minority chains, but this is outside of the scope of what I will explain here.

[1] Ittay Eyal and Emin Gün Sirer, "Majority Is Not Enough: Bitcoin Mining Is Vulnerable"


shelby3 commented Oct 24, 2014

I don't see how the proposed use of IBLTs can solve the problem considered. The collision rate of the Bloom filter is the compression rate of the IBLT, because on a collision the value is added (or XORed) into the cell instead of being stored separately.

But if the collision rate is as high as desired to solve the problem considered, then so, equivalently, is the failure rate of the set-difference recovery.
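For concreteness, here is a toy keys-only IBLT showing the XOR accumulation in question. K, M, and the fixed 8-byte keys are arbitrary choices for this sketch; real designs also store a per-cell checksum to detect false-pure cells:

```python
import hashlib

K = 3    # hash functions per key (arbitrary for this sketch)
M = 30   # number of table cells (arbitrary for this sketch)

def _cells(key: bytes):
    """The K cell indices a key maps to (deterministic, via SHA-256)."""
    return [int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:4], "big") % M
            for i in range(K)]

class IBLT:
    def __init__(self):
        self.count = [0] * M     # signed insert/delete counter per cell
        self.key_xor = [0] * M   # XOR of all keys currently in the cell

    def _touch(self, key: bytes, sign: int):
        k = int.from_bytes(key, "big")
        for c in _cells(key):
            self.count[c] += sign
            self.key_xor[c] ^= k

    def insert(self, key: bytes):
        self._touch(key, +1)

    def delete(self, key: bytes):
        self._touch(key, -1)

    def list_entries(self):
        """Peel "pure" cells (count == +/-1) to recover inserted/deleted keys.
        This peeling is exactly what fails when the table is overloaded: if
        no pure cell remains, the leftover keys are unrecoverable."""
        out = {+1: set(), -1: set()}
        progress = True
        while progress:
            progress = False
            for c in range(M):
                if self.count[c] in (+1, -1):
                    sign = self.count[c]
                    key = self.key_xor[c].to_bytes(8, "big")
                    out[sign].add(key)
                    self._touch(key, -sign)  # remove it; may free other cells
                    progress = True
        return out
```

Decoding succeeds only while some cell holds exactly one key; as the table is overloaded, no pure cells remain and `list_entries` silently stops short, which is the failure mode the comment describes.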


rustyrussell commented Nov 10, 2014

Hi; I hacked up an implementation and did some analysis on example transactions. The TLDR is that 64 bits is too small, but the scheme seems workable:


ghost commented Nov 28, 2014

This proposal is nice for academics, but it's not enforceable on the network since it isn't part of the proof-of-work. I say this because you are assuming some sort of consensus in your proposal, and assuming that miners actually want their blocks to propagate fast. If a broadcast block reaches less than 1/3 of the network, you've still mined it and it's in the main chain, so why not let the other miners find out sometime later while you work on the next block in the meantime?

The larger scalability problem is that the Bitcoin networking code in its current state will not scale to thousands of transactions per second, given how poorly it is written. The Bitcoin wiki details only a few of the problems.


Azulan commented Apr 29, 2015

"are the set of transactions transactions in a new block"


r8921039 commented Dec 17, 2015

@gavinandresen, thank you for another great piece of work. If possible, could you provide a reference to your hosting provider? I couldn't find a place to beat your quote of 8 TB for $20 per month, though I know the price may have increased. 👍


kanzure commented Dec 22, 2015


illuzen commented Feb 8, 2016

@john-connor presumably you could just ask any full node for the difference between your mempool and the most recent block. You don't have to rely on the block's miner to tell you. Or maybe I misunderstand you.

As for the networking code issues, would you be willing to start a document listing all the known issues? I think that could be tremendously valuable for the community.


illuzen commented Feb 8, 2016

@shelby3 The selfish-miner attack collapses when there are more than two groups of miners. As in, say there are honest miners, selfish miners, selfisher miners, etc. Then you never know who else is withholding blocks and is going to reduce your private chain to ashes after you finally announce it, thus changing the expected value of such a strategy. Games with > 2 players are a lot more complicated than games with 2 players. Related: small-game fallacy.

I also think the instability of cartels applies here. If selfish mining is a successful strategy, shouldn't you be able to do it against the selfish pool as a sub-pool of that same selfish pool? Recurse ad absurdum: arbitrarily small sub-sub-sub-...pools outrunning the entire network because magic! The myth of Eris and the golden apple is a good metaphor for this phenomenon.

TL;DR all miners are selfish. Bitcoin was designed with this in mind. Selfish mining attack is a myth.


yunquliu commented Jan 10, 2018

The scale and throughput of blockchains have been discussed at length. However, transaction latency is far too long if we want to use a blockchain for daily business transactions: a confirmation takes 10 minutes, and often longer.

And if we simply changed the parameters, the networking layer likely wouldn't support it anymore.

Any thoughts?


awb715 commented Jan 12, 2018

Was this implemented?


shade commented Jul 26, 2018

@awb715, it was just merged into Bitcoin Unlimited.
