
O(1) Block Propagation

The problem

Bitcoin miners want their newly-found blocks to propagate across the network as quickly as possible, because every millisecond of delay increases the chances that another block, found at about the same time, wins the "block race."

With today's p2p protocol, this gives miners an incentive to limit the number of transactions in their blocks, because new block announcements include the full data for every transaction in the block. Each added transaction increases the chance of losing a block race, so a transaction must pay the miner more in fees than the miner statistically expects to lose from that increased risk. This is inefficient (transaction data is transmitted across the network twice, using twice as much bandwidth) and artificially pushes transaction fees much higher than they need to be.

Proposed approach

Each fully-validating node on the Bitcoin network maintains a transaction memory pool, which is a list of valid but unconfirmed transactions. Transactions are added to nodes' memory pools as they are relayed across the network, and are removed from memory pools when new blocks are found. Mining nodes select a subset of memory pool transactions to include in their new blocks.

If memory pools were perfectly synchronized and all miners had exactly the same policy for choosing which transactions to include in their blocks and in what order to include them, then miners could announce new blocks as just the fixed 80-byte block header plus the coinbase transaction; there would be no need to include any transaction data at all, because peers could reconstruct it from their own memory pools.

In the real world, memory pools are not perfectly synchronized. However, they tend to be very similar, and we can take advantage of that similarity to optimize the amount of information that needs to be transmitted when a new block is found.

Set reconciliation and Invertible Bloom Lookup Tables

Set reconciliation means finding the difference between two sets of data. To optimize new block broadcasts, the sets we care about are the set of transactions in a new block (call that set "B") and the set of transactions that a node already knows about (call that set "P").

Ideally we want to transmit just the 80-byte block header plus the transaction data for B-P. Both B and P will grow as transaction volume and block sizes grow, but if transaction propagation is mostly reliable, B-P will stay roughly constant in size (see the section below on incentives for more discussion).

Invertible Bloom Lookup Tables (IBLTs) are a data structure, first described in 2011, that can be used to solve the set reconciliation problem.

The idea of an IBLT is to squash together (using hashing and XOR) all of the transaction data into a fixed-size data structure. Then transmit that squashed data to a peer, who mostly knows what data you are sending them, but might be missing a few transactions or might be expecting a few transactions in your block that you actually didn't include.

The size of the IBLT needed for all of that to work is proportional to the number of differences between what you send and what your peer expects.
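
To make the squash-and-difference idea concrete, here is a minimal IBLT sketch in Python. It is illustrative only: the cell layout, the use of three hash functions, and the choice of SHA256 are assumptions, not a wire format. Keys and values are 64-bit integers, matching the chunk encoding described below.

import hashlib

NUM_HASHES = 3  # cells per key; an illustrative choice

def _cell_indexes(key, num_cells):
    # One cell per subtable, so a key never maps to the same cell twice.
    sub = num_cells // NUM_HASHES
    digest = hashlib.sha256(key.to_bytes(8, "big")).digest()
    return [i * sub + int.from_bytes(digest[4 * i:4 * i + 4], "big") % sub
            for i in range(NUM_HASHES)]

def _key_check(key):
    # Short checksum used to recognize "pure" cells while decoding.
    return int.from_bytes(
        hashlib.sha256(b"check" + key.to_bytes(8, "big")).digest()[:4], "big")

class IBLT:
    def __init__(self, num_cells):
        self.num_cells = num_cells
        self.count = [0] * num_cells
        self.key_sum = [0] * num_cells
        self.check_sum = [0] * num_cells
        self.value_sum = [0] * num_cells

    def insert(self, key, value):
        for i in _cell_indexes(key, self.num_cells):
            self.count[i] += 1
            self.key_sum[i] ^= key
            self.check_sum[i] ^= _key_check(key)
            self.value_sum[i] ^= value

    def subtract(self, other):
        # Cell-wise difference: entries present in both tables cancel, so
        # the result encodes only keys present in exactly one table.
        result = IBLT(self.num_cells)
        for i in range(self.num_cells):
            result.count[i] = self.count[i] - other.count[i]
            result.key_sum[i] = self.key_sum[i] ^ other.key_sum[i]
            result.check_sum[i] = self.check_sum[i] ^ other.check_sum[i]
            result.value_sum[i] = self.value_sum[i] ^ other.value_sum[i]
        return result

    def decode(self):
        # Repeatedly peel "pure" cells (count == +1 or -1, checksum matches)
        # until nothing is left. Destroys the table's contents.
        only_in_self, only_in_other = [], []
        progress = True
        while progress:
            progress = False
            for i in range(self.num_cells):
                if (self.count[i] in (1, -1)
                        and _key_check(self.key_sum[i]) == self.check_sum[i]):
                    key, value, sign = (self.key_sum[i], self.value_sum[i],
                                        self.count[i])
                    (only_in_self if sign == 1 else only_in_other).append(
                        (key, value))
                    for j in _cell_indexes(key, self.num_cells):
                        self.count[j] -= sign
                        self.key_sum[j] ^= key
                        self.check_sum[j] ^= _key_check(key)
                        self.value_sum[j] ^= value
                    progress = True
        if any(self.count) or any(self.key_sum):
            raise ValueError("decode failed: too many differences")
        return only_in_self, only_in_other

The number of cells must be a small constant multiple of the number of differences you expect to recover; that is why the transmitted size tracks the mempool difference rather than the block size.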

There are three key ideas to make this work:

  1. a canonical ordering of transactions in blocks
  2. similar (but not identical!) policies for selecting which mempool transactions go into blocks
  3. a peer-sent IBLT of the new block, large enough that we are very likely to recover any transaction data not yet in our mempool

Canonical ordering of transactions

I propose the following algorithm as the canonical ordering for transactions in a block (a code sketch follows the list). Canonical ordering will not be a consensus rule; it is just used to eliminate the need to communicate the order of transactions when IBLTs are used.

  1. Start with the set of transactions in the block (including the coinbase transaction).
  2. Sort them in ascending order, using the smallest (previous-transaction-hash, index) among each transaction's inputs as the sort key. Note that the coinbase transaction will be first, since its single input's prevout/index is all zero.
  3. Move the first transaction in the sorted list that does not depend on any transaction later in the list from the sorted list into the block.
  4. Repeat step 3 until the sorted list is empty.
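
Here is a sketch of that ordering in Python, assuming each transaction object exposes a txid and an inputs list of (previous-txid, output-index) pairs; those attribute names are illustrative, not an actual API.

def canonical_order(txs):
    # Sort by the smallest (previous-txid, output-index) among each
    # transaction's inputs; the coinbase sorts first because its single
    # prevout and index are all zero.
    pending = sorted(txs, key=lambda tx: min(tx.inputs))
    ordered = []
    while pending:
        for i, tx in enumerate(pending):
            later_ids = {t.txid for t in pending[i + 1:]}
            # Move the first tx that doesn't spend an output of a later tx.
            if all(prev not in later_ids for prev, _ in tx.inputs):
                ordered.append(pending.pop(i))
                break
        else:
            raise ValueError("dependency cycle: invalid transaction set")
    return ordered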

Communicating transaction selection policy

The reference implementation allows miners to specify:

  • Maximum size (in bytes) of their blocks
  • Maximum size (in bytes) set aside for "high priority" transactions, which are included regardless of fee

Transactions are selected based on priority and then fee-paid-per-kilobyte.

Communicating the 80-byte block header, the size (in bytes) of the transaction data, and the size (in bytes) of the high-priority transactions is sufficient for the IBLT set reconciliation algorithm to work.
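
As a sketch, such an announcement might carry the following fields (a hypothetical message layout; the field names are illustrative, and the salt is explained in "Securely calculating txid48" below):

from dataclasses import dataclass

@dataclass
class IBLTBlockAnnouncement:
    header: bytes      # the 80-byte block header
    txdata_size: int   # size in bytes of all transaction data in the block
    hipri_size: int    # size in bytes of the high-priority transactions
    salt: bytes        # miner-chosen salt used to compute txid48 keys
    iblt: bytes        # the serialized IBLT itself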

Encoding transaction data in the IBLT

Invertible Bloom Lookup Tables store key-value pairs. So given a set of transactions in a new block, what is the best way to encode them in an IBLT? What should be used as the key, and what the value?

We want the values in the IBLT to be a fixed, reasonably short length. Transmitting an IBLT whose values could hold the maximum possible transaction size (100K bytes) would not save any bandwidth. So I propose that the transaction data be serialized and split into 8-byte (64-bit) chunks. Two bytes of the key can then be used as a sequence number to reassemble the transaction.
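
A sketch of that chunking, assuming the final chunk is zero-padded (the receiver can trim the padding once it knows the deserialized transaction's length) and that tid48 is the 48-bit key described next:

CHUNK_BYTES = 8  # 64-bit value slots in the IBLT

def tx_chunks(tx_bytes):
    # Split serialized transaction data into 64-bit chunks,
    # zero-padding the final chunk.
    padded = tx_bytes + b"\x00" * (-len(tx_bytes) % CHUNK_BYTES)
    return [padded[i:i + CHUNK_BYTES]
            for i in range(0, len(padded), CHUNK_BYTES)]

def iblt_entries(tid48, tx_bytes):
    # Key = 48-bit truncated transaction id in the high bits plus a
    # 16-bit sequence number in the low bits; value = the 64-bit chunk.
    for seq, chunk in enumerate(tx_chunks(tx_bytes)):
        yield (tid48 << 16) | seq, int.from_bytes(chunk, "big")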

We want keys to be unique, but they only have to be unique for the transactions in this block; they don't have to be globally unique. Using 48 bits of the transaction data's hash gives about a 0.1% chance of a collision even if we scale up to blocks with a million transactions in them, and combined with a 16-bit sequence number that gives 64-bit keys. The transaction selection code can simply refuse to select transactions whose hashes collide with a transaction already in the block (they will just have to wait and get confirmed in the next block; a 0.1% chance of collision in million-transaction blocks means about one transaction in a billion gets delayed).

Using the transaction hash directly opens up a possible denial-of-service attack: an attacker might try to brute-force the first 48 bits of the transaction id to keep somebody else's transaction from getting included. Or they might try to create transaction ids that collide in the hash buckets used internally by the IBLT. Those attacks can be prevented by combining the transaction ID with a random "salt" (which an attacker cannot know) before using it as a key in the IBLT; see "Securely calculating txid48" below.

Pseudo-code algorithm for encoding a new block in an IBLT:

Pick transactions from the memory pool to be in the new block, and
put them in the block in canonical order.

Create a new empty IBLT that is large enough to encode the expected
differences between the block we created and the blocks our peers
are creating. Select a random salt (see "Securely calculating
txid48" below).

Foreach transaction, including the coinbase:
  Calculate txid48 from the transaction ID and the salt
  Serialize the transaction and split it into 64-bit chunks
  Foreach chunk i:
    insert key=(txid48, sequence=i), value=chunk

Search for a proof-of-work solution for the new block.

When a block solution is found:

Foreach peer running IBLT-capable version of the protocol:
  Transmit block header, sizeof(txdata), sizeof(high_priority_tx), salt, IBLT
Foreach peer running an older version of the protocol:
  Send the whole block in a 'block' message
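
In Python terms, using the sketch pieces above (IBLT and iblt_entries) plus the txid48 function from "Securely calculating txid48" below, the encoding loop is roughly:

def build_block_iblt(block_txs, salt, num_cells):
    # block_txs must already be in canonical order; .txid and .serialized
    # are illustrative attribute names.
    table = IBLT(num_cells)
    for tx in block_txs:
        for key, value in iblt_entries(txid48(salt, tx.txid), tx.serialized):
            table.insert(key, value)
    return table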

Securely calculating txid48

To prevent brute-force attacks, we want at least 128 bits of the transaction ID to contribute to the IBLT key. I propose hashing the transaction ID with a 256-bit salt chosen by the miner, and taking 48 bits of the result.
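
A short sketch of that key derivation; SHA256 is an illustrative choice of hash:

import hashlib, os

def txid48(salt, txid):
    # Hash the 32-byte transaction ID with the miner's 256-bit salt and
    # keep the first 48 bits of the result.
    return int.from_bytes(hashlib.sha256(salt + txid).digest()[:6], "big")

salt = os.urandom(32)  # chosen per block by the miner, sent with the IBLT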

Decoding an IBLT new block message

Peers receiving the block header and IBLT could immediately relay it to their peers after checking the proof-of-work in the block header, although it is debatable whether or not they should relay before decoding the IBLT and validating the transactions.

Pseudo-code for decoding the data in an IBLT:

When an IBLT for a new block ("IBLT_new") is received from a peer:

Create a new, empty IBLT ("IBLT_us") and fill it with transactions from the memory
pool (based on byte-size hints received from peer).

Calculate IBLT_diff = IBLT_new - IBLT_us  (this takes constant time)

Decode IBLT_diff; if decoding succeeds, the result is two lists of
(txid48+sequence, chunk) pairs: transactions that the peer included
that need to be added to the block we created from our memory pool, and
transactions that the peer did not include that we need to remove.

Concatenate the chunks and deserialize the transactions, then insert
them into the new block in canonical order. Then proceed with all the
normal block and transaction validation.

If decoding fails (memory pools or mining policies differ too much, or a bug
in the peer's IBLT creation code leaves out a chunk of transaction data or
something), fall back to getdata/block messages or maybe ask for an extension
of the IBLT that will successfully decode.
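
Tying the sketches together, the receive side looks roughly like this (IBLT, iblt_entries, and txid48 are the illustrative functions from the earlier blocks, and .txid/.serialized are assumed attribute names):

from collections import defaultdict

def decode_block_iblt(iblt_new, salt, our_block_txs, num_cells):
    # Build our best guess at the block from our own mempool selection.
    iblt_us = IBLT(num_cells)
    for tx in our_block_txs:
        for key, value in iblt_entries(txid48(salt, tx.txid), tx.serialized):
            iblt_us.insert(key, value)
    # Subtract and peel; transactions present in both cancel out entirely.
    diff = iblt_new.subtract(iblt_us)
    to_add, to_remove = diff.decode()  # raises if the sets differ too much
    return reassemble(to_add), reassemble(to_remove)

def reassemble(pairs):
    # Group recovered (key, chunk) pairs by tid48 and concatenate the
    # chunks in sequence-number order.
    groups = defaultdict(dict)
    for key, value in pairs:
        groups[key >> 16][key & 0xFFFF] = value.to_bytes(8, "big")
    return {tid: b"".join(chunks[s] for s in sorted(chunks))
            for tid, chunks in groups.items()}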

What about blockchain re-orgs?

All of the above assumes we are receiving a new block building on our current idea of the best block chain.

If our current best chain is A->B, and we receive IBLTs for an alternative longer chain with blocks B' and C', how do we reconstruct B'?

The simplest answer is to just request the full block data for B'; re-orgs are rare.

But the set of transactions in B and B' should be very similar, so it might be worthwhile to reconstruct B' from its IBLT and an IBLT created from block B.

Alternative or complementary approaches

"I know that you know..."

Matt Corallo has implemented a fast block relayer tool. It keeps track of which transactions have been sent to each peer, and then when a new block is found it replaces transaction data that it knows a peer has with 10 bytes of the transaction ID.

Transmitting just 10 bytes of the transaction ID instead of the full average-250-byte transaction gives a 25-times bandwidth savings.

TODO: measure Matt's approach against an IBLT approach. My guess is that at small block sizes the simpler approach works better, but at some point IBLTs win.

THOUGHT EXPERIMENT: IBLT-ify the data Matt's fast relayer sends? That should theoretically get the same constant factor of 25 improvement but be O(1) instead of O(n)....

Relay just headers

Several people have proposed relaying just the 80-byte block header, with miners checking the proof-of-work and then beginning to mine an empty block on top of whichever header has the most proof-of-work (assuming that nobody wastes time creating a valid-proof-of-work block that contains invalid transactions). Once the full transaction data was transmitted, miners would switch to mining a block full of transactions.

See bitcoin/bitcoin#3195 for discussion on why this is not a good idea.

Pre-broadcast blocks

Miners could broadcast the blocks they are working on to the network before they have solved them. If they then find a block, they could broadcast just the winning nonce and coinbase transaction (and some identifier for "that unsolved block I sent you a while ago").

I think broadcasting "working blocks" is a great idea, for a couple of reasons.

First, it gives merchants an idea of what percentage of hashing power is working on including their transaction in the blockchain, helping make 0-confirmation transactions more secure (if 90% of active miners are working to include your transaction after a few seconds, and none are working on including a double-spend, then it is reasonably likely it will be confirmed in the next block or two).

Second, it gives other miners a view into what the majority of hashing power is working on, and should allow miners to better monitor the health of their network connections.

A proposal for pre-broadcasting unsolved blocks would have to address bandwidth usage and denial-of-service attacks; perhaps such a proposal could use IBLTs to minimize the amount of memory and bandwidth used by the nodes on the network.

Road map / TODO

Instrument some bitcoind nodes running on the production network to find out how much memory pools and transaction selection policies differ today.

Calculate the IBLT size needed for that set-size difference to achieve a 1% decoding-failure rate.

Create a test bench environment with real-world transactions and blocks, so the IBLT approach can be compared against other approaches or optimizations to the IBLT code can be measured to make sure they actually improve latency.

Write code, work out all the details, benchmark/test along the way.

Once stable and starting to be deployed: write up BIPs describing new p2p protocol changes.

Will this skew incentives?

Making propagation of new block announcements O(1) instead of O(n_transactions_in_block) means miners should include more fee-paying transactions in their blocks.

However, propagation is only O(1) for blocks containing transactions already seen and accepted by a majority of the network. There is an incentive for all miners to include roughly the same set of transactions in their blocks; no miner will want to create a block containing mostly transactions that some of the network refuses to relay or mine, because they will have to include that transaction data in their new block announcement, which slows down block propagation.

The converse is also true: miners that refuse to include transactions that the rest of the network would accept will have to transmit bigger IBLTs.

A miner trying to optimize the propagation of their blocks would only include transactions that are in all of their peers' memory pools; note that this gives power to "relay nodes" that are not mining. If a significant number of relay nodes refuse to remember very-small-fee transactions in their memory pools (perhaps they are dropping them because they run into CPU or bandwidth limits) then miners have an incentive not to include them in their blocks. This will put a floor on how low transaction fees will go.

But I heard that Bitcoin Doesn't Scale...

People claiming that "Bitcoin Doesn't Scale" are theoretically correct: you still need O(n) bandwidth and CPU to fully validate n transactions-per-second.

Someday, when Bitcoin is the number 2 payment network in the world, we might have to start worrying about that. Here are a couple of back-of-the-envelope calculations that show that we should be able to scale up to n=15,000 transactions per second before running into that O(n) bandwidth limit.

For perspective, the number 1 payment network in the world today (Visa) handles about 212 million transactions per day; 2,500 transactions per second on average. Their peak processing capacity, needed on the busiest shopping days, is reported to be 40,000 tps.

My home Internet connection is getting about 90 megabits download bandwidth per second right now. An average Bitcoin transaction is about 2,000 bits, so my current consumer-level Internet connection could download 45,000 transactions per second, over ten times average Visa transaction volume.

While it is nice to know that I could run a full node handling more-than-Visa-scale transaction volume from my house, running a dedicated machine in a data center somewhere makes more sense. 15,000 250-byte transactions per second works out to about 7 terabytes of bandwidth per month. One of my hosting providers charges $20 per month for a virtual private server with 8 TB of bandwidth per month, or $240 per year to handle MasterCard-level transaction volume today (August 2014).

References

Goodrich, Michael T., and Michael Mitzenmacher. "Invertible bloom lookup tables." Communication, Control, and Computing (Allerton), 2011 49th Annual Allerton Conference on. IEEE, 2011. http://arxiv.org/pdf/1101.2245

Eppstein, David, et al. "What's the difference?: efficient set reconciliation without prior context." ACM SIGCOMM Computer Communication Review. Vol. 41. No. 4. ACM, 2011. http://conferences.sigcomm.org/sigcomm/2011/papers/sigcomm/p218.pdf

Acknowledgements

Thanks to Gregory Maxwell and Emin Gün Sirer for pointing out relevant results from the information coding and set reconciliation literature.

@jl2012 commented Aug 12, 2014

@ashleyholman, why couldn't a miner use a random salt for this purpose, and broadcast the salt with the IBLT when he finds a block? Then the miner could generate the IBLT and mine in parallel. A few bytes of salt should be enough for the anti-DoS purpose. Am I missing something?

@gavinandresen (Author)

@SergioDemianLerner: Yes, Matt Corallo has implemented and deployed a fast-block-relayer that does as you suggest. It transmits a 10-byte transaction ID instead of the transaction data, so should give about a 25-times speedup, which is great.

@jl2012 : the miner choosing and transmitting a 48-bit salt sounds like the right way to go. I'll update this gist.

@gavinandresen (Author)

@MarkPfennig : only build on or relay blocks that include some minimum percentage of transactions in the memory pool?

That sounds like a great recipe for an attacker to shatter the blockchain into lots of competing forks (they just submit blocks that are right at that percentage, and rely on memory pool differences -- maybe combined with intentional double-spends submitted to different halves of the network-- to split the network).

@leishman

@gavinandresen Very interesting proposal. Thank you for all of the work you are doing. It is greatly appreciated!!

@MarkPfennig

@gavinandresen, great point, that's why I asked you :) Thank you.

@luke-jr commented Aug 15, 2014

Optimising one specific policy (even with a few variables) at the expense of others is too biased, although this could probably be made to work without that requirement. The summary in "Relay just headers" is not a fair representation of #3195, which remains a good idea, despite the logic that must be carefully thought through to implement it safely.

@gubatron

Here are some related ideas on canonical ordering, and how we sort IP addresses on the BitTorrent network:

http://www.bittorrent.org/beps/bep_0040.html

The sorting comparison function is a simple CRC32-C; maybe something similar can be applied to the transaction data. No clue what I'm talking about, it just resonated with that BitTorrent BEP.

@petertodd

repost of: http://www.reddit.com/r/Bitcoin/comments/2e1ijs/gavin_andresen_explains_how_bitcoin_is_very/cjv8z00

Unfortunately IBLT only works if all the participants are "honest" and run the algorithm as-is, rather than try to maximize revenue. The problem is still large miners: if I'm a big miner like GHash.IO, I can earn more revenue than my competitors by including transactions that they don't yet have, which pushes their orphan rate up much more than it does mine. The miner will take a hit on subsidy profit temporarily, but their competitors take a much bigger hit.(1) In the future if transaction fees dominate revenue even that won't be true: you'll earn more money outright because you get a higher % of the transactions.

One of the more disturbing things to come out of the GHash.IO "51% attack risk" meeting a few weeks ago was that GHash.IO runs their pool at a significant loss. They don't charge any fees, so they get no direct revenue from the pool at all, yet the idea of making their pool any smaller was something they argued strongly against. Where do they make their revenue? Well, they operate an exchange, which the pool drives customers to, and they've hinted at offering "value-added" mining services - things like tx confirmation guarantees that only large pools can offer. In an environment like that it's easy to see why a pool would take a temporary loss to gain market share at the expense of their smaller competitors.

Of course, that's not even getting into the many unsolved problems in Gavin's scalability calculations. For instance, if I'm running flat out on my home connection just to keep up, how am I supposed to catch up and verify history properly when I start my node for the first time? How can I resist government regulation of mining if my pool can't mine over Tor? Low resource requirements are part of Bitcoin's "honeybadger" resilience; sacrificing that resilience isn't something we should take lightly.

  1. And actually, the current Bitcoin protocol is flawed in that you can mine blocks without bothering to validate the previous ones; a profit-driven miner can take advantage of IBLT to quickly figure out what mempool transactions might be spent, create blocks guaranteed to not be invalid on that basis, and never even bother to check what the previous blocks contents actually were. We're going to have to fix this.

@gavinandresen (Author)

@petertodd : I don't understand how propagating new blocks faster makes the supposed problem you describe any better or worse. Maybe you can take another shot at explaining it to me.

RE: catching up if you're on a home network connection: I'll write up a "scalability road map" that paints the entire picture, UTXO commitments solves the "how do I catch up" problem (I know you have opinions on UTXO commitments, please don't "gish gallop" and start an off-topic discussion of them here).

@petertodd

The problem is not propagating new blocks faster, it's making scalability assumptions based on that speed when largish miners have economic incentives to sabotage it. Remember that if your goal is to find more blocks than other miners, you have no incentive to propagate your blocks to more than 30% of the hashing power.(1)

re: UTXO commitments, perfectly fine so long as you use TXO archiving + TXO commitments to have a fixed upper limit on the UTXO set size. See -dev IRC discussion from earlier today. Doesn't solve the catchup problem directly though; you need to "go back" a significant number of blocks.

  1. http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg03200.html

@instagibbs

@petertodd can we separate criticisms of the phrase "Bitcoin can scale" from criticisms of IBLT in general, just for clarity?

Do you think IBLTs make it more likely that pools will withhold transactions beyond 30%? Less? No obvious effect, at least in principle?

@petertodd

@instagibbs Yeah, removing the discussion about scalability from the document would be a good idea; it's really given the community the wrong impression. Again, to be clear, IBLTs by themselves are either good or ineffective for the most part - they're not worse than the status quo (modulo concerns about forcing policy uniformity).

@gavinandresen "Pre-broadcast blocks" may want to reference my "Near-block broadcasts for proof of tx propagation" writeup: http://www.mail-archive.com/bitcoin-development%40lists.sourceforge.net/msg02868.html

@sattath commented Sep 3, 2014

What about the following DOS attack?

The attack builds on the following assumption:
"However, they [memory pools] tend to be very similar, and we can take advantage of that similarity to optimize the amount of information that needs to be transmitted when a new block is found."

The attacker would try to break the similarity of the mempools and by that get a much slower effective propagation time. This can be achieved easily: the attacker creates tx(1,1), tx(1,2), ..., tx(1,n), tx(2,1), ..., tx(2,n), ..., tx(m,1), ..., tx(m,n), where tx(i,j) and tx(k,l) are conflicting if and only if i=k (for example, tx(1,5) and tx(1,7) are conflicting). The attacker sends conflicting transactions to different random nodes simultaneously. Choosing m big enough (larger than the IBLT's ability to reconstruct) will make this proposal inefficient.

In fact, a miner may be able to take advantage of it: the attacker will know which transactions are not in the "consensus", and hence the attacker's blocks will propagate faster than the others.

To conclude, in order for this proposal to work, I think more attention is needed on how to reach consensus on the transactions inside the mempool.

@gavinandresen (Author)

@petertodd: cool, I'll add a link when I update this.

@sattath: Good point. If you detect a double-spend, the best miner policy would be to drop both versions of the spend from their memory pools, since they would have no way of knowing which one their peers were going to include (or a more complicated policy like keep highest fee-per-kilobyte, drop both if they pay the same ... but I don't like complicated).

Near-block broadcasts should quickly tell you that an attacker is broadcasting a double-spend, and assuming the attacker has a limited number of unspent transaction outputs to play with, black-listing those double-spent outputs for some period of time should mitigate any attacks.

@pennyfx commented Oct 14, 2014

If bloom filters allow miners to include as many transactions as possible with no repercussions like propagation time, then won't this allow malicious miners to spam dust transactions? This, in combination with the recent improvement to dynamically increase the block size, may create a problem... right? The miners' incentives will be reversed and the hard limit on block size is essentially removed. Sounds like a perfect opportunity for block chain bloat.

@martinus

Hi Gavin, you might be interested in this paper that was just released: "Cuckoo Filter: Practically Better Than Bloom" [1]. It supports dynamic removals as well, while still outperforming bloom filters. They even have an open source implementation here [2].

[1] https://www.cs.cmu.edu/~dga/papers/cuckoo-conext2014.pdf
[2] https://github.com/efficient/cuckoofilter

@shelby3 commented Oct 23, 2014

@petertodd:

  1. http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg03200.html

Your math appears to be incorrect.

Q + (1-Q)Q is the probability of the adversary winning 2 blocks, because the adversary can find a second block on his private chain with probability Q before the rest of the honest network finds a block, plus with probability (1-Q)Q before the rest of the honest network finds two blocks. (1-Q)^2 is the probability of the rest of the honest network instead winning those 2 blocks. But the initial condition is that the adversary has already won 1 block and is risking losing it to try to win 2 blocks.

At the Q ~= 29.2% break-even point you calculated, both of those probabilities are 50%. Thus the adversary forsakes 1 + Q blocks to win only 0.5 × 2 blocks, because if he had published the block he would have 1 block for sure plus probability Q of winning the next block. And the rest of the honest network wins 2 × (1-Q)^2 blocks instead of only 1-Q blocks.

Thus the correct equation is (Q + (1-Q)Q) × 2 = 1 + Q, i.e. Q = 0.5.

Since your linked post apparently occurred after the publishing of the selfish-mining attack[1], I can only assume you gained inspiration from it but incompletely understood it. The reason selfish-mining gains an advantage is because the adversary sometimes gets further ahead than 1 block and in these cases doesn't have to mine on the honest network's public chain except in the rarer probability (as you pointed out) that chain catches up to within 1 block of the private chain.

Thus it appears you were incorrect to assert (or were you just positing?) that an adversary with less than 25% of the hashrate can attack by sending his blocks to only some of the hashrate, because the selfish-miner needs to be in control of the hashrate employed in the attack algorithm[1]. And with the proposed fix[1], that percentage rises to 33%.

Also the selfish mining attack can be entirely defeated by distributing block rewards to all forks without enabling double-spends from minority chains, but this is outside of the scope of what I will explain here.

[1] Ittay Eyal, Emin Gun Sirer. Majority is not Enough: Bitcoin Mining is Vulnerable

@shelby3 commented Oct 24, 2014

I don't see how the proposed use of IBLTs can solve the problem considered. The collision rate of the bloom filter is the compression rate of the IBLT, because on collision the value is added (or XORed) instead of stored.

But if the collision rate is as high as desired to solve the problem considered, then equivalently so is the rate of failure of the set differencing.

@rustyrussell

Hi; I hacked up an implementation and did some analysis on example transactions. The TLDR is that 64 bits is too small, but the scheme seems workable:

http://rustyrussell.github.io/pettycoin/2014/11/05/Playing-with-invertible-bloom-lookup-tables-and-bitcoin-transactions.html

@ghost commented Nov 28, 2014

This proposal is nice for academics, but it's not enforceable on the network since it isn't part of the proof-of-work. I say this because you are assuming some sort of consensus in your proposal, and assuming that miners actually want their blocks to propagate fast. If a broadcast block reaches less than 1/3 of the network, you've still mined it and it's in the main chain; why not let the other miners find out sometime later while you are working on the next block in the meantime?

The larger scalability problem is that the Bitcoin network code in its current state will not scale to thousands of transactions per second, given how poorly it is written. The Bitcoin wiki details only several of the problems.

@Azulan commented Apr 29, 2015

"are the set of transactions transactions in a new block"

@r8921039

@gavinandresen, thank you for another great piece of work. If possible, could you provide a reference to your hosting provider? I couldn't find a place to beat your quote of 8 TB for $20 per month, though I know the price may have increased. 👍

@kanzure commented Dec 22, 2015

@illuzen commented Feb 8, 2016

@john-connor presumably you would be able to just ask any full node for the difference between your mem-pool and the most recent block. You don't have to rely on the block's miner to tell you. Or maybe I misunderstand you.

As for the networking code issues, would you be willing to start a document with all the known issues? I think that could be tremendously valuable for the community.

@illuzen commented Feb 8, 2016

@shelby3 The selfish-miner attack collapses when there are more than two groups of miners. As in, say there are honest miners, selfish miners, selfisher miners, etc. Then you never know who else is withholding blocks and is going to reduce your private chain to ashes after you finally announce it, thus changing the expected value of such a strategy. Games with > 2 players are a lot more complicated than games with 2 players. Related: small-game fallacy. http://unenumerated.blogspot.de/2015/05/small-game-fallacies.html

I also think the instability of cartels applies here. If selfish mining is a successful strategy, shouldn't you be able to do it against the selfish pool as a sub-pool of that same selfish pool? Recurse ad absurdum: arbitrarily small sub-sub-sub-...pools outrunning the entire network because magic! The myth of Eris and the golden apple is a good metaphor for this phenomenon.

TL;DR all miners are selfish. Bitcoin was designed with this in mind. Selfish mining attack is a myth.

@yunquliu

The scale and throughput of the blockchain have been discussed. However, the transaction latency is far too long if we want to use the blockchain for daily business transactions. It takes 10 minutes (and often longer) to confirm.

If we simply changed the parameters, the networking layer likely wouldn't support it anymore.

Any thoughts?

@awb715 commented Jan 12, 2018

Was this implemented?

@shade commented Jul 26, 2018

@awb715, it was just merged into Bitcoin Unlimited

@rebroad commented Mar 14, 2023

@awb715, it was just merged into Bitcoin Unlimited

Can you link to the relevant pull request please?

@adminvsrm

The O(1) Block Propagation proposal, while innovative, has several potential vulnerabilities. First, incomplete synchronization of mempools can lead to decoding failures, necessitating full transaction data transmission. Additionally, variances in miners' transaction selection policies can cause significant differences in mempools, which undermines IBLT efficiency. There is also a potential for denial-of-service (DoS) attacks through manipulation of transaction IDs or hashes. Moreover, large differences between mempools can increase IBLT size, reducing bandwidth savings. Handling blockchain re-organizations can be inefficient, often requiring full block data. Lastly, missing or delayed transactions can desynchronize mempools, impacting IBLT accuracy. Addressing these issues is crucial for the proposal's success.
