
@sipa
Last active September 22, 2023 08:22
Block size according to technological growth.

Published as BIP 103

@Alex-Linhares

10 MB blocks in fifteen years???

That simply means no Metcalfe's law; hence, Bitcoin is not for everyone.
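
For scale, a quick check of that figure: a minimal sketch, assuming a 1 MB base compounding at the ~17.7%/year rate discussed further down in this thread:

```python
# Compound a 1 MB base at ~17.7%/year to sanity-check the
# "10 MB blocks in fifteen years" figure.
base_mb = 1.0
rate = 0.177

for years in (5, 10, 15):
    print(f"after {years} years: {base_mb * (1 + rate) ** years:.1f} MB")
# after 5 years: 2.3 MB
# after 10 years: 5.1 MB
# after 15 years: 11.5 MB
```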

@digitsu

digitsu commented Aug 17, 2015

This sounds like the best way forward. A technology-lagging block limit growth rate is conservative and safe. Gavin's growth rate is designed to lead the market, and I believe that is a mistake, similar to the ones central banks make when they set monetary policy based on GDP predictions. We can't predict future bandwidth growth rates, let alone plan for unforeseen black swans that may affect the global economy and stifle growth in all sectors. As Bitcoin is supposed to be a hedge against such unfortunate future scenarios, it is better to be overly conservative rather than loose with a limit increase.

@LukeParker

Is there really no way to link the block size to a metric like the number of transactions? Something that adjusts to reality, not our guess in 2015?

@onsightit

Most transactional systems put the limit on the transaction size. If the txn size limit were 1 MB and blocks were limited to 100 txns, the max block size would be 100 MB. Then, if you needed a larger-than-1 MB txn, you could create a txn chain within the block, as long as it did not exceed 100 txns when combined with the other txns in the block. Still, it's just kicking the can down the road, as they say.
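
A minimal sketch of that validation rule, assuming the hypothetical limits above (1 MB per txn, 100 txns per block, hence an implied 100 MB maximum block size):

```python
# Hypothetical limits from the comment above; the implied maximum block
# size is their product: 1 MB * 100 = 100 MB.
MAX_TX_SIZE = 1_000_000  # bytes per transaction
MAX_TX_COUNT = 100       # transactions per block

def block_is_valid(tx_sizes):
    """tx_sizes: sizes in bytes of the transactions in one block."""
    if len(tx_sizes) > MAX_TX_COUNT:
        return False
    return all(size <= MAX_TX_SIZE for size in tx_sizes)

# A "txn chain" of three 1 MB transactions fits, provided the block
# still holds at most 100 transactions in total.
print(block_is_valid([1_000_000] * 3))  # True
print(block_is_valid([500] * 101))      # False: too many transactions
print(block_is_valid([2_000_000]))      # False: transaction too large
```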

@KeyvanJS

1.5 years to upgrade nodes is not required; six months at most.
In 24 hours, nearly 500 nodes upgraded to Bitcoin XT.

This growth pattern is not consistent with the actual growth pattern.

A compromise would be (sketched below):

  1. 4 MB blocks starting at the end of January 2016
  2. Doubling every 2 years
  3. Capped at 1 GB in 20 years
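
A minimal sketch of the schedule this implies, assuming the numbers above; note that a 4 MB base doubling every 2 years actually hits the 1 GB cap after 16 years, not 20:

```python
# KeyvanJS's compromise schedule: 4 MB at the end of Jan 2016, doubling
# every 2 years, capped at 1 GB (1024 MB). The cap is reached at year 16.
BASE_MB = 4
CAP_MB = 1024  # 1 GB

def limit_mb(years_since_start):
    doublings = years_since_start // 2
    return min(BASE_MB * 2 ** doublings, CAP_MB)

for year in range(0, 21, 2):
    print(f"year {year:2d}: {limit_mb(year):4d} MB")
# year  0:    4 MB ... year 14:  512 MB, year 16: 1024 MB (capped thereafter)
```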

@danielpbarron

Anyway, if anyone has a GitHub account, leave a link to this convo on there; curious if sipa/anyone groks wtf's going on.

http://log.bitcoin-assets.com/?date=17-08-2015#1240286

@taoeffect

I sympathize with @davout's comment, which does not seem like trolling, but very much on point:

This isn't really "Block size following technological growth"; it's more like "Hey, I've revised Gavin's optimistic estimates".

  1. We don't know what technological growth will be.
  2. This proposal "guesstimates" block sizes. It uses zero "live" intelligence: all the intelligence in this algorithm amounts to guesses that were baked into it at its creation.
  3. Therefore, this BIP expects another soft or hard fork down the line.

So my question is: is this BIP necessary due to some sort of urgency? If so, then can we all agree to call it a hack?

A real solution to the scaling problem would be a solution that is capable of scaling up and down as needed, at any point in time.

Some solutions that fit that property (I think) include:

  • Lightning network (maybe).
  • Any solution that is economics based (definitely).
  • Vitalik & co.'s various suggestions (e.g. block size based on the rate of orphaned blocks, their scalability paper, etc.)
  • Block size limit based on difficulty adjustments [1] [2] (see the sketch after this list)
  • Combinations of the above.
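
To make the difficulty-adjustment idea in this list concrete, here is one hypothetical shape such a rule could take: retarget the size limit whenever the difficulty retargets, scaling it by the same factor within bounds. A minimal sketch of the general idea only, not of the specific proposals in [1] [2]:

```python
# Illustrative only: move the block size limit by the same factor as each
# difficulty retarget, clamped so one period can change it at most 2x.
MIN_LIMIT = 1_000_000  # never drop below the current 1 MB

def next_size_limit(current_limit, old_difficulty, new_difficulty):
    factor = new_difficulty / old_difficulty
    factor = max(0.5, min(2.0, factor))  # clamp per retarget period
    return max(MIN_LIMIT, int(current_limit * factor))

# If difficulty rose 10% over the last retarget period:
print(next_size_limit(1_000_000, 100e12, 110e12))  # 1100000
```

The appeal is that difficulty already aggregates a live signal about deployed hardware; the obvious caveat is that hashrate growth and bandwidth growth are not the same thing.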

All of Mike Hearn's concerns have to do not with block sizes, but with node instability related to full blocks:

  1. The node might become incredibly slow as it enters swap hell.
  2. The node might crash when it tries to allocate memory and fails.
  3. The node might be killed by the operating system kernel.

The solution to those problems has little to do with adjusting block sizes, and more to do with how nodes and thin clients handle blocks that are filling up. If a block size adjustment is needed for some reason, then it cannot be prescribed in advance like this. The network should figure it out by itself.

So are there any txn fee (& similar) economic BIP proposals out there?

@taoeffect

(* to email readers: made some edits to the paragraphs near the end, check github for the latest version of that comment.)

@piratelinux

I am not too happy with this proposal. While it's relatively modest, it is not a robust way to scale the system.

First, you keep claiming that sidechains/subchains/block extensions cannot be used for scaling purposes, yet you have never provided a formal proof of this. I proposed a subchain/block extension mechanism on the forum (https://bitcointalk.org/index.php?topic=1083345.0) and I think I have addressed the concerns you raised when we last talked about it. Yes, it is a bit messy (and I do plan to rewrite it), but I just want to highlight that the question of whether you can scale Bitcoin with such methods has not been carefully settled, so we cannot just hand-wave it away.

Also, sticking to an arbitrary number like 17%, based simply on historical data, does not seem like a robust way to future-proof the system. I am not against increasing the block size of the "main" chain/block, BUT it should be done as a softfork (e.g. using a sidechain) so that it is not forced upon users, AND it should vary automatically with the computing power of the users of the system.

For the last point above, consider for example the current mining scheme, where an 80-byte block header must be hashed to form a valid 1 MB block. The ratio of header size to block size is 80/1,000,000 = 1/12,500. Thus, we can set a rule that blocks may be at most 1 MB if an 80-byte header is hashed, or x MB if an x*80-byte header is hashed. That way the same ratio of header size to block size is preserved, which means that as technology improves, it will be worth hashing more than 80 bytes to gain the extra fees associated with more transactions, especially once a fee market develops. This is just a suggestion and would need to be studied more. The actual function that maps block size to header size could be different, or perhaps another measure of difficulty could be increased rather than the header size. Meni Rosenfeld also proposed a scheme that penalizes miners for mining larger blocks, which is a similar idea.
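
A minimal sketch of the rule just described, keeping the current 80-byte-header-per-MB ratio (names hypothetical; the linear mapping is, as noted, just one possible choice):

```python
# Preserve the current header-to-block ratio (80 bytes : 1 MB = 1 : 12,500):
# a block of x MB is valid only if an x * 80 byte header was hashed.
BASE_HEADER_BYTES = 80
BASE_BLOCK_BYTES = 1_000_000

def max_block_bytes(header_bytes):
    """Largest block permitted for a header of the given size."""
    return (header_bytes * BASE_BLOCK_BYTES) // BASE_HEADER_BYTES

print(max_block_bytes(80))   # 1000000 -> 1 MB, the current rule
print(max_block_bytes(160))  # 2000000 -> hash twice the header bytes,
                             # collect fees from twice the transactions
```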

In any case, proposals need to come with more formal arguments that carefully lay out the assumptions the author is making, rather than high-profile people just throwing out proposals and expecting others to trust them.

@jtoomim

jtoomim commented Oct 25, 2015

The 17.7% figure appears to come from sources like http://rusty.ozlabs.org/?p=493. It's worth noting that most of these metrics aggregate mobile bandwidth and fixed-line bandwidth. As mobile devices' share of total internet usage has been increasing dramatically over the last few years, and since mobile devices are much slower than fixed-line devices, this pushes the estimates down considerably. Examining just fixed-line devices, such as in https://en.wikipedia.org/wiki/Internet_traffic#Global_Internet_traffic, results in estimates closer to 50% per year growth.

@eragmus

eragmus commented Apr 10, 2016

@jtoomim

From 2014 to 2019, Cisco data projects a 16%/year increase in global fixed bandwidth speed and an 8-11%/year increase in global mobile bandwidth speed (smartphones and tablets).

Fixed:
http://i.imgur.com/wPXnpOc.png

Mobile:
http://i.imgur.com/ECzCjrq.png


Source:
http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html

@sesam

sesam commented Mar 18, 2017

Now, after headers-first sync and various thin-block schemes, maybe it's time to forget bandwidth and focus on latency, packet loss, and block verification, which still seem to be the narrowest bottlenecks.
