@sipa
Last active September 22, 2023 08:22
Block size according to technological growth.

Published as BIP 103

@kanzure

kanzure commented Jul 30, 2015

What sort of failure modes might be observed, what would they look like, and what possible corrective behaviors are there? I know it's a broad request, but perhaps that could be included in this BIP draft. Also, it would be interesting to add more specific text regarding "beyond what the actual technology offers": how will we know that in the future? By what means would we know that in a rational manner?

edit: whoops didn't see the replies on the mailing list, http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009770.html

@davout

davout commented Jul 30, 2015

This isn't really "Block size following technological growth" it's more like "Hey, I've revised Gavin's optimistic estimates".

@anduck

anduck commented Jul 30, 2015

Davout, could you please stop trolling in these GitHub repos, etc.

@heysam1337

While I appreciate the work towards a compromise and hope that this is incorporated in core, this would only give us 10MB blocks by 2030. This is extremely conservative and ignores technological progress over the 6 years that Bitcoin has existed. Could we at least start at 2MB with the initial hard fork?

@luke-jr

luke-jr commented Jul 30, 2015

I think this is reasonable.

@LeMiner

LeMiner commented Jul 30, 2015

Ignoring that I disagree with the motivation in this proposal, I'll stay on topic:

Unacceptably pessimistic/conservative: average growth at the big internet exchanges is above 27%/year, and last year AMSIX grew 37%. Home connections have been increasing at 24%/year, with no signs of slowing. Not to mention that fiber is being introduced to more and more areas of the world as well.

10MB blocks in 2030 is way too little, way too late. This might be a bit more interesting if it started at 8MB.

I do appreciate the attempt at reaching a compromise as well, though, but in this state it's a big no from me.

@akstunt

akstunt commented Jul 30, 2015

This just makes a lot more sense than any other suggestion I have seen thus far. I agree with samuelpj that we should start at a 2MB block, but also hold off on changing Core in the immediate future, with the intention of implementing this some time in the next 6-24 months. @LeMiner, I agree this does seem too pessimistic if we start from 1MB, but the user base and the number of miners right now don't quite warrant such a large increase. There are still other things that need to be focused on, like the current mempool optimizations, that can help keep things in check in the immediate future.

@LeMiner

LeMiner commented Jul 30, 2015

@akstunt, I don't think we should be "lagging" behind the user adoption rate when it comes to network capacity; rather, we should prepare for the future. Even if the current user base might not warrant a large increase, the next big influx might.

Gavin's proposal seems to be the solution for me, but it's good to see that even the most conservative devs are admitting that bigger blocks are needed. The possible reasoning behind a proposal "from the other side of the argument" at such a late time is not something to be discussed in this thread, but it does raise questions.

For me this all comes a bit too little, a bit too late. But now that it seems like we're simply talking numbers, perhaps it's possible to reach a consensus, taking the community's opinion, as well as the miners', into account.

ghost commented Jul 30, 2015

@sipa I respect your work a lot, but even you must admit that this is a very conservative growth formula. Perhaps a linear growth mechanism is not even the best way to scale the block size.

If we want a linear growth rate, are we then just arguing about the scaling and offset of the y = mx + c line?

Or do we want to quickly scale now, just once, to a value of x MB/block in order to kick the can down the road until we can develop more intelligent algorithms for autoscaling the block size outside (corruptible) human influence, according to the total network value/penetration/transaction rate/etc.?
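
For a sense of scale on that choice, here is a small illustrative sketch of how a linear schedule and a compounding ~17.7%/year schedule diverge from a 1 MB starting point. The linear slope and the horizon are assumed values for illustration, not anything taken from the BIP text.

```python
# Illustrative only: compares a linear schedule (the "y = mx + c" option)
# with a compounding ~17.7%/year schedule. The 1 MB start and +1 MB/year
# slope are assumptions for this sketch, not values from the BIP.
START_MB = 1.0
LINEAR_MB_PER_YEAR = 1.0   # hypothetical slope m
ANNUAL_RATE = 0.177        # the growth rate discussed in this thread

for years in (1, 5, 10, 15, 20):
    linear = START_MB + LINEAR_MB_PER_YEAR * years        # y = m*x + c
    exponential = START_MB * (1 + ANNUAL_RATE) ** years   # compound growth
    print(f"year {years:2d}: linear {linear:5.1f} MB, exponential {exponential:6.1f} MB")
```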

@bitcartel

Telegeography are forecasting an annual growth rate of 40% for international bandwidth from 2014 to 2021.

https://www.telegeography.com/research-services/global-bandwidth-forecast-service/

@DavidAJohnston

Firstly, thanks for putting forth this proposal. I believe a block size limit that tracks technology trends is a rational way to approach this topic.

I recently did a statistical analysis of the history of bitcoin transaction growth from January 3rd 2009 to June 22nd 2015.

See raw data here: https://docs.google.com/spreadsheets/d/1OtvLuUJmy4seXt2Rc8fskbTuGRIOtbAgD_QwdCE2Fbc/edit?usp=sharing

In short, after discounting the early 2010-2013 hyper-growth stage, over the last 12 months the average annual growth rate of bitcoin transactions has risen from 34% to 74%. Please note this date range does not include the recent spam-attack transaction numbers, so I believe it to be a fair representation of organic growth in transactions over time.

I also conducted a broader analysis of the technological trends in compute, storage, and, as you mention most importantly, bandwidth. https://medium.com/@DJohnstonEC/johnston-s-law-quantified-f1a4d93bbc19

My one comment would be to agree with others who have voiced that 17.7% is a bit too conservative a growth number. As mentioned above, the established trend has been meaningfully above this growth rate for some time. Nielsen's Law of bandwidth growth (50% annually) has held up well over the 31 years of data we have since the observation. http://www.nngroup.com/articles/law-of-bandwidth/

I'm not suggesting 50% annual growth is the right number for this proposal. Other factors such as uneven progress and distribution of bandwidth growth must certainly be taken into account. The estimates @LeMiner mentioned above, in the mid-20 to mid-30 percent range, seem to accommodate real-world bandwidth growth factors well.
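
To put the competing constants side by side, here is a quick back-of-the-envelope sketch. The 1 MB start and 15-year horizon are assumptions for illustration; the rates are the ones cited in this thread.

```python
# Rough arithmetic only: compound a 1 MB limit at the annual growth rates
# discussed in this thread over an assumed 15-year horizon.
start_mb = 1.0
horizon_years = 15
for annual_rate in (0.177, 0.27, 0.37, 0.50):   # BIP figure vs. rates cited above
    final_mb = start_mb * (1 + annual_rate) ** horizon_years
    print(f"{annual_rate:.1%}/year for {horizon_years} years -> {final_mb:,.0f} MB")
```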

@TheBlueMatt

I'm rather concerned by this proposal for several reasons.

Firstly, the selection of a growth rate based on bandwidth ignores reality. The speed of block propagation has absolutely nothing to do with bandwidth growth at the edges. Ignoring whether we want to ensure we can support miners running over Tor, most miners run propagation from servers with 100Mbps or 1Gbps connections to backbone networks in well-placed datacenters. Small-size transfers (ie, probably any block size that people have been suggesting) between them are limited nearly entirely by latency and packet loss, not the speed of their connection. Sadly, latency is bound by physical laws and packet loss is fundamental to how packet routing works on the Internet, so those limits aren't going to move without fundamentally changing how packet routing works, or advancing the speed of transfers to eke out another 50% improvement in latency (ie, run a perfectly straight vacuum with lasers between your servers). I'm not against picking constants based on engineering reality, but trying to pick a growth rate based on bandwidth increases is, to me, just picking a random number.

Additionally, I'm hugely concerned about any suggestion that includes exponential increases. As mentioned above, many of the limiting factors here do not scale well with improvements in technology. Though I don't want to wade into the debate around when we reach 2MB and when we reach 20MB, I don't want to sign up to a hardfork that grows exponentially, eventually far outpacing engineering limits. This implies that, at some point, we are going to have to agree to softfork a smaller limit in, which I think is the exact opposite of the position we want to be in. The default becomes "allow the system to grow into something centralized" instead of "push some use-cases off-chain into systems which still potentially have decent trustless properties".
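
On the latency point above, here is a deliberately crude sketch of why round trips rather than raw bandwidth dominate for small blocks between well-connected nodes. All constants (bandwidth, RTT, number of protocol round trips) are assumed values, and the model ignores TCP behaviour, packet loss, and relay optimizations.

```python
# Crude model: relay time = bytes-on-the-wire time + protocol round trips.
# Bandwidth, RTT, and round-trip count are assumed values for illustration.
def relay_components(block_mb, bandwidth_mbps=1000.0, rtt_s=0.080, round_trips=2):
    transfer_s = block_mb * 8 / bandwidth_mbps   # time spent pushing bytes
    latency_s = round_trips * rtt_s              # time spent waiting on round trips
    return transfer_s, latency_s

for size_mb in (1, 2, 8, 20):
    transfer_s, latency_s = relay_components(size_mb)
    total_s = transfer_s + latency_s
    print(f"{size_mb:2d} MB: {total_s:.3f} s, {latency_s / total_s:.0%} of it round-trip latency")
```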

@todomodo

IMHO this is a good proposal, and the current block limit (1 MB) should be the starting point. The 1 MB block limit has proven itself in a number of different network environments and its reliability remains largely unchallenged.

The main merit of this proposal is (psychologically) preparing the audience for dealing with variable block caps. On the question of whether the growth function should be exponential or not, I don't think it matters, as long as the exponent is low enough and is tied to actual block chain metrics. The proposed value of 0.177 (17.7%) does seem a bit aggressive to me (I would personally prefer something between 4% and 8%). However, adjusting the curve (via soft fork) would be way easier (and uncontroversial) than switching from a scalar to a function.

@ahdefga

ahdefga commented Jul 31, 2015

It might make sense to raise the starting point to 2 MB blocks if this can be tolerated by the entire system today. It seems wasteful not to use the resources that are already present.

Is it possible to soft fork the scaling curve when the hard cap has been lifted? If so, the rate can be adjusted for any unforeseen advances (or absence thereof) in technology and this would make a good proposal for scaling bitcoin into the future.

You might want to spell "januari" as "January" in your comment on line 2. Also, I think == is a boolean operator; you might want to fix this in your comment on line 12.

@yrral86

yrral86 commented Jul 31, 2015

@todomodo @TheBlueMatt
I would also be concerned about unconstrained exponential growth, but in this case it is limited to 11 steps, which would be 2 GB blocks. That would take 45 minutes to upload with my current home connection's 6Mbps upload, which is indeed infeasible for 10-minute blocks. However, transferring that much in one minute would only require 275Mbps, which is easily feasible for modern hardware. If we don't have this kind of bandwidth available to home users by 2063, I would be terribly surprised. That could only really happen if we're still stuck on copper lines.
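
For anyone who wants to reproduce those figures, a quick check in decimal units (2 GB = 16,000 megabits) lands close to the numbers above; the small differences come from decimal versus binary gigabytes.

```python
# 2 GB block expressed in megabits (decimal units), checked against a
# 6 Mbps home uplink and against a one-minute transfer target.
block_megabits = 2_000 * 8
print(f"~{block_megabits / 6 / 60:.0f} minutes at 6 Mbps")          # ~44 minutes
print(f"~{block_megabits / 60:.0f} Mbps to send it in one minute")  # ~267 Mbps
```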

@valiz

valiz commented Jul 31, 2015

Everyone, we cannot foresee the future. We cannot be certain of any degree of bandwidth growth. It could go anywhere. We are talking about decades here. Don't forget that right now major central banks are engaged in ZIRP and QE, massively distorting interest rates across the world. There is always the risk of a major global economic crisis appearing out of nowhere, and this would affect technology and bandwidth availability. This is why flexibility regarding the block size should be a top concern.

Personally I like this proposal, and if it is implemented and the block size growth can be changed via soft fork, then it's great!

Thank you for your work everyone!

@Alex-Linhares

10MB blocks in fifteen years???

That simply means no Metcalfe's law; hence, Bitcoin is not for everyone.

@digitsu

digitsu commented Aug 17, 2015

This sounds like the best way forward. A technology-lagging block limit growth rate is conservative and safe. Gavin's growth rate is designed to lead the market, and that I believe is a mistake, similar to the ones central banks make when they set monetary policy based on GDP predictions. We can't predict future bandwidth growth rates, let alone plan for any unforeseen black swans that may affect the global economy and stifle growth in all sectors. As bitcoin is supposed to be a hedge against such unfortunate future scenarios, it is better to be overly conservative than loose with a limit increase.

@LukeParker

Is there really no way to link the block size to a metric like # of transactions? Something that adjusts to reality, not to our guess in 2015?

@onsightit

Most transactional systems put the limit on the transaction size. If the txn size limit were 1MB and blocks were limited to 100 txns, the max block size would be 100MB. Then, if you needed txns larger than 1MB, you could create a txn chain within the block, as long as it did not exceed 100 txns when combined with any other txns in the block. Still, it's just kicking the can down the road, as they say.
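
A minimal sketch of that kind of scheme, where the 1 MB per-transaction limit and the 100-transaction block limit are the illustrative numbers from the example above, not anything in Bitcoin today:

```python
# Hypothetical limits from the example above: cap each transaction by size
# and each block by transaction count, so the maximum block size is a product.
MAX_TX_BYTES = 1_000_000      # 1 MB per transaction (illustrative)
MAX_TXS_PER_BLOCK = 100       # 100 transactions per block (illustrative)

def block_within_limits(tx_sizes):
    """A block passes if no transaction exceeds the size cap and the count cap holds."""
    return (len(tx_sizes) <= MAX_TXS_PER_BLOCK
            and all(size <= MAX_TX_BYTES for size in tx_sizes))

print(MAX_TX_BYTES * MAX_TXS_PER_BLOCK / 1_000_000, "MB maximum block size")  # 100.0
print(block_within_limits([900_000] * 100))   # True: within both limits
print(block_within_limits([900_000] * 101))   # False: too many transactions
```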

@KeyvanJS

1.5 years to upgrade nodes is not required; six months at most.
In 24 hours nearly 500 nodes upgraded to Bitcoin XT.

This growth schedule is not consistent with the actual growth pattern.

A compromise would be (sketched in code below):

  1. A 4 MB block size starting at the end of January 2016.
  2. That then doubles every 2 years.
  3. Capped at 1GB in 20 years.
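
A rough sketch of that schedule, simplified to whole years (the January 2016 start is rounded to the year, and 1 GB is taken as 1024 MB):

```python
# Sketch of the compromise above: 4 MB at the start of 2016, doubling every
# two years, capped at 1 GB (taken here as 1024 MB). Dates are simplified.
CAP_MB = 1024
for years_since_2016 in range(0, 21, 2):
    size_mb = min(4 * 2 ** (years_since_2016 // 2), CAP_MB)
    print(2016 + years_since_2016, f"{size_mb} MB")
# Under these numbers the 1 GB cap is already reached after 8 doublings (16 years).
```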

@danielpbarron

anyway, if anyone got a github account, leave a link to this convo on there, curious if sipa/anyone groks wtf's going on.

http://log.bitcoin-assets.com/?date=17-08-2015#1240286

@taoeffect

I sympathize with @davout's comment, which does not seem like trolling but rather very much on point:

This isn't really "Block size following technological growth" it's more like "Hey, I've revised Gavin's optimistic estimates".

  1. We don't know what technological growth will be.
  2. This proposal "guesstimates" at block sizes. It uses zero "live" intelligence. All the intelligence in this algorithm amounts to guesses that were baked into it at its creation.
  3. Therefore, at some point, this BIP expects a soft or a hard fork down the line.

So my question is: is this BIP necessary due to some sort of urgency? If so, then can we all agree to call it a hack?

A real solution to the scaling problem would be a solution that is capable of scaling up and down as needed, at any point in time.

Some solutions that fit that property (I think) include:

  • Lightning network (maybe).
  • Any solution that is economics based (definitely).
  • Vitalik & co's various suggestions (e.g. block size based on the rate of orphaned blocks, their scalability paper, etc.)
  • Block size limit based on difficulty adjustments [1] [2]
  • Combinations of the above.

All of Mike Hearn's concerns have to do not with block sizes, but with node instability related to full blocks:

  1. The node might become incredibly slow as it enters swap hell.
  2. The node might crash when it tries to allocate memory and fails.
  3. The node might be killed by the operating system kernel.

The solution to those problems has little to do with adjusting block sizes, and more to do with how nodes and thin clients handle blocks that are filling up. If a block size adjustment is needed for some reason, then it cannot be prescribed in advance like this. The network should figure it out by itself.

So are there any txn fee (& similar) economic BIP proposals out there?

@taoeffect

(* to email readers: made some edits to the paragraphs near the end, check github for the latest version of that comment.)

@piratelinux

I am not too happy with this proposal. While it's relatively modest, it is not a robust way to scale the system.

First, you keep claiming that sidechains/subchains/block extensions cannot be used for scaling purposes, yet you never provided a formal proof of this. I proposed a subchain/block extension mechanism on the forum https://bitcointalk.org/index.php?topic=1083345.0 and I think I addressed the concerns you had since we last talked about it. Yes, it is a bit messy (and I do plan to rewrite it), but I just want to highlight that the question of whether you can scale Bitcoin with such methods has not been carefully solved, so we can not just hand wave it away.

Also, sticking to an arbitrary number like 17%, which is simply based on historical data, does not seem like a robust way to future-proof the system. I am not against increasing the block size of the "main" chain/block, BUT it should be done as a softfork (e.g. using a sidechain) so that it is not forced upon users, AND it should automatically vary with the computing power of the users of the system.

For the last point above, consider for example the current mining scheme, where an 80 byte block header must be hashed to form a valid 1 MB block. The ratio of header size to block size is 80/1000000 = 1/12500. Thus, we can set a rule that blocks must be at most 1 MB if an 80 byte header is hashed, or x MB if an x*80 byte header is hashed. That way the same ratio of header to block size is preserved, which means that as technology improves, it will be worth it to hash more than 80 bytes and gain the extra fees associated with more transactions, especially once a fee market develops. This is just a suggestion, and would need to be studied more. The actual function that maps block size to header size could be different, or maybe we just increase another measure of difficulty rather than the header size. Meni Rosenfeld also proposed a scheme that penalizes miners for mining larger blocks, which is a similar idea.
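
A rough sketch of that proportional-header rule, using the simple linear mapping described above (illustrative only):

```python
# Keep the header-to-block-size ratio at 80 bytes per 1 MB (1/12500): a miner
# who hashes a proportionally larger header may produce a proportionally
# larger block. Purely a sketch of the idea described above.
BASE_HEADER_BYTES = 80
BASE_BLOCK_BYTES = 1_000_000

def max_block_bytes(header_bytes):
    # 80-byte header -> 1 MB, 160-byte header -> 2 MB, and so on.
    return header_bytes * BASE_BLOCK_BYTES // BASE_HEADER_BYTES

for header_bytes in (80, 160, 400):
    print(f"{header_bytes}-byte header -> {max_block_bytes(header_bytes) / 1_000_000:.0f} MB block")
```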

In any case, we need more formal proofs associated with proposals, that carefully lay out the assumptions the author is making, rather than just high profile people throwing out proposals and expecting others to trust them.

@jtoomim

jtoomim commented Oct 25, 2015

The 17.7% figure appears to come from sources like http://rusty.ozlabs.org/?p=493. It's worth noting that most of these metrics aggregate mobile bandwidth and fixed-line bandwidth. As mobile devices' share of total internet usage has been increasing dramatically over the last few years, and since mobile devices are much slower than fixed-line devices, this pushes the estimates down considerably. Examining just fixed-line devices, such as in https://en.wikipedia.org/wiki/Internet_traffic#Global_Internet_traffic, results in estimates closer to 50% per year growth.

@eragmus

eragmus commented Apr 10, 2016

@jtoomim

From 2014 to 2019, Cisco data projects a 16%/year increase in global fixed bandwidth speed, and an 8-11%/year increase in global mobile bandwidth speed (smartphones and tablets).

Fixed:
http://i.imgur.com/wPXnpOc.png

Mobile:
http://i.imgur.com/ECzCjrq.png


Source:
http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html

@sesam

sesam commented Mar 18, 2017

Now, after headers-first sync and various thin-block schemes, maybe it's time to forget bandwidth and focus on latency, packet loss, and block verification, which still seem to be the narrowest bottlenecks.
