Block size according to technological growth.

  BIP: ??
  Title: Block size following technological growth
  Author: Pieter Wuille <pieter.wuille@gmail.com>
  Status: Draft
  Type: Standards Track
  Created: 2015-07-21


Abstract

This BIP proposes a block size growth schedule intended to accommodate hardware and other technological improvements for the foreseeable future.

Motivation

Many people want to see Bitcoin scale over time, allowing an increasing number of transactions on the block chain. This would come at an increased cost for the ecosystem (bandwidth, processing, and storage for relay nodes, as well as an impact on propagation speed of blocks on the network), but technology also improves over time. As the technologies Bitcoin depends on improve, and their availability on the market improves with them, there is no reason why Bitcoin's fundamental transaction rate cannot improve proportionally.

Currently, there is a consensus rule in place that limits the size of blocks to 1000000 bytes. Changing this requires a hard-forking change: one that will require every full node in the network to implement the new rules. The new chain created by those changed nodes will be rejected by old nodes, so this would effectively be a request to the ecosystem to migrate to a new and incompatible network. Doing this while controversy exists is dangerous to the network and the ecosystem.

Furthermore, the effective space available is always constrained by a hash rate majority and its ability to process transactions. No hard forking change that relaxes the block size limit can be guaranteed to provide enough space for every possible demand - or even any particular demand - unless strong centralization of the mining ecosystem is expected. Because of that, the development of a fee market and the evolution towards an ecosystem that is able to cope with block space competition should be considered healthy. This does not mean the block size or its limitation needs to be constant forever. However, the purpose of such a change should be evolution with technological growth, and not kicking the can down the road because of a fear of change in economics.

Bitcoin's advantage over other systems does not lie in scalability. Well-designed centralized systems can trivially compete with Bitcoin's on-chain transactions in terms of cost, speed, reliability, convenience, and scale. Its power lies in transparency and the lack of need for trust in network peers, miners, and those who influence or control the system. Wanting to increase the scale of the system is in conflict with all of those. Attempting to buy time with a fast increase means refusing to face that reality, and treating the system as something whose scale trumps all other concerns. A long-term scalability plan should aim at decreasing the need for trust in off-chain systems, rather than increasing the need for trust in Bitcoin.

In summary, hard forks are extremely powerful, and we need to use them very responsibly as a community. They have the ability to fundamentally change the technology or economics of the system, and can be used to disadvantage those who expected certain rules to be immutable. They should be restricted to uncontroversial changes, or they risk eroding the expectation of low trust needed in the system in the longer term. As the block size debate has been controversial so far - for good or bad reasons - this BIP aims for gradual change whose effects start far enough in the future.

Specification

The block size limitation is replaced by the function below, applied to the median of the timestamps of the previous 11 blocks, or in code terms: the block size limit for pindexBlock is GetMaxBlockSize(pindexBlock->pprev->GetMedianTimePast()).

The sigop limit scales proportionally.

It implements a series of block size steps, one every ~97 days, between January 2017 and July 2063, each increasing the maximum block size by 4.4%. This allows an overall growth of 17.7% per year.

uint32_t GetMaxBlockSize(int64_t nMedianTimePast) {
    // The first step is on January 1st 2017.
    if (nMedianTimePast < 1483246800) {
        return 1000000;
    }
    // After that, one step happens every 2^23 seconds.
    int64_t step = (nMedianTimePast - 1483246800) >> 23;
    // Don't do more than 11 doublings for now.
    step = std::min<int64_t>(step, 175);
    // Every step is a 2^(1/16) factor.
    static const uint32_t bases[16] = {
        // bases[i] == round(1000000 * pow(2.0, (i + 1) / 16.0))
        1044274, 1090508, 1138789, 1189207,
        1241858, 1296840, 1354256, 1414214,
        1476826, 1542211, 1610490, 1681793,
        1756252, 1834008, 1915207, 2000000
    };
    return bases[step & 15] << (step / 16);
}
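As a sanity check, the table and schedule above can be reproduced independently. The sketch below (a copy of the proposal's function, plus a helper that recomputes the table entries from the formula given in the code comment; the helper name is illustrative only) samples a few points of the schedule:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Copy of the proposal's function, reproduced so this sketch is self-contained.
uint32_t GetMaxBlockSize(int64_t nMedianTimePast) {
    if (nMedianTimePast < 1483246800) {
        return 1000000;
    }
    int64_t step = (nMedianTimePast - 1483246800) >> 23;
    step = std::min<int64_t>(step, 175);
    static const uint32_t bases[16] = {
        1044274, 1090508, 1138789, 1189207,
        1241858, 1296840, 1354256, 1414214,
        1476826, 1542211, 1610490, 1681793,
        1756252, 1834008, 1915207, 2000000
    };
    return bases[step & 15] << (step / 16);
}

// Recompute a table entry from the formula in the proposal's comment:
// bases[i] == round(1000000 * pow(2.0, (i + 1) / 16.0))
uint32_t BaseEntry(int i) {
    return static_cast<uint32_t>(std::llround(1000000.0 * std::pow(2.0, (i + 1) / 16.0)));
}
```

Note the final cap: after 175 steps (11 doublings, reached in July 2063) the limit stays at bases[15] << 10 = 2,048,000,000 bytes, roughly 2 GB, which still fits in a uint32_t.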

Rationale

Waiting 1.5 years before the hard fork takes place should provide ample time to minimize the risk of a hard fork, if found uncontroversial.

Because every increase (including the first) is only 4.4%, risk from large market or technological changes is minimized.

The growth rate of 17.7% per year is consistent with the average growth rate of bandwidth over recent years, which appears to be the bottleneck. If, over time, this growth factor exceeds what actual technology offers, the intention should be to soft-fork a tighter limit.

Using a time-based check is very simple to implement, needs little context, is efficient, and is trivially reviewable. Using the "median time past" guarantees monotonic behaviour, as this median is required to be increasing, according to Bitcoin's existing consensus rules. Using the "median time past" of the block before means we know in advance what the limit of each block will be, without depending on the actual block's timestamp.
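For readers unfamiliar with the rule being referenced, here is a minimal sketch of the "median time past" computation, assuming the caller supplies the timestamps of the previous 11 blocks (the function name and interface are illustrative, not the actual Bitcoin Core API):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Median of the previous blocks' timestamps. Bitcoin's consensus rules
// use the last 11 blocks; the odd count makes the median a single element.
int64_t MedianTimePast(std::vector<int64_t> timestamps) {
    std::sort(timestamps.begin(), timestamps.end());
    return timestamps[timestamps.size() / 2];
}
```

Because consensus rules require each block's timestamp to exceed this median, the median itself never decreases from block to block, which is what makes the size schedule monotonic.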

Compatibility

This is a hard-forking change, and thus breaks compatibility with old fully-validating nodes. It should not be deployed without widespread consensus.

Acknowledgements

Thanks to Gregory Maxwell and Wladimir J. van der Laan for their suggestions.

kanzure commented Jul 30, 2015

What sort of failure modes might be observed, and how would they look like and what possible corrective behaviors are there? I know it's a broad request, but perhaps that could be included in this BIP draft. Also, it would be interesting to add more specific text regarding "beyond what the actual technology offers"- how will we know that in the future? By what means do we have to know that in a rational manner?

edit: whoops didn't see the replies on the mailing list, http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009770.html

davout commented Jul 30, 2015

This isn't really "Block size following technological growth" it's more like "Hey, I've revised Gavin's optimistic estimates".

anduck commented Jul 30, 2015

Davout could you please stop trolling in these Github repos etc..

While I appreciate the work towards a compromise and hope that this is incorporated in core, this would only give us 10MB blocks by 2030. This is extremely conservative and ignores technological progress over the 6 years that Bitcoin has existed. Could we at least start at 2MB with the initial hard fork?

luke-jr commented Jul 30, 2015

I think this is reasonable.

LeMiner commented Jul 30, 2015

Ignoring that I disagree with the motivation in this proposal I'll stay on topic;

Unacceptably pessimistic/conservative, average growth of the big internet exchanges is above 27%/year, last year AMSIX grew 37%. Home connections have been increasing at 24%/year and that has no signs of slowing. Not to mention fiber is getting introduced to more and more areas in the world as well.

10MB blocks in 2030 is way too little, way too late. This might be a bit more interesting if started at 8MB.

I do appreciate the attempt at reaching a compromise as well, though, but in this state it's a big no from me.

akstunt commented Jul 30, 2015

This just makes a lot more sense than any other suggestion I have seen thus far. I agree with samuelpj that we should start at a 2MB block, but also withhold changing core in the immediate future with the intention of implementing this some time in the next 6-24 months. LeMiner, I agree this does seem too pessimistic if we start from 1MB, but the user base and the number of miners right now don't quite warrant such a large increase. There are still other things that need to be focused on, like the current mempool optimizations that can help keep things in check in the immediate future.

LeMiner commented Jul 30, 2015

@Akstunt, I don't think we should be "lagging" behind user-adoption-rate when it comes to network capacity, rather prepare for the future. Just because the current user base might not warrant a large increase, the next big influx might.

Gavin's proposal seems to be the solution for me, but it's good to see even the most conservative devs are admitting that bigger blocks are needed. The possible reasoning behind a proposal "from the other side of the argument" at such a late time is not something to be discussed in this thread, but it does raise questions.

For me this all comes a bit too little, a bit too late. But now that it seems like we're simply talking numbers, perhaps it's possible to reach a consensus, taking the community's opinion, as well as the miners', into account.

@sipa I respect your work a lot, and even you must admit that this is a very conservative growth formula. Perhaps a linear growth mechanic is not even the best way to scale the block size.

If we want a linear growth rate, are we then just arguing about the scaling and offset of the y = mx + c line?

Or do we want to quickly scale now, just once, to a value of xMb/block in order to kick the can down the road until we can develop more intelligent algorithms for autoscaling the block size outside (corruptible) human influence according to the total network value/penetration/transaction rate/etc.

Telegeography are forecasting an annual growth rate of 40% for international bandwidth from 2014 to 2021.

https://www.telegeography.com/research-services/global-bandwidth-forecast-service/

Firstly, thanks for putting forth this proposal. I believe a block size limit that tracks technology trends is a rational way to approach this topic.

I recently did a statistical analysis of the history of bitcoin transaction growth from January 3rd 2009 to June 22nd 2015.

See raw data here: https://docs.google.com/spreadsheets/d/1OtvLuUJmy4seXt2Rc8fskbTuGRIOtbAgD_QwdCE2Fbc/edit?usp=sharing

In short, after discounting the early 2010 to 2013 hyper growth stage, in the last 12 months the average annual growth rate of bitcoin transactions has grown from 34% to 74%. Please note this date range does not include the recent spam attack transaction numbers, so I believe it to be a fair representation of organic growth in transactions over time.

I also conducted a broader analysis of the technological trends in compute, storage and as you mention most importantly bandwidth. https://medium.com/@DJohnstonEC/johnston-s-law-quantified-f1a4d93bbc19

My one comment would be to agree with others that have voiced 17.7% being a bit too conservative a growth number. As mentioned above the established trend has been meaningfully above this growth rate for some time. Nielsen's Law of Bandwidth growth (50% annually) has held up well given the last 31 years of data we have since the observation. http://www.nngroup.com/articles/law-of-bandwidth/

I'm not suggesting 50% annual growth is the right number for this proposal. Other factors such as uneven progress and distribution of bandwidth growth must certainly be taken into account. The estimates @LeMiner mentioned above in the mid 20 percentages to mid 30 percentages seem to accommodate real world bandwidth growth factors well.

I'm rather concerned by this proposal for several reasons.

Firstly, the selection of a growth rate based on bandwidth ignores reality. The speed of block propagation has absolutely nothing to do with bandwidth growth at the edges. Ignoring whether we want to ensure we can support miners running over Tor, most miners run propagation from servers with 100Mbps or 1Gbps connections to backbone networks in well-placed datacenters. Small-size transfers (i.e. probably any block size that people have been suggesting) between them are limited nearly entirely by latency and packet loss, not the speed of their connection. Sadly, latency is bound by physical laws, and packet loss is fundamental to how packet routing works on the Internet. These limits will not move much without fundamentally changing how packet routing works, or advancing the speed of transfers to eke out another 50% improvement in latency (i.e., running a perfectly straight vacuum with lasers between your servers). I'm not against picking constants based on engineering reality, but trying to pick a growth rate based on bandwidth increases is, to me, just picking a random number.

Additionally, I'm hugely concerned about any suggestion that includes exponential increases. As mentioned above, many of the limiting factors here do not scale well with improvements in technology. Though I don't want to wade into the debate around when we reach 2MB and when we reach 20MB, I don't want to sign up to a hardfork that grows exponentially, eventually far outpacing engineering limits. This implies that, at some point, we are going to have to agree to softfork a smaller limit in, which I think is the exact opposite of the position we want to be in. The default becomes "allow the system to grow into something centralized" instead of "push some use-cases off-chain into systems which still potentially have decent trustless properties".

IMHO this is a good proposal and current block limit (1 MB ) should be the starting point. The 1 MB block limit has proven itself in a number of different network environments and its reliability remains largely unchallenged.

The main merit of this proposal is (psychologically) preparing the audience for dealing with variable block caps. On the question of whether the growth function should be exponential or not – I don’t think it matters, as long as the exponent is low enough and is tied to actual block chain metrics. The proposed value of 0.177 (17.7%) does seem a bit aggressive to me (I would personally prefer something between 4% and 8%). However, adjusting the curve (via soft fork) would be way easier (and uncontroversial) than switching from a scalar to a function.

ahdefga commented Jul 31, 2015

It might make sense to raise the starting point to 2 MB blocks if this can be tolerated by the entire system today. It seems wasteful not to use the resources that are already present.

Is it possible to soft fork the scaling curve when the hard cap has been lifted? If so, the rate can be adjusted for any unforeseen advances (or absence thereof) in technology and this would make a good proposal for scaling bitcoin into the future.

You might want to spell januari January in your comment on line 2. Also, I think == is a boolean operator, you might want to fix this in your comment on line 12.

yrral86 commented Jul 31, 2015

@todomodo @TheBlueMatt
I would also be concerned about unconstrained exponential growth, but in this case it is limited to 11 doublings, which would give 2 GB blocks. This would take 45 minutes to upload with my current home connection with 6Mbps upload, which is indeed unfeasible for 10-minute blocks. However, to transfer that much in one minute would only require around 275Mbps, which is easily feasible for modern hardware. If we don't have this kind of bandwidth available to home users by 2063 I would be terribly surprised. This could only really happen if we're still stuck on copper lines.
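The arithmetic in that comment can be checked directly (back-of-the-envelope only, treating 2 GB as 2×10^9 bytes; function names are illustrative):

```cpp
// Seconds needed to move `bytes` over a link of `mbps` megabits/second.
double TransferSeconds(double bytes, double mbps) {
    return bytes * 8.0 / (mbps * 1e6);
}

// Link speed in Mbps needed to move `bytes` within `seconds`.
double RequiredMbps(double bytes, double seconds) {
    return bytes * 8.0 / seconds / 1e6;
}
```

A 2 GB block at 6 Mbps takes about 2667 seconds (roughly 44 minutes), and sending it within one minute needs roughly 267 Mbps, in line with the figures quoted.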

valiz commented Jul 31, 2015

Everyone, we cannot foresee the future. We cannot be certain of any degree of bandwidth growth. It can go anywhere. We are talking about decades here. Don't forget that right now major central banks are engaged in ZIRP and QE, massively distorting interest rates across the world. There is always the risk of a major global economic crisis appearing out of nowhere, and this would affect technology and bandwidth availability. This is why flexibility regarding the block size should be a top concern.

Personally I like this proposal, and if it is implemented and the block size growth can be changed via soft fork, then it's great!

Thank you for your work everyone!

10MB blocks in fifteen years???

that simply means no Metcalfe's law, hence: bitcoin is not for everyone.

digitsu commented Aug 17, 2015

This sounds like the best way forward. A technology-lagging block limit growth rate is conservative and safe. Gavin's growth rate is designed to lead the market, and that I believe is a mistake, similar to the ones central banks make when they set monetary policy based on GDP predictions. We can't predict future bandwidth growth rates, let alone plan for any unforeseen black swans that may affect the global economy and stifle growth in all sectors. As bitcoin is supposed to be a hedge against such unfortunate future scenarios, it is better to be overly conservative than loose with a limit increase.

Is there really no way to link the block size to a metric like the number of transactions? Something that adjusts to reality, not our guess in 2015?

Most transactional systems put the limit on the transaction size. If the txn size limit were 1MB and blocks were limited to 100 txns, the max block size would be 100MB. Then, if you needed larger than 1MB txns you could create a txn chain within the block, as long as it did not exceed 100 txns when combined with any other txns in the block. Still, it's just kicking the can down the road, as they say.

1.5 years to upgrade nodes is not required; six months at most.
In 24 hours nearly 500 nodes upgraded to Bitcoin XT.

This growth pattern is not consistent with actual growth pattern.

A compromise would be:

  1. 4 MB Block starts end of JAN 2016
  2. Then that doubles every 2 years
  3. Capped at 1GB in 20 years.

anyway, if anyone got a github account, leave a link to this convo on there, curious if sipa/anyone groks wtf's going on.

http://log.bitcoin-assets.com/?date=17-08-2015#1240286

I sympathize with @davout's comment, which does not seem like trolling, but very much on point:

This isn't really "Block size following technological growth" it's more like "Hey, I've revised Gavin's optimistic estimates".

  1. We don't know what technological growth will be.
  2. This proposal "guestimates" at block sizes. It uses zero "live" intelligence. All the intelligence in this algorithm amounts to guesses that were baked into it at its creation.
  3. Therefore, at some point, this BIP expects a soft or a hard fork down the line.

So my question is: is this BIP necessary due to some sort of urgency? If so, then can we all agree to call it a hack?

A real solution to the scaling problem would be a solution that is capable of scaling up and down as needed, at any point in time.

Some solutions that fit that property (I think) include:

  • Lightning network (maybe).
  • Any solution that is economics based (definitely).
  • Vitalik & co's various suggestions (i.e. block size based on rate of orphaned blocks, their scalability paper, etc.)
  • Block size limit based on difficulty adjustments [1] [2]
  • Combinations of the above.

All of Mike Hearn's concerns have to do not with block sizes, but with node instability related to full blocks:

  1. The node might become incredibly slow as it enters swap hell.
  2. The node might crash when it tries to allocate memory and fails.
  3. The node might be killed by the operating system kernel.

The solution to those problems has little to do with adjusting block sizes, and more to do with how nodes and thin clients handle blocks that are filling up. If a block size adjustment is needed for some reason, then it cannot be prescribed in advance like this. The network should figure it out by itself.

So are there any txn fee (& similar) economic BIP proposals out there?

(* to email readers: made some edits to the paragraphs near the end, check github for the latest version of that comment.)

I am not too happy with this proposal. While it's relatively modest, it is not a robust way to scale the system.

First, you keep claiming that sidechains/subchains/block extensions cannot be used for scaling purposes, yet you never provided a formal proof of this. I proposed a subchain/block extension mechanism on the forum https://bitcointalk.org/index.php?topic=1083345.0 and I think I addressed the concerns you had since we last talked about it. Yes, it is a bit messy (and I do plan to rewrite it), but I just want to highlight that the question of whether you can scale Bitcoin with such methods has not been carefully resolved, so we cannot just hand-wave it away.

Also, sticking an arbitrary number like 17%, that is simply based on historical data, does not seem like a robust way to future-proof the system. I am not against increasing the block size of the "main" chain/block, BUT it should be done as softfork (e.g. using a sidechain) so that it is not forced upon users, AND, it should be automatically varying with the computing power of the users of the system.

For the last point above, consider for example the current mining scheme, where an 80 byte block header must be hashed to form a valid 1 MB block. The ratio of header size to block size is 80/1000000 = 1/12500. Thus, we can set a rule that blocks must be at most 1 MB if an 80 byte header is hashed, or x MB if an x*80 byte header is hashed. That way the same ratio of header to block size is preserved, which means that as technology improves, it will be worth it to hash more than 80 bytes and gain the extra fees associated with more transactions, especially once a fee market develops. This is just a suggestion, and would need to be studied more. The actual function that maps block size to header size could be different, or perhaps another measure of difficulty could be increased rather than the header size. Meni Rosenfeld also proposed a scheme that penalizes miners for mining larger blocks, which is a similar idea.
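The ratio-preserving rule sketched in that comment could look something like the following (purely illustrative; the function name and the linear mapping are the commenter's suggestion, not an implemented scheme):

```cpp
#include <cstdint>

// Keep the header-to-block ratio fixed at 80/1000000 = 1/12500:
// a block of n bytes requires a header of n * 80 / 1000000 bytes to be
// hashed, so larger blocks cost proportionally more proof-of-work effort.
uint64_t RequiredHeaderBytes(uint64_t maxBlockBytes) {
    return maxBlockBytes * 80 / 1000000;
}
```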

In any case, we need more formal proofs associated with proposals, that carefully lay out the assumptions the author is making, rather than just high profile people throwing out proposals and expecting others to trust them.

jtoomim commented Oct 25, 2015

The 17.7% figure appears to come from sources like http://rusty.ozlabs.org/?p=493. It's worth noting that most of these metrics aggregate mobile bandwidth and fixed-line bandwidth. As mobile devices' share of total internet usage has been increasing dramatically over the last few years, and since mobile devices are much slower than fixed-line devices, this pushes the estimates down considerably. Examining just fixed-line devices, such as in https://en.wikipedia.org/wiki/Internet_traffic#Global_Internet_traffic, results in estimates closer to 50% per year growth.

eragmus commented Apr 10, 2016

@jtoomim

From 2014-2019, Cisco data projects 16%/year increase in global fixed bandwidth speed, and 8-11%/year increase in global mobile bandwidth speed (smartphones and tablets).

Fixed:
http://i.imgur.com/wPXnpOc.png

Mobile:
http://i.imgur.com/ECzCjrq.png


Source:
http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html

sesam commented Mar 18, 2017

Now after headers first and various thin blocks, maybe it's time to forget bandwidth and focus on latency, packet loss and block verification which seem to still be the narrowest bottlenecks.
