The consensus protocol limits on blocks are:
- 1,000,000 or fewer bytes of canonically-serialized size
- 20,000 or fewer "sigops"
Unfortunately, Satoshi implemented those as quick fixes with zero code review and little testing (Bitcoin was not a Big Deal back then, so that was the right decision at the time), and the way sigop counting is done is... well, just wrong.
Here's how Satoshi did it, as pseudo-code (see GetLegacySigOpCount in main.cpp for actual code):
For all the transactions in a block:
    For all the inputs of the transaction:
        scan through the scriptSig Script and count each CHECKSIG as 1 and each CHECKMULTISIG as 20
    For all the outputs of the transaction:
        scan through the scriptPubKey Script and count each CHECKSIG as 1 and each CHECKMULTISIG as 20
Problem number 1: it tries to count stuff in the coinbase scriptSig. This is the only place in the code where the coinbase scriptSig is interpreted as a Script rather than as just "whatever the miner wants to put there".
Problem number 2: it treats every CHECKMULTISIG as if it were a maximum-size n-of-20 multisig (almost all are actually 2-of-2 or 2-of-3).
Problem number 3: the Scripts in a scriptPubKey don't cost anything until they are redeemed by a future transaction, and their validation cost shouldn't be counted until then.
Problem number 4: 'scans through the Script' isn't the same as executing the Script-- CHECKSIGs inside branches of IF/ELSE opcodes are counted even if it would be impossible for that branch to be taken.
Problem number 5: the scary, O(n^2) cost (computing signature hashes for SIGHASH_ALL signatures) isn't counted at all.
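Problem 5 is the scary one because with SIGHASH_ALL every input's signature hash covers (roughly) the whole serialized transaction, so total hashing work grows with inputs times size. A back-of-the-envelope sketch (the function name is mine):

```cpp
#include <cstdint>

// With SIGHASH_ALL, each input hashes (roughly) the whole serialized
// transaction, so total bytes hashed is about inputs * size. Doubling a
// transaction's size also roughly doubles its input count, so the
// hashing work quadruples: that is the O(n^2) blowup.
uint64_t ApproxSighashBytes(uint64_t num_inputs, uint64_t tx_size_bytes)
{
    return num_inputs * tx_size_bytes;
}
```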
When I implemented BIP16 (pay-to-script-hash: moving the scriptPubKey Script into the redeeming transaction's scriptSig) I fixed problem 3 and mostly fixed problem 2 (see GetP2SHSigOpCount() in main.cpp for actual code), and part of the BIP16 soft fork was a modification of the consensus rule:
- Before: inaccurate_sigop_count <= 20,000 (expressed in consensus/consensus.h as MAX_BLOCK_SIZE/50)
- After: inaccurate_sigop_count+accurate_p2sh_count <= 20,000 (soft fork because it is more strict than old rule)
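Expressed as code, the before/after checks look something like this (the function names are mine, not the actual consensus code):

```cpp
// Sketch of the sigop consensus rule before and after BIP16;
// 20,000 is MAX_BLOCK_SIZE/50.
static const unsigned int MAX_BLOCK_SIGOPS = 20000;

bool OldSigopRule(unsigned int inaccurate_count)
{
    return inaccurate_count <= MAX_BLOCK_SIGOPS;
}

bool Bip16SigopRule(unsigned int inaccurate_count, unsigned int accurate_p2sh_count)
{
    // Adding a non-negative term makes this strictly more restrictive
    // than the old rule, which is what makes BIP16 a soft fork.
    return inaccurate_count + accurate_p2sh_count <= MAX_BLOCK_SIGOPS;
}
```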
My BIP101 implementation cleans up the mess and fixes problems 4 and 5 by counting exactly how many bytes are hashed to compute signature hashes and exactly how many ECDSA signature verifications are required to validate all the transactions in a block, and then makes the consensus rules:
- Block size <= (the BIP101 growth formula)
- accurately_counted_sigops <= Block size / 50
- accurately_counted_sighash_bytes <= Block size * (some big number chosen so that any previously-valid 1MB transaction is still valid)
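Put together, a BIP101-style block check might be sketched like this (all the names and placeholder parameters are mine; the real growth formula and sighash multiplier live in the BIP101 code):

```cpp
#include <cstdint>

// Sketch of the three BIP101-style limits. 'max_block_size' stands in
// for the BIP101 growth formula and 'sighash_multiplier' for the "some
// big number" above; both are placeholders, not the real values.
bool CheckBlockLimits(uint64_t block_size, uint64_t max_block_size,
                      uint64_t accurate_sigops,
                      uint64_t accurate_sighash_bytes,
                      uint64_t sighash_multiplier)
{
    return block_size <= max_block_size
        && accurate_sigops <= block_size / 50          // sigop limit scales with size
        && accurate_sighash_bytes <= block_size * sighash_multiplier;
}
```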
I don't think the BIP101 approach is right for the Bitcoin Classic hard fork, for three reasons:
- it touches more code.
- developer consensus is moving away from "have multiple limits" and towards "combine costs into one 'validation metric'" because that makes CreateNewBlock transaction selection and fee computation simpler (it becomes fee-per-cost-doohickey instead of fee-per-kilobyte).
- BIP 143 (segwit CHECKSIG rules) will fix problem number 5 for segwit transactions
The simplest possible thing that would work for the Classic fork would be essentially just the old rules, scaled up to the larger block size:
- Block size <= ...2MB or 2 growing to 4 or whatever...
- inaccurate_sigop_count+accurate_p2sh_count <= Block size / 50
... and to solve the sighash O(n^2) problem add another consensus rule for each transaction in the block:
- serialized(transaction inputs) bytes + serialized(transaction outputs) bytes < 100,000 bytes
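That per-transaction rule is trivial to state in code (a sketch; the function name is mine, and note it counts only inputs plus outputs, not any witness data):

```cpp
#include <cstdint>

// Per-transaction rule sketched above: serialized inputs plus serialized
// outputs must total fewer than 100,000 bytes.
bool CheckTxHashedSize(uint64_t serialized_input_bytes, uint64_t serialized_output_bytes)
{
    return serialized_input_bytes + serialized_output_bytes < 100000;
}
```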
Here's my reasoning for why that's a good idea:
Transactions larger than 100,000 bytes are already non-standard-- you have to run custom mining code to mine them. All wallets should already refuse to create >100,000 byte transactions.
The O(n^2) hashing only becomes a problem with transactions that are big. 100,000 byte transactions simply don't trigger the problem (that is why the IsStandard rule was added).
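Some rough arithmetic (mine, not from the original rule) on why 100,000 bytes is a safe bound: a minimal serialized input is 41 bytes, so even a worst-case transaction packed with tiny inputs does a bounded amount of hashing.

```cpp
#include <cstdint>

// Worst case for a 100,000-byte transaction under the O(n^2) hashing:
// a minimal input serializes to 41 bytes (32-byte prevout hash + 4-byte
// index + 1-byte script length + 4-byte sequence), so at most ~2,439
// inputs, each hashing ~100,000 bytes -- roughly 244 MB total, which is
// expensive but bounded, unlike a multi-megabyte transaction.
const uint64_t kMaxTxBytes    = 100000;
const uint64_t kMinInputBytes = 41;
const uint64_t kMaxInputs     = kMaxTxBytes / kMinInputBytes;
const uint64_t kWorstCaseHashedBytes = kMaxInputs * kMaxTxBytes;
```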
I'm anticipating segregated witness happening, which is why I made the rule inputs+outputs and not just number-of-bytes-in-serialized-transaction. BIP143 fixes the problem, so no reason to count witness data in the 100,000 byte limit.