Last active December 21, 2019 00:17
Quadratic sighash based on scriptCode

Quadratic sighash remains in Segwitv0/BCH/BSV digest algorithms.

Mark Lundeberg 2018 Oct 17

Abstract: Both BCH post-forkday signatures and the BIP143 Segwit signatures are ostensibly designed to remove the 'quadratic hashing problem', however as I will show, they are still vulnerable for long scripts. Back-of-the-envelope calculations show that it will become a serious concern if the existing script limits are relaxed.


  • Every OP_CHECKSIG requires hashing a potentially large amount of data, limited only by the size of scriptCode. The precise length is 159 + len(scriptCode) for scriptCodes longer than 255 bytes, up to 65535 bytes.
  • Since many OP_CHECKSIG calls are possible within a given script, transactions can be constructed where the required hashing time is quadratic in the length of the script (though, see the non-push opcode limit below).
  • Caching of the digest is not a defense since OP_CODESEPARATOR can be used an unlimited number of times to create many different scriptCodes in the same script.
  • Limits do exist: 201 non-push opcodes and 10000 bytes for a script (see script.h). However there have been suggestions of greatly extending script limits (1 2).
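The 159-byte overhead can be checked against the BIP143 field layout. A minimal sketch (field sizes taken from BIP143; the 3-byte term assumes a CompactSize length prefix, which applies to scriptCodes between 253 and 65535 bytes):

```python
def compactsize_len(n):
    # Length of Bitcoin's CompactSize (var-int) encoding for n
    if n < 0xfd:
        return 1
    if n <= 0xffff:
        return 3
    if n <= 0xffffffff:
        return 5
    return 9

def bip143_preimage_len(script_code_len):
    # Fixed BIP143 preimage fields:
    # nVersion(4) + hashPrevouts(32) + hashSequence(32) + outpoint(36)
    # + amount(8) + nSequence(4) + hashOutputs(32) + nLockTime(4) + sighash(4)
    fixed = 4 + 32 + 32 + 36 + 8 + 4 + 32 + 4 + 4  # = 156
    return fixed + compactsize_len(script_code_len) + script_code_len
```

For a 10000-byte scriptCode this gives 156 + 3 + 10000 = 10159 bytes per digest.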


In this example I have tried to maximize the amount of hashed data within the limits (201 ops / 10000 bytes / 6 sighash modes), making sure that every CHECKSIG uses a distinct digest so that caching of the digest is useless. The primary constraint is the scriptSig's 10000-byte limit, which allows at most 138 distinct signatures.
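The 138-signature figure follows from simple arithmetic, assuming a typical 71-byte DER signature (including the sighash byte) preceded by a one-byte push opcode:

```python
SCRIPTSIG_LIMIT = 10000   # script size limit, bytes
SIG_PUSH_LEN = 71 + 1     # 71-byte signature plus a one-byte push opcode

max_sigs = SCRIPTSIG_LIMIT // SIG_PUSH_LEN
used_bytes = max_sigs * SIG_PUSH_LEN
print(max_sigs, used_bytes)  # 138 signatures occupying 9936 bytes
```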

Tx1: output to a scriptPubKey of length N, with a 211-byte, 7-op form repeated 23 times (161 ops, 4853 bytes) and the remainder padded with garbage data (9 ops, bringing the total up to 10000 bytes):

```
<211-byte, 7-op unit>
... [22 more times]
PUSH(520 bytes)
... [8 more times]
PUSH(466 bytes)
```

(the last push is left on the stack so the script evaluates TRUE)

Tx2: spend the above UTXO with the following scriptSig (138 pushes of 71-byte sigs -- 0 non-push ops and 9936 bytes):


Each sig?a...sig?f uses a different SIGHASH combination.

The first six CHECKSIGs each require hashing 10159 bytes, the next six 9948 bytes, and so on, down to the last six CHECKSIGs, which each require hashing 5517 bytes.

In total, around 1.1 MB of data is hashed in 138 CHECKSIG calls. We can expect this to take something like 20M cycles (2M cycles with hardware acceleration). For comparison, benchmarks of Pieter Wuille's secp256k1 verifier appear to take ~0.1M cycles per ECDSA verify. Thus with the current script size limits, the quadratic sighash would only double the verification time, in the case of un-accelerated SHA256.
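Summing the arithmetic series confirms the ~1.1 MB figure: six sighash modes per OP_CODESEPARATOR group, with each group trimming one 211-byte unit off the scriptCode (10159 - 9948 = 211):

```python
PREIMAGE_FIRST = 10159  # 159-byte overhead + 10000-byte scriptCode
UNIT = 211              # bytes cut from scriptCode per OP_CODESEPARATOR group
GROUPS = 23             # 23 groups x 6 sighash modes = 138 CHECKSIGs
SIGS_PER_GROUP = 6

total = sum(SIGS_PER_GROUP * (PREIMAGE_FIRST - UNIT * g) for g in range(GROUPS))
print(total)  # 1081644 bytes, i.e. ~1.1 MB
```

The last group's preimage is 10159 - 22*211 = 5517 bytes, matching the figure above.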


For the purpose of attacking a node implementation that does not cache the digest, it is possible to re-use signatures and thereby pack more than 138 CHECKSIG operations into 10 KB. It may or may not be possible to reach 201 CHECKSIGs, however, since some opcodes must be spent on OP_DUP, and some implementations forbid OP_DUP in the scriptSig.

1 MB script limits are a really bad idea

Supposing both the opcode and script length limits are expanded by a factor of 100, adversaries could make a pair of 1 MB transactions that execute ~14000 OP_CHECKSIGs, requiring the hashing of an accumulated ~10 GB of data and about a minute of one CPU core's time (far more than the half second required for the ECDSA operations themselves). A 128 MB block filled with 64 such transaction pairs would saturate an 8-core system for nearly 10 minutes.
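A rough sketch of the 100x scaling, reusing the 72-bytes-per-pushed-signature and 211-byte-per-group assumptions from the construction above (illustrative estimate only, not a precise attack construction):

```python
SCALE = 100
script_limit = 10000 * SCALE               # 1 MB scripts
sigs = script_limit // 72                  # 72 bytes per pushed 71-byte sig
first = 159 + script_limit                 # largest sighash preimage
groups = sigs // 6                         # 6 sighash modes per group
total_hashed = sum(6 * (first - 211 * g) for g in range(groups))
print(sigs, round(total_hashed / 1e9, 1))  # ~13888 CHECKSIGs, ~10.5 GB
```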


Practically, this quadratic sighash does not appear to be a game breaker with the current limits. The limits appear to be at just the right level to prevent it from acting as a DoS magnifier.

If these limits are relaxed, the scriptCode quadratic hashing must be seriously considered as a practical DoS vector.

As OP_CODESEPARATOR does not seem to have much use, my recommendation would be to disable it (or limit it to being called only a few times per script), which would allow the digest to be cached and restore linearity to script runtime.
