Gas cost for calling precompiles

I reran the benchmarks from ethereum/go-ethereum#21207. That PR relied on some optimizations that have since been merged into master, so this time the code was executed on current master.

The idea is to measure the 'intrinsic' cost for geth to perform various types of calls, e.g. STATICCALLs and CALLs to precompiles, and compare that with a simple loop. The simple loop consists only of pushes, jumps, pops, and GAS.
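For concreteness, here is a rough sketch of the two kinds of bytecode being compared, written with go-ethereum's opcode constants. The exact operand values and layout are assumptions based on the description above, not necessarily the bytecode used in the actual benchmark:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/vm"
)

// simpleLoop: an endless loop of pushes, pops, GAS and a JUMP back to offset 0.
var simpleLoop = []byte{
	byte(vm.JUMPDEST),    // offset 0: jump target
	byte(vm.PUSH1), 0x01, // push a dummy value
	byte(vm.GAS),         // push remaining gas
	byte(vm.POP), byte(vm.POP),
	byte(vm.PUSH1), 0x00, // jump destination (offset 0)
	byte(vm.JUMP),
}

// staticcallIdentity: the same loop shape, but each iteration STATICCALLs the
// identity precompile at address 0x04, forwarding all gas and discarding the result.
var staticcallIdentity = []byte{
	byte(vm.JUMPDEST),    // offset 0: jump target
	byte(vm.PUSH1), 0x00, // retSize
	byte(vm.DUP1),        // retOffset
	byte(vm.DUP1),        // argsSize
	byte(vm.DUP1),        // argsOffset
	byte(vm.PUSH1), 0x04, // identity precompile address
	byte(vm.GAS),         // forward all available gas
	byte(vm.STATICCALL),
	byte(vm.POP),         // discard the success flag
	byte(vm.PUSH1), 0x00, // jump destination (offset 0)
	byte(vm.JUMP),
}

func main() {
	fmt.Printf("simple loop:         %x\n", simpleLoop)
	fmt.Printf("staticcall identity: %x\n", staticcallIdentity)
}
```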

The 'intrinsic' cost really represents how much time and resources geth spends on switching to a new call context, on the assumption that the identity precompile itself is basically a no-op.

In the benchmarks, the base cost for the identity precompile has been set to 0, and each run is given 100M gas.
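One minimal way to drive such bytecode with a 100M gas budget is go-ethereum's core/vm/runtime package; the harness below is a sketch along those lines, not the actual BenchmarkSimpleLoop code from the PR:

```go
package bench

import (
	"testing"

	"github.com/ethereum/go-ethereum/core/vm"
	"github.com/ethereum/go-ethereum/core/vm/runtime"
)

// benchmarkCode runs the given bytecode with a 100M gas budget per iteration;
// execution simply ends with an out-of-gas error once the budget is spent.
func benchmarkCode(b *testing.B, code []byte) {
	cfg := &runtime.Config{GasLimit: 100_000_000}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		runtime.Execute(code, nil, cfg)
	}
}

// BenchmarkPlainLoop100M drives the simple push/pop/GAS/jump loop from the
// sketch above.
func BenchmarkPlainLoop100M(b *testing.B) {
	loop := []byte{
		byte(vm.JUMPDEST),
		byte(vm.PUSH1), 0x01, byte(vm.GAS),
		byte(vm.POP), byte(vm.POP),
		byte(vm.PUSH1), 0x00, byte(vm.JUMP),
	}
	benchmarkCode(b, loop)
}
```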

With 700 gas for calls:

```
BenchmarkSimpleLoop/staticcall-identity-100M-6         	      20	  59585633 ns/op	    3382 B/op	      36 allocs/op
BenchmarkSimpleLoop/call-identity-100M-6               	      14	  78339282 ns/op	    3432 B/op	      36 allocs/op
BenchmarkSimpleLoop/loop-100M-6                        	       2	 639754382 ns/op	    4108 B/op	      29 allocs/op
```

Both staticcalling the identity precompile and calling it are vastly faster than the simple loop -- meaning that at 700 gas the calls are overpriced.

With 100 gas for calls:

```
BenchmarkSimpleLoop/staticcall-identity-100M-6         	       4	 324414002 ns/op	    4278 B/op	      39 allocs/op
BenchmarkSimpleLoop/call-identity-100M-6               	       3	 429027514 ns/op	    4050 B/op	      38 allocs/op
BenchmarkSimpleLoop/loop-100M-6                        	       2	 614662618 ns/op	    4112 B/op	      30 allocs/op
```

With 100 gas, the times are 324ms, 429ms and 614ms, respectively. That's within a 2x margin of the simple loop.

With 40 gas for calls:

```
BenchmarkSimpleLoop/staticcall-identity-100M-6         	       2	 643442665 ns/op	    5292 B/op	      42 allocs/op
BenchmarkSimpleLoop/call-identity-100M-6               	       2	 778024676 ns/op	    4444 B/op	      39 allocs/op
BenchmarkSimpleLoop/loop-100M-6                        	       2	 623502452 ns/op	    4108 B/op	      29 allocs/op
```

With 40 gas, the times are 643ms, 778ms and 623ms. The staticcall is now slightly slower than the simple loop, and the call is markedly slower, so the value 40 seems to be a bit too low.
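To put the numbers in perspective: each run burns roughly 100M gas, so the ns/op figures translate directly into an implied gas throughput. A quick back-of-the-envelope calculation over the figures above:

```go
package main

import "fmt"

func main() {
	// ns/op figures copied from the benchmark runs above. Each run burns
	// roughly 100M gas, so implied throughput ~= 100e6 gas / (ns/op * 1e-9 s).
	runs := []struct {
		name string
		nsOp float64
	}{
		{"staticcall-identity @ 700 gas", 59585633},
		{"call-identity       @ 700 gas", 78339282},
		{"staticcall-identity @ 100 gas", 324414002},
		{"call-identity       @ 100 gas", 429027514},
		{"staticcall-identity @  40 gas", 643442665},
		{"call-identity       @  40 gas", 778024676},
		{"plain loop                   ", 623502452},
	}
	for _, r := range runs {
		mgasPerSec := 100e6 / (r.nsOp * 1e-9) / 1e6
		fmt.Printf("%s  ~%5.0f Mgas/s\n", r.name, mgasPerSec)
	}
}
```

At 700 gas the call variants imply roughly 1.3-1.7 Ggas/s versus ~160 Mgas/s for the plain loop; at 100 gas they are within about 2x; at 40 gas they fall slightly behind.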

In essence, even though 40 might be reachable, 100 gives us some additional 'slack' to account for mispricings in the precompiles themselves, and it makes the change less drastic: going from 700 to 100 is a 7x reduction, as opposed to a ~17x reduction down to 40.
