
Latency Numbers Every Programmer Should Know

Latency Comparison Numbers
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns              14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns              20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns
Send 1K bytes over 1 Gbps network       10,000   ns    0.01 ms
Read 4K randomly from SSD*             150,000   ns    0.15 ms
Read 1 MB sequentially from memory     250,000   ns    0.25 ms
Round trip within same datacenter      500,000   ns    0.5  ms
Read 1 MB sequentially from SSD*     1,000,000   ns    1    ms   4x memory
Disk seek                           10,000,000   ns   10    ms   20x datacenter roundtrip
Read 1 MB sequentially from disk    20,000,000   ns   20    ms   80x memory, 20x SSD
Send packet CA->Netherlands->CA    150,000,000   ns  150    ms
1 ns = 10^-9 seconds
1 ms = 10^-3 seconds
* Assuming ~1GB/sec SSD
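The table above can also be kept as data, which makes it easy to compute your own ratios. A minimal sketch (values copied from the table; the `LATENCY_NS` dict and its key names are just one possible encoding):

```python
# Latency numbers from the table above, in nanoseconds.
LATENCY_NS = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Mutex lock/unlock": 25,
    "Main memory reference": 100,
    "Compress 1K bytes with Zippy": 3_000,
    "Send 1K bytes over 1 Gbps network": 10_000,
    "Read 4K randomly from SSD": 150_000,
    "Read 1 MB sequentially from memory": 250_000,
    "Round trip within same datacenter": 500_000,
    "Read 1 MB sequentially from SSD": 1_000_000,
    "Disk seek": 10_000_000,
    "Read 1 MB sequentially from disk": 20_000_000,
    "Send packet CA->Netherlands->CA": 150_000_000,
}

# Print each operation with its cost relative to an L1 cache reference.
for name, ns in LATENCY_NS.items():
    print(f"{name:40s} {ns:>13,.1f} ns  ({ns / 0.5:>12,.0f}x L1)")
```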
By Jeff Dean:
Originally by Peter Norvig:
Some updates from:
Great 'humanized' comparison version:
Visual comparison chart:
Nice animated presentation of the data:

need a solar system type visualization for this, so we can really appreciate the change of scale.

I agree, would be fun to see. :-)

useful information & thanks

Looks nice kudos !
One comment about the branch mispredict: if the CPU architecture is based on P4 or Bulldozer, a mispredict costs 20-30+ cycles, which would translate to a much bigger number (and they do mispredict). :)
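The cycle counts in that comment convert to nanoseconds easily; a quick sketch, assuming a 3 GHz clock (the clock speed is an assumption, not from the original table):

```python
# Convert a branch-mispredict penalty in cycles to nanoseconds,
# assuming a 3 GHz clock (adjust ghz for your own CPU).
def cycles_to_ns(cycles, ghz=3.0):
    return cycles / ghz

print(cycles_to_ns(15))  # ~5 ns, roughly the table's figure
print(cycles_to_ns(30))  # ~10 ns on a deep-pipeline core like the P4
```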

For SSDs it would be something like:
Disk seek: 100,000 ns

@preinheimer Asia & Australasia have it bad.

"Latency numbers every programmer should know" - yet naturally, it has no information about humans!

maybe you want to incorporate some of this:

Curious to see numbers for SSD read time

I think the reference you want to cite is here:

This reminds me of Grace Hopper's video about nanoseconds. Really worth watching.

I find comparisons much more useful than raw numbers:

I'm surprised that mechanical disk reads are only 80x slower than main memory reads.

my version : includes SSD number, would love some more

Do L1 and L2 cache latencies depend on the processor type? And what about L3 cache?

Of course it does ... those are averages, I think.

Would be nice to right-align the numbers so people can more easily compare orders of magnitude.

Good idea. Fixed.

And expanded even a bit more: (SSD numbers, relative comparisons, more links)

TLB misses would be nice to list too, so people see the value of large pages...

Context switches (for various OSes), ...

Also, regarding packet sends, that must be latency from send initiation to send completion -- I assume.

If you're going to list mutex lock/unlock, how about memory barriers?

Thanks! This is quite useful, particularly for flogging at others.

Quick pie chart of data with scales in time (1 sec -> 9.5 years) for fun.

Spreadsheet with chart

"Read 1 MB sequentially from disk - 20,000,000 ns". Is this with or without disk seek time?

I made a fusion table for this at:

Maybe be helpful for graphing, etc. Thanks for putting this together

Cool. Thanks.
Thanks everyone for all the great improvements.

Here is a chart version. It's a bit hard to read, but I hope it conveys the perspective.

It would also be very interesting to add memory allocation timings to that : )

How long does it take before this shows up in XKCD?

What you guys are talking about is the powers of ten.

If it does show up on xkcd, it will be next to a gigantic "How much time it takes for a human to react to any results", hopefully with the intent of showing that any use of this knowledge should be tempered with an understanding of what it will be used for. It might show, for example, that getting a bit from the cache is pretty much identical to getting a bit from China when it comes to a single fetch of information shown to a human being.

@BillKress yes, this is specifically for Programmers, to make sure they have an understanding about the bottlenecks involved in programming. If you know these numbers, you know that you need to cut down on disk access before cutting down on in-memory shuffling.
If you don't properly follow these numbers and what they stand for, you will make programs that don't scale well. That is why they are important on their own and (in this context) should not be dwarfed by human reaction times.

@BillKress If we were only concerned with showing information to a single human being at a time we could just as well shut down our development machines and go out into the sun and play. This is about scalability.

this is getting out of hand, how do i unsubscribe from this gist?

Saw this via @smashingmag . While you guys debate the fit for purpose, here is another visualization of your quick reference latency data with Prezi

Does anybody know how to stop receiving notifications from a gist's activity?

I just created flash cards for this: They can be downloaded using the Anki application:

I'm also missing something like "Send 1 MB over 1 Gbps network (within datacenter, over TCP)". Or does that vary so much that it would be impossible to specify?
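A lower bound is easy to work out: the wire time alone for 1 MB at 1 Gbps, ignoring TCP/IP framing, congestion control, and round trips (real transfers take longer, which is why it varies). A back-of-envelope sketch:

```python
# Wire time for 1 MB at 1 Gbps -- a lower bound only; ignores protocol
# overhead, congestion control, and connection round trips.
bits = 1_000_000 * 8          # 1 MB payload in bits
link_bps = 1_000_000_000      # 1 Gbps link
wire_time_ms = bits / link_bps * 1e3
print(wire_time_ms, "ms")     # 8.0 ms
```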

If L1 access is a second, then:

L1 cache reference : 0:00:01
Branch mispredict : 0:00:10
L2 cache reference : 0:00:14
Mutex lock/unlock : 0:00:50
Main memory reference : 0:03:20
Compress 1K bytes with Zippy : 1:40:00
Send 1K bytes over 1 Gbps network : 5:33:20
Read 4K randomly from SSD : 3 days, 11:20:00
Read 1 MB sequentially from memory : 5 days, 18:53:20
Round trip within same datacenter : 11 days, 13:46:40
Read 1 MB sequentially from SSD : 23 days, 3:33:20
Disk seek : 231 days, 11:33:20
Read 1 MB sequentially from disk : 462 days, 23:06:40
Send packet CA->Netherlands->CA : 3472 days, 5:20:00
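The scaling above is just each latency divided by 0.5 ns (one L1 reference becomes one second). A small sketch that reproduces it:

```python
import datetime

L1_NS = 0.5  # one L1 cache reference becomes one "second"

def humanize(ns):
    """Scale a latency (in ns) so that one L1 reference takes one second."""
    return datetime.timedelta(seconds=ns / L1_NS)

print(humanize(100))          # main memory reference: 0:03:20
print(humanize(10_000_000))   # disk seek: 231 days, 11:33:20
```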

You can add LTO4 tape seek/access time, ~55 sec, or 55,000,000,000 ns.

I'm missing things like sending 1K via a Unix pipe / socket / TCP to another process.
Does anybody have numbers on that?

@metakeule it's easily measurable.
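For instance, pipe latency can be measured with a few lines: fork a child that echoes 1 KB messages back, then time many round trips. A rough sketch (Unix only; results depend heavily on OS, scheduler, and hardware):

```python
import os
import time

# Measure one-way latency of 1 KB transfers through Unix pipes by timing
# N write/read round trips between a parent and an echoing child.
N = 10_000
payload = b"x" * 1024
r1, w1 = os.pipe()  # parent -> child
r2, w2 = os.pipe()  # child -> parent

if os.fork() == 0:  # child: echo every message back
    for _ in range(N):
        data = os.read(r1, 1024)
        os.write(w2, data)
    os._exit(0)

start = time.perf_counter()
for _ in range(N):
    os.write(w1, payload)
    os.read(r2, 1024)
elapsed = time.perf_counter() - start

# Each round trip is two one-way transfers.
one_way_ns = elapsed / N / 2 * 1e9
print(f"~{one_way_ns:.0f} ns per one-way 1K pipe transfer")
```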

Related page from "Systems Performance" with similar second scaling mentioned by @kofemann:
