Latency Numbers Every Programmer Should Know
Latency Comparison Numbers
L1 cache reference                            0.5 ns
Branch mispredict                             5   ns
L2 cache reference                            7   ns             14x L1 cache
Mutex lock/unlock                            25   ns
Main memory reference                       100   ns             20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy              3,000   ns
Send 1K bytes over 1 Gbps network        10,000   ns    0.01 ms
Read 4K randomly from SSD*              150,000   ns    0.15 ms
Read 1 MB sequentially from memory      250,000   ns    0.25 ms
Round trip within same datacenter       500,000   ns     0.5 ms
Read 1 MB sequentially from SSD*      1,000,000   ns       1 ms  4X memory
Disk seek                            10,000,000   ns      10 ms  20x datacenter roundtrip
Read 1 MB sequentially from disk     20,000,000   ns      20 ms  80x memory, 20X SSD
Send packet CA->Netherlands->CA     150,000,000   ns     150 ms

1 ns = 10^-9 seconds
1 ms = 10^-3 seconds
* Assuming ~1GB/sec SSD
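
For quick back-of-the-envelope math it can be handy to have these figures in code. Below is a minimal Python sketch (the dictionary and variable names are my own, not part of the original list) that stores the table in nanoseconds and prints each entry as a multiple of an L1 cache reference:

```python
# Minimal sketch: the table above as Python constants (nanoseconds),
# used to double-check the relative factors in the right-hand column.
LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Mutex lock/unlock": 25,
    "Main memory reference": 100,
    "Compress 1K bytes with Zippy": 3_000,
    "Send 1K bytes over 1 Gbps network": 10_000,
    "Read 4K randomly from SSD": 150_000,
    "Read 1 MB sequentially from memory": 250_000,
    "Round trip within same datacenter": 500_000,
    "Read 1 MB sequentially from SSD": 1_000_000,
    "Disk seek": 10_000_000,
    "Read 1 MB sequentially from disk": 20_000_000,
    "Send packet CA->Netherlands->CA": 150_000_000,
}

l1 = LATENCIES_NS["L1 cache reference"]
for name, ns in LATENCIES_NS.items():
    # e.g. main memory reference: 100 / 0.5 = 200x L1
    print(f"{name:<35} {ns:>13,.1f} ns  {ns / l1:>12,.0f}x L1")
```
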
By Jeff Dean:
Originally by Peter Norvig:
Some updates from:
Great 'humanized' comparison version:
Visual comparison chart:
Nice animated presentation of the data:

need a solar system type visualization for this, so we can really appreciate the change of scale.


I agree, would be fun to see. :-)


useful information & thanks


Looks nice, kudos!
One comment about the branch mispredict: if the CPU architecture is P4- or Bulldozer-based, a mispredict costs 20-30+ cycles, which translates to a much bigger number (and they do mispredict) :)

For SSDs it would be something like:
Disk seek: 100,000 ns
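
For anyone who wants to redo that cycles-to-nanoseconds conversion, here is a trivial sketch (the clock speeds below are just example values, not claims about any particular CPU):

```python
# Convert a cycle count to nanoseconds for a given clock frequency.
def cycles_to_ns(cycles, ghz):
    return cycles / ghz  # at 1 GHz, one cycle takes exactly 1 ns

# Example clock speeds, using the 20-30 cycle mispredict estimate from the comment above.
for ghz in (2.0, 3.0, 3.8):
    for cycles in (20, 30):
        print(f"{cycles} cycles @ {ghz} GHz ~ {cycles_to_ns(cycles, ghz):.1f} ns")
```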


Latency numbers between large cities:


@preinheimer Asia & Australasia have it bad.


"Latency numbers every programmer should know" - yet naturally, it has no information about humans!


maybe you want to incorporate some of this:


Curious to see numbers for SSD read time


I think the reference you want to cite is here:


This reminds me of Grace Hopper's video about nanoseconds. Really worth watching.


I find comparisons much more useful than raw numbers:


I'm surprised that mechanical disk reads are only 80x the speed of main memory reads.


My version: includes an SSD number, would love some more.


Do L1 and L2 cache latencies depend on the processor type? And what about the L3 cache?


Of course it does ... those are averages, I think.


Would be nice to right-align the numbers so people can more easily compare orders of magnitude.


Good idea. Fixed.


And expanded even a bit more: (SSD numbers, relative comparisons, more links)


TLB misses would be nice to list too, so people see the value of large pages...

Context switches (for various OSes), ...

Also, regarding packet sends, that must be latency from send initiation to send completion -- I assume.

If you're going to list mutex lock/unlock, how about memory barriers?

Thanks! This is quite useful, particularly for flogging others with.


Quick pie chart of data with scales in time (1 sec -> 9.5 years) for fun.

Spreadsheet with chart


"Read 1 MB sequentially from disk - 20,000,000 ns". Is this with or without disk seek time?


I made a fusion table for this at:

Might be helpful for graphing, etc. Thanks for putting this together.


Cool. Thanks.
Thanks everyone for all the great improvements.


Here is a chart version. It's a bit hard to read, but I hope it conveys the perspective.


It would also be very interesting to add memory allocation timings to that : )


How long does it take before this shows up in XKCD?


What you guys are talking about is the powers of ten.


If it does show up on xkcd, it will be next to a gigantic "How much time it takes for a human to react to any results", hopefully with the intent of showing people that any USE of this knowledge should be tempered with an understanding of what it will be used for -- possibly showing how getting a bit from the cache is pretty much identical to getting a bit from China when it comes to a single fetch of information to show a human being.


@BillKress yes, this is specifically for Programmers, to make sure they have an understanding about the bottlenecks involved in programming. If you know these numbers, you know that you need to cut down on disk access before cutting down on in-memory shuffling.
If you don't properly follow these numbers and what they stand for, you will make programs that don't scale well. That is why they are important on their own and (in this context) should not be dwarfed by human reaction times.


@BillKress If we were only concerned with showing information to a single human being at a time we could just as well shut down our development machines and go out into the sun and play. This is about scalability.


This is getting out of hand; how do I unsubscribe from this gist?


Saw this via @smashingmag . While you guys debate the fit for purpose, here is another visualization of your quick reference latency data with Prezi


Does anybody know how to stop receiving notifications from a gist's activity?


Here's a tool to visualize these numbers over time:


I just created flash cards for this: They can be downloaded using the Anki application:


I'm also missing something like "Send 1 MB over 1 Gbps network (within datacenter over TCP)". Or does that vary so much that it would be impossible to specify?


If L1 access is a second, then:

L1 cache reference : 0:00:01
Branch mispredict : 0:00:10
L2 cache reference : 0:00:14
Mutex lock/unlock : 0:00:50
Main memory reference : 0:03:20
Compress 1K bytes with Zippy : 1:40:00
Send 1K bytes over 1 Gbps network : 5:33:20
Read 4K randomly from SSD : 3 days, 11:20:00
Read 1 MB sequentially from memory : 5 days, 18:53:20
Round trip within same datacenter : 11 days, 13:46:40
Read 1 MB sequentially from SSD : 23 days, 3:33:20
Disk seek : 231 days, 11:33:20
Read 1 MB sequentially from disk : 462 days, 23:06:40
Send packet CA->Netherlands->CA : 3472 days, 5:20:00
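
That scaling is easy to reproduce; here is a small sketch that rescales a few of the nanosecond figures from the table at the top (the output format follows Python's timedelta):

```python
import datetime

# Rescale a few of the latency figures so that one L1 reference (0.5 ns) = 1 second.
L1_NS = 0.5
for name, ns in [
    ("Main memory reference", 100),
    ("Round trip within same datacenter", 500_000),
    ("Disk seek", 10_000_000),
    ("Send packet CA->Netherlands->CA", 150_000_000),
]:
    scaled = datetime.timedelta(seconds=ns / L1_NS)
    print(f"{name:<35} : {scaled}")   # e.g. Disk seek -> 231 days, 11:33:20
```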


You can add LTO4 tape seek/access time: ~55 sec, or 55,000,000,000 ns.


I'm missing things like sending 1K via a Unix pipe / socket / TCP to another process.
Does anybody have numbers on that?


@metakeule it's easily measurable.
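For the pipe case, one quick-and-dirty way to measure it is a ping-pong between a parent and a forked child; the sketch below is illustrative only (Unix-specific, and the result depends heavily on OS, CPU, and load):

```python
import os
import time

# Ping-pong a 1 KB message between a parent and a forked child over two pipes,
# then report the one-way time (round trip / 2). Assumes each 1 KB write is read
# back in a single chunk, which holds for pipes since 1 KB < PIPE_BUF.
N = 10_000
payload = b"x" * 1024

p2c_r, p2c_w = os.pipe()   # parent -> child
c2p_r, c2p_w = os.pipe()   # child  -> parent

pid = os.fork()
if pid == 0:                                   # child: echo every message back
    for _ in range(N):
        os.write(c2p_w, os.read(p2c_r, 1024))
    os._exit(0)

start = time.perf_counter()
for _ in range(N):
    os.write(p2c_w, payload)
    os.read(c2p_r, 1024)
elapsed = time.perf_counter() - start
os.waitpid(pid, 0)

print(f"~{elapsed / N / 2 * 1e9:,.0f} ns one-way per 1 KB message over a pipe")
```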


Related page from "Systems Performance" with similar second scaling mentioned by @kofemann:


An L1D hit on a modern Intel CPU (Nehalem+) is at least 4 cycles. For a typical server/desktop at 2.5 GHz that is at least 1.6 ns.
The fastest L2 hit latency is 11 cycles (Sandy Bridge+), which is 2.75x, not 14x.
Maybe the numbers by Norvig were true at some point, but at least the cache latency numbers have been pretty constant since Nehalem, which was 6 years ago.


Please note that Peter Norvig first published this expanded version at that location around July 2010 (see the Wayback Machine). Also, note that it was titled "Approximate timing for various operations on a typical PC".


One light-nanosecond is roughly a foot, which is considerably less than the distance to my monitor right now. It's kind of surprising to realize just how much a CPU can get done in the time it takes light to traverse the average viewing distance...


@jboner, I would like to cite some numbers in a formal publication. Who is the author? Jeff Dean? Which url should I cite? Thanks.


I'd like to see the number for "Append 1 MB to file on disk".


The "Send 1K bytes over 1 Gbps network" doesn't feel right, if you were comparing the 1MB sequential read of memory, SSD, Disk, the Gbps network for 1MB would be faster than disk (x1024), that doesn't feel right.


I turned this into a set of flashcards on Quizlet:
