Latency Numbers Every Programmer Should Know
Latency Comparison Numbers
--------------------------
L1 cache reference                         0.5 ns
Branch mispredict                            5 ns
L2 cache reference                           7 ns                        14x L1 cache
Mutex lock/unlock                           25 ns
Main memory reference                      100 ns                        20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000 ns
Send 1K bytes over 1 Gbps network       10,000 ns    0.01 ms
Read 4K randomly from SSD*             150,000 ns    0.15 ms
Read 1 MB sequentially from memory     250,000 ns    0.25 ms
Round trip within same datacenter      500,000 ns     0.5 ms
Read 1 MB sequentially from SSD*     1,000,000 ns       1 ms  4x memory
Disk seek                           10,000,000 ns      10 ms  20x datacenter roundtrip
Read 1 MB sequentially from disk    20,000,000 ns      20 ms  80x memory, 20x SSD
Send packet CA->Netherlands->CA    150,000,000 ns     150 ms
Notes
-----
1 ns = 10^-9 seconds
1 ms = 10^-3 seconds
* Assuming ~1 GB/sec SSD
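
Several comments below ask for relative or "humanized" comparisons rather than raw nanoseconds. As a minimal sketch (my own; the constants simply mirror the table above), the following Python prints each latency as a multiple of an L1 cache reference:

    # Latency numbers from the table above, in nanoseconds.
    LATENCIES_NS = [
        ("L1 cache reference",                 0.5),
        ("Branch mispredict",                  5),
        ("L2 cache reference",                 7),
        ("Mutex lock/unlock",                  25),
        ("Main memory reference",              100),
        ("Compress 1K bytes with Zippy",       3_000),
        ("Send 1K bytes over 1 Gbps network",  10_000),
        ("Read 4K randomly from SSD",          150_000),
        ("Read 1 MB sequentially from memory", 250_000),
        ("Round trip within same datacenter",  500_000),
        ("Read 1 MB sequentially from SSD",    1_000_000),
        ("Disk seek",                          10_000_000),
        ("Read 1 MB sequentially from disk",   20_000_000),
        ("Send packet CA->Netherlands->CA",    150_000_000),
    ]

    L1_NS = LATENCIES_NS[0][1]

    for name, ns in LATENCIES_NS:
        # Print each latency and how many L1 cache references fit into it.
        print(f"{name:<36} {ns:>13,.1f} ns   {ns / L1_NS:>12,.0f}x L1")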
Credit
------
By Jeff Dean: http://research.google.com/people/jeff/
Originally by Peter Norvig: http://norvig.com/21-days.html#answers
Contributions
-------------
Some updates from: https://gist.github.com/2843375
Great 'humanized' comparison version: https://gist.github.com/2843375
Visual comparison chart: http://i.imgur.com/k0t1e.png
Nice animated presentation of the data: http://prezi.com/pdkvgys-r0y6/latency-numbers-for-programmers-web-development/
@dominictarr

We need a solar-system-type visualization for this, so we can really appreciate the change of scale.

@jboner
Owner

I agree, would be fun to see. :-)

@pmanvi

useful information & thanks

@dakull

Looks nice, kudos!
One comment about the branch mispredict: if the CPU architecture is based on P4 or Bulldozer, a mispredict costs 20-30+ cycles, which translates to a much bigger number (and they do mispredict) :)

For SSDs, the equivalent of a disk seek would be something like:
Disk seek: 100,000 ns

@preinheimer

Latency numbers between large cities: https://wondernetwork.com/pings/

@carignanboy1

@preinheimer Asia & Australasia have it bad.

@Eronarn

"Latency numbers every programmer should know" - yet naturally, it has no information about humans!

http://biae.clemson.edu/bpc/bp/lab/110/reaction.htm

@hellerbarde

maybe you want to incorporate some of this: https://gist.github.com/2843375

@christopherscott

Curious to see numbers for SSD read time

@klochner

I think the reference you want to cite is here: http://norvig.com/21-days.html#answers

@lucasces

This reminds me of Grace Hopper's video about nanoseconds. Really worth watching.
http://www.youtube.com/watch?v=JEpsKnWZrJ8

@mikea

I find comparisons much more useful than raw numbers: https://gist.github.com/2844130

@briangordon

I'm surprised that mechanical disk reads are only 80x the speed of main memory reads.

@dakull

my version : https://gist.github.com/2842457 includes SSD number, would love some more

@newphoenix

Do L1 and L2 cache latencies depend on the processor type? And what about the L3 cache?

@dakull

Of course it does... those are averages, I think.

@cayblood

Would be nice to right-align the numbers so people can more easily compare orders of magnitude.

@jboner
Owner

Good idea. Fixed.

@jhclark

And expanded even a bit more: https://gist.github.com/2845836 (SSD numbers, relative comparisons, more links)

@nicowilliams

TLB misses would be nice to list too, so people see the value of large pages...

Context switches (for various OSes), ...

Also, regarding packet sends, that must be latency from send initiation to send completion -- I assume.

If you're going to list mutex lock/unlock, how about memory barriers?

Thanks! This is quite useful, particularly for flogging others with.

@lry

Quick pie chart of data with scales in time (1 sec -> 9.5 years) for fun.

Spreadsheet with chart

@vickychijwani

"Read 1 MB sequentially from disk - 20,000,000 ns". Is this with or without disk seek time?

@pgroth

I made a fusion table for this at:
https://www.google.com/fusiontables/DataSource?snapid=S523155yioc

Maybe helpful for graphing, etc. Thanks for putting this together.

@jboner
Owner

Cool. Thanks.
Thanks everyone for all the great improvements.

@ayshen

Here is a chart version. It's a bit hard to read, but I hope it conveys the perspective.
http://i.imgur.com/k0t1e.png

@gchatelet

It would also be very interesting to add memory allocation timings to that : )

@PerWiklander

How long does it take before this shows up in XKCD?

@talltyler

What you guys are talking about is the Powers of Ten: http://vimeo.com/819138

@BillKress

If it does show up on xkcd, it will be next to a gigantic "How much time it takes for a human to react to any results", hopefully with the intent of showing that any use of this knowledge should be tempered with an understanding of what it will be used for -- possibly showing how getting a bit from the cache is pretty much identical to getting a bit from China when it comes to a single fetch of information shown to a human being.

@hellerbarde

@BillKress yes, this is specifically for Programmers, to make sure they have an understanding about the bottlenecks involved in programming. If you know these numbers, you know that you need to cut down on disk access before cutting down on in-memory shuffling.
If you don't properly follow these numbers and what they stand for, you will make programs that don't scale well. That is why they are important on their own and (in this context) should not be dwarfed by human reaction times.

@PerWiklander

@BillKress If we were only concerned with showing information to a single human being at a time we could just as well shut down our development machines and go out into the sun and play. This is about scalability.

@klochner

this is getting out of hand, how do i unsubscribe from this gist?

@gemclass

Saw this via @smashingmag. While you guys debate fitness for purpose, here is another visualization of your quick-reference latency data with Prezi: ow.ly/bnB7q

@briangordon

Does anybody know how to stop receiving notifications from a gist's activity?

@colin-scott

Here's a tool to visualize these numbers over time: http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html

@JensRantil

I just created flash cards for this: https://ankiweb.net/shared/info/3116110484 They can be downloaded using the Anki application: http://ankisrs.net

@JensRantil

I'm also missing something like "Send 1 MB over a 1 Gbps network (within a datacenter, over TCP)". Or does that vary so much that it would be impossible to specify?
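
As a back-of-envelope sketch (my own arithmetic; it ignores TCP slow start, retransmits, and protocol overhead, so treat it as a lower bound rather than a typical value), the wire time alone works out to roughly 8-9 ms:

    # Back-of-envelope for sending 1 MB over 1 Gbps inside a datacenter.
    link_bps     = 1_000_000_000        # 1 Gbps
    payload_bits = 1 * 1024 * 1024 * 8  # 1 MB
    rtt_ns       = 500_000              # round trip within same datacenter (from the table)

    serialization_ns = payload_bits / link_bps * 1e9
    print(f"serialization only:  {serialization_ns / 1e6:.1f} ms")             # ~8.4 ms
    print(f"plus one round trip: {(serialization_ns + rtt_ns) / 1e6:.1f} ms")  # ~8.9 ms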

@kofemann

If L1 access is a second, then:

L1 cache reference : 0:00:01
Branch mispredict : 0:00:10
L2 cache reference : 0:00:14
Mutex lock/unlock : 0:00:50
Main memory reference : 0:03:20
Compress 1K bytes with Zippy : 1:40:00
Send 1K bytes over 1 Gbps network : 5:33:20
Read 4K randomly from SSD : 3 days, 11:20:00
Read 1 MB sequentially from memory : 5 days, 18:53:20
Round trip within same datacenter : 11 days, 13:46:40
Read 1 MB sequentially from SSD : 23 days, 3:33:20
Disk seek : 231 days, 11:33:20
Read 1 MB sequentially from disk : 462 days, 23:06:40
Send packet CA->Netherlands->CA : 3472 days, 5:20:00
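
That rescaling is just each latency divided by the 0.5 ns L1 reference; a minimal Python sketch (my own) that reproduces a few of the rows:

    import datetime

    def human_scale(latency_ns, l1_ns=0.5):
        # Rescale so that one L1 cache reference (0.5 ns) lasts one second.
        return datetime.timedelta(seconds=latency_ns / l1_ns)

    print(human_scale(100))           # Main memory reference           -> 0:03:20
    print(human_scale(10_000_000))    # Disk seek                       -> 231 days, 11:33:20
    print(human_scale(150_000_000))   # Send packet CA->Netherlands->CA -> 3472 days, 5:20:00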

@kofemann

You can add LTO4 tape seek/access time: ~55 sec, or 55,000,000,000 ns.

@metakeule

I'm missing things like sending 1K via a Unix pipe / socket / TCP to another process.
Does anybody have numbers on that?

@shiplunc

@metakeule It's easily measurable.
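
For example, a rough Unix-only sketch (my own; the numbers vary widely with OS, hardware, and load) that times a 1 KB message plus 1-byte ack round trip between two processes over a pipe:

    import os, time

    N = 10_000
    payload = b"x" * 1024

    r1, w1 = os.pipe()   # parent -> child
    r2, w2 = os.pipe()   # child -> parent

    if os.fork() == 0:
        # Child: read 1 KB, reply with a 1-byte ack, N times.
        os.close(w1)
        os.close(r2)
        for _ in range(N):
            data = b""
            while len(data) < len(payload):
                data += os.read(r1, len(payload) - len(data))
            os.write(w2, b"a")
        os._exit(0)

    # Parent: time N round trips and report the average per round trip.
    os.close(r1)
    os.close(w2)
    start = time.perf_counter_ns()
    for _ in range(N):
        os.write(w1, payload)
        os.read(r2, 1)
    end = time.perf_counter_ns()
    os.wait()
    print(f"avg 1 KB pipe round trip: {(end - start) / N:,.0f} ns")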

@mnem

Related page from "Systems Performance" with similar second scaling mentioned by @kofemann: https://twitter.com/rzezeski/status/398306728263315456/photo/1

@izard

An L1D hit on a modern Intel CPU (Nehalem+) is at least 4 cycles. For a typical server/desktop at 2.5 GHz that is at least 1.6 ns.
The fastest L2 hit latency is 11 cycles (Sandy Bridge+), which is 2.75x, not 14x.
Maybe the numbers by Norvig were true at some point, but cache latency numbers have been pretty constant since Nehalem, which was six years ago.
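
For reference, the cycles-to-nanoseconds conversion used above is just cycles divided by the clock frequency; a quick sketch assuming the 2.5 GHz clock mentioned:

    # Convert CPU cycles to nanoseconds at an assumed 2.5 GHz clock.
    def cycles_to_ns(cycles, clock_hz=2.5e9):
        return cycles / clock_hz * 1e9

    print(cycles_to_ns(4))    # L1 hit, Nehalem+:      1.6 ns
    print(cycles_to_ns(11))   # L2 hit, Sandy Bridge+: 4.4 ns (11/4 = 2.75x the L1 hit)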

@richa03

Please note that Peter Norvig first published this expanded version (at that location - http://norvig.com/21-days.html#answers) around July 2010 (see the Wayback Machine). Also, note that it was presented as "Approximate timing for various operations on a typical PC".

@pdjonov

One light-nanosecond is roughly a foot, which is considerably less than the distance to my monitor right now. It's kind of surprising to realize just how much a CPU can get done in the time it takes light to traverse the average viewing distance...

@junhe

@jboner, I would like to cite some numbers in a formal publication. Who is the author? Jeff Dean? Which url should I cite? Thanks.

@weidagang

I'd like to see the number for "Append 1 MB to file on disk".

@dhartford

The "Send 1K bytes over 1 Gbps network" doesn't feel right, if you were comparing the 1MB sequential read of memory, SSD, Disk, the Gbps network for 1MB would be faster than disk (x1024), that doesn't feel right.
