hellerbarde / latency.markdown
Created May 31, 2012 13:16 — forked from jboner/latency.txt
Latency numbers every programmer should know

L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns  =   3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns  =  20 µs
SSD random read ........................ 150,000 ns  = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns  = 250 µs
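
To make these magnitudes easier to feel, here's a small illustrative program (my addition, not part of the original gist) that rescales every entry so an L1 cache reference takes one second:

public class LatencyScale {
    public static void main(String[] args) {
        // Scale everything so an L1 cache reference (0.5 ns) takes 1 second;
        // the other latencies then map onto human time scales.
        double l1 = 0.5; // ns, the unit everything is scaled against
        Object[][] rows = {
            {"L1 cache reference", 0.5},
            {"Branch mispredict", 5.0},
            {"L2 cache reference", 7.0},
            {"Mutex lock/unlock", 25.0},
            {"Main memory reference", 100.0},
            {"Compress 1K bytes with Zippy", 3_000.0},
            {"Send 2K bytes over 1 Gbps network", 20_000.0},
            {"SSD random read", 150_000.0},
            {"Read 1 MB sequentially from memory", 250_000.0},
        };
        for (Object[] row : rows) {
            double ns = (Double) row[1];
            System.out.printf("%-36s %,12.0f s (scaled)%n", row[0], ns / l1);
        }
    }
}

At that scale, a main memory reference takes 200 seconds (over three minutes) and an SSD random read about 300,000 seconds (roughly three and a half days).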

headius / gist:3491618
Created August 27, 2012 19:34
JVM + Invokedynamic versus CLR + DLR

Too much for teh twitterz :)

JVM + invokedynamic is in a completely different class than CLR + DLR, for the same reasons that JVM is in a different class than CLR to begin with.

CLR can only do its optimization up-front, before executing code. This is a large part of the reason why C# is designed the way it is: methods are non-virtual by default so they can be statically inlined, types can be specified as value-based so their allocation can be elided, and so on. But even with those language features CLR simply cannot optimize code to the level of a good, warmed-up JVM.

The JVM, on the other hand, optimizes and reoptimizes code while it runs. Regardless of whether methods are virtual or interface-dispatched, whether objects are transient, whether exception handling is used heavily...the JVM sees through the surface and optimizes code appropriately for how it actually runs. This gives it optimization opportunities that CLR will never have without adding a comparable profiling JIT.
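
As a concrete illustration of that profile-guided speculation (a sketch with hypothetical class names, not code from the gist): every call below is an interface dispatch that no ahead-of-time compiler could devirtualize from the source alone, yet a warmed-up HotSpot JVM can observe that the call site only ever sees one receiver type, inline area() through the virtual call, and deoptimize if that assumption is later invalidated.

interface Shape { double area(); }

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class SpeculativeInlining {
    // area() is a virtual (interface) call: nothing here tells an
    // ahead-of-time compiler which implementation will run.
    static double sumAreas(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) total += s.area(); // monomorphic at runtime
        return total;
    }

    public static void main(String[] args) {
        Shape[] shapes = new Shape[100_000];
        for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i % 10);
        double sink = 0;
        // Warm-up gives the JIT the runtime profile it needs: seeing only
        // Circle at the call site, it can inline area() speculatively and
        // deoptimize later if another Shape implementation ever appears.
        for (int i = 0; i < 1_000; i++) sink += sumAreas(shapes);
        System.out.println(sink);
    }
}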

So how does this affect dynamic languages?

The specific issue we care about is previously written data being silently corrupted by the disk (e.g., bit rot).

Consider a Paxos+log or Raft-style approach. Once a value is committed from the log to the replicated state/DB, it's assumed to be stable, but there's no guarantee the disk won't lose or corrupt that data later. While the current approach in Riak doesn't use a traditional propose/commit log, the underlying problem is the same.
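
One common mitigation is to checksum values at write time and verify on read, so corruption is detected rather than silently served. A minimal sketch (my own illustration, not Riak's actual storage format):

import java.util.zip.CRC32;

// A value stored with a CRC32 computed at write time and re-verified at
// read time. A mismatch means the bytes rotted on disk after commit, so
// the replica can refuse to serve them and repair from a peer instead.
public class ChecksummedValue {
    private final byte[] value;
    private final long crc;

    ChecksummedValue(byte[] value) {
        this.value = value.clone();
        this.crc = checksum(this.value);
    }

    static long checksum(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data);
        return c.getValue();
    }

    // Returns the value if it still matches its checksum, or null to signal
    // "corrupt, trigger read repair" (error handling simplified here).
    byte[] readVerified() {
        return checksum(value) == crc ? value.clone() : null;
    }
}

Detection alone isn't recovery, but a replica that knows its copy is corrupt can re-fetch the value from its peers (read repair) instead of propagating bad data.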

An example of the problem we want to avoid:

We have three replicas, A/B/C, with the following committed state (no other operations in flight):
A: a=100, b=200 (currently offline/partitioned)