- Solve the easiest possible problem in the dumbest possible way.
- Write a test for it.
- Is there a better name for this thing?
- Can we move work between query time (when we need the answer) and ingest time (when we see the data that eventually informs the answer)?
- Is it easier in a relational data store? A KV store? A column store? A document store? A graph store?
- Can performance be improved by batching many small updates?
- Can clarity be improved by splitting a single update into several smaller updates?
- Can we more profitably apply a functional, declarative, or imperative paradigm to the existing design?
- Can we profitably apply a change from synchronous to asynchronous, or vice versa?
- Can we profitably apply an inversion of control, moving logic among many individual call sites, a static central definition, or a reflectively defined description of the work to be done?
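One of the questions above, moving work from query time to ingest time, can be sketched minimally. The `counter` type and its method names here are illustrative, not from any particular codebase: instead of summing all events when asked, it maintains a running total as each event arrives, so each query is O(1).

```go
package main

import "fmt"

// counter does its work at ingest time: the running total is
// updated as each value arrives, so Query never scans history.
type counter struct {
	total int
}

// Ingest folds a new value into the precomputed answer.
func (c *counter) Ingest(v int) { c.total += v }

// Query returns the already-computed answer in O(1).
func (c *counter) Query() int { return c.total }

func main() {
	var c counter
	for _, v := range []int{3, 4, 5} {
		c.Ingest(v)
	}
	fmt.Println(c.Query()) // 12
}
```

The tradeoff is the usual one: ingest gets slightly more expensive and the stored state must be kept consistent, in exchange for cheap queries.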
**Latency Comparison Numbers**

| Operation | Time | Notes |
|---|---|---|
| L1 cache reference | 0.5 ns | |
| Branch mispredict | 5 ns | |
| L2 cache reference | 7 ns | 14x L1 cache |
| Mutex lock/unlock | 25 ns | |
| Main memory reference | 100 ns | 20x L2 cache, 200x L1 cache |
| Compress 1K bytes with Zippy | 3,000 ns (3 µs) | |
| Send 1K bytes over 1 Gbps network | 10,000 ns (10 µs) | |
| Read 4K randomly from SSD* | 150,000 ns (150 µs) | ~1 GB/sec SSD |
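These numbers are most useful for back-of-envelope tradeoffs. For example, using only the table's figures: is it worth compressing 1K bytes before sending it over a 1 Gbps network? The assumed 2x compression ratio below is illustrative, not from the table.

```go
package main

import "fmt"

func main() {
	const (
		compress1K = 3_000  // ns, compress 1K bytes with Zippy
		send1K     = 10_000 // ns, send 1K bytes over 1 Gbps
	)
	// Assumption (not from the table): compression halves the payload.
	plain := send1K
	compressed := compress1K + send1K/2
	fmt.Println(plain, compressed) // 10000 8000
}
```

Under that assumption, compress-then-send wins (8 µs vs 10 µs), and the margin grows with slower links or better ratios.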
From the Go standard library documentation for `io.Pipe`:

```go
// Pipe creates a synchronous in-memory pipe.
// It can be used to connect code expecting an io.Reader
// with code expecting an io.Writer.
//
// Reads on one end are matched with writes on the other,
// copying data directly between the two; there is no internal
// buffering. It is safe to call Read and Write in parallel with
// each other or with Close. Close will complete once pending
// I/O is done. Parallel calls to Read, and parallel calls to
// Write, are also safe: the individual calls will be gated
// sequentially.
func Pipe() (*PipeReader, *PipeWriter)
```