@hellerbarde
hellerbarde / latency.markdown
Created May 31, 2012 13:16 — forked from jboner/latency.txt
Latency numbers every programmer should know

L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns  =   3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns  =  20 µs
SSD random read ........................ 150,000 ns  = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns  = 250 µs
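
To get a feel for the spread, here is a small sketch (not part of the gist) that rescales the table so a single L1 cache reference lasts one second:

```javascript
// Rescale: 0.5 ns (one L1 cache reference) becomes 1 second.
const latenciesNs = {
  'L1 cache reference': 0.5,
  'Branch mispredict': 5,
  'L2 cache reference': 7,
  'Mutex lock/unlock': 25,
  'Main memory reference': 100,
  'Compress 1K bytes with Zippy': 3000,
  'Send 2K bytes over 1 Gbps network': 20000,
  'SSD random read': 150000,
  'Read 1 MB sequentially from memory': 250000
};
const secondsPerNs = 1 / 0.5; // seconds per nanosecond at this scale
for (const [op, ns] of Object.entries(latenciesNs)) {
  console.log(op + ': ' + ns * secondsPerNs + ' s');
}
```

At that scale a main-memory reference takes over three minutes and an SSD random read roughly three and a half days.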

@sevastos
sevastos / aws-multipartUpload.js
Last active May 28, 2024 15:02
Example AWS S3 Multipart Upload with aws-sdk for Node.js - Retries to upload failing parts
// Based on Glacier's example: http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/examples.html#Amazon_Glacier__Multi-part_Upload
var fs = require('fs');
var AWS = require('aws-sdk');
AWS.config.loadFromPath('./aws-config.json');
var s3 = new AWS.S3();
// File to upload; its name doubles as the S3 object key
var fileName = '5.pdf';
var filePath = './' + fileName;
var fileKey = fileName;
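
// --- Hedged continuation sketch (not the gist's exact code): the excerpt above
// stops at the file setup. The multipart flow it leads into uses the aws-sdk
// calls below; the bucket name and the single-part shortcut are illustrative
// assumptions.
var bucket = 'my-bucket'; // hypothetical bucket name
var buffer = fs.readFileSync(filePath);

s3.createMultipartUpload({ Bucket: bucket, Key: fileKey }, function (err, mp) {
  if (err) throw err;
  // The real gist slices the buffer into >= 5 MB parts and retries failed
  // parts; a single part is uploaded here for brevity.
  s3.uploadPart({
    Bucket: bucket,
    Key: fileKey,
    PartNumber: 1,
    UploadId: mp.UploadId,
    Body: buffer
  }, function (err, part) {
    if (err) throw err;
    s3.completeMultipartUpload({
      Bucket: bucket,
      Key: fileKey,
      UploadId: mp.UploadId,
      MultipartUpload: { Parts: [{ ETag: part.ETag, PartNumber: 1 }] }
    }, function (err, data) {
      if (err) throw err;
      console.log('Upload complete:', data.Location);
    });
  });
});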
@compact
compact / dropzone-directive.js
Last active March 16, 2024 02:55
AngularJS directive for Dropzone.js
/**
 * An AngularJS directive for Dropzone.js, http://www.dropzonejs.com/
 *
 * Usage:
 *
 * <div ng-app="app" ng-controller="SomeCtrl">
 *   <button dropzone="dropzoneConfig">
 *     Drag and drop files here or click to upload
 *   </button>
 * </div>
 */
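
// A minimal sketch of the directive shape the usage above implies (hedged:
// assumes Dropzone.js is loaded globally; the gist's actual code also wires
// up event handlers exposed on the controller's scope).
angular.module('app').directive('dropzone', function () {
  return {
    restrict: 'A',
    link: function (scope, element, attrs) {
      // Evaluate the dropzone="..." expression to get the Dropzone config object.
      var config = scope.$eval(attrs.dropzone);
      new Dropzone(element[0], config);
    }
  };
});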
@jkreps
jkreps / benchmark-commands.txt
Last active June 17, 2024 03:54
Kafka Benchmark Commands
Producer
Setup
bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --create --topic test-rep-one --partitions 6 --replication-factor 1
bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --create --topic test --partitions 6 --replication-factor 3
Single thread, no replication
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance test7 50000000 100 -1 acks=1 bootstrap.servers=esv4-hcl198.grid.linkedin.com:9092 buffer.memory=67108864 batch.size=8196
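
For reference, the positional arguments to ProducerPerformance in this version of the tool are the topic (test7), the number of records (50,000,000), the record size in bytes (100), and the target throughput in records/sec (-1 meaning unthrottled); the trailing key=value pairs are ordinary producer properties such as acks and batch.size.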
@antirez
antirez / streams_fields.md
Last active February 2, 2018 14:15
Why do streams have elements that are actually like hashes?

Why stream items are small hashes instead of single strings, like the elements of many other Redis types, is a good question indeed. In the end it is just a design decision, so I don't have a definitive answer, but I can try to explain the design process that led here.

What I wanted "Streams" to be was actually just an abstract log. I could not call the data structure "log" because the name is confusing in many contexts, but that was the idea, and a log better represents what Redis Streams are. Perhaps it is the consumer groups part of Redis Streams that best characterizes the streaming side, but the data structure itself is a log.

Now, what constitutes a log? In its original form it is just lines of text ending with "\n", one after the other, added in an append-only fashion. More generally, it is some data written in append-only mode.

XADD captures this append-only mode of operation. We have more powerful deletion mechanisms, and will add more, but that is the general idea.
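
For example, `XADD mystream * sensor-id 1234 temperature 19.8` appends a single entry whose field-value pairs (`sensor-id`, `temperature`) form the small hash; the `*` tells Redis to auto-generate the entry ID.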