@jboner
jboner / latency.txt
Last active May 3, 2024 15:17
Latency Numbers Every Programmer Should Know
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
From 504504f2f8c13f077f09e0906cd7e7d3ca405acc Mon Sep 17 00:00:00 2001
From: Vincent Bernat <vincent-5wNQGcxKXm8AvxtiuMwx3w@public.gmane.org>
Date: Wed, 7 May 2014 18:18:07 +0200
Subject: [PATCH] MINOR: dtrace: add dtrace support (WIP)
Both dtrace and systemtap are supported. Currently, only one tracepoint
is defined.
---
.gitignore | 1 +
Makefile | 18 +++++++++++++++++-
@alq666
alq666 / gist:3174101
Created July 25, 2012 02:50
Datadog functions

A short introduction to Datadog functions

You can apply functions to metric queries in the graph editor, as long as you use the JSON editor.

The general format is:

function(metric{scope} [by {filter}])
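For example, a query that wraps a metric in a rate function might look like this (an illustrative query; the metric name and scope are assumptions, not from this gist):

per_second(sum:requests.count{env:prod} by {host})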

In case of binary operators (+, -, /, *), the format is:

# You can make a text file of request times (in ms, one number per line) and import it here,
# or you can use a probability distribution to simulate request times (see below, where
# req_durations_in_ms is set).
# rq = read.table("~/Downloads/request_times.txt", header=FALSE)$V1
# argument notes:
# parallel_router_count is only relevant if router_mode is set to "intelligent"
# choice_of_two, power_of_two, and unicorn_workers_per_dyno are only relevant if router_mode is set to "naive"
# you can only select one of choice_of_two, power_of_two, and unicorn_workers_per_dyno
run_simulation = function(router_mode = "naive",
reqs_per_minute = 9000,
@benjchristensen
benjchristensen / index.html
Created August 16, 2011 03:21
Animated Line Graphs / Sparklines using SVG Path and d3.js
<html>
<head>
<title>Animated Sparkline using SVG Path and d3.js</title>
<script src="http://mbostock.github.com/d3/d3.v2.js"></script>
<style>
/* tell the SVG path to be a thin blue line without any area fill */
path {
stroke: steelblue;
stroke-width: 1;
fill: none;
@robertsosinski
robertsosinski / postgresql.py
Last active June 25, 2020 10:32
DataDog script for collecting PostgreSQL stats
# Create the datadog user with select only permissions:
# CREATE USER datadog WITH PASSWORD '<complex_password>';
#
# Grant select permissions on a table or view that you want to monitor:
# GRANT SELECT ON <schema>.<table> TO datadog;
#
# Grant permissions for a specific column on a table or view that you want to monitor:
# GRANT SELECT (id, name) ON <schema>.<table> TO datadog;
#
# Let non-superusers look at pg_stat_activity in a read-only fashion.
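# One common way to do that (a sketch of the usual approach; the preview is
# truncated here, so this SQL is an assumption rather than the gist's code) is
# a SECURITY DEFINER function owned by a superuser:
# CREATE FUNCTION pg_stat_activity() RETURNS SETOF pg_stat_activity AS
#   $$ SELECT * FROM pg_stat_activity; $$
#   LANGUAGE sql VOLATILE SECURITY DEFINER;
# GRANT EXECUTE ON FUNCTION pg_stat_activity() TO datadog;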
@alekstorm
alekstorm / README.md
Last active April 23, 2020 19:00
Ansible callback plugin that posts events to your Datadog event stream as you deploy

Installation

To install, place datadog.py in your callback plugins directory. If you don't yet have one, run:

mkdir -p plugins/callback

Then place the following in your ansible.cfg file:

[defaults]
callback_plugins = ./plugins/callback
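With that in place, Ansible loads the plugin on every run. For context, the pre-2.0 callback interface such a plugin implements looks roughly like this (a sketch only; the gist's actual datadog.py posts richer events and needs your Datadog API key configured):

class CallbackModule(object):
    """Posts selected Ansible events to the Datadog event stream."""

    def runner_on_failed(self, host, res, ignore_errors=False):
        # e.g. post an error-priority event for the failed host here
        pass

    def playbook_on_stats(self, stats):
        # e.g. post a deploy-summary event when the playbook finishes
        pass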

@chrishamant
chrishamant / s3_multipart_upload.py
Created January 3, 2012 19:29
Example of Parallelized Multipart upload using boto
#!/usr/bin/env python
"""Split large file into multiple pieces for upload to S3.
S3 only supports 5GB files for uploading directly, so for larger CloudBioLinux
box images we need to use boto's multipart file support.
This parallelizes the task over available cores using multiprocessing.
Usage:
s3_multipart_upload.py <file_to_transfer> <bucket_name> [<s3_key_name>]
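Under the hood the script drives boto's multipart-upload API. A minimal serial sketch of those core calls (function and variable names here are illustrative, and the gist's multiprocessing fan-out across cores is omitted):

import math
import os

from boto.s3.connection import S3Connection

def upload_multipart(bucket_name, key_name, path, part_size=50 * 1024 * 1024):
    # Credentials are picked up from the environment/boto config.
    conn = S3Connection()
    bucket = conn.get_bucket(bucket_name)
    mp = bucket.initiate_multipart_upload(key_name)
    size = os.path.getsize(path)
    for i in range(int(math.ceil(size / float(part_size)))):
        with open(path, 'rb') as fp:
            fp.seek(i * part_size)
            # Part numbers are 1-based; the last part may be short.
            mp.upload_part_from_file(fp, part_num=i + 1,
                                     size=min(part_size, size - i * part_size))
    mp.complete_upload()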
@alq666
alq666 / latency.txt
Created January 2, 2019 16:58 — forked from jboner/latency.txt
Latency Numbers Every Programmer Should Know
@gane5h
gane5h / datadog-nginx
Created October 22, 2014 04:06
Nginx log parsing with datadog
"""
Custom parser for nginx log suitable for use by Datadog 'dogstreams'.
To use, add to datadog.conf as follows:
dogstreams: [path to ngnix log (e.g: "/var/log/nginx/access.log"]:[path to this python script (e.g "/usr/share/datadog/agent/dogstream/nginx.py")]:[name of parsing method of this file ("parse")]
so, an example line would be:
dogstreams: /var/log/nginx/access.log:/usr/share/datadog/agent/dogstream/nginx.py:parse
Log of nginx should be defined like that:
log_format time_log '$time_local "$request" S=$status $bytes_sent T=$request_time R=$http_x_forwarded_for';
when starting dd-agent, you can find the collector.log and check if the dogstream initialized successfully
"""