David Guttman (davidguttman)
@max-mapper
max-mapper / index.js
Last active August 29, 2015 14:06
node tcp + http speed test
var net = require('net')
var through = require('through2')
var server = process.argv[2]
var path = process.argv[3]
var host = process.argv[4]
var socket = net.connect(80, server)
var req = ["GET " + path + " HTTP/1.1\r\n",
@juliangruber
juliangruber / gist:7356625
Last active December 27, 2015 16:39
setup custom local.host

You might have set up dnsmasq with a tutorial (such as gist 7356251 below), but then you notice that you can't set cookies on subdomains of localhost, or for all of localhost's subdomains at once: browsers won't scope cookies to a domain without a dot in it.

So we set up a custom "localhost", which I'll call local.host; you can pick whatever you want, as long as it contains at least one dot. Just adapt the first two commands.

$ # add to hosts list
$ echo "127.0.0.1 local.host" | sudo tee -a /private/etc/hosts
$ # tell your dns server about it
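$ # (assumed continuation: the preview cuts off here; path is Homebrew's default)
$ echo 'address=/local.host/127.0.0.1' >> /usr/local/etc/dnsmasq.conf
$ # restart dnsmasq so it picks up the change, e.g. via brew services
$ sudo brew services restart dnsmasq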
@dotproto
dotproto / unicode_string_comparison.js
Last active January 20, 2017 02:38
Examining raw Unicode values and their normalized forms. TL;DR: compare Unicode strings using `.normalize()` and `.localeCompare()`
// References
//
// - https://tc39.github.io/ecma262/#sec-ecmascript-language-types-string-type
// - http://unicode.org/reports/tr15/#Norm_Forms
// - http://unicode.org/faq/normalization.html#7 (What is the difference between W3C normalization and Unicode normalization?)
// - https://developer.mozilla.org/en-US/docs/Web/API/DOMParser
//
// Resources
//
// - http://stackoverflow.com/questions/8936984/uint8array-to-string-in-javascript
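// A minimal sketch of the comparison the description mentions (the
// precomposed/decomposed pair below is an illustrative choice):
// U+00E9 ("é" as one code point) vs. "e" + U+0301 (combining acute)
var precomposed = '\u00e9'
var decomposed = 'e\u0301'

console.log(precomposed === decomposed) // false: raw code units differ
console.log(precomposed.normalize('NFC') === decomposed.normalize('NFC')) // true: same canonical form
console.log(precomposed.localeCompare(decomposed)) // 0: canonically equivalent strings compare equal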
@pbailis
pbailis / list.md
Last active April 15, 2018 08:54
Quick and dirty (incomplete) list of interesting, mostly recent data warehousing/"big data" papers

A friend asked me for a few pointers to interesting, mostly recent papers on data warehousing and "big data" database systems, with an eye towards real-world deployments. I figured I'd share the list. It's biased and rather incomplete but maybe of interest to someone. While many are obvious choices (I've omitted several, like MapReduce), I think there are a few underappreciated gems.

### Dataflow Engines:

Dryad--general-purpose distributed parallel dataflow engine
http://research.microsoft.com/en-us/projects/dryad/eurosys07.pdf

Spark--in-memory dataflow
http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf

@dimkir
dimkir / nightmare-on-amazon-linux.MD
Last active February 6, 2021 17:45
How to run nightmare on Amazon Linux

Running nightmare on Amazon Linux

You may have thought of running nightmare on AWS Lambda. But before we can run it on Lambda, we first need to make it run on Amazon Linux.

Provision an instance that replicates the Lambda environment

According to the AWS Documentation on the Lambda Execution Environment and Available Libraries, we need the AMI with the alias amzn-ami-hvm-2016.03.3.x86_64-gp2. Keep in mind that the AMI image ID for this alias differs from region to region, e.g.:

  • In eu-west-1 - ami-f9dd458a
  • In us-east-1 - ami-6869aa05
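A minimal sketch of provisioning such an instance with the AWS CLI (the key pair and security group names are placeholders, not from the gist):

$ aws ec2 run-instances \
    --region eu-west-1 \
    --image-id ami-f9dd458a \
    --instance-type t2.micro \
    --key-name YOUR_KEYPAIR \
    --security-group-ids YOUR_SG_ID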
# prepend each prefix under $HOME/local to the relevant search paths
for i in "$HOME"/local/*; do
  [ -d "$i/bin" ] && PATH="${i}/bin:${PATH}"
  [ -d "$i/sbin" ] && PATH="${i}/sbin:${PATH}"
  [ -d "$i/include" ] && CPATH="${i}/include:${CPATH}"
  [ -d "$i/lib" ] && LD_LIBRARY_PATH="${i}/lib:${LD_LIBRARY_PATH}"
  [ -d "$i/lib" ] && LD_RUN_PATH="${i}/lib:${LD_RUN_PATH}"
  # uncomment the following if you use macOS
  # [ -d "$i/lib" ] && DYLD_LIBRARY_PATH="${i}/lib:${DYLD_LIBRARY_PATH}"
  [ -d "$i/lib/pkgconfig" ] && PKG_CONFIG_PATH="${i}/lib/pkgconfig:${PKG_CONFIG_PATH}"
  [ -d "$i/share/man" ] && MANPATH="${i}/share/man:${MANPATH}"
done
@juliangruber
juliangruber / gist:7356251
Last active May 5, 2021 17:53
localhost subdomains on osx

First, install dnsmasq using brew:

$ brew update
$ brew install dnsmasq

Then create your configuration:
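(The preview ends here; the following is a typical setup rather than necessarily the gist's own: resolve every .localhost name to 127.0.0.1, with Homebrew's default config path assumed.)

$ echo 'address=/localhost/127.0.0.1' > /usr/local/etc/dnsmasq.conf
$ sudo brew services start dnsmasq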

@max-mapper
max-mapper / index.js
Last active May 9, 2021 02:20
fast loading of a large dataset into leveldb
// data comes from here http://stat-computing.org/dataexpo/2009/the-data.html
// download 1994.csv.bz2 and unpack by running: cat 1994.csv.bz2 | bzip2 -d > 1994.csv
// 1994.csv should be ~5.2 million lines and 500MB
// importing all rows into leveldb took ~50 seconds on my machine
// there are two main techniques at work here:
// 1: never create JS objects, leave the data as binary the entire time (binary-split does this)
// 2: group lines into 16 MB batches, to take advantage of leveldb's batch API (byte-stream does this)
var level = require('level')
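// (assumed continuation; the preview stops at the first require.
// binary-split and byte-stream are the modules named above; the
// line-counter key scheme is an illustrative assumption.)
var split = require('binary-split')
var byteStream = require('byte-stream')
var through = require('through2')
var fs = require('fs')

var db = level('./1994.db')
var count = 0

fs.createReadStream('1994.csv')
  .pipe(split())                      // technique 1: lines stay binary
  .pipe(byteStream(16 * 1024 * 1024)) // technique 2: ~16 MB groups of lines
  .pipe(through.obj(function (lines, enc, next) {
    // one leveldb batch per group; calling next() only after the batch
    // lands provides backpressure
    db.batch(lines.map(function (line) {
      return { type: 'put', key: String(++count), value: line }
    }), next)
  }))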
@he9lin
he9lin / deploy_rails_app_on_ubuntu.md
Created October 1, 2011 09:51
Setup Rails application production environment on Ubuntu

Add deploy user

ssh root@YOURDOMAIN
adduser deploy
visudo # Add deploy ALL=(ALL) ALL

Install necessary libraries
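The preview ends at this heading; a typical first step on Ubuntu looks like the following (the package list is illustrative, not the gist's own; no sudo needed since we're logged in as root):

apt-get update
apt-get install build-essential git-core curl libssl-dev libreadline-dev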