Matt Robenolt (mattrobenolt)

try:
    # Hang until we exit the script
    while 1:
        time.sleep(5)
except KeyboardInterrupt:
    pass
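A self-contained sketch of the same idle-loop pattern: sleep in a loop until Ctrl-C raises `KeyboardInterrupt`. The `_sleep` parameter and `fake_sleep` hook are assumptions added here purely so the behavior can be demonstrated without blocking.

```python
import time

def wait_forever(poll=5, _sleep=time.sleep):
    # Block until Ctrl-C; KeyboardInterrupt breaks us out of the sleep loop.
    try:
        while True:
            _sleep(poll)
    except KeyboardInterrupt:
        return 'interrupted'

# Simulate Ctrl-C with a stand-in sleep that raises immediately (test hook only).
def fake_sleep(seconds):
    raise KeyboardInterrupt

print(wait_forever(_sleep=fake_sleep))
```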
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import time
import urllib2
import boto
try:
    import simplejson as json
except ImportError:
    import json
#!/usr/bin/env sh
INSTALL_DIR=`pwd`/maxmind

log() {
  printf "\033[90m...\033[0m $@\n"
}

abort() {
  printf "\033[31mError: $@\033[0m\n" && exit 1
}
publish:
	python setup.py sdist upload

clean:
	rm -rf *.egg-info
	rm -rf dist
	rm -rf build

.PHONY: publish clean
class Logger(type):
    def __new__(cls, name, bases, crap):
        print cls, name, bases, crap
        return super(Logger, cls).__new__(cls, name, bases, crap)

class foo(dict):
    __metaclass__ = Logger
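The same logging-metaclass idea in Python 3 syntax (Python 3 replaced `__metaclass__` with the `metaclass=` keyword). The `LoggingMeta`/`Foo` names and the `created` list are assumptions added here to make the effect observable:

```python
class LoggingMeta(type):
    created = []  # names of every class built through this metaclass

    def __new__(mcs, name, bases, namespace):
        LoggingMeta.created.append(name)
        return super(LoggingMeta, mcs).__new__(mcs, name, bases, namespace)

# Python 3 spelling of `__metaclass__ = LoggingMeta`:
class Foo(dict, metaclass=LoggingMeta):
    pass

print(LoggingMeta.created)
```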
@mattrobenolt
mattrobenolt / server.js
Created August 5, 2012 16:32
DNS server backed by Redis, and resolves EC2 instance names
var dns = require('native-dns');
var server = dns.createServer();
var client = require('redis').createClient();
var aws = require('aws-lib');

server.on('request', function(req, res){
  var parts = req.question[0].name.split('.');
  var tag = parts[0];
  var authority = parts.slice(1).join('.');
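The request handler's name-splitting step, sketched in Python for clarity: the first label of the queried name is the instance tag, the remainder is the zone/authority. The function name and sample hostname are assumptions, not part of the gist.

```python
def split_name(qname):
    # First label is the instance tag; the rest is the zone it belongs to.
    parts = qname.split('.')
    return parts[0], '.'.join(parts[1:])

print(split_name('web1.ec2.example.com'))
```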
@mattrobenolt
mattrobenolt / gist:3239561
Created August 2, 2012 18:41
yield from all the things
all = lambda x: x

def doit(things):
    yield from all(things)

list(doit([]))
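For context, a minimal sketch of what `yield from` (added in Python 3.3) does: it delegates iteration to a sub-generator. The `inner`/`outer` names are illustrative assumptions.

```python
def inner():
    yield 1
    yield 2

def outer():
    yield 0
    yield from inner()  # delegate iteration to the sub-generator
    yield 3

print(list(outer()))
```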
import ec2

ec2.credentials.ACCESS_KEY_ID = 'xxx'
ec2.credentials.SECRET_ACCESS_KEY = 'xxx'

print ec2.instances.all()

for i in ec2.instances.filter(state='running', name__like='^production'):
    print i.state, i.tags['Name']
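A sketch of how a Django-style `field__like` filter such as the one above can be built — this is not the `ec2` library's actual implementation, and the records and function name are assumptions for illustration:

```python
import re

# Hypothetical records standing in for EC2 instance metadata.
instances = [
    {'state': 'running', 'name': 'production-web1'},
    {'state': 'stopped', 'name': 'production-db'},
    {'state': 'running', 'name': 'staging-api'},
]

def filter_instances(items, **lookups):
    # Plain keys must match exactly; `field__like` keys are treated as regexes.
    for item in items:
        for key, value in lookups.items():
            if key.endswith('__like'):
                field = key[:-len('__like')]
                if re.search(value, item[field]) is None:
                    break
            elif item[key] != value:
                break
        else:
            yield item

matches = list(filter_instances(instances, state='running', name__like='^production'))
print([m['name'] for m in matches])
```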
@mattrobenolt
mattrobenolt / gist:3156887
Created July 21, 2012 19:19
Python unpacking with **
class foo(object):
    a = 'b!'
    b = 'b!'

    def keys(self):
        return ['a', 'b']

    def __getitem__(self, item):
        return getattr(self, item)
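The point of the snippet above: `**` unpacking only requires a `keys()` method and `__getitem__`, not an actual `dict`. A runnable demonstration (the `Unpackable` and `show` names are assumptions added here):

```python
class Unpackable(object):
    a = 'a!'
    b = 'b!'

    def keys(self):
        return ['a', 'b']

    def __getitem__(self, item):
        return getattr(self, item)

def show(a=None, b=None):
    return (a, b)

# ** works on any mapping-like object, not just dicts
print(show(**Unpackable()))
```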
# Data set is a large SQL dump; most lines are around 1MB.
# With the smaller (default) chunk size, CPU usage jumps ridiculously high,
# and shrinking the chunk size further only consumes more CPU.
# Setting the chunk size to 0.5MB or even 1MB smooths it out to realistic usage.
# The obvious bottleneck is testing each chunk with "splitlines()".
import requests

res = requests.get('https://s3.amazonaws.com/littlesis/public-data/littlesis-data.sql', prefetch=False)
bytes_total = int(res.headers['content-length'])
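The chunked-read pattern the comments describe, sketched over an in-memory stream so it runs without the network — the `iter_chunks` helper and the synthetic `data` buffer are assumptions standing in for the streamed response body:

```python
import io

def iter_chunks(stream, chunk_size=1024 * 1024):
    # Yield fixed-size chunks; a larger chunk means fewer splitlines() passes.
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

data = b'x' * (3 * 1024 * 1024 + 17)  # stand-in for the downloaded body
sizes = [len(c) for c in iter_chunks(io.BytesIO(data))]
print(sizes)
```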