Ian Unruh (ianunruh)

#!/usr/bin/env python
from argparse import ArgumentParser
import json
import hashlib
import subprocess
import os
def main():
    parser = ArgumentParser()
@ianunruh
ianunruh / keybase.md
Created April 9, 2016 20:43
keybase.md

Keybase proof

I hereby claim:

  • I am ianunruh on github.
  • I am ianunruh (https://keybase.io/ianunruh) on keybase.
  • I have a public key whose fingerprint is 21E9 CA13 ADF1 4A36 A024 D119 218C 4656 0BED 3206

To claim this, I am signing this object:

@ianunruh
ianunruh / cfssl_certificate.py
Created November 3, 2015 21:41
Ansible module for generating SSL certs from cfssl
#!/usr/bin/env python
import json
import os

import requests

# AnsibleModule is required for the module boilerplate below
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(
        argument_spec=dict(
            cert_path=dict(required=True),
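A playbook task invoking this module might look like the following sketch; only `cert_path` is confirmed by the snippet above, so the other values are illustrative assumptions:

```yaml
# Hypothetical usage -- cert_path is the only parameter visible in the
# snippet; the module name mirrors the gist's filename, and the path
# value is made up for illustration.
- name: Generate an SSL certificate with cfssl
  cfssl_certificate:
    cert_path: /etc/ssl/certs/app.pem
```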
@ianunruh
ianunruh / ember-cli-build.js
Created August 26, 2015 05:59
Use Bootstrap Sass in Ember.js
/* global require, module */
var EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function(defaults) {
  var app = new EmberApp(defaults, {
    // Add Bootstrap to Sass include path
    sassOptions: {
      includePaths: [
        'bower_components/bootstrap-sass/assets/stylesheets'
      ]
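With the include path configured above, the Bootstrap Sass entry point can be pulled into the app stylesheet by name (assuming the default `app/styles/app.scss` location used by ember-cli):

```scss
// app/styles/app.scss
// "bootstrap" resolves via the sassOptions includePaths entry above
@import "bootstrap";
```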
@ianunruh
ianunruh / dvr.md
Last active August 29, 2015 14:09

Neutron Distributed Virtual Router (DVR)

The L3 agent provided with Neutron uses the Linux networking stack to perform L3 forwarding and NAT between tenant networks and external networks. Before the Juno release of OpenStack, the L3 agent could only be made highly available with Pacemaker (active/passive); it could not be scaled out natively. The Juno release introduced distributed routing. With distributed routing enabled, the L3 agent runs on all of the compute nodes as well as on a centralized "service" node.

On the compute nodes, the L3 agent provides NAT for instances that are associated with a floating IP address. This means that ingress capacity (traffic from external networks to tenant networks) scales out with each additional compute node. It also means that when an instance is migrated off a compute node (for maintenance or after a failure), its floating IP address moves with it to the new compute node.

On the service nodes, the L3 agent provides NAT for egress traffic (traffic from tenant networks to external networks) from instances that do not have a floating IP address.
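A minimal configuration sketch for enabling this on a Juno-era deployment; the option names come from the Neutron configuration reference, but the exact file paths vary by distribution:

```ini
; /etc/neutron/neutron.conf on the controller:
; create new tenant routers as distributed by default
[DEFAULT]
router_distributed = True

; /etc/neutron/l3_agent.ini on each compute node:
; handle floating-IP NAT locally
[DEFAULT]
agent_mode = dvr

; /etc/neutron/l3_agent.ini on the centralized service node:
; additionally handle SNAT for instances without floating IPs
[DEFAULT]
agent_mode = dvr_snat
```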

@ianunruh
ianunruh / join.ps
Created September 12, 2014 03:11
Join Windows to AD
#ps1_sysnative
$domain = "YOURDOMAIN"
$password = "YOURPASSWORD" | ConvertTo-SecureString -asPlainText -Force
$username = "$domain\Administrator"
$credential = New-Object System.Management.Automation.PSCredential($username,$password)
Add-Computer -DomainName $domain -Credential $credential
@ianunruh
ianunruh / onboard-tenant.py
Last active August 29, 2015 14:06
Onboard tenants to OpenStack
#!/usr/bin/env python
from argparse import ArgumentParser, RawTextHelpFormatter
import os
import sys
import keystoneclient.v2_0
import neutronclient.v2_0.client
NEUTRON_ROUTER_FORMAT = '{}-router'
NEUTRON_NETWORK_FORMAT = '{}-network'
@ianunruh
ianunruh / quickstart-corosync.sh
Last active September 7, 2016 15:15
Corosync + Pacemaker basics on Ubuntu 14.04
#!/bin/bash
BIND_NETWORK="192.168.5.0"
SHARED_VIP="192.168.5.30"
apt-get update
apt-get install -y pacemaker ntp
# Configure Corosync
echo "START=yes" > /etc/default/corosync
sed -i "s/bindnetaddr: 127.0.0.1/bindnetaddr: $BIND_NETWORK/g" /etc/corosync/corosync.conf
@ianunruh
ianunruh / start-sharded-cluster.sh
Last active August 29, 2015 14:04
MongoDB sharded cluster on single box
#!/bin/bash
DATA_PATH=/tmp/sharded-cluster
HOSTNAME=$(hostname -f)
killall mongod > /dev/null 2>&1
killall mongos > /dev/null 2>&1
rm -rf $DATA_PATH
mkdir -p $DATA_PATH/rs{0,1}{a,b,c}
package main
import (
"net/http"
"github.com/go-martini/martini"
"github.com/martini-contrib/encoder"
"github.com/martini-contrib/strict"
)