James purpleidea
@purpleidea
purpleidea / quick-devops-hacks
Created February 4, 2014 05:04
Code and slides from "Quick DevOps Hacks" lightning talk.
# slides
Available at:
https://dl.dropboxusercontent.com/u/48553683/quick-devops-hacks-devopsmtl-2014.pdf
# show the exit status in your $PS1
Article and code at:
https://ttboj.wordpress.com/2014/01/29/show-the-exit-status-in-your-ps1/
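The linked article has the full details; as a rough sketch of the idea (my own wording, not necessarily the article's exact code), the prompt can prepend the previous command's exit status only when it is nonzero:

```shell
#!/bin/bash
# Sketch only: the linked article's implementation may differ.
# Show the last command's exit status in the prompt when it is nonzero.
__exit_status() {
	local ret=$?	# capture $? before anything else clobbers it
	if [ "$ret" -ne 0 ]; then
		echo "[$ret] "
	fi
}
# single quotes so the substitution runs each time the prompt is drawn
PS1='$(__exit_status)\u@\h:\w\$ '
```

With this in your .bashrc, a failing command such as `false` makes the next prompt read `[1] user@host:~$`, while successful commands leave the prompt unchanged.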
$brickdir = '/storage1';
$glusterdir = '/storage';
// if you only want to copy certain directories, specify them here
$directories = array('apt-mirror', 'apt-repo', 'downloads', 'torrents', 'storage');
$max_copies = 100; // concurrency
$file_size = 0; // total copied file size
{
"createdBy": "Redirector v3.0.4",
"createdAt": "2015-11-14T09:52:11.766Z",
"redirects": [
{
"description": "ghttps",
"exampleUrl": "https://example.com/whatever",
"exampleResult": "ghttps://example.com/whatever",
"error": null,
"includePattern": "https://example.com/*",
@purpleidea
purpleidea / gist:8202237
Created December 31, 2013 21:05
for your .bashrc
# run nethogs on the interface that is being used for the default route
function nethogs {
	# find the interface used by the default route
	dev=$(ip r | grep '^default' | awk '{print $5}')
	if [ "$(id -u)" -eq 0 ]; then
		# use `command` so we run the binary, not recurse into this function
		command nethogs "$@" "$dev"
	else
		sudo nethogs "$@" "$dev"
	fi
}
#!/bin/bash
if [ "$1" = '' ]; then
echo "Usage: ./$(basename "$0") <hostname>"
exit 1
fi
# NOTE: let's say you try to provision a previously provisioned host...
# $ vp foo
# [foo] Configuring cache buckets...
#!/usr/bin/python
# James Shubin, @purpleidea, 2016+, AGPLv3+
# Count number of files in each package, and figure out which has the most
# We took a string-based parsing approach to the XML filelists for simplicity
# When I ran this, the max was: kcbench-data-4.0, with 52116 files
# Verify with dnf repoquery --quiet -l kcbench-data-4.0 | wc -l
# To run this script, do something like the following:
# wget http://mirror.its.dal.ca/pub/fedora/linux/releases/23/Everything/x86_64/os/repodata/874f220caf48ccd307c203772c04b8550896c42a25f82b93bd17082d69df80db-filelists.xml.gz
# gunzip 874f220caf48ccd307c203772c04b8550896c42a25f82b93bd17082d69df80db-filelists.xml.gz
# time cat 874f220caf48ccd307c203772c04b8550896c42a25f82b93bd17082d69df80db-filelists.xml | ./dnf_count_files.py > /tmp/output
#
# mgmt grouping analysis - 29/mar/2016
# By: James Shubin <james@shubin.ca>
# https://ttboj.wordpress.com/2016/03/30/automatic-grouping-in-mgmt/
#
* Comparison of different backends for package installation
* All times are in seconds. All tests ran with warm caches. Longer is worse.
* Data was collected from multiple runs but only one sample of each shown here.
* Accompanying spreadsheet with full data is also available.
15/Jun/2016
The other day, Chef announced an "application automation" system called Habitat. [1] Not too surprisingly, a lot of people have asked for my opinion about it, so I might as well collect some of my thoughts here.
First off, let me say that I like the Chef team and I welcome new projects in this space - especially when they're open source. Most of my early config management work has been with Puppet, but I think that Chef got a lot of things right, and they've also done an amazing job around community and avoiding open core. Nathen Harvey has no doubt been a driving influence here, and he's always been a pleasure to talk to at conferences. It's also worth mentioning that while my recent work on #mgmtconfig [2] has been heavily influenced by my time hacking on Puppet, I've tried to borrow ideas and draw inspiration from Chef concepts when they were more appropriate than what Puppet was doing.
The Habitat project was made public with almost 1800 commits so far. I really wish that if organizations

Let's say you have a Bash shell script, and you need to run a series of operations on another system (such as via ssh). There are a couple of ways to do this.

First, you can stage a child script on the remote system, then call it, passing along the appropriate parameters. The problem with this is that you have to manually keep the remote script updated whenever you change it, which can be a challenge when it needs to run on a number of remote servers (e.g., a backup script running on a central host that puts remote databases into hot backup mode before backing them up).

Another option is to embed the commands you want to run remotely within the ssh command line. But then you run into issues with escaping special characters, quoting, and so on. This is fine if you only have a couple of commands to run, but a complex piece of Bash code quickly gets unwieldy.

So, to solve this, you can use a technique called rpcsh (RPC in shell script), as follows:

First, place th
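The explanation is cut off above, but the rpcsh idea it describes is commonly implemented by serializing locally-defined shell functions with `declare -f` and shipping them through ssh. A hedged sketch (the function and host names are my own illustration, not from the original):

```shell
#!/bin/bash
# Sketch of the rpcsh technique (my reconstruction; the original's
# details are truncated above): serialize locally-defined functions
# with `declare -f`, send them over ssh, and invoke one remotely.

remote_hello() {
	# this body executes on the remote side
	echo "hello from $(hostname)"
}

rpcsh() {
	# usage: rpcsh <host> <function> [args...]
	local host=$1 func=$2
	shift 2
	# `declare -f` prints the function's definition; the remote shell
	# re-reads it, then calls the entry point with any remaining args
	ssh "$host" "$(declare -f "$func"); $func $*"
}

# example invocation (hypothetical host name):
# rpcsh backupserver remote_hello
```

The advantage over a staged child script is that the function body lives in one place, in the local script, so there is nothing on the remote side to keep in sync.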

$ cat fix-dropbox.sh
#!/bin/bash
# XXX: use at your own risk - do not run without understanding this first!
exit 1
# safety directory
BACKUP='/tmp/fix-dropbox/'
# TODO: detect or pick manually...