@nathwill
Last active December 31, 2015 19:18
do crazy crazy shit with a backup file
#!/bin/bash
set -eu
# only run one instance
LOCKFILE=/tmp/pg_backup.lock
[ -f "$LOCKFILE" ] && { echo "Error: $LOCKFILE exists" >&2; exit 1; }
touch "$LOCKFILE"
trap 'rm -f "$LOCKFILE"' EXIT
# setup
backup_dir=/backup/pgsql
# run the backup
# best-effort class, priority 1 (untuned processes default to the
# best-effort class at priority 4; lower numbers are served first)
ionice -c 2 -n 1 /usr/blueboxgrp/scripts/backup_pgdump.pl --expirem=120
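# (the effective class/priority of a running process can be checked with,
#  e.g., `ionice -p <pid>`, which prints something like "best-effort: prio 1")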
# get completed backup
latest_backup=$(find "$backup_dir"/ -maxdepth 1 -type f -name 'pgsqldump*' -print | sort | tail -n 1)
filename=$(basename "$latest_backup")
# haters gonna hate
cd "$backup_dir"
# build a nested directory tree out of each digit group in the dump's
# filename (its timestamp) and descend into it, so the chunks below land
# in a per-backup path
while read -r line_data; do
  mkdir -p "$line_data"
  cd "$line_data"
done < <(echo "$filename" | grep -Eo '[0-9]+')
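# e.g. a (hypothetical) pgsqldump-2013-12-18.sql.gz would put the chunks
# under 2013/12/18/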
# There's a filesize cap on what we can upload
# so we upload it in chunks
ionice -c 2 -n 1 split -b 500M "$latest_backup" pgsqldump.gz.part-
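# the chunks can later be stitched back together with plain cat, e.g.:
#   cat pgsqldump.gz.part-* > pgsqldump.gz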
# now we bandwidth-limit the upload to roughly 10Mb/s
# (trickle's -u takes KB/s: 1280 KB/s * 8 = 10.24 Mb/s)
trickle -u 1280 /usr/local/rvm/rubies/ruby-1.9.2-p320/bin/ruby /usr/local/bin/backup_to_s3.rb "$(pwd)"