WD My Book Live Duplicity Back Up To Amazon S3 (and Glacier). See this blog post for more info: http://www.x2q.net/blog/2013/02/24/howto-backup-wd-mybook-live-to-amazon-s3-and-glacier/
#!/bin/sh
# WD Live Duplicity Back Up To Amazon S3 (and Glacier)
#
# Requires python-boto, duplicity, util-linux, and trickle to be installed
# Install using: apt-get install python-boto duplicity util-linux trickle
#
# See this blog post for more info:
# http://www.x2q.net/blog/2013/02/24/howto-backup-wd-mybook-live-to-amazon-s3-and-glacier/
# Exclusive locking using flock. Ensures that only one instance is active at a
# time. This is useful when your backup might run for several days.
# flock man page: http://linux.die.net/man/2/flock
exec 200<$0            # open this script itself on file descriptor 200
flock -n 200 || exit 1 # exit if another instance already holds the lock
# Path variables
export PATH="/opt/bin:/opt/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/sbin"
export HOME="/root"
# Export some ENV variables so you don't have to type anything
export AWS_ACCESS_KEY_ID=<YOUR AWS_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<YOUR AWS_SECRET_ACCESS_KEY>
# GPG (currently unused)
#export PASSPHRASE=<your GPG passphrase>
#export GPG_KEY=<your key id>
# Bandwidth limiting
export MAXUPLOAD=128 # kbyte per second
# The source of your backup
SOURCE=/DataVolume/shares/Archive
CACHE=/DataVolume/Temp/.cache/duplicity
TEMP=/DataVolume/Temp
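# Make sure the temp and cache directories exist before duplicity runs
# (a small added safeguard; --tempdir should point at an existing directory):
mkdir -p ${CACHE} ${TEMP}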
# The destination
# Note that the bucket need not exist but does need to be unique amongst all
# Amazon S3 users. So, choose wisely.
DEST=s3+http://<your-s3-bucket>
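# If you prefer to create the bucket ahead of time, python-boto (already a
# dependency) can do it, e.g. (a sketch; substitute your bucket name and make
# sure the AWS_* variables above are set):
#   python -c "import boto; boto.connect_s3().create_bucket('<your-s3-bucket>')"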
# Option notes:
#   --tempdir / --archive-dir:   important on a WD Live with limited root space
#   --no-encryption:             change this if you want to keep your stuff secret
#   --s3-use-rrs:                use this if you migrate to Glacier
#   --s3-unencrypted-connection: usually provides better upload speeds
#   --volsize 128:               if you upload several TBs, then avoid too many files
#   --asynchronous-upload:       speeds up the upload
trickle -u ${MAXUPLOAD} duplicity \
    --tempdir=${TEMP} \
    --no-encryption \
    --verbosity 5 \
    --full-if-older-than 180D \
    --s3-use-rrs \
    --s3-unencrypted-connection \
    --volsize 128 \
    --archive-dir ${CACHE} \
    --asynchronous-upload \
    --s3-use-new-style ${SOURCE} ${DEST}
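# To check a backup later, duplicity can verify against or restore from the
# same destination (a sketch; run manually with the same ENV variables set,
# and substitute your own restore path):
#   duplicity verify --archive-dir ${CACHE} ${DEST} ${SOURCE}
#   duplicity restore --archive-dir ${CACHE} ${DEST} /DataVolume/shares/Restore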
# Reset the ENV variables. Don't need them sitting around
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
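# To run this on a schedule, one might add a crontab entry like the following
# (a sketch; the script path and schedule are assumptions):
#   0 2 * * 0 /root/backup-to-s3.sh >> /var/log/duplicity-backup.log 2>&1
# The flock guard above makes overlapping runs harmless even if a backup
# takes longer than the interval.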