@lavoiesl
Created May 3, 2012 00:44
Very low priority backup using duplicity
#!/bin/bash
# Export some ENV variables so you don’t have to type anything
export AWS_ACCESS_KEY_ID='my-key-id'
export AWS_SECRET_ACCESS_KEY='my-secret'
export PASSPHRASE='my-gpg-key-passphrase'
GPG_KEY='my-gpg-pub-id'
# The source of your backup
SOURCE=/
# The destination
# Note that the bucket need not exist
# but does need to be unique amongst all
# Amazon S3 users. So, choose wisely.
DEST='s3+http://my-bucket-name'
# Spawn duplicity with nicest priority
# Background to be able to start cpulimit after
nice -n 19 duplicity \
--encrypt-sign-key=${GPG_KEY} \
--include=/boot \
--include=/etc \
--include=/home \
--include=/root \
--include=/var/lib/mysql \
--exclude=/** \
"${SOURCE}" \
"${DEST}" \
&
# Sleep 300ms to make sure duplicity is spawned
sleep 0.3
# Limit CPU usage to 10%
# Uses http://cpulimit.sourceforge.net/
pid=$(pgrep -n -x duplicity)
[[ -n "$pid" ]] && cpulimit -b -p "$pid" --limit=10
# Reset the ENV variables. Don’t need them sitting around
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export PASSPHRASE=
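One note on the final three `export` lines: assigning an empty value clears the secret, but it leaves the (now empty) variables defined in the environment; `unset` removes them entirely. A minimal illustration:

```shell
#!/bin/sh
export PASSPHRASE='my-gpg-key-passphrase'

export PASSPHRASE=          # clears the value; the variable is still exported (empty)
env | grep '^PASSPHRASE=' && echo "still defined"

unset PASSPHRASE            # removes the variable entirely
env | grep '^PASSPHRASE=' || echo "gone"
```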
@julian-labuschagne

I have been looking for something exactly like this. I will definitely test this. Thanks so much for sharing.

@Reiner030

Hello,

I ran some tests because I need this kind of limitation too.
It's a nice setup, but it didn't work completely correctly for me.

In my tests:

  • duply => duplicity => gpg was what produced the high load.
  • A "stress" test forked children that produced the high load, and cpulimit only tried to limit the main stress process (where limiting is unnecessary), not its children.
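If the load really comes from the children, one workaround (a sketch, assuming `pgrep` and `cpulimit` are available and the helpers are direct children of duplicity) is to attach cpulimit to each child pid:

```shell
#!/bin/sh
# Sketch: limit each child of the duplicity process instead of duplicity itself.
parent=$(pgrep -n -x duplicity)
if [ -n "$parent" ]; then
  for child in $(pgrep -P "$parent"); do
    # -b: background, -p: target pid, --limit: percent of one CPU
    cpulimit -b -p "$child" --limit=10
  done
fi
```

This still misses grandchildren and processes spawned later, which is why watching the binary path (as described below) is more robust.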

While testing different options, I found that calling cpulimit with the binary path is a good choice, because it automatically picks up repeatedly spawned processes and only needs to be terminated at the end.
So it may be better done this way:

cpulimit -b -e /usr/bin/gpg -l 10
nice -n 19 duplicity ...
killall cpulimit

If cpulimit is also needed for other tasks, its pid must be found some other way, e.g. like this:

kill `pgrep -f "cpulimit -b -e /usr/bin/gpg"`
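To make sure the watcher is always terminated even if duplicity fails partway, the snippet above can be wrapped with a `trap` (a sketch; the watcher is backgrounded with `&` instead of `-b` so that `$!` holds its pid):

```shell
#!/bin/sh
# Sketch: guarantee the cpulimit watcher dies when the script exits.
if command -v cpulimit >/dev/null 2>&1; then
  cpulimit -e /usr/bin/gpg -l 10 &   # foreground mode, backgrounded by the shell
  watcher=$!
  trap 'kill "$watcher" 2>/dev/null || true' EXIT
fi

nice -n 19 echo "duplicity would run here"   # placeholder for the real duplicity command
```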
