
Content-Engine Updates and Backup

Content Engine is the nickname I've given to a set of services, run in Docker containers via Docker Compose, that serve media for private, self-hosted use.

This is a brief description of the update and backup strategy that I use for the containers and their data.
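
For context, here's a stripped-down sketch of what the Compose file might look like. The service names match what shows up in the logs further down, but the images, ports, and volume paths here are illustrative guesses rather than copies from the real file:

version: "3"

services:
  traefik:
    image: traefik                # reverse proxy in front of the other services
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"

  plex:
    image: plexinc/pms-docker     # media server
    restart: unless-stopped
    volumes:
      - /srv/var/lib/content-engine/plex:/config

  sonarr:
    image: linuxserver/sonarr     # TV library automation
    restart: unless-stopped
    volumes:
      - /srv/var/lib/content-engine/sonarr:/config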

Files and their purposes

/opt/scripts/content-engine-update.sh

This script updates the containers, keeping an explicit log of its activity and also using logger to write entries to my local syslog. I'll include a sample of the logfile output at the end of the gist.

/opt/scripts/fio-backup.sh

This script backs up the contents of /srv, which is where my main application storage is mounted. Everything custom to this server's setup is ultimately symlinked back into this mount point, so backing it up serves as a catch-all for any hard-to-replace data. The backup process produces a dedicated log file with rsync stats, and it also logs via logger.
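
As a hypothetical illustration of that symlink pattern (the exact layout on the real box may differ), the Compose project data lives on /srv and the conventional path is just a link back into it:

# All of the actual data sits on the fast /srv volume...
mkdir -p /srv/var/lib/content-engine
# ...and the conventional location is a symlink pointing back into /srv,
# so backing up /srv captures everything custom to this host.
ln -s /srv/var/lib/content-engine /var/lib/content-engine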

/etc/crontab

This should be a systemd timer, but I hadn't spent the time to learn how to configure those properly when I initially set this up, and have no desire to reconfigure it :P

This is included just to show how I invoke these scripts to actually get work done and create the log files.
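
For reference, here's roughly what the systemd equivalent of the hourly backup entry would look like. The unit names and contents are a sketch of how I'd translate the cron line, not something that exists on the box, and the output would land in the journal instead of the log files cron redirects to:

# Hypothetical sketch only: the 12-minutes-past-the-hour backup job as a systemd timer.
cat > /etc/systemd/system/fio-backup.service <<'EOF'
[Unit]
Description=Back up /srv to /storage/backup

[Service]
Type=oneshot
ExecStart=/opt/scripts/fio-backup.sh
EOF

cat > /etc/systemd/system/fio-backup.timer <<'EOF'
[Unit]
Description=Run fio-backup.sh at 12 minutes past every hour

[Timer]
OnCalendar=*-*-* *:12:00
Persistent=true

[Install]
WantedBy=timers.target
EOF

systemctl daemon-reload
systemctl enable --now fio-backup.timer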

Sanoid Notes

Sanoid is a tool that automates ZFS snapshot management. I use it to keep a versioned history of my backups. I'll include some of its log output as well, just for the sake of completeness.
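
For completeness, here's a minimal sanoid.conf sketch along the lines of what produces the snapshots shown below. The dataset names match my pool layout, but the retention counts are illustrative rather than my exact policy:

[storage/backup]
        use_template = production

[storage/media]
        use_template = production
        recursive = yes

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes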

Final Remarks

This whole thing was an exercise in "classic Linux systems administration" for me. I'm used to managing systems in a heavily repeatable, generic way (Chef, Terraform, etc.), but for this particular build I didn't want to do that, and I was curious to see what I'd come up with. In the end, writing at least some documentation was unavoidable, because it drives me nuts not to have it. This has been a fun, if ever-changing, project. I'll probably be tweaking it forever.

/opt/scripts/content-engine-update.sh

#!/bin/bash
# content-engine-update.sh pulls new container images and starts those images
# if there are any updates to be found. The "proper" way to do this is with
# the v2tec/watchtower docker image, but I prefer this method because of the
# logging output I get with this single-host setup
# Get the script name
scriptName="$(basename "$0")"
# Echo out a visual separator and print the time in the output
echo ------------------------
date
echo ------------------------
# Do the same thing for STDERR log too
>&2 echo ------------------------
>&2 date
>&2 echo ------------------------
# Move to the compose project directory; bail out if it isn't there
cd /var/lib/content-engine || exit 1
# Pull updated images
docker-compose --no-ansi pull && logger -s -t "$scriptName" "docker images updated" 2>&1 || logger -s -t "$scriptName" "FAILURE updating docker images"
# Recreate updated containers
docker-compose --no-ansi up -d && logger -s -t "$scriptName" "containers recreated" 2>&1 || logger -s -t "$scriptName" "FAILURE recreating containers"
# Put a space at the end to keep log files pretty
echo ""
>&2 echo ""
Sample log output from content-engine-update.sh:

------------------------
Thu May 17 05:21:21 EDT 2018
------------------------
content-engine-update.sh: docker images updated
content-engine-update.sh: containers recreated
------------------------
Thu May 17 05:21:21 EDT 2018
------------------------
Pulling traefik ...
Pulling plex ...
Pulling sabnzbd ...
Pulling sonarr ...
Pulling radarr ...
Pulling headphones ...
Pulling plexpy ...
Pulling airsonic ...
Pulling musicbrainz ...
Pulling whoami ...
Pulling whoami ... done
Pulling radarr ... done
Pulling sonarr ... done
Pulling airsonic ... done
Pulling traefik ... done
Pulling musicbrainz ... done
Pulling plex ... done
Pulling sabnzbd ... done
Pulling headphones ... done
Pulling plexpy ... done
plex is up-to-date
whoami is up-to-date
radarr is up-to-date
tautulli is up-to-date
sabnzbd is up-to-date
sonarr is up-to-date
musicbrainz is up-to-date
headphones is up-to-date
airsonic is up-to-date
traefik is up-to-date
/etc/crontab

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
MAILTO=""
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
12 * * * * root fio-backup.sh >> /var/log/backup/fio-backup.log 2>> /var/log/backup/fio-backup_error.log
00 * * * * root sanoid --cron >> /var/log/backup/sanoid.log 2>> /var/log/backup/sanoid_error.log
55 6 * * * root content-engine-update.sh >> /var/log/content-engine/update.log 2>> /var/log/content-engine/update_error.log
Sample log output from fio-backup.sh (rsync stats):

------------------------
Thu May 17 05:34:54 EDT 2018
------------------------
Number of files: 1231455
Number of files transferred: 42
Total file size: 173.53G bytes
Total transferred file size: 1.50G bytes
Literal data: 1.52G bytes
Matched data: 0 bytes
File list size: 39.47M
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 1.56G
Total bytes received: 288.84K
sent 1.56G bytes received 288.84K bytes 48.07M bytes/sec
total size is 173.53G speedup is 111.06
------------------------
Thu May 17 05:38:08 EDT 2018
------------------------
Number of files: 1232739
Number of files transferred: 1351
Total file size: 172.41G bytes
Total transferred file size: 1.15G bytes
Literal data: 1.15G bytes
Matched data: 0 bytes
File list size: 39.49M
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 1.19G
Total bytes received: 313.76K
sent 1.19G bytes received 313.76K bytes 37.71M bytes/sec
total size is 172.41G speedup is 145.16
fio-backup.sh: /srv to /storage/backup/drew-metal/srv succeeded
/opt/scripts/fio-backup.sh

#!/bin/bash
# fio-backup.sh is a script that is intended to be run by cron every hour. It
# copies the contents of the /srv mount point over to /storage/backup/<hostname>.
# This is because /srv is an LVM RAID 0 volume striped across the two FusionIO
# drives in the expansion slots in the back of the machine, so it has no
# redundancy of its own.
# Get the script name
scriptName="$(basename "$0")"
# Echo out a visual separator and print the time in the output
echo ------------------------
date
echo ------------------------
# Do the same thing for STDERR log too
>&2 echo ------------------------
>&2 date
>&2 echo ------------------------
# Back up to /storage/backup
rsync -a --stats --update --inplace --human-readable --delete --exclude 'var/lib/content-engine/plex/transcode/*' --delete-excluded /srv/* /storage/backup/drew-metal/srv/ && logger -s -t "$scriptName" "/srv to /storage/backup/drew-metal/srv succeeded" 2>&1 || logger -s -t "$scriptName" "/srv to /storage/backup/drew-metal/srv FAILED"
# Put a space at the end to keep log files pretty
echo ""
>&2 echo ""
Sample error log output from fio-backup.sh:

------------------------
Thu May 17 05:34:54 EDT 2018
------------------------
file has vanished: "/srv/var/lib/content-engine/sabnzbd/incomplete/censored/newz[NZB].nfo"
file has vanished: "/srv/var/lib/content-engine/sabnzbd/incomplete/censored/__ADMIN__/SABnzbd_attrib"
file has vanished: "/srv/var/lib/content-engine/sabnzbd/incomplete/censored/__ADMIN__/SABnzbd_nzo_jiwLDm"
rsync warning: some files vanished before they could be transferred (code 24) at main.c(1052) [sender=3.0.9]
fio-backup.sh: /srv to /storage/backup/drew-metal/srv FAILED
Sample log output from sanoid:

------------------------
Thu May 17 05:38:08 EDT 2018
------------------------
INFO: taking snapshots...
taking snapshot storage/backup@autosnap_2018-05-17_05:00:01_hourly
taking snapshot storage/media/emulation@autosnap_2018-05-17_05:00:01_hourly
taking snapshot storage/photos@autosnap_2018-05-17_05:00:01_hourly
taking snapshot storage/media/software@autosnap_2018-05-17_05:00:01_hourly
taking snapshot storage/media@autosnap_2018-05-17_05:00:01_hourly
taking snapshot storage/vdisks@autosnap_2018-05-17_05:00:01_hourly
INFO: cache expired - updating from zfs list.
INFO: pruning snapshots...
INFO: pruning storage/backup@autosnap_2018-05-10_04:00:01_hourly ...
INFO: cache expired - updating from zfs list.
INFO: pruning storage/media/emulation@autosnap_2018-05-15_04:00:01_hourly ...
INFO: cache expired - updating from zfs list.
INFO: pruning storage/photos@autosnap_2018-05-10_04:00:01_hourly ...
INFO: cache expired - updating from zfs list.
INFO: pruning storage/media/software@autosnap_2018-05-15_04:00:01_hourly ...
INFO: cache expired - updating from zfs list.
INFO: pruning storage/media@autosnap_2018-05-15_04:00:01_hourly ...
INFO: cache expired - updating from zfs list.
INFO: pruning storage/vdisks@autosnap_2018-05-10_04:00:01_hourly ...
INFO: cache expired - updating from zfs list.