
My Laptop Backup with restic

Some notes about setting up automatic backups to IBM Cloud Object Storage.

Inspired by an article about making free backups to the cloud and another one about automating the process with systemd, I decided to set up automatic backups over the internet for my mule laptop "lapdog". It's an Arch Linux powered machine I mostly use to kill time, so it's not a big deal if for some reason I screw things up. Something went wrong when I followed that guide, so I decided to retrace all the steps following the restic and AWS documentation instead of those articles, and here is my version of the guide (hope it helps someone).

IBM COS basically lets you create an Amazon S3 compatible bucket with up to 25GB of cloud space to upload your backups for free! 😁 Too small for a full system backup, but enough for day-to-day documents, notes and pictures, and you can still switch to the paid service if you need more.

So, let's create an account with the Lite (aka free) plan and get some work done...

Create the bucket

Click on, you guessed it, the "Create resource" button on the dashboard


And under the "Storage" category select "Object Storage"


Name it, tag it, assign it to a group as you like, but bear in mind you can create just one instance on the Lite plan.

Now we can create the bucket...

...and the service credentials.

Be careful to turn on the switch to enable HMAC credentials, as it is set to off by default.

The last thing to do before we can close the browser tab is to jot down this info:

  • Access key ID
  • Secret access key
  • Storage endpoint (something like s3.eu-de.cloud-object-storage.appdomain.cloud depending on the region you choose)
  • Bucket name (mine is mybackupbucket)

You can find your access key ID and secret access key under Service credentials, expanding the details with the dropdown arrow.

The storage endpoint is under Buckets > Configuration, or by clicking directly on the bucket's "three dots" button in the buckets list.

Repository initialization

Before we can upload our snapshots we have to initialize the repository, setting the password in the process.

👉 The repository string has the form s3:<storage endpoint>/<bucket name>

👉 Do not lose the password!
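👉 restic reads the S3 credentials from the environment, so before running init you will most likely need to export the HMAC keys you jotted down earlier (these are restic's standard S3 environment variables; replace the placeholders with your own values):

$ export AWS_ACCESS_KEY_ID=<your access key ID>
$ export AWS_SECRET_ACCESS_KEY=<your secret access key>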

$ restic -r s3:s3.eu-de.cloud-object-storage.appdomain.cloud/mybackupbucket init
enter password for new backend:
enter password again:
created restic backend 2fg02d1e5 at s3:s3.eu-de.cloud-object-s...

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

Now store that password in a file (creating the ~/.config/restic directory first if it doesn't exist) by running

$ echo myunguessablepassword > ~/.config/restic/secretword.txt

The config file

Now it's time to create a config file for the systemd units to use. Mine is /home/alyssa/.config/restic/backup.conf

👉 RESTIC_REPOSITORY has the form s3:<storage endpoint>/<bucket name>

BACKUP_PATHS="/home/alyssa"
BACKUP_EXCLUDES="--exclude-file /home/alyssa/.config/restic/excludes.txt --exclude-if-present .exclude_from_backup"
RETENTION_DAYS=7
RETENTION_WEEKS=3
RETENTION_MONTHS=6
RETENTION_YEARS=3
AWS_ACCESS_KEY_ID=xxxxxxxxxxx-32digits-xxxxxxxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxx-48digits-xxxxxxxxxxxxxxxxxxxx
RESTIC_REPOSITORY=s3:s3.eu-de.cloud-object-storage.appdomain.cloud/mybackupbucket
RESTIC_PASSWORD_FILE=/home/alyssa/.config/restic/secretword.txt
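Since secretword.txt holds the repository password and backup.conf holds the HMAC credentials, it's a good idea (my extra precaution, not something the original articles mention) to make both files readable only by your user:

$ chmod 600 ~/.config/restic/secretword.txt ~/.config/restic/backup.conf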

It's set to back up my whole home directory with some exceptions: files and paths listed in the excludes.txt file, and folders containing a file named .exclude_from_backup. The latter option lets you exclude a path just by creating an empty file with that name inside it, without editing the exclusion list; useful e.g. for temporary exclusions (see the example below).
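For reference, a minimal excludes.txt could look something like this (these entries are just examples, adapt them to your own home):

/home/alyssa/.cache
/home/alyssa/Downloads
*.tmp

And marking a folder for temporary exclusion is just a matter of (some_big_folder being whatever directory you want to skip)

$ touch /home/alyssa/some_big_folder/.exclude_from_backup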

The forget policy is set to keep the last 7 daily snapshots, the last 3 weekly, the last 6 monthly and the last 3 yearly ones (the most recent snapshots usually satisfy more than one rule, so the counts overlap).

Systemd unit files

This is a trick I found on reddit.

By creating this little systemd unit, you can then "attach" it to the one running the backup and be alerted (via notify-send) in case of failure.

~/.config/systemd/user/systemd-notify@.service

[Unit]
Description=Notify shell about unit failure

[Service]
Type=oneshot
ExecStart=notify-send --urgency=normal '%i failed.' 'See "systemctl --user status %i" and "journalctl --user-unit %i" for details.'
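You can test it manually with any instance name (the name here is arbitrary, it's just to see a notification pop up):

$ systemctl --user daemon-reload
$ systemctl --user start systemd-notify@test.service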

To hook it up, just add the OnFailure option as you can see below (👉 the %n is not a typing error).

~/.config/systemd/user/restic-backup.service

[Unit]
Description=Restic backup service
OnFailure=systemd-notify@%n.service
[Service]
Type=oneshot
ExecStart=restic backup --verbose --verbose --tag auto $BACKUP_EXCLUDES $BACKUP_PATHS
ExecStartPost=restic forget --verbose --tag auto --group-by "paths,tags" --keep-daily $RETENTION_DAYS --keep-weekly $RETENTION_WEEKS --keep-monthly $RETENTION_MONTHS --keep-yearly $RETENTION_YEARS
EnvironmentFile=%h/.config/restic/backup.conf

Snapshots created by this service are tagged auto, to keep them separate from any snapshots created in other ways. The same tag is used by the forget command (run right after the backup) so that it only operates on those snapshots. Now we need to set the service up to run daily.

~/.config/systemd/user/restic-backup.timer

[Unit]
Description=Backup with restic daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target

Running the service

First of all we need to reload systemd

$ systemctl --user daemon-reload

Then we can try to run the service manually

$ systemctl --user start restic-backup

You can wait until the backup finishes, or you can Ctrl+Z and bg the process. In either case you can check what's going on by running journalctl --user-unit restic-backup -f

You can check that everything went right with the snapshots command, which lists what's inside our backup repository, and with the check command, which actually verifies data integrity.
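Note that $RESTIC_REPOSITORY and $RESTIC_PASSWORD_FILE are only defined inside the systemd units, so to use them in your interactive shell you first have to export the variables from backup.conf, for example like this (one of several possible ways):

$ set -a; source ~/.config/restic/backup.conf; set +a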

$ restic -r $RESTIC_REPOSITORY -p $RESTIC_PASSWORD_FILE snapshots
repository 2fg02d1e5 opened successfully, password is correct
ID        Time                 Host        Tags        Paths
----------------------------------------------------------------------------
63g75c64  2020-04-01 08:05:21  lapdog      test        /home/alyssa/projects
5o3s5c12  2020-04-01 09:43:16  lapdog      auto        /home/alyssa
g7f75c64  2020-04-02 21:09:27  lapdog      auto        /home/alyssa
b77b5f21  2020-04-03 00:10:54  lapdog      auto        /home/alyssa
----------------------------------------------------------------------------
4 snapshots
$ restic -r $RESTIC_REPOSITORY -p $RESTIC_PASSWORD_FILE check
using temporary cache in /tmp/restic-check-cache-529265944
repository 78e131ea opened successfully, password is correct
created new cache in /tmp/restic-check-cache-529265944
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

Prune

The forget command deletes some snapshots, but the actual data remains on the backup server, still occupying our precious space. Since the snapshots kept by the retention policy may still need part of that data, freeing space by deleting only the truly unreferenced data is a not-so-fast process, so we run it in a separate service.

~/.config/systemd/user/restic-prune.service

[Unit]
Description=Restic backup service (pruning)
[Service]
Type=oneshot
ExecStart=restic prune
EnvironmentFile=%h/.config/restic/backup.conf
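As with the backup service, you can try it out manually before wiring up the timer:

$ systemctl --user daemon-reload
$ systemctl --user start restic-prune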

To automatically prune our backup repository, let's say weekly, we can create and enable the corresponding timer

~/.config/systemd/user/restic-prune.timer

[Unit]
Description=Weekly prune data from restic repository
[Timer]
OnCalendar=weekly
Persistent=true
[Install]
WantedBy=timers.target
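Finally, don't forget to reload systemd and enable both timers, otherwise nothing will run on schedule:

$ systemctl --user daemon-reload
$ systemctl --user enable --now restic-backup.timer restic-prune.timer
$ systemctl --user list-timers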

And that's all folks! (for now... I still have to figure out how to restore my data 😅)
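In the meantime, judging from the restic documentation, restoring should boil down to something like this (untested on my side, so take it as a sketch): with the variables exported as above, list the snapshots and restore one of them, or latest, to a target directory.

$ restic -r $RESTIC_REPOSITORY -p $RESTIC_PASSWORD_FILE snapshots
$ restic -r $RESTIC_REPOSITORY -p $RESTIC_PASSWORD_FILE restore latest --target /tmp/restore-test

There is also restic mount for browsing snapshots as a FUSE filesystem before deciding what to pull back.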
