Borg Backup Script
#!/bin/bash
# vim: ft=bash
#
# Initialize the repository first:  borg init -enone `hostname -f`

# Print all arguments shell-quoted on a single line.
STDOUT() { printf '%q' "$1"; [ $# -gt 1 ] && printf ' %q' "${@:2}"; printf '\n'; }
# Same, but to stderr, preserving the caller's exit status.
STDERR() { local e=$?; STDOUT "$@" >&2; return $e; }
# Print an error message and abort.
OOPS() { STDERR OOPS: "$@"; exit 23; }
# Run a command and log it together with its exit status.
x() { "$@"; STDERR exec $?: "$@"; }
# Run a command, aborting if it fails.
o() { x "$@" || OOPS fail $?: "$@"; }
# Capture a command's output in the variable named by $1 (via nameref).
v() { local -n __var__="$1"; __var__="$("${@:2}")"; }
# Same, but abort if the command fails.
ov() { v "$@" || OOPS fail $?: set "$1" from "${@:2}"; }

ov DIR dirname -- "$0"
o cd "$DIR"
ov HOST hostname -f
[ -d "$HOST" ] && [ -f "$HOST/README" ] && [ -d "$HOST/data" ] || OOPS please create "$HOST" first
o printf -v NAME '%s::backup-%(%Y%m%d-%H%M%S)T' "$HOST" -1

# Collect the mountpoint of each real (device-backed) filesystem.
LIST=()
declare -A FS
while read -ru6 src dst rest
do
        case "$src" in
        (/dev/loop*)    continue;;      # ignore snap loop devices
        (/*)            ;;              # only use real mounts
        (*)             continue;;      # ignore synthetic filesystems
        esac
        # Record each source device only once (bind mounts repeat it)
        [ -n "${FS["$src"]}" ] && continue
        FS["$src"]="$dst"
        LIST+=("$dst")
done 6</proc/mounts

o borg create --stats --progress --one-file-system "$NAME" "${LIST[@]}"
hilbix commented May 15, 2023

Here is how I use it:

  • Normally, my machines have local ZFS under /zfs/
    • usually mirrored for auto-healing and speedup
zfs create zfs/backup
cd /zfs/backup
wget https://gist.github.com/hilbix/16a94776ef687a092f4c338e76aa1069/raw/82b8b824502626217fa39d9f2c14a744b13a54e2/backup.sh
mv backup.sh .x
chmod +x ./.x
borg init -enone `hostname -f`

Then to back up, I just run /zfs/backup/.x

  • This backs up all local non-ZFS filesystems to my mirrored ZFS pool.
  • And the ZFS pool itself is automatically (and incrementally) backed up by my central ZFS backup server.

Easy and convenient.
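To inspect the resulting archives, the standard borg commands can simply be pointed at the per-host repository (the path here just follows the layout above):

borg list "/zfs/backup/$(hostname -f)"     # list the backup-YYYYmmdd-HHMMSS archives
borg info "/zfs/backup/$(hostname -f)"     # show repository statistics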

Please note that I do not use BTRFS because I have read too many horror stories about what can go wrong. The suggested solution is always "call a BTRFS expert".

In contrast, in over 10 years of using ZFS on Linux with LUKS, mirrors, RAID, and encrypted ZFS volumes, there was not a single issue that ZFS was unable to automatically(!) repair. And I had many issues (over 100 drives in use). ZFS even survived a catastrophic half-NULing of a drive. Backing up ZFS via zfs send | zfs receive is already fully automated on my side: all I need to do is create a snapshot, which is then automatically picked up by the ZFS backup server (well, I had to script this myself, but it works).
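That automation is custom scripting and not shown here, but the underlying mechanism is just the standard ZFS snapshot-plus-send idea. A minimal sketch, with made-up dataset, pool, and host names:

SNAP="zfs/backup@$(date +%Y%m%d-%H%M%S)"
zfs snapshot -r "$SNAP"                    # create the snapshot the backup server picks up
# the transfer itself is then roughly (incremental against the previous snapshot):
zfs send -R -i zfs/backup@previous "$SNAP" | ssh backupserver zfs receive -F tank/mirror/backup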

When I read about BTRFS, the same things are always said:

  • Sudden power loss can make BTRFS unmountable.
    • You then need an expert to revive it. This is an absolute no-go.
    • In contrast, ZFS always mounts (via zpool import).
    • I never needed to hint zpool import to go back in time or to find its structures.
    • Perhaps I am just lucky, but I often have sudden reboots and unexpected power losses.
  • Do not use RAID with BTRFS
    • Mirror is probably OK
    • But I have many RAID configurations with ZFS. Never had any issue with this.
    • And without RAID, it is completely useless to me.
  • Do not use Snapshots
    • WTF?
    • Yes, I read this again and again and can hardly believe it is true!
    • But I do not want to have to find out for myself.
  • Never fill a BTRFS to 100%
    • I never had problems filling ZFS up to 100%.
    • ZFS becomes inefficient (due to COW!), but that's all.
    • And it gets back to normal when you remove files or expand the ZFS devices.

Note that I use ZFS on top of LVM and often LUKS.
This way I can use LVM to expand ZFS devices.
This works fully online if done properly.
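For illustration, growing a pool this way might look roughly like the following; the volume group, LV, and pool names are made-up placeholders, and with LUKS in between, a cryptsetup resize step is needed as well:

lvextend -L +100G vg0/zfsdata           # grow the LV backing the ZFS vdev
zpool online -e tank /dev/vg0/zfsdata   # let ZFS expand into the new space
# or set it once and forget: zpool set autoexpand=on tank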

But:

ZFS is not good for people who are not careful enough,
because 90% of all guides out there are plain bullshit, highly dangerous, or lead you into some dead end.
So before you follow what others tell you, test it first on a scratch system!

Read:

  • If you use ZFS, you will be able to use it successfully.
  • If you want to change a running system, first train on this problem and you will solve it successfully.
  • If you encounter some problem, take an LVM snapshot, then let ZFS repair the issue automatically (which may take weeks on a bigger FS!), as sketched after this list.
  • Usually you can then remove the snapshot, as I never had a situation where I needed to roll back.
    • Perhaps there might be problems I am not aware of
    • Even a borderline use of ZFS for over 10 years never revealed any such issue
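A rough sketch of that snapshot-then-repair routine, again with placeholder names:

lvcreate -s -n zfsdata-snap -L 50G vg0/zfsdata   # safety snapshot of the LV under the pool
zpool scrub tank                                 # let ZFS find and self-heal the damage
zpool status tank                                # check progress; this can run for weeks
lvremove vg0/zfsdata-snap                        # drop the snapshot once the pool is healthy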

In contrast to BTRFS (not my experience but what I read):

  • BTRFS may run into strange problems just from normal use:
    • Filled up
    • Sudden power loss
    • Common disk errata (missed writes, USB drives disconnecting, etc.)
  • If you encounter some problem, even with an LVM snapshot, you are usually still doomed.
    • "Call an expert" because the tools are not ready to use yet.
    • No way, sorry

None of these should be problems, as ZFS manages to overcome them all. The worst I ever had to do to repair anything was to reboot.

That BTRFS is said to need fewer system resources is a nice-to-have, but not enough to make me even test it. If BTRFS is ever reported to be as stable (repair-wise, even for weird problems) as ZFS, I will start to consider it.

Compare:

  • I switched to ext3 after it became more stable than ext2.
  • I switched to ext4 after it became as stable as ext3 but offered additional interesting options.
  • If BTRFS starts to become as stable as ext4, then I probably will switch to BTRFS.
    • As it offers even more interesting options like self-healing.
    • But first, it must be as stable as ext4. This includes being able to mount it in nearly every situation, even bad ones.
    • And AFAICS, BTRFS is not yet as stable as ext4.
