ARCH - Snapper

This describes the installation and setup of snapper (CLI) on Arch Linux.

1. Setup SSH

echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
systemctl enable dhcpcd@<eth interface> && systemctl start dhcpcd@<eth interface> && systemctl enable sshd && systemctl start sshd
pacman -Syy && pacman -S git
git clone https://gist.github.com/138bdbeddce61d5266cd.git && less 138bdbeddce61d5266cd/README.md

Enable your public key

mkdir -p $HOME/.ssh && echo "<your public key>" >> $HOME/.ssh/authorized_keys
systemctl enable sshd && systemctl start sshd

2. Set environment

Install sudo and allow users in the %wheel group to use it:

pacman -S sudo
sed -i 's/# %wheel ALL=(ALL) ALL/%wheel ALL=(ALL) ALL/g' /etc/sudoers
passwd

Set the console and X11 keyboard layouts via localectl:

localectl set-keymap de && localectl set-x11-keymap pc105 de

Enable the systemd services for BTRFS:

Use btrfs-scrub@-.timer for / and btrfs-scrub@home.timer for /home.

systemctl enable btrfs-scrub@-.timer && systemctl start btrfs-scrub@-.timer
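
And the same for /home:

systemctl enable btrfs-scrub@home.timer && systemctl start btrfs-scrub@home.timer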

3. Add a User

groupadd -g <GROUP_ID> <NAME_OF_GROUP>
useradd -m -g <NAME_OF_GROUP> -G users,wheel,storage,power,network,audio -c "<FULL USERNAME>" <USERNAME>
passwd <USERNAME>

4. Install pacaur

su <USERNAME>
cd ~/
curl -O https://aur.archlinux.org/cgit/aur.git/snapshot/cower.tar.gz && tar zxvf cower.tar.gz

We need to import the PGP key for the cower package:

cd cower && gpg --keyserver pgp.mit.edu --recv-keys F56C0C53 && makepkg -si && cd ~/

Install pacaur

curl -O https://aur.archlinux.org/cgit/aur.git/snapshot/pacaur.tar.gz && tar zxvf pacaur.tar.gz
cd pacaur && makepkg -si && cd ~/
pacaur -S snapper grub-btrfs-git
curl -O https://aur.archlinux.org/cgit/aur.git/snapshot/package-query.tar.gz && tar zxvf package-query.tar.gz
cd package-query && makepkg -si && cd ~/
curl -O https://aur.archlinux.org/cgit/aur.git/snapshot/pacupg.tar.gz && tar -zxvf pacupg.tar.gz
cd pacupg && makepkg -si  && cd ~/

Clean up:

rm *.tar.gz && rm -rf package-query pacupg pacaur cower

Return to ROOT:

exit

6. Configure snapper

Create subvolumes for each snapper target:

mount /dev/mapper/crypt /mnt && cd /mnt
mkdir -pv __snapshot && btrfs subvolume create __snapshot/rootvol && btrfs subvolume create __snapshot/home && btrfs subvolume create __snapshot/opt && btrfs subvolume create __snapshot/var

Check your work and remember the subvol IDs:

btrfs subvolume list .

6.1 Edit /etc/fstab

Get your UUIDs from mountpoints already present or run ls -l /dev/disk/by-uuid. BTRFS subvolume IDs can be found with "btrfs subvolume list <path/to/mount>".
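
For example, with the BTRFS volume still mounted at /mnt:

ls -l /dev/disk/by-uuid
btrfs subvolume list /mnt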

Add this for a presumed standard set of subvolumes (rootvol, home, opt and var) residing within a dm-crypt BTRFS volume named crypt on an SSD:

cat >>/etc/fstab <<EOL
# /dev/mapper/crypt LABEL=<PART_LABEL>
UUID=<PART_UUID>       /.snapshots            btrfs           rw,nodev,noatime,compress=lzo,ssd,discard,space_cache,subvolid=<SUBVOL_ID_ROOT>,subvol=/__snapshot/rootvol,subvol=__snapshot/rootvol       0 0
# /dev/mapper/crypt LABEL=<PART_LABEL>
UUID=<PART_UUID>       /home/.snapshots        btrfs           rw,nodev,noatime,compress=lzo,ssd,discard,space_cache,subvolid=<SUBVOL_ID_HOME>,subvol=/__snapshot/home,subvol=__snapshot/home       0 0
# /dev/mapper/crypt LABEL=<PART_LABEL>
UUID=<PART_UUID>       /opt/.snapshots        btrfs           rw,nodev,noatime,compress=lzo,ssd,discard,space_cache,subvolid=<SUBVOL_ID_OPT>,subvol=/__snapshot/opt,subvol=__snapshot/opt       0 0
# /dev/mapper/crypt LABEL=<PART_LABEL>
UUID=<PART_UUID>       /var/.snapshots        btrfs           rw,nodev,noatime,compress=lzo,ssd,discard,space_cache,subvolid=<SUBVOL_ID_VAR>,subvol=/__snapshot/var,subvol=__snapshot/var       0 0
EOL

For BTRFS filesystems on regular HDDs, use these mount options instead:

cat >>/etc/fstab <<EOL
# /dev/mapper/crypt LABEL=<PART_LABEL>
UUID=<PART_UUID>       /<PATH_TO_MOUNT>/.snapshot        btrfs           rw,nosuid,nodev,relatime,compress-force=lzo,space_cache,autodefrag,subvolid=<SUBVOL_ID>,subvol=/<PATH_TO_SUBVOL>,subvol=<PATH_TO_SUBVOL>    0 0
EOL

Check your work:

vim /etc/fstab

Do not mount your new mountpoints yet. Snapper wants to create the directory and an initial snapshot by itself. If the respective */.snapshots directories already exist, snapper will refuse to create the initial config files.

6.2 Create snapper configs for /, /home, /opt, /var

snapper -c root create-config / && snapper -c home create-config /home && snapper -c opt create-config /opt && snapper -c var create-config /var

Delete the automatically created */.snapshots subvolumes and prepare mountpoints for our previously created __snapshot subvolumes:

btrfs subvolume delete /.snapshots && btrfs subvolume delete /home/.snapshots && btrfs subvolume delete /opt/.snapshots && btrfs subvolume delete /var/.snapshots
mkdir -p /.snapshots && mkdir -p /home/.snapshots && mkdir -p /opt/.snapshots && mkdir -p /var/.snapshots
chmod 750 /.snapshots && chmod 750 /home/.snapshots && chmod 750 /opt/.snapshots && chmod 750 /var/.snapshots
chown :users /home/.snapshots

Mount all */.snapshots mounts in /etc/fstab and check your work:

mount -a && df -hT

Edit snapper configuration files:

vim /etc/snapper/configs/*

Refer to https://wiki.archlinux.org/index.php/Snapper#Create_a_new_configuration

Also: https://en.opensuse.org/openSUSE:Snapper_Tutorial
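
For example, to keep the number of timeline snapshots in check, you might set something like the following in /etc/snapper/configs/root (the variable names are snapper's, the values are purely illustrative):

TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="5"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="2"
TIMELINE_LIMIT_YEARLY="0"

For the home config you might additionally set ALLOW_GROUPS="users", matching the chown :users /home/.snapshots above, so members of that group can list their snapshots.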

Enable automatic snapshots:

systemctl enable snapper-timeline.timer && systemctl start snapper-timeline.timer
systemctl enable snapper-cleanup.timer && systemctl start snapper-cleanup.timer

6.3 Controlling snapper and your snapshots

You may now check your configurations and snapshots, and also manually create a custom snapshot:

# The <CONFIG>s we created are: root, home, opt and var.
snapper -c <CONFIG> list        # Show available snapshots for the respective config.
snapper list-configs            # Show all available snapper configurations.
snapper -c <CONFIG> create -d "<DESCRIPTION>" # Create a snapshot with a description for the respective config.
snapper -c <CONFIG> delete <ID> # Delete the snapshot with <ID> for the respective config.

If you want to see which directories and/or files have been created, modified or deleted between two snapshots, run:

snapper -c <CONFIG> status -o <OUTPUT_FILE> <ID_1>..<ID_2>

To compare the content of files and directories that have been created, modified or deleted do:

snapper -c <CONFIG> diff <ID_1>..<ID_2>

6.4 Caveats

Snapshots of root filesystems

Keeping a high number of snapshots of a busy filesystem like / can cause serious slowdowns on your system. You may want to consider creating separate subvolumes for directories that don't benefit much from snapshots, like /var/cache/pacman/pkg and /var/abs.
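
A minimal sketch of excluding the pacman package cache, assuming it is currently a plain directory on the root subvolume (adapt the paths to your layout):

mv /var/cache/pacman/pkg /var/cache/pacman/pkg.old
btrfs subvolume create /var/cache/pacman/pkg
mv /var/cache/pacman/pkg.old/* /var/cache/pacman/pkg/ && rmdir /var/cache/pacman/pkg.old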

Updatedb

By default, updatedb will index all .snapshots directories used by snapper, which may cause serious slowdown and excessive memory usage if you have many snapshots. You may prevent updatedb from indexing those directories by adding the following to /etc/updatedb.conf:

PRUNENAMES = ".snapshots"

7. Using pacupg and grub-btrfs

Instead of running pacman -Syu you should use pacupg. This automatically wraps the upcoming upgrade inside two snapshots (NOTE: packages are downloaded before the snapshot is made!). When grub-btrfs is installed, pacupg will also add a boot entry for the snapshots created.

pacupg

pacupg -a uses pacaur to check for updated AUR packages.

To recover or delete snapshots with pacupg do:

pacupg -r

Alternatively you can use snp, which does the same but in a more manual fashion. For example, you may use snp to wrap the installation of a new package inside two snapshots; as far as I know, pacupg does not provide this.
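
A minimal sketch, assuming snp simply takes the command it should wrap between a pre and post snapshot as its arguments:

snp pacman -S <PACKAGE>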

8. Recover root system from a snapshot

Boot from an Arch live CD and open the LUKS container:

cryptsetup luksAddKey /dev/sdXY /path/to/key
cryptsetup luksOpen /dev/sdXY crypt -d /path/to/key

Mount the BTRFS root volume:

mount /dev/mapper/crypt /mnt

Find the correct snapshot you want to recover:

vi /mnt/__snapshot/<SUBVOLUME>/*/info.xml

In these info.xml files you will find the <DESCRIPTION>, <DATE> and <NUM>. Remember the <NUM>, as it is the <ID> needed later to restore the snapshot.

In vi you can use :n and :N to navigate between opened files, :w to save, and :q to quit.
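
If you prefer a quick overview instead of paging through each file, something like this should work (the lowercase element names are assumed from snapper's info.xml format):

grep -H -e "<num>" -e "<date>" -e "<description>" /mnt/__snapshot/<SUBVOLUME>/*/info.xml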

Next, either delete the old root subvolume or move it aside:

btrfs subvolume delete /mnt/__active/system/rootvol

mv /mnt/__active/system/rootvol /mnt/__active/system/broken_rootvol

And create a read/write snapshot of the read-only snapshot from snapper:

btrfs subvolume snapshot /mnt/__snapshot/rootvol/<ID>/snapshot /mnt/__active/system/rootvol

Unmount your BTRFS volume and reboot.

umount /mnt
reboot

9. Manipulate snapshots

If you want to manipulate a snapshot, take a look at snapperS. This Python script extends snapper's toolset with the ability to delete or modify single files in a snapshot.
