ZFS home encryption Ubuntu 22.10

I started with a basic Ubuntu 22.10 installation, choosing ZFS as the volume manager in the installer.
I wanted to encrypt my home folder.

I followed the article (and comments, including Christoph Hagemann's) from:
https://talldanestale.dk/2020/04/06/zfs-and-homedir-encryption/

To achieve:

  • Home directory (a ZFS rpool mount) is encrypted
  • You are only prompted for password if you are trying to login to that user
    • So PC can boot fine to login screen without intervention
  • Password prompt authenticates you as the user and decrypts the home folder's ZFS dataset
  • SSH users get the same experience as physical users
    • You can power on the PC, then SSH in
  • Once the home dataset is unlocked: subsequent SSH logins can use key exchange instead of a password
  • Once all sessions log out: the home dataset is unmounted and its key unloaded (encrypted at rest) again

Birch-san commented Jan 28, 2023

Every time there's a gedit in here: see the comments below for the contents I put into the files

zfs list -r rpool
sudo zfs set mountpoint=/home/birch_nonenc rpool/USERDATA/birch_mkvtd1
sudo zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o keylocation=prompt -o mountpoint=/home/birch rpool/USERDATA/birch_enc
VAL=$(zfs get com.ubuntu.zsys:bootfs-datasets rpool/USERDATA/birch_mkvtd1 -H -ovalue)
sudo zfs set com.ubuntu.zsys:bootfs-datasets=$VAL rpool/USERDATA/birch_enc
sudo chown birch:birch /home/birch
sudo -u birch rsync -ar /home/birch_nonenc/ /home/birch/
sudo gedit /usr/local/sbin/mount-zfs-homedir
sudo chmod +x /usr/local/sbin/mount-zfs-homedir
sudo gedit /etc/pam.d/common-auth
sudo zfs set canmount=noauto rpool/USERDATA/birch_enc
sudo zfs set dk.talldanestale.automount:user=birch rpool/USERDATA/birch_enc

# extra bits from Christoph Hagemann's comments, to manage mounting/unmounting via systemd
sudo zfs set org.openzfs.systemd:ignore=on rpool/USERDATA/birch_enc
sudo mkdir -p /etc/systemd/system/user@.service.d
sudo gedit /etc/systemd/system/user@.service.d/mount_zfs.conf
sudo gedit /etc/systemd/system/user-zfs-mount@.service
sudo gedit /usr/local/sbin/mount-zfs-homedir2
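
The contents of /etc/pam.d/common-auth (edited above) aren't reproduced in the comments below; based on the linked talldanestale.dk article, the line appended there is typically something like the following. Treat this as a sketch rather than a copy of my file; the expose_authtok option is what lets mount-zfs-homedir read the passphrase on stdin:

# appended to /etc/pam.d/common-auth (sketch based on the linked article)
auth optional pam_exec.so expose_authtok /usr/local/sbin/mount-zfs-homedir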


Birch-san commented Jan 28, 2023

/usr/local/sbin/mount-zfs-homedir:

#!/bin/bash
# https://talldanestale.dk/2020/04/06/zfs-and-homedir-encryption/

# don't set -eux because it'll log the password
set -eu

PASS=$(cat -)

# List all zfs volumes, listing the *local* value of the property canmount.
zfs get canmount -s local -H -o name,value | while read volname canmount; do
    # Filter on canmount == 'noauto'. Filesystems marked 'noauto' can be mounted,
    # but are not mounted automatically during boot.
    [[ $canmount = 'noauto' ]] || continue

    # Filter on user property dk.talldanestale.automount:user. It should match
    # the user that we are logging in as ($PAM_USER)
    user=$(zfs get dk.talldanestale.automount:user -s local -H -o value $volname)
    [[ $user = $PAM_USER ]] || continue

    # Unlock and mount the volume
    # can't use /dev/stdin approach in a loop because we can only read from stdin once
    # zfs load-key "$volname" < /dev/stdin || continue
    zfs load-key "$volname" <<< "$PASS" || continue
    # we are using systemd for mounting/unmounting instead
    # zfs mount "$volname" || true # ignore errors
done
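
If you want to sanity-check this script without doing a full login, you can invoke it by hand the same way pam_exec would (a sketch: PAM_USER is the variable pam_exec sets for us, and the passphrase arrives on stdin, so mind your shell history):

# hypothetical manual test; adjust the dataset name to yours
echo 'your-passphrase' | sudo env PAM_USER=birch /usr/local/sbin/mount-zfs-homedir
# confirm the key got loaded
zfs get keystatus rpool/USERDATA/birch_enc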

/etc/systemd/system/user@.service.d/mount_zfs.conf:

# https://talldanestale.dk/2020/04/06/zfs-and-homedir-encryption/
[Unit]
Requires=user-zfs-mount@%i.service

/etc/systemd/system/user-zfs-mount@.service:

# https://talldanestale.dk/2020/04/06/zfs-and-homedir-encryption/
[Unit]
Description=User ZFS mount /home/ for UID %i
After=dbus.service
StopWhenUnneeded=yes
IgnoreOnIsolate=yes

[Service]
ExecStart=/usr/local/sbin/mount-zfs-homedir2 start %i
ExecStop=/usr/local/sbin/mount-zfs-homedir2 stop %i
Type=oneshot
RemainAfterExit=yes
Slice=user-%i.slice
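
After creating or editing these unit files, remember to make systemd re-read them (standard systemd housekeeping, not something specific to this setup):

sudo systemctl daemon-reload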


Birch-san commented Jan 28, 2023

/usr/local/sbin/mount-zfs-homedir2:

#!/bin/bash
# https://talldanestale.dk/2020/04/06/zfs-and-homedir-encryption/


set -e

# called from systemd via /etc/systemd/system/user-zfs-mount@.service
# to mount/unmount
# we get: $1 – start/stop, $2 – UID

# get username from UID passed to us by systemd
USERNAME=$(id -nu "$2")

zfs get canmount -s local -H -o name,value | while read volname canmount; do
	user=$(zfs get dk.talldanestale.automount:user -s local -H -o value "$volname")
	[[ $user = $USERNAME ]] || continue

	case $1 in
	start)
		# skip if already mounted
		MOUNTPOINT="$(zfs get mountpoint "$volname" -o value -H)"
		findmnt "$MOUNTPOINT" && continue

		# Mount homedir of user we are logging in as ($USERNAME)
		zfs mount "$volname"
		;;

	stop)
		zfs umount "$volname"
		zfs unload-key "$volname"
		;;
	esac
done
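
To check that the per-user mount unit actually ran after logging in, you can query the template instance by UID (a sketch; 1000 is assumed to be birch's UID, check yours with id -u):

systemctl status user-zfs-mount@1000.service
# and confirm the home dataset is mounted
zfs get mounted rpool/USERDATA/birch_enc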

SSH worked without any changes. I installed sshd, and the defaults were fine: external clients were able to log in via password (which decrypts the home dataset if it hasn't already been decrypted), or via key exchange if and only if the dataset has already been decrypted.
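
For reference, my understanding of why the defaults work: password logins go through PAM (so the pam_exec hook fires and unlocks the dataset), while pubkey logins skip the PAM auth stack, which is why key exchange only works once the dataset is already unlocked. The relevant Ubuntu sshd defaults are roughly these (an assumption for illustration, not copied from my machine):

# relevant defaults in /etc/ssh/sshd_config
UsePAM yes
PasswordAuthentication yes
PubkeyAuthentication yes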


Birch-san commented Jul 27, 2023

Adding new disks

We will not attempt to add the disk to an existing pool; we'll keep things simple by giving the drive its own new pool. That way, if any drive is removed, we know it has no impact on the other pools.

# discover what disks are inserted
sudo fdisk -l | less

My results were initially:

Disk /dev/nvme1n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: CT4000P3PSSD8                           
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/nvme0n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: CT4000P3PSSD8                           
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D6B03CE8-EDFA-43A6-B9F5-904B0497D7D9

Device           Start        End    Sectors  Size Type
/dev/nvme0n1p1    2048    1050623    1048576  512M EFI System
/dev/nvme0n1p2 1050624    5244927    4194304    2G Linux swap
/dev/nvme0n1p3 5244928    9439231    4194304    2G Solaris boot
/dev/nvme0n1p4 9439232 7814035455 7804596224  3.6T Solaris root


Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: CT1000MX500SSD1 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 79319A2D-1389-40FA-AB1E-964CD8D91996

Device          Start        End    Sectors   Size Type
/dev/sda1          34     262177     262144   128M Microsoft reserved
/dev/sda2      262178     466977     204800   100M EFI System
/dev/sda3      466978 1952438264 1951971287 930.8G Microsoft basic data
/dev/sda4  1952438272 1953523711    1085440   530M Windows recovery environment

Disk /dev/sdb: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: Samsung SSD 870 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Where:

  • /dev/nvme1n1: new 4TB NVMe SSD, unformatted
  • /dev/nvme0n1: main boot device (4TB), already has ZFS set up as per above instructions
  • /dev/sda: previous 1TB Windows boot device, SATA SSD
  • /dev/sdb: new 8TB SATA SSD, unformatted

I then formatted the new /dev/nvme1n1 like so:

# formats /dev/nvme1n1
# creates a pool named nvme1
# mounts it at /nvme1
# you could use `-m /usr/share/pool` if you wished to mount it elsewhere
# https://ubuntu.com/tutorials/setup-zfs-storage-pool#3-creating-a-zfs-pool
# unencrypted:
#   sudo zpool create nvme1 /dev/nvme1n1
# encrypted (https://timor.site/2021/11/creating-fully-encrypted-zfs-pool/):
sudo zpool create -o feature@encryption=enabled -O encryption=aes-256-gcm -O keyformat=passphrase -O keylocation=prompt nvme1 /dev/nvme1n1
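
To confirm the pool really was created encrypted (a quick sanity check, not part of the original steps):

zfs get encryption,keyformat,keystatus nvme1
# expect encryption=aes-256-gcm, keyformat=passphrase, keystatus=available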

sudo fdisk -l now describes it as:

Disk /dev/nvme1n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: CT4000P3PSSD8                           
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9BDFC66B-CC16-3C4F-90FA-F049F2FC038C

Device              Start        End    Sectors  Size Type
/dev/nvme1n1p1       2048 7814019071 7814017024  3.6T Solaris /usr & Apple ZFS
/dev/nvme1n1p9 7814019072 7814035455      16384    8M Solaris reserved 1

Our new pool is not listed in zfs list -r rpool, because it is a separate pool rather than a dataset under rpool. It appears in zfs list, though:

birch@tree-diagram:/ $ zfs list
NAME                                               USED  AVAIL     REFER  MOUNTPOINT
bpool                                              327M  1.43G       96K  /boot
bpool/BOOT                                         326M  1.43G       96K  none
bpool/BOOT/ubuntu_5n8xhb                           325M  1.43G      325M  /boot
nvme1                                              212K  3.51T       98K  /nvme1
rpool                                             1.93T  1.58T       96K  /
rpool/ROOT                                        35.2G  1.58T       96K  none
rpool/ROOT/ubuntu_5n8xhb                          35.2G  1.58T     11.2G  /
rpool/ROOT/ubuntu_5n8xhb/srv                        96K  1.58T       96K  /srv
rpool/ROOT/ubuntu_5n8xhb/usr                      13.6G  1.58T       96K  /usr
rpool/ROOT/ubuntu_5n8xhb/usr/local                13.6G  1.58T     13.6G  /usr/local
rpool/ROOT/ubuntu_5n8xhb/var                      10.3G  1.58T       96K  /var
rpool/ROOT/ubuntu_5n8xhb/var/games                  96K  1.58T       96K  /var/games
rpool/ROOT/ubuntu_5n8xhb/var/lib                  9.29G  1.58T     9.14G  /var/lib
rpool/ROOT/ubuntu_5n8xhb/var/lib/AccountsService   112K  1.58T      112K  /var/lib/AccountsService
rpool/ROOT/ubuntu_5n8xhb/var/lib/NetworkManager    160K  1.58T      160K  /var/lib/NetworkManager
rpool/ROOT/ubuntu_5n8xhb/var/lib/apt              93.6M  1.58T     93.6M  /var/lib/apt
rpool/ROOT/ubuntu_5n8xhb/var/lib/dpkg             58.5M  1.58T     58.5M  /var/lib/dpkg
rpool/ROOT/ubuntu_5n8xhb/var/log                  1.02G  1.58T     1.02G  /var/log
rpool/ROOT/ubuntu_5n8xhb/var/mail                   96K  1.58T       96K  /var/mail
rpool/ROOT/ubuntu_5n8xhb/var/snap                 2.50M  1.58T     2.50M  /var/snap
rpool/ROOT/ubuntu_5n8xhb/var/spool                 120K  1.58T      120K  /var/spool
rpool/ROOT/ubuntu_5n8xhb/var/www                    96K  1.58T       96K  /var/www
rpool/USERDATA                                    1.89T  1.58T       96K  /
rpool/USERDATA/birch_enc                          1.89T  1.58T     1.89T  /home/birch
rpool/USERDATA/birch_mkvtd1                       1.32G  1.58T     1.32G  /home/birch_nonenc
rpool/USERDATA/root_mkvtd1                        3.87M  1.58T     3.87M  /root

I also changed ownership of /nvme1 to birch:birch:

sudo chown `whoami`:`whoami` /nvme1

At the time of creation: the pool is already unlocked and mounted.

But after a reboot: you'll need to unlock it again.

# this will prompt you for your password
sudo zfs load-key nvme1
sudo zfs mount nvme1

You can avoid this by setting it up for compatibility with our automount script:

sudo zfs set canmount=noauto nvme1
sudo zfs set dk.talldanestale.automount:user=birch nvme1
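
With those two properties set, the same login hook and user-zfs-mount@ unit that handle the home dataset will load the key and mount nvme1 too, assuming nvme1's passphrase matches your login password (the PAM hook pipes the login password into zfs load-key for every matching dataset). You can double-check the properties took effect:

zfs get -s local canmount,dk.talldanestale.automount:user nvme1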

If you are X-forwarded remotely into the computer, and wish to use a graphical editor to edit text with root privileges (sudo gedit): you can copy your X authority cookie to the root user:

sudo xauth add $(xauth -f ~/.Xauthority list|tail -1)
sudo gedit /usr/local/sbin/mount-zfs-homedir2

If you encounter "Can't open display": that probably means two users can't share the same DISPLAY. Try closing the SSH session, opening a new SSH session, then doing the sudo xauth add and sudo gedit straight away (i.e. let root be the first user to use that DISPLAY).


Birch-san commented Dec 12, 2023

so your zpool disappeared? after a reboot for example.
this happened to me with nvme1.

doesn't show up in zfs list or zpool list:

zfs list
zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool  1.88G   262M  1.62G        -         -     0%    13%  1.00x    ONLINE  -
rpool  3.62T  2.03T  1.59T        -         -     3%    56%  1.00x    ONLINE  -
sdb    7.27T   380G  6.89T        -         -     0%     5%  1.00x    ONLINE  -
# where is nvme1

but still shows up in fdisk?

sudo fdisk -l
Disk /dev/nvme1n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: CT4000P3PSSD8                           
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D6B03CE8-EDFA-43A6-B9F5-904B0497D7D9

Device           Start        End    Sectors  Size Type
/dev/nvme1n1p1    2048    1050623    1048576  512M EFI System
/dev/nvme1n1p2 1050624    5244927    4194304    2G Linux swap
/dev/nvme1n1p3 5244928    9439231    4194304    2G Solaris boot
/dev/nvme1n1p4 9439232 7814035455 7804596224  3.6T Solaris root

then try:

cd /dev/disk/by-id

looks like it's still in /dev/disk/by-id:

birch@tree-diagram:/dev/disk/by-id $ ls | grep nvme
nvme-CT4000P3PSSD8_2242E67C5015
nvme-CT4000P3PSSD8_2242E67C5015_1
nvme-CT4000P3PSSD8_2242E67C5015_1-part1
nvme-CT4000P3PSSD8_2242E67C5015_1-part2
nvme-CT4000P3PSSD8_2242E67C5015_1-part3
nvme-CT4000P3PSSD8_2242E67C5015_1-part4
nvme-CT4000P3PSSD8_2242E67C5015-part1
nvme-CT4000P3PSSD8_2242E67C5015-part2
nvme-CT4000P3PSSD8_2242E67C5015-part3
nvme-CT4000P3PSSD8_2242E67C5015-part4
nvme-CT4000P3PSSD8_2325E6E60AC2
nvme-CT4000P3PSSD8_2325E6E60AC2_1
nvme-CT4000P3PSSD8_2325E6E60AC2_1-part1
nvme-CT4000P3PSSD8_2325E6E60AC2_1-part9
nvme-CT4000P3PSSD8_2325E6E60AC2-part1
nvme-CT4000P3PSSD8_2325E6E60AC2-part9
nvme-nvme.c0a9-323234324536374335303135-43543430303050335053534438-00000001
nvme-nvme.c0a9-323234324536374335303135-43543430303050335053534438-00000001-part1
nvme-nvme.c0a9-323234324536374335303135-43543430303050335053534438-00000001-part2
nvme-nvme.c0a9-323234324536374335303135-43543430303050335053534438-00000001-part3
nvme-nvme.c0a9-323234324536374335303135-43543430303050335053534438-00000001-part4
nvme-nvme.c0a9-323332354536453630414332-43543430303050335053534438-00000001
nvme-nvme.c0a9-323332354536453630414332-43543430303050335053534438-00000001-part1
nvme-nvme.c0a9-323332354536453630414332-43543430303050335053534438-00000001-part9

zpool import (with no arguments) lists what's importable:

birch@tree-diagram:/dev/disk/by-id $ sudo zpool import
   pool: nvme1
     id: 5687133204337010723
  state: ONLINE
status: Some supported features are not enabled on the pool.
    (Note that they may be intentionally disabled if the
    'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
    some features will not be available without an explicit 'zpool upgrade'.
 config:

    nvme1                              ONLINE
      nvme-CT4000P3PSSD8_2325E6E60AC2  ONLINE

okay, let's import it:

zpool import -a

that worked:

zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool  1.88G   262M  1.62G        -         -     0%    13%  1.00x    ONLINE  -
nvme1  3.62T  3.23T   405G        -         -     7%    89%  1.00x    ONLINE  -
rpool  3.62T  2.03T  1.59T        -         -     3%    56%  1.00x    ONLINE  -
sdb    7.27T   380G  6.89T        -         -     0%     5%  1.00x    ONLINE  -

start a scrub (which checks for, and repairs, data errors), for good measure:

sudo zpool scrub nvme1

check zpool status:

zpool status nvme1
  pool: nvme1
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
	The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
  scan: scrub in progress since Tue Dec 12 20:05:13 2023
	1.36T / 3.23T scanned at 34.7G/s, 0B / 3.23T issued
	0B repaired, 0.00% done, no estimated completion time
config:

	NAME                               STATE     READ WRITE CKSUM
	nvme1                              ONLINE       0     0     0
	  nvme-CT4000P3PSSD8_2325E6E60AC2  ONLINE       0     0     0

errors: No known data errors

okay, decrypt & mount it the usual way:

# do we need to chown the mount again?
# sudo chown `whoami`:`whoami` /nvme1
sudo zfs load-key nvme1
sudo zfs mount nvme1
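
If you want the pool to come back by itself after future reboots, one thing that may help (my assumption, not something I've verified in this setup) is making sure it's recorded in the zpool cachefile that zfs-import-cache.service reads at boot:

# assumption: /etc/zfs/zpool.cache is the default cachefile location on Ubuntu
sudo zpool set cachefile=/etc/zfs/zpool.cache nvme1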
