@yvesh
Last active March 12, 2024 04:52
Proxmox 6.1 ZFS native full disk (ZFS root) encryption.

A simple guide for full-disk encryption with Proxmox and ZFS native encryption.

Install normally using the installer; after the setup, reboot into recovery mode (from the USB stick). Make sure to install in UEFI mode (you need systemd-boot).

If the USB stick is not working for you because of the old kernel version (2.6.x), you can also use an Ubuntu 19.10 / 20.04 boot stick; ZFS support is enabled there out of the box.

Steps:

# Import the old pool
zpool import -f rpool

# Make a snapshot of the current one
zfs snapshot -r rpool/ROOT@copy

# Send the snapshot to a temporary root
zfs send -R rpool/ROOT@copy | zfs receive rpool/copyroot

# Destroy the old unencrypted root
zfs destroy -r rpool/ROOT

# Create a new zfs root, with encryption turned on
# Or specify the cipher explicitly, e.g. -o encryption=aes-256-gcm (vs. aes-256-ccm)
zfs create -o encryption=on -o keyformat=passphrase rpool/ROOT

# Copy the files from the copy to the new encrypted zfs root
zfs send -R rpool/copyroot/pve-1@copy | zfs receive -o encryption=on rpool/ROOT/pve-1

# Set the Mountpoint
zfs set mountpoint=/ rpool/ROOT/pve-1

# Delete the old unencrypted copy
zfs destroy -r rpool/copyroot

# Export the pool again, so you can boot from it
zpool export rpool

You can turn compression and other ZFS features back on afterwards if you want.
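
For example, re-enabling compression on the new root dataset could look like this (a hedged example; adjust the dataset name and algorithm to your setup):

# Re-enable compression on the encrypted root (affects newly written data only)
zfs set compression=lz4 rpool/ROOT/pve-1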

Helpful commands:

# List all datasets and their mountpoints
zfs list

# Check which datasets are encrypted
zfs get encryption

# Load keys and mount everything
zfs mount -l -a

# Show pools, capacity and health (zpool status shows the devices)
zpool list

Original steps from Yakuraku (Proxmox forum). Thanks to @nschemel for suggesting to delete the copy.

@stepurin

stepurin commented Jun 9, 2021

I did everything according to the instructions, but I ran into the following problem: when the system starts, after unlocking the disk, it hangs on time synchronization, which stalls the boot... and I can't do anything about it.

@aurrak

aurrak commented Jul 2, 2021

I had to move the mountpoint of the dataset "rpool/copyroot/pve-1" to /rpool/copyroot/pve-1 to get it working. Without this there were two datasets with the same mountpoint "/" and the system didn't boot.
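
That would be something like the following (a hedged sketch using the dataset names from this guide):

# Move the temporary copy's mountpoint so it no longer conflicts with /
zfs set mountpoint=/rpool/copyroot/pve-1 rpool/copyroot/pve-1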

I can confirm @nschemel's addition about changing the mountpoint is needed.

I believe the above issue could be avoided if we initially import the pool with the -N or -R (or both) flag.

-N means don't mount the pool
-R means import the pool with an alternate root mountpoint

So we can do the following instead:

# Import the old 
zpool import -f -NR /tmp rpool

Source: openzfs/zfs#5192 (comment)
Source2: https://docs.oracle.com/cd/E19253-01/819-5461/gbcgl/index.html

@Yyoglmaster

Thanks for these wonderful instructions! I used them for PVE 7. To mount the ZFS pool I had to use Ubuntu 21.04; 20.04 ships an older ZFS version and is not able to mount it. To export the pool I had to delete the snapshot. Now it's working!
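
Presumably that means the leftover snapshot on the received root dataset, e.g. (hedged, dataset names per this guide):

# Remove the snapshot left over from the send/receive
zfs destroy rpool/ROOT/pve-1@copy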

@andi448

andi448 commented Jul 29, 2021

@stepurin: I had the same problem with network time synchronization running into timeouts on v6.4, but I've tried the latest v7.0 and it works smoothly.


ghost commented Oct 1, 2021

Can anyone confirm if this works well with UEFI boot + RAID1 (dual disk) configurations?

@Yyoglmaster

Can anyone confirm if this works well with UEFI boot + RAID1 (dual disk) configurations?

I‘m using it in this configuration. So, yes, it works!


ghost commented Oct 1, 2021

Sweet! How does it handle the whole boot process? In the past when we did not enjoy the comforts of ZFS support in grub and co, we had to sync the boot partitions for all drives in the RAID1 set, and it was still hideous to admin.

Any specific steps for RAID1?

@Yyoglmaster

As I remember, I did the ZFS setup during the install process and afterwards encrypted the pool as described here. The boot partition itself is not encrypted, btw. I think you have to do that by hand, and as far as I can remember there are some resync steps necessary.
I hope this answers your question…
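
On current Proxmox releases the ESPs of a ZFS RAID1 install are kept in sync with proxmox-boot-tool; a hedged sketch of what those resync steps typically look like:

# List the registered ESPs and whether they are in sync
proxmox-boot-tool status
# Re-copy kernels and bootloader configuration to all registered ESPs
proxmox-boot-tool refresh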


ghost commented Oct 3, 2021

Could you or the original author update the instructions? I replicated these in a guest and, like you mentioned, it is necessary to remove the copy and the snapshot. My boot process has slowed down significantly though, and this is a current-gen Xeon system with quite a bit of power.
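
For reference, the cleanup being referred to would be something like this (hedged, names per the steps above):

# Remove the temporary copy and the snapshot left over from the migration
zfs destroy -r rpool/copyroot
zfs destroy rpool/ROOT/pve-1@copy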

@bung69

bung69 commented Nov 23, 2021

For anyone wondering how to get into a Proxmox 7 recovery environment: after some trial and error with the recovery mode (which just booted up my Proxmox 7 install), boot the installer USB, then press Alt+F4 to exit the installer and right-click to get a terminal emulator.

@mr44er

mr44er commented Jul 15, 2022

I wasn't comfortable with the rescue mode or a normal Ubuntu version, so I did everything with NomadBSD. In short, it is like a bootable Knoppix, but with FreeBSD: https://www.nomadbsd.org | https://github.com/nomadbsd/NomadBSD

First, write the image to a thumb drive (I strongly suggest a USB 3 device, USB 2 is slow), then boot from it. The wizard will ask a few things (which software you prefer, username, password, etc.), and as a bonus you can set up an SSH server from there to log in remotely and copy-paste the following. I changed it a little bit; it is now a no-brainer ;)

zpool import -f -NR /tmp rpool && zpool status
zfs snapshot -r rpool/ROOT@copy
zfs send -R rpool/ROOT@copy | zfs receive rpool/copyroot
zfs destroy -r rpool/ROOT
zfs create -o encryption=on -o keyformat=passphrase rpool/ROOT
zfs send -R rpool/copyroot/pve-1@copy | zfs receive -o encryption=on rpool/ROOT/pve-1
zfs destroy -r rpool/copyroot
zfs set mountpoint=/ rpool/ROOT/pve-1
zpool export rpool
*reboot*

Bonus2:
If your Proxmox installation resides on a trim-capable device you can enable autotrim and check on it after some time:

zpool set autotrim=on rpool
zpool iostat -r rpool

Setting a larger recordsize and zstd compression does not hurt either:

zfs set recordsize=1M compression=zstd-3 rpool

You can go from zstd-1 to zstd-19, but zstd-3 is the default and at this point already compresses better than lz4. zstd-19 is unnecessary here and will slow down even big multicore CPUs and your VMs. Some background: https://openzfs.org/w/images/b/b3/03-OpenZFS_2017_-_ZStandard_in_ZFS.pdf

Bonus3:
If you have additional disks/pools for vm-storage or just want another password, try this:

zfs create -o encryption=on -o keyformat=passphrase rpool/vmdatasetcrypta01
zfs set atime=off primarycache=metadata rpool/vmdatasetcrypta01 #common settings for vm-datasets
pvesm add zfspool vmdatasetcrypta01 -pool rpool/vmdatasetcrypta01 #the storage will be visible in the GUI within seconds

The dataset is unlocked at this point, after reboot you can unlock with:
zfs mount -l rpool/vmdatasetcrypta01

In the GUI you can then safely delete unencrypted storages.

@uplight-dev

uplight-dev commented Feb 28, 2023

Using Proxmox 7.3 and following this guide to install it on a Hetzner server with ZFS encryption enabled.
The whole setup works fine and login to Proxmox is fast, until I encrypt the ZFS root partition.

After typing zfs_unlock and waiting for the system to boot fully, login takes 25+ seconds to complete because the systemd-logind service fails to start.
Any ideas why this is or how to fix it?

# systemctl status systemd-logind.service
● systemd-logind.service - User Login Management
     Loaded: loaded (/lib/systemd/system/systemd-logind.service; static)
     Active: failed (Result: exit-code) since Mon 2023-02-27 21:12:52 CET; 1min 43s ago
       Docs: man:sd-login(3)
             man:systemd-logind.service(8)
             man:logind.conf(5)
             man:org.freedesktop.login1(5)
    Process: 1578 ExecStart=/lib/systemd/systemd-logind (code=exited, status=1/FAILURE)
   Main PID: 1578 (code=exited, status=1/FAILURE)
        CPU: 26ms

Feb 27 21:12:52 vmbox systemd[1]: systemd-logind.service: Scheduled restart job, restart counter is at 5.
Feb 27 21:12:52 vmbox systemd[1]: Stopped User Login Management.
Feb 27 21:12:52 vmbox systemd[1]: systemd-logind.service: Start request repeated too quickly.
Feb 27 21:12:52 vmbox systemd[1]: systemd-logind.service: Failed with result 'exit-code'.
Feb 27 21:12:52 vmbox systemd[1]: Failed to start User Login Management.
# journalctl _PID=1578
-- Journal begins at Mon 2023-02-27 17:56:12 CET, ends at Mon 2023-02-27 21:15:36 CET. --
Feb 27 21:12:52 vmbox systemd-logind[1578]: Failed to connect to system bus: No such file or directory
Feb 27 21:12:52 vmbox systemd-logind[1578]: Failed to fully start up daemon: No such file or directory

# systemctl status dbus

● dbus.service - D-Bus System Message Bus
     Loaded: loaded (/lib/systemd/system/dbus.service; static)
     Active: active (running) since Mon 2023-02-27 21:12:35 CET; 8h ago
TriggeredBy: ● dbus.socket
       Docs: man:dbus-daemon(1)
   Main PID: 981 (dbus-daemon)
      Tasks: 1 (limit: 76835)
     Memory: 1.2M
        CPU: 11ms
     CGroup: /system.slice/dbus.service
             └─981 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only

Feb 27 21:12:52 vmbox dbus-daemon[981]: [system] Activating via systemd: service name='org.freedesktop.login1' unit='dbus-org.freedesktop.login1.service' requested by ':1.1' (uid=0 pid=1514 comm="sshd: root [priv]    " label="unconfined")
Feb 27 21:13:17 vmbox dbus-daemon[981]: [system] Failed to activate service 'org.freedesktop.login1': timed out (service_start_timeout=25000ms)

Also, when using dropbear-initramfs there's a cryptsetup error; not sure if there's any impact:

# apt install dropbear-initramfs
# update-initramfs -u

...
cryptsetup: ERROR: Couldn't resolve device rpool/ROOT/pve-1
cryptsetup: WARNING: Couldn't determine root device
...

@remus-selea

Can anyone confirm if this is the correct output after following this process?

NAME              PROPERTY    VALUE        SOURCE
rpool             encryption  off          default
rpool/ROOT        encryption  aes-256-gcm  -
rpool/ROOT/pve-1  encryption  aes-256-gcm  -
rpool/data        encryption  off          default

Why do we not encrypt rpool/data as well?

@yvesh
Author

yvesh commented Mar 12, 2023

Why do we not encrypt rpool/data as well?

Hi, I am encrypting rpool/data too. I just didn't add it to the gist to keep it short. It's basically the same commands.
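
For reference, a hedged sketch of what that could look like on a fresh install where rpool/data is still empty (done from the recovery environment with the pool imported; if rpool/data already holds VM datasets, use the same snapshot/send/receive steps as for rpool/ROOT above):

# Recreate the (empty) data dataset with encryption enabled
zfs destroy -r rpool/data
zfs create -o encryption=on -o keyformat=passphrase rpool/data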

@uplight-dev

Has anyone encountered issues with login being slow after enabling encryption? Perhaps (some of) the errors pasted above?

@TrulsZK

TrulsZK commented Jul 16, 2023

Did some testing during my reinstalls/upgrade to Proxmox VE 8.0 and managed to reproduce the issues reported here (like services failing to start on boot) after enabling encryption.

The issue seems to be related to not properly destroying the temporary root ZFS filesystem and snapshot. Make sure to do so when encrypting the rpool/ROOT dataset.

zfs destroy -r rpool/copyroot

If you already have the server up and running, just press e in the systemd-boot bootloader, remove the entire argument, and press enter (or use recovery mode from the USB stick), then run the following:

# Load the kernel module (when using systemd-boot)
modprobe zfs

# Import the old pool
zpool import -f rpool

# Delete the old unencrypted copy
zfs destroy -r rpool/copyroot

# Export the pool again, so you can boot from it
zpool export rpool

@tomaszkiewicz

Does it work for 8.1? I've tried the whole procedure (from the Proxmox ISO, as I had trouble getting into the initrd shell), both with secure boot enabled (thus using GRUB) and with it disabled, but every time it hangs just after loading the initrd...

(screenshot: boot hangs right after loading the initrd)

@mr44er

mr44er commented Feb 8, 2024

I've been running this config since my post in 2022; all nodes are on 8.1.4 now. Some with UEFI, some older ones without UEFI.

@tomaszkiewicz

OK, I managed to resolve it. If someone else experiences this, you probably need the "simplefb" module to be added to the initramfs: https://forum.proxmox.com/threads/native-full-disk-encryption-with-zfs.140170/#post-628782
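
A hedged sketch of how adding that module typically looks on a Debian-based system such as Proxmox (assuming initramfs-tools is used):

# Add simplefb to the initramfs module list and rebuild the initramfs
echo simplefb >> /etc/initramfs-tools/modules
update-initramfs -u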
