Working configuration for NVMe
title Arch Linux
linux /vmlinuz-linux
initrd /intel-ucode.img
initrd /initramfs-linux.img
options nvme_load=YES root=UUID=e535f1e1-13d7-45e2-b0c6-d9c3c186448c rw
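
This is a systemd-boot loader entry; assuming the ESP is mounted at /boot, it would live at something like /boot/loader/entries/arch.conf (the filename here is an assumption) and can be sanity-checked with bootctl:

bootctl list      # show the boot entries systemd-boot found on the ESP
bootctl status    # show which boot manager is installed and where the ESP is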

Install Arch Linux with a ZFS root filesystem

After booting into the ALEZ ISO, I run:

mkdir /tmp/usb && mount /dev/sdf /tmp/usb
cp /tmp/usb/vdev_id.conf /etc/zfs && udevadm trigger
zpool import -l -a
mount -t zfs nand/sys/dirty/root/default /mnt
arch-chroot /mnt /usr/bin/zsh
# Do the work
umount -a
exit
umount /mnt
reboot
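
A quick way to confirm the vdev_id.conf aliases were picked up before importing (the pool name nand is from this log; /dev/disk/by-vdev is created by the ZFS udev rules):

ls -l /dev/disk/by-vdev/   # aliases defined in vdev_id.conf should appear here
zpool status nand          # the pool should list the alias names, not raw devices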

Redoing the boot drive

umount /mnt/boot
dd if=/dev/zero of=/dev/nvme0n1p1
mkfs.fat -F32 /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/boot
echo "##### ***** ##### >> /mnt/etc/fstab
genfstab -U /mnt /etc/fstab
mkinitcpio -p linux
# Reinstall intel-ucode
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=grub
grub-mkconfig -o /boot/grub/grub.cfg

I tried this, but it didn't work:

https://www.funtoo.org/ZFS_Install_Guide

Do I need nvme_load=YES?

Tried zfs=rpool/ROOT/rootfs root=ZFS=rpool/ROOT/rootfs (from a forum post).

Added nvme_core and zfs to the MODULES array in /etc/mkinitcpio.conf.
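
As a sketch, that MODULES change plus the initramfs rebuild would look like this (any modules beyond these two are an assumption):

# /etc/mkinitcpio.conf
MODULES=(nvme_core zfs)

mkinitcpio -p linux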

grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=grub --recheck

grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=grub --recheck
grub-mkconfig -o /boot/grub/grub.cfg

That seems to have gotten farther, but it still cannot find the pools.

Next try

Get the cachefile right

2019-08-19T08:45:30 - I am going to copy over the cachefile every time I mess with it. I am also going to make a copy on the boot drive in case I can only access it there.

  1. Could not install grub with vdev_id.conf and vdevs. It worked without those.
  2. Made sure to copy zpool.cache to /etc/zfs. I also put a copy on the boot partition (sketch below).
  3. Took away some references in the grub config that listed /boot as ZFS, since it's not on ZFS.

... And I forgot to reinstall the kernel and microcode.
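
A sketch of the cachefile round-trip from step 2, assuming the new root is mounted at /mnt and the pool is nand:

zpool set cachefile=/etc/zfs/zpool.cache nand      # regenerate the cachefile
cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache   # copy onto the new root
cp /etc/zfs/zpool.cache /mnt/boot/zpool.cache      # spare copy on the boot partition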

Remove autodetect

An Arch Linux forum post says removing autodetect allowed the poster to boot a ZFS-root system. I'll try it (2019-08-20T14:57:23).
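
A sketch of the HOOKS line with autodetect removed; placing the zfs hook before filesystems follows the Arch wiki, and the rest of the hook list is an assumption:

# /etc/mkinitcpio.conf
HOOKS=(base udev modconf block keyboard zfs filesystems)

mkinitcpio -p linux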

Post on forum

I am making a post on the Arch Linux forum.

They closed it because I used ALEZ and John Ramsden's blog. I guess I could ask for help there... or just try Reddit or the ALEZ community.

I want to install Arch with a ZFS root on a new machine, but I cannot get it to work. I have installed Arch many times and have run it as my main desktop for a decade.

I am using the ALEZ install CD and I have followed the wiki and a guide on John Ramsden's blog trying to get Arch to boot with a ZFS root. I have set all of the ZFS mount points to legacy, hoping to make it easier. I have tried several boot managers, but have gotten the "closest" using grub. On one attempt the keyboard worked when I was dropped to the rootfs prompt, but not this time. I was able to figure out how to make the NVMe drive work and the ZFS modules load, but now I get this, with no keyboard:

:: running early hook [udev]
Starting version 242.84-1-arch
:: running hook [udev]
:: Triggering uevents...
:: running hook [zfs]
ERROR: device 'ZFS=nand/sys/dirty/root/default' not found. Skipping fsck.
cannot open 'nand': no such pool
ZFS: Importing pool nand.
cannot import 'nand': no such pool available
/init: line 51: die: not found
cannot open 'nand/sys/dirty/root/default': dataset does not exist
:: running late hook [zfs] no pools available to import
:: running late hook [usr]
:: running cleanup hook [shutdown]
:: running cleanup hook [udev]
ERROR: Failed to mount the real root device. Bailing out, you are on your own. Good luck.

sh: can't access tty; job control turned off
[rootfs]#

I copy over the "zpool.cache" file each time to make sure it's on the root device. I run "zpool export -a" to properly close out the zpool.

I have made a gist with relevant configuration files and command outputs. These include:

/etc/default/grub
/etc/default/zfs       # I haven't done much with this, but maybe it's useful?
/etc/mkinitcpio.conf
fdisk -l
zfs list
zpool status

Does anyone see a reason this might not be working? What is your best idea for what to change or check?
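
For reference, the legacy mount points mentioned in the post were set roughly like this (the dataset name is the one from this log):

zfs set mountpoint=legacy nand/sys/dirty/root/default
mount -t zfs nand/sys/dirty/root/default /mnt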

Posting on reddit

I want to install Arch with a ZFS root on a new machine, but I cannot get it to work.

I am using the ALEZ install image so that I have ZFS available on the install media. I have followed the Arch Linux wiki and a guide on John Ramsden's blog trying to get Arch to boot with a ZFS root. I have set all of the ZFS mount points to legacy, hoping to make it easier. I have also tried rEFInd and systemd-boot as boot managers, but have gotten the "closest" using grub. On one attempt with grub the keyboard worked when I was dropped to the rootfs prompt, but not this time. I was able to figure out how to make the NVMe drive work and the ZFS modules load, but now I get this, with no keyboard:

:: running early hook [udev]
Starting version 242.84-1-arch
:: running hook [udev]
:: Triggering uevents...
:: running hook [zfs]
ERROR: device 'ZFS=nand/sys/dirty/root/default' not found. Skipping fsck.
cannot open 'nand': no such pool
ZFS: Importing pool nand.
cannot import 'nand': no such pool available
/init: line 51: die: not found
cannot open 'nand/sys/dirty/root/default': dataset does not exist
:: running late hook [zfs] no pools available to import
:: running late hook [usr]
:: running cleanup hook [shutdown]
:: running cleanup hook [udev]
ERROR: Failed to mount the real root device. Bailing out, you are on your own. Good luck.

sh: can't access tty; job control turned off
[rootfs]#

I copy over the zpool.cache file each time to make sure it's on the root device. I run zpool export -a to properly close out the zpool.

I have made a gist with relevant configuration files and command outputs. These include:

/etc/default/grub
/etc/default/zfs       # I haven't done much with this, but maybe it's useful?
/etc/mkinitcpio.conf
fdisk -l
zfs list
zpool status

Does anyone see a reason this might not be working? What is your best idea for what to change or check?

I posted this.

Custom archiso

I have built a custom archiso and it works with ZFS.
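
A rough sketch of how such an image can be built with a recent archiso; the archzfs repository URL and package name are assumptions based on the archzfs project:

# start from the official releng profile
cp -r /usr/share/archiso/configs/releng ~/archlive

# add the archzfs repository to ~/archlive/pacman.conf:
#   [archzfs]
#   Server = https://archzfs.com/$repo/$arch

# add the ZFS packages to the package list
echo archzfs-linux >> ~/archlive/packages.x86_64

# build the ISO
mkarchiso -v -w /tmp/archiso-work -o ~/archlive/out ~/archlive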

I want to:

  • consider my disk usage
  • consider Ramsden's blog and incorporate the VirtualBox storage locations
  • design a file system
  • get it rebooting as soon as possible to see if it works

Storage design

I want to have encrypted data sets. The options on creation will be:

For HDDs:

ashift=12
atime=off
compression=lz4
xattr=sa

For SSDs:

ashift=13
atime=off
compression=lz4
xattr=sa
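
As a sketch of how those options map onto pool creation, using the pool names nand and rust from the layout below (the device paths are hypothetical placeholders):

# SSD pool (mirrored partitions), ashift=13
zpool create -o ashift=13 -O atime=off -O compression=lz4 -O xattr=sa \
    nand mirror /dev/disk/by-id/ssd-A-part2 /dev/disk/by-id/ssd-B-part2

# HDD pool, ashift=12
zpool create -o ashift=12 -O atime=off -O compression=lz4 -O xattr=sa \
    rust /dev/disk/by-id/hdd-A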

I will have the following ZFS paths:

nand/root/default
rust

For nand, it will be two mirrored partitions, one on each of the SSDs. On the free space left on the larger SSD, I will have (see the sketch after this list):

  • 2 GB ZIL
  • 25% of the drive left alone
  • Remainder as L2ARC
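
Adding the ZIL (SLOG) and L2ARC from that leftover space would look roughly like this (the partition paths are placeholders):

zpool add nand log   /dev/disk/by-id/ssd-B-part3   # ~2 GB SLOG
zpool add nand cache /dev/disk/by-id/ssd-B-part4   # remaining space as L2ARC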

Here are the file system paths that I want on rust (dataset-creation sketch below):

/var/cache
/var/lib/libvirt
/var/lib/machines
/var/lib/docker
/home
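
A sketch of creating those datasets; -p creates the intermediate rust/var and rust/var/lib parents:

zfs create -p -o mountpoint=/var/cache        rust/var/cache
zfs create -p -o mountpoint=/var/lib/libvirt  rust/var/lib/libvirt
zfs create -p -o mountpoint=/var/lib/machines rust/var/lib/machines
zfs create -p -o mountpoint=/var/lib/docker   rust/var/lib/docker
zfs create    -o mountpoint=/home             rust/home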

Posted on Arch Forums again

I made a post on the Arch Forums. It seems I am back to where the NVMe is not working, but I don't know how to get past it. I have hit my specified time limit, so after lunch I am going to switch to making an ext4 root.

NVME ideas

I am stuck with the NVMe not loading.

I am going to try a few things:

  • ahcpi and nvme_load=yes
  • add module vmd
  • I also enabled VMD in the BIOS. I had disabled it for testing. I am hopeful this might be related.

2020-01-03T08:50:21: I forgot to run mkinitcpio... I will probably have to redo this.
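
Concretely, the pieces of that combination appear in the config shown elsewhere in this gist: vmd in the MODULES array, nvme_load=YES on the kernel command line, then a rebuild of the initramfs:

# /etc/mkinitcpio.conf (full file at the bottom of this gist)
MODULES=(vmd)

# boot entry options line (see the working entry at the top of this gist)
#   options nvme_load=YES root=UUID=... rw

mkinitcpio -p linux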

THIS WORKED

Holy shit, what a saga. I'm going to update my recent post. Maybe this will help someone in posterity.

Install log

I plan to install ZFS and get my extra folders mounting. I have decided to forgo ZFS root. I will make necessary backups to my encrypted ZFS partition and reap the benefits of the performance on the flash drive operating by itself. I am less concerned about data loss there as I will put everything important on the ZFS system.

ZFS in the end

I have these set up on ZFS:

Filesystem             Size  Used Avail Use% Mounted on
dev                     31G     0   31G   0% /dev
run                     31G  1.4M   31G   1% /run
/dev/nvme0n1p2         938G  5.3G  885G   1% /
tmpfs                   31G  215M   31G   1% /dev/shm
tmpfs                   31G     0   31G   0% /sys/fs/cgroup
tmpfs                   31G  244K   31G   1% /tmp
/dev/nvme0n1p1         511M   48M  464M  10% /boot
rust/home              3.5T  288G  3.2T   9% /home
rust/var/lib/docker    3.2T  256K  3.2T   1% /var/lib/docker
rust/var/lib/libvirt   3.2T  384K  3.2T   1% /var/lib/libvirt
rust/pacman-cache      3.2T  1.2G  3.2T   1% /var/lib/pacman-cache
rust/var/lib/machines  3.2T  256K  3.2T   1% /var/lib/machines
tmpfs                  6.2G   12K  6.2G   1% /run/user/16431

I boot into command-line mode by default. There I load the ZFS keys with sudo zfs load-key -a. Then I mount those folders with sudo zfs mount -a. I can then exit and log back in to reset everything. I start awesome with startx.
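
As commands, that post-boot sequence is:

sudo zfs load-key -a   # enter the passphrase(s) for the encrypted datasets
sudo zfs mount -a      # mount rust/home, rust/var/lib/*, and the rest
exit                   # log out and back in so the fresh /home is picked up
startx                 # start awesome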

X

SSH keys

I use GNOME Keyring as my SSH agent, which gives me functionality similar to macOS: the passwords for the keys are remembered across reboots. I can add keys with:

/usr/lib/seahorse/ssh-askpass my_key

I used PAM and xinitrc to get this working:

PAM method

Start the gnome-keyring-daemon from /etc/pam.d/login:

Add auth optional pam_gnome_keyring.so at the end of the auth section and session optional pam_gnome_keyring.so auto_start at the end of the session section.

/etc/pam.d/login

#%PAM-1.0

auth       required     pam_securetty.so
auth       requisite    pam_nologin.so
auth       include      system-local-login
auth       optional     pam_gnome_keyring.so
account    include      system-local-login
session    include      system-local-login
session    optional     pam_gnome_keyring.so auto_start

To use automatic unlocking, the user account and the keyring have to be set to the same password. You will still need the code in ~/.xinitrc below to export the required environment variables.

xinitrc method

Start the gnome-keyring-daemon from xinitrc:

~/.xinitrc

eval $(/usr/bin/gnome-keyring-daemon --start --components=pkcs11,secrets,ssh)
export SSH_AUTH_SOCK
/etc/mkinitcpio.conf

# vim:set ft=sh
# MODULES
# The following modules are loaded before any boot hooks are
# run. Advanced users may wish to specify all system modules
# in this array. For instance:
# MODULES=(piix ide_disk reiserfs)
MODULES=(vmd)
# BINARIES
# This setting includes any additional binaries a given user may
# wish into the CPIO image. This is run last, so it may be used to
# override the actual binaries included by a given hook
# BINARIES are dependency parsed, so you may safely ignore libraries
BINARIES=()
# FILES
# This setting is similar to BINARIES above, however, files are added
# as-is and are not parsed in any way. This is useful for config files.
FILES=()
# HOOKS
# This is the most important setting in this file. The HOOKS control the
# modules and scripts added to the image, and what happens at boot time.
# Order is important, and it is recommended that you do not change the
# order in which HOOKS are added. Run 'mkinitcpio -H <hook name>' for
# help on a given hook.
# 'base' is _required_ unless you know precisely what you are doing.
# 'udev' is _required_ in order to automatically load modules
# 'filesystems' is _required_ unless you specify your fs modules in MODULES
# Examples:
## This setup specifies all modules in the MODULES setting above.
## No raid, lvm2, or encrypted root is needed.
# HOOKS=(base)
#
## This setup will autodetect all modules for your system and should
## work as a sane default
# HOOKS=(base udev autodetect block filesystems)
#
## This setup will generate a 'full' image which supports most systems.
## No autodetection is done.
# HOOKS=(base udev block filesystems)
#
## This setup assembles a pata mdadm array with an encrypted root FS.
## Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
# HOOKS=(base udev block mdadm encrypt filesystems)
#
## This setup loads an lvm2 volume group on a usb device.
# HOOKS=(base udev block lvm2 filesystems)
#
## NOTE: If you have /usr on a separate partition, you MUST include the
# usr, fsck and shutdown hooks.
HOOKS=(base udev autodetect modconf block filesystems keyboard fsck)
# COMPRESSION
# Use this to compress the initramfs image. By default, gzip compression
# is used. Use 'cat' to create an uncompressed image.
#COMPRESSION="gzip"
#COMPRESSION="bzip2"
#COMPRESSION="lzma"
#COMPRESSION="xz"
#COMPRESSION="lzop"
#COMPRESSION="lz4"
# COMPRESSION_OPTIONS
# Additional options for the compressor
#COMPRESSION_OPTIONS=()
@l15k4 commented Feb 17, 2021:

Hi @bjcubsfan, would you please add more info about how to do it? Having a single NVMe SSD and booting from a USB ISO? I tried hitting e at the boot menu and supplying options="nvme_load=YES", but that did not have any effect. How do I actually edit these files, and where and when? I just made sure that UEFI is used for booting, as the legacy BIOS does not detect NVMe.

@bjcubsfan (Author) commented:

I added my install log from the time. I don't remember a lot, but I at least wrote down some things... I would focus mostly on the end of the saga. The formula for me was:

  • ahcpi and nvme_load=yes
  • add module vmd
  • I also enabled VMD in the BIOS. I had disabled it for testing. I am hopeful this might be related.

I hope this helps!
