
Install TrueNAS SCALE on a partition instead of the full disk

The TrueNAS installer doesn't offer a way to use anything less than the full device. This usually wastes resources when installing to a modern NVMe drive, which is typically several hundred GB in size, while TrueNAS SCALE only uses a few GB for its system files. Installing to a 16GB partition is therefore plenty and leaves the rest of the disk available for other uses.

Unfortunately, this is only possible by using an intermediate device as the installation disk and later moving the data to the NVMe. Below I have documented the steps I took to get TrueNAS SCALE running from a mirrored 16GB partition on NVMe disks.

For an easier initial partitioning, please see this comment and the discussion that follows; it should remove the need for a USB stick as an intermediate medium.

  1. Install TrueNAS SCALE on a USB drive, preferably 16GB in size. If you use a 32GB stick you must create a 32GB partition on the NVMe, wasting space that could be used for VMs and Docker/k8s applications.

  2. Boot and enter a Linux shell as root, for example by enabling the SSH service and logging in with the root password.
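
     After enabling the SSH service (and allowing root password login) you can connect from another machine. The address below is a placeholder; use your system's actual IP:

      $ ssh root@192.168.1.50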

  3. Check available devices

     $ parted
     (parted) print devices
     /dev/sdb (15.4GB)  # boot device
     /dev/nvme0n1 (500GB)
     /dev/nvme1n1 (512GB)
     (parted) quit
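
     Alternatively, lsblk gives a quick overview of all block devices and their sizes:

      $ lsblk -o NAME,SIZE,MODEL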
    

If you only have one NVMe disk, just ignore the instructions that involve the second disk (nvme1n1). The second disk is used to create a ZFS mirror that can survive a disk failure.

  4. Clone the boot device to the other devices

     $ cat /dev/sdb > /dev/nvme0n1
     $ cat /dev/sdb > /dev/nvme1n1
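
     If you prefer progress reporting while cloning, dd does the same job (status=progress is supported by the GNU dd that ships with Debian-based systems such as TrueNAS SCALE):

      $ dd if=/dev/sdb of=/dev/nvme0n1 bs=4M status=progress
      $ dd if=/dev/sdb of=/dev/nvme1n1 bs=4M status=progress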
    
  5. Check the partition layout. Fix all the GPT space warnings that show up; the clone placed the backup GPT header where the small USB stick ended, so parted offers to move it to the actual end of the disk.

     $ parted -l
     [...]
     Warning: Not all of the space available to /dev/nvme0n1 appears to be used, you can fix the GPT to use all of the
     space (an extra 946741296 blocks) or continue with the current setting?
     Fix/Ignore? f
     [...]
     Model:  USB  SanDisk 3.2Gen1 (scsi)
     Disk /dev/sdb: 15.4GB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start   End     Size    File system  Name  Flags
      1      20.5kB  1069kB  1049kB                     bios_grub
      2      1069kB  538MB   537MB   fat32              boot, esp
      3      538MB   15.4GB  14.8GB  zfs
     [...]
    

     The other disks' partition tables should look identical to this.
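
     If sgdisk (from the gdisk package) happens to be available on your system, the GPT fix can also be applied non-interactively by relocating the backup header to the end of each disk:

      $ sgdisk -e /dev/nvme0n1
      $ sgdisk -e /dev/nvme1n1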

  6. Remove the ZFS partition from the new devices, number 3 in this case. This is the boot-pool partition and we will recreate it later. We remove it because ZFS would otherwise recognize leftover metadata and think the partition is part of the pool while it is not.

     $ parted /dev/nvme0n1 rm
     Partition number? 3
     Information: You may need to update /etc/fstab.
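
     Repeat the removal for the second disk; parted's script mode (-s) makes this a one-liner:

      $ parted -s /dev/nvme1n1 rm 3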
    
  7. Recreate the boot-pool partition as a 16GiB partition with a slightly later start than before, making sure the start value is divisible by 2048 for good alignment (526336 % 2048 = 0, which puts the start on a 2MiB boundary; parted can verify this, as shown after the printout below). Moving the start also makes sure that ZFS doesn't find any metadata from the old partition.

    Start with the smaller disk if they are not identical.

     $ parted
     (parted) unit kiB
     (parted) select /dev/nvme0n1
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start    End        Size       File system  Name  Flags
      1      20.0kiB  1044kiB    1024kiB                       bios_grub
      2      1044kiB  525332kiB  524288kiB  fat32              boot, esp
    
     (parted) mkpart boot-pool 526336kiB 17303552kiB
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start      End          Size         File system  Name       Flags
      1      20.0kiB    1044kiB      1024kiB                              bios_grub
      2      1044kiB    525332kiB    524288kiB    fat32                   boot, esp
      3      526336kiB  17303552kiB  16777216kiB               boot-pool
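
     While still in the same parted session, you can confirm the alignment of the new partition:

      (parted) align-check optimal 3
      3 aligned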
    
  8. Now you can create a partition allocating the rest of the disk.

     (parted) mkpart pool 17303552kiB 100%
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start        End           Size          File system  Name       Flags
      1      20.0kiB      1044kiB       1024kiB                               bios_grub
      2      1044kiB      525332kiB     524288kiB     fat32                   boot, esp
      3      526336kiB    17303552kiB   16777216kiB                boot-pool
      4      17303552kiB  488386560kiB  471083008kiB               pool
    
  9. Do the same for the next device, but this time use the same values as in the printout above, so that the partitions end up exactly the same size. In this example the disks differ slightly in size, so using 100% on the second disk would create a partition larger than the one we just created on the smaller disk. A quick way to verify the sizes match is shown after the printout.

     (parted) select /dev/nvme1n1
     Using /dev/nvme1n1
     (parted) mkpart boot-pool 526336kiB 17303552kiB
     (parted) mkpart pool 17303552kiB 488386560kiB
     (parted) print
     Model: TS512GMTE220S (nvme)
     Disk /dev/nvme1n1: 500107608kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start        End           Size          File system  Name       Flags
      1      20.0kiB      1044kiB       1024kiB                               bios_grub
      2      1044kiB      525332kiB     524288kiB     fat32                   boot, esp
      3      526336kiB    17303552kiB   16777216kiB                boot-pool
      4      17303552kiB  488386560kiB  471083008kiB               pool
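
     To verify that the data partitions really are identical in size, compare them byte for byte with lsblk:

      $ lsblk -b -o NAME,SIZE /dev/nvme0n1p4 /dev/nvme1n1p4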
    
  10. Make the new system partitions part of the boot-pool. This is done by attaching the first one to the existing pool, detaching the USB drive, and then attaching the second one to form a mirror.

     $ zpool attach boot-pool sdb3 nvme0n1p3
    

    Wait for resilvering to complete, check progress with

     $ zpool status
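
     To follow the resilver live, you can wrap the same command in watch (refreshes every five seconds):

      $ watch -n 5 zpool status boot-pool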
    

    When resilvering is complete we can detach the USB device.

     $ zpool offline boot-pool sdb3
     $ zpool detach boot-pool sdb3
    

    Finally add the last drive to create a mirror of the boot-pool.

     $ zpool attach boot-pool nvme0n1p3 nvme1n1p3
     $ zpool status
     pool: boot-pool
     state: ONLINE
     scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
     config:
    
              NAME           STATE     READ WRITE CKSUM
              boot-pool      ONLINE       0     0     0
                mirror-0     ONLINE       0     0     0
                  nvme0n1p3  ONLINE       0     0     0
                  nvme1n1p3  ONLINE       0     0     0
    

     At this point you can remove the USB device; when the machine is rebooted it will start up from the NVMe devices instead, since the clone in step 4 copied the bios_grub and EFI partitions along with the partition table. Check the BIOS boot order if it doesn't.

  11. Now that the boot-pool is mirrored, we want to create a mirrored data pool using the remaining partitions.

     $ zpool create pool1 mirror nvme0n1p4 nvme1n1p4
     $ zpool status
     pool: boot-pool
     state: ONLINE
     scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
     config:
    
              NAME           STATE     READ WRITE CKSUM
              boot-pool      ONLINE       0     0     0
                mirror-0     ONLINE       0     0     0
                  nvme0n1p3  ONLINE       0     0     0
                  nvme1n1p3  ONLINE       0     0     0
    
     pool: pool1
     state: ONLINE
     config:
    
              NAME           STATE     READ WRITE CKSUM
              pool1          ONLINE       0     0     0
                mirror-0     ONLINE       0     0     0
                  nvme0n1p4  ONLINE       0     0     0
                  nvme1n1p4  ONLINE       0     0     0
    

     To be able to import the pool in the Web UI we first need to export it.

     $ zpool export pool1
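
     Note that zpool create chooses the sector size automatically. If you'd rather force 4KiB sectors explicitly (a common choice for NVMe drives, but check your drive's actual sector size first), the pool could instead have been created with an ashift property:

      $ zpool create -o ashift=12 pool1 mirror nvme0n1p4 nvme1n1p4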
    
  12. All done! Import pool1 using the Web UI and start enjoying the additional space.
