
Install TrueNAS SCALE on a partition instead of the full disk

The TrueNAS installer doesn't have a way to use anything less than the full device. This is usually a waste of resources when installing to a modern NVMe drive, which is typically several hundred GB, while TrueNAS SCALE only uses a few GB for its system files. Installing to a 16GB partition leaves the rest of the drive available for other uses.

The easiest way to solve this is to modify the installer script before starting the installation process.

  1. Boot the TrueNAS SCALE installer from a USB stick or ISO

  2. Select Shell in the first menu (instead of installing)

  3. While in the shell, run the following commands:

    sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16384M/g' /usr/sbin/truenas-install
    /usr/sbin/truenas-install
    

    The first command modifies the installer script so that it creates a 16 GiB boot-pool partition instead of using the full disk. The second command restarts the TrueNAS SCALE installer.
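
    To use a different boot-pool size, only the +16384M value needs to change. A quick sketch of the GiB-to-MiB arithmetic (the 32 GiB value below is just an example, not from the original guide):

    ```shell
    # Compute the MiB value sgdisk expects for an N-GiB boot-pool partition.
    # N=32 is an example; the guide uses N=16, i.e. +16384M.
    N=32
    SIZE="+$(( N * 1024 ))M"
    echo "sgdisk -n3:0:${SIZE}"
    ```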

  4. Continue installing according to the official docs.

Step 8 in the deprecated guide has instructions on how to allocate the remaining space to a partition.

Deprecated guide using a USB stick as an intermediary

This method requires an intermediate device to act as the installation disk, with its data later moved to the NVMe. Below I have documented the steps I took to get TrueNAS SCALE to run from a mirrored 16GB partition on NVMe disks.

For an easier initial partitioning, please see this comment and the discussion that follows. It should remove the need to use a USB stick as an intermediate medium.

  1. Install TrueNAS SCALE on a USB drive, preferably 16GB in size. If you use a 32GB stick you must create a 32GB partition on the NVMe, wasting space that could be used for VMs and Docker/k8s applications.

  2. Boot and enter a Linux shell as root, for example by enabling the SSH service and logging in with the root password.

  3. Check available devices

     $ parted
     (parted) print devices
     /dev/sdb (15.4GB)  # boot device
     /dev/nvme0n1 (500GB)
     /dev/nvme1n1 (512GB)
     (parted) quit
    

If you only have one NVMe disk just ignore the instructions that include the second disk (nvme1n1). This disk is used to create a ZFS mirror to handle disk failures.

  1. Clone the boot device to the other devices

     $ cat /dev/sdb > /dev/nvme0n1
     $ cat /dev/sdb > /dev/nvme1n1
    
  2. Check the partition layout. Fix all the GPT space warning prompts that show up.

     $ parted -l
     [...]
     Warning: Not all of the space available to /dev/nvme0n1 appears to be used, you can fix the GPT to use all of the
     space (an extra 946741296 blocks) or continue with the current setting?
     Fix/Ignore? f
     [...]
     Model:  USB  SanDisk 3.2Gen1 (scsi)
     Disk /dev/sdb: 15.4GB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start   End     Size    File system  Name  Flags
      1      20.5kB  1069kB  1049kB                     bios_grub
      2      1069kB  538MB   537MB   fat32              boot, esp
      3      538MB   15.4GB  14.8GB  zfs
     [...]
    

    The other disks' partition tables should look identical to this.

  3. Remove the zfs partition from the new devices, number 3 in this case. This is the boot-pool partition; we will recreate it later. We remove it because ZFS would otherwise recognize leftover metadata and believe the partition is part of the pool when it is not.

     $ parted /dev/nvme0n1 rm
     Partition number? 3
     Information: You may need to update /etc/fstab.
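
     If deleting and re-creating the partition with an offset feels fragile, an alternative (my own suggestion, not part of the original steps) is to zero the regions where ZFS keeps its vdev labels: the first and last MiB of the partition. The sketch below demonstrates this on a scratch image file; on the real system you would point dd at the partition device instead.

    ```shell
    # Demo on a scratch image file; substitute your real partition device
    # (e.g. /dev/nvme0n1p3) on the actual system.
    IMG=/tmp/fake-partition.img
    dd if=/dev/zero of="$IMG" bs=1M count=8 2>/dev/null              # 8 MiB stand-in "partition"
    printf 'FAKE-ZFS-LABEL' | dd of="$IMG" conv=notrunc 2>/dev/null  # plant stale metadata
    # ZFS stores two vdev labels at the start and two at the end of a device,
    # so zeroing the first and last MiB wipes all four.
    dd if=/dev/zero of="$IMG" bs=1M count=1 conv=notrunc 2>/dev/null
    dd if=/dev/zero of="$IMG" bs=1M count=1 seek=7 conv=notrunc 2>/dev/null
    ```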
    
  4. Recreate the boot-pool partition as a 16 GiB partition with a slightly later start than before, making sure that it starts on a boundary divisible by 2048 for best performance (526336 % 2048 = 0). This also ensures that ZFS doesn't find any metadata from the old partition.

    Start with the smaller disk if they are not identical.

     $ parted
     (parted) unit kiB
     (parted) select /dev/nvme0n1
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start    End        Size       File system  Name  Flags
      1      20.0kiB  1044kiB    1024kiB                       bios_grub
      2      1044kiB  525332kiB  524288kiB  fat32              boot, esp
    
     (parted) mkpart boot-pool 526336kiB 17303552kiB
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start      End          Size         File system  Name       Flags
      1      20.0kiB    1044kiB      1024kiB                              bios_grub
      2      1044kiB    525332kiB    524288kiB    fat32                   boot, esp
      3      526336kiB  17303552kiB  16777216kiB               boot-pool
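
     The "divisible by 2048" claim can be checked with plain shell arithmetic; this sketch just verifies the offsets used above:

    ```shell
    # Verify 2048-KiB (2 MiB) alignment of the partition offsets used above.
    for off_kib in 526336 17303552; do
        if [ $(( off_kib % 2048 )) -eq 0 ]; then
            echo "$off_kib KiB: aligned"
        else
            echo "$off_kib KiB: MISALIGNED"
        fi
    done
    ```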
    
  5. Now you can create a partition allocating the rest of the disk.

     (parted) mkpart pool 17303552kiB 100%
     (parted) print
     Model: KINGSTON SNVS500GB (nvme)
     Disk /dev/nvme0n1: 488386584kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start        End           Size          File system  Name       Flags
      1      20.0kiB      1044kiB       1024kiB                               bios_grub
      2      1044kiB      525332kiB     524288kiB     fat32                   boot, esp
      3      526336kiB    17303552kiB   16777216kiB                boot-pool
      4      17303552kiB  488386560kiB  471083008kiB               pool
    
  6. Do the same for the next device, but this time use the same values as in the printout above. This ensures that the partitions are exactly the same size. In this example the disks differ slightly in size, so using 100% on the second disk would create a partition larger than the one we just created on the smaller disk.

     (parted) select /dev/nvme1n1
     Using /dev/nvme1n1
     (parted) mkpart boot-pool 526336kiB 17303552kiB
     (parted) mkpart pool 17303552kiB 488386560kiB
     (parted) print
     Model: TS512GMTE220S (nvme)
     Disk /dev/nvme1n1: 500107608kiB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:
    
     Number  Start        End           Size          File system  Name       Flags
      1      20.0kiB      1044kiB       1024kiB                               bios_grub
      2      1044kiB      525332kiB     524288kiB     fat32                   boot, esp
      3      526336kiB    17303552kiB   16777216kiB                boot-pool
      4      17303552kiB  488386560kiB  471083008kiB               pool
    
  7. Make the new system partitions part of the boot-pool. This is done by attaching them to the existing pool while detaching the USB drive.

    $ zpool attach boot-pool sdb3 nvme0n1p3
    

    Wait for resilvering to complete, check progress with

    $ zpool status
    

    When resilvering is complete we can detach the USB device.

    $ zpool offline boot-pool sdb3
    $ zpool detach boot-pool sdb3
    

    Finally add the last drive to create a mirror of the boot-pool.

    $ zpool attach boot-pool nvme0n1p3 nvme1n1p3
    $ zpool status
    pool: boot-pool
    state: ONLINE
    scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
    config:
    
            NAME           STATE     READ WRITE CKSUM
            boot-pool      ONLINE       0     0     0
            mirror-0       ONLINE       0     0     0
                nvme0n1p3  ONLINE       0     0     0
                nvme1n1p3  ONLINE       0     0     0
    

    At this point you can remove the USB device and when the machine is rebooted it will start up from the NVMe devices instead. Check BIOS boot order if it doesn't.

  8. Now that the boot-pool is mirrored we want to create a mirror pool using the remaining partitions.

    $ zpool create pool1 mirror nvme0n1p4 nvme1n1p4
    $ zpool status
    pool: boot-pool
    state: ONLINE
    scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
    config:
    
            NAME           STATE     READ WRITE CKSUM
            boot-pool      ONLINE       0     0     0
            mirror-0       ONLINE       0     0     0
                nvme0n1p3  ONLINE       0     0     0
                nvme1n1p3  ONLINE       0     0     0
    
    pool: pool1
    state: ONLINE
    config:
    
            NAME           STATE     READ WRITE CKSUM
            pool1          ONLINE       0     0     0
            mirror-0       ONLINE       0     0     0
                nvme0n1p4  ONLINE       0     0     0
                nvme1n1p4  ONLINE       0     0     0
    

    To be able to import it in the Web UI, we first need to export it.

    $ zpool export pool1
    
  9. All done! Import pool1 using the Web UI and start enjoying the additional space.

@dejan024

Thank you for the good explanation.
I created a virtual machine with 16GB storage and installed TrueNAS SCALE. With passthrough I added an SSD disk to the VM and used your tutorial.
After that, I just attached that SSD to the TrueNAS server and imported the system/general configuration and the newly created pool on the SSD.
That is all.

@stein20k

Is it possible to use this mirrored pool as cache, or to set up the free space of the disks as cache?

@gangefors
Author

gangefors commented Nov 18, 2022

Is it possible to use this mirrored pool as cache, or to set up the free space of the disks as cache?

@stein20k, most likely. Before creating the pool it is just two partitions that you can do whatever you want with, so use them however you like. In my example I create a mirrored pool to use as data storage.
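
For the cache case specifically, a possible sketch (the data-pool name "tank" is an assumption; adapt the pool and partition names to your system). The command is only printed here, not executed, since zpool add changes the pool immediately:

```shell
# Dry-run sketch: attach the spare partitions to an existing data pool as
# L2ARC cache devices. Pool name "tank" is an assumption -- use your own.
cmd='zpool add tank cache nvme0n1p4 nvme1n1p4'
echo "$cmd"
```

Note that ZFS never mirrors cache vdevs; listing two partitions simply uses both independently for cache.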

@jpmchia

jpmchia commented Dec 15, 2022

Worked out a slightly easier route for a new install of v22.12 .... boot up the installer and then instead of installing, select [8] to exit the installer and into a shell. Use vi to edit the install script: /usr/sbin/truenas-install and then edit line 410:

if ! sgdisk -n3:0:0 -t3:BF01 /dev/${_disk}; then

Specifying the size of the partition you want to limit TrueNAS to occupying, e.g.:

if ! sgdisk -n3:0:+16384M -t3:BF01 /dev/${_disk}; then

Save the file and re-run the installer script. Once successfully installed, return back to the shell to manually configure your additional partitions, exporting the ZFS pool once configured. Reboot, TrueNAS should boot and you should then be able to import your additional storage pool via the web UI.

@gangefors
Author

@jpmchia Thank you for this information. I will try and verify this process and update the guide at a later time. I will mention your comment in the meantime.

@Grogdor

Grogdor commented Dec 15, 2022

Just tried @jpmchia's process on a fresh 22.12 install of TrueNAS SCALE and it didn't work, but it's possible I misunderstood something. There's no [8] to exit before installing; it's 2 or 3 for "shell". I edited line 410, wq'd, then exited back to the installer, completing the process by choosing my 119GB SSD and also indicating "yes" to the 16GB swap file, and after successful installation I did not return to the shell to manually configure any additional partitions... Might give it another shot here.

@jpmchia

jpmchia commented Dec 15, 2022

Apologies, yes so when the installer first starts up, select the "Shell" option from the menu - I was writing the above from memory rather than as I went.

When you select the "shell" option, the installer will then dump you out to a shell as the root user so that you can edit the installer script.

As per the manpage for the "sgdisk" tool, the parameter that you are editing is:

-n, --new=partnum:start:end
Create a new partition. You enter a partition number, starting sector, and an ending sector. Both start and end sectors can be specified in absolute terms as sector numbers or as positions measured in kibibytes (K), mebibytes (M), gibibytes (G), tebibytes
(T), or pebibytes (P); for instance, 40M specifies a position 40MiB from the start of the disk. You can specify locations relative to the start or end of the specified default range by preceding the number by a '+' or '-' symbol, as in +2G to specify a point
2GiB after the default start sector, or -200M to specify a point 200MiB before the last available sector. A start or end value of 0 specifies the default value, which is the start of the largest available block for the start sector and the end of the same
block for the end sector. A partnum value of 0 causes the program to use the first available partition number. Subsequent uses of the -A, -c, -t, and -u options may also use 0 to refer to the same partition.

Once you make your change, the script should run as it is intended. When it completes, rather than rebooting straight into your new installation - I think the installer provides the option again at the end of the installation to either reboot, shutdown or exit to a shell. If not, Ctrl-C will take you back to the shell.

If you prefer to use a GUI to make the partition changes or want a more comprehensive set of tools / environment to work with, you could always use Ventoy to boot into a SystemRescue+ZFS live image or even a Debian LiveCD.

I'm not sure if there's any harm in rebooting at that point and allowing TrueNAS to boot up for the first time - I just thought it would be safer to make the partition changes before allowing it to start up in case it configured itself upon start up.

@dzacball

dzacball commented Dec 17, 2022

I combined the two methods (by @gangefors and the installer edit trick by @jpmchia).
Everything seems fine, except it seems permissions are somehow messed up. I tried to exec into my app pods, but I get
WARN[0000] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied

The file in question is owned by root and is not readable by anyone other than root:
# ll /etc/rancher/k3s/k3s.yaml
-rw------- 1 root 2957 Dec 17 00:55 /etc/rancher/k3s/k3s.yaml

Any ideas ? :)

@LalitMaganti

LalitMaganti commented Dec 19, 2022

Worked out a slightly easier route for a new install of v22.12 .... boot up the installer and then instead of installing, select [8] to exit the installer and into a shell. Use vi to edit the install script: /usr/sbin/truenas-install and then edit line 410:

if ! sgdisk -n3:0:0 -t3:BF01 /dev/${_disk}; then

Specifying the size of the partition you want to limit TrueNAS to occupying, e.g.:

if ! sgdisk -n3:0:+16384M -t3:BF01 /dev/${_disk}; then

Save the file and re-run the installer script. Once successfully installed, return back to the shell to manually configure your additional partitions, exporting the ZFS pool once configured. Reboot, TrueNAS should boot and you should then be able to import your additional storage pool via the web UI.

The critical step here is to "re-run the installer script". I was Ctrl-D-ing from the shell and expecting that the installer script would be re-executed; in retrospect, this doesn't make sense. It was only after I reran the installer by running /usr/sbin/truenas-install in the shell that things started working.

Thanks for this tip! For hacking around with hobbyist hardware, dedicating a 1TB SSD just for the OS doesn't make sense so I'm glad nice workarounds for this usecase exist.

@bdyling

bdyling commented Dec 21, 2022

The critical step here is to "re-run the installer script". I was Ctrl-D-ing from the shell and expecting that the installer script would be re-executed; in retrospect, this doesn't make sense. It was only after I reran the installer by running /usr/sbin/truenas-install in the shell that things started working.

Yeah, this tripped me up too.

Also, I found that using the installer hack to provision a 16GB partition worked, but it doesn't necessarily mean it'll align it to your disk sectors. It didn't seem to for me, which still meant I had to manually destroy the boot pool on the SSD, re-create the partition aligned to my drive's sectors, and then mirror the boot pool from a USB stick. YMMV.

@User-3090

Any chance you could also figure out how to install TrueNAS Scale with ZFS encryption, so that Grub asks for a password before booting? I found various guides doing this for Ubuntu and Debian, but I'm not smart enough to adapt the python installer.

@User-3090

I think patching the zpool create with encryption and then https://docs.zfsbootmenu.org/en/latest/guides/debian/uefi.html could be a way?

@gangefors
Author

@User-3090 Encryption of the OS is out of scope for this guide and I have no interest in digging into it.

Feel free to post a link to your own gist if you figure out a way to do it.

@kmanan

kmanan commented Mar 23, 2023

Is there a YT vid or screenshot-based walk-through? I'm a n00b to Unix and the whole "vi edit the file" part went over my head. I chose Shell instead of install, ran vi with the file path, and it created a new file. Would love to use my NVMe for apps instead of giving it all to the OS, which will use 10% of the total space.

@gangefors
Author

Is there a YT vid or screenshot-based walk-through?

If you have searched and didn't find anything, there most likely isn't one. I'm not investing any more time in this since I have my system up and running. I hope the community can help you out.

@WilliamAnderssonGames

Worked out a slightly easier route for a new install of v22.12 .... boot up the installer and then instead of installing, select [8] to exit the installer and into a shell. Use vi to edit the install script: /usr/sbin/truenas-install and then edit line 410:

if ! sgdisk -n3:0:0 -t3:BF01 /dev/${_disk}; then

Specifying the size of the partition you want to limit TrueNAS to occupying, e.g.:

if ! sgdisk -n3:0:+16384M -t3:BF01 /dev/${_disk}; then

Save the file and re-run the installer script. Once successfully installed, return back to the shell to manually configure your additional partitions, exporting the ZFS pool once configured. Reboot, TrueNAS should boot and you should then be able to import your additional storage pool via the web UI.

The critical step here is to "re-run the installer script". I was Ctrl-D-ing from the shell and expecting that the installer script would be re-executed; in retrospect, this doesn't make sense. It was only after I reran the installer by running /usr/sbin/truenas-install in the shell that things started working.

Thanks for this tip! For hacking around with hobbyist hardware, dedicating a 1TB SSD just for the OS doesn't make sense so I'm glad nice workarounds for this usecase exist.

Thanks both of you!!! I was exiting the shell too but this worked awesome 👍

@TCB13

TCB13 commented Jul 14, 2023

Worked out a slightly easier route for a new install of v22.12 .... boot up the installer and then instead of installing, select [8] to exit the installer and into a shell.

This option isn't there anymore. Now simply hit Control+C and you'll be dropped into a root shell. Then proceed as described. The line to change is now 412. After changing simply run truenas-install and install it.

If you're not doing a mirrored setup and you just want to have usable space, after the steps above, reboot the system, SSH into it and create a pool from the available space and export it:

zpool create internalssd <partition>
zpool export internalssd

After this go into the WebUI > Storage > Import Pool and pick the pool you've just created. Reboot and it should be done.

@homonto

homonto commented Oct 18, 2023

why? why is this NOT an option during installation? ;-)

@chopinwong01

allowing TrueNAS to boot up for the first time

allowing TrueNAS to boot up for the first time will not do any harm

@Awesomefreeman

sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16384M/g' /usr/sbin/truenas-install /usr/sbin/truenas-install
this doesn't work anymore. Any suggestions?

@XY-Wing

XY-Wing commented Nov 23, 2023

sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16384M/g' /usr/sbin/truenas-install /usr/sbin/truenas-install this doesn't work anymore. Any suggestions?

Same with me. Version: 23.10.0.1

@sta777ic

sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16384M/g' /usr/sbin/truenas-install /usr/sbin/truenas-install this doesn't work anymore. Any suggestions?

The second /usr/sbin/truenas-install is a separate command that you run after the sed command, because it restarts the installer with the new config you just made with sed. So after the sed command, hit enter to apply it, then run '/usr/sbin/truenas-install'.
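
The split can be demonstrated safely on a scratch copy of the relevant installer line (the real path is /usr/sbin/truenas-install; the demo path below is made up):

```shell
# Demonstrate the sed edit on a scratch copy of the relevant installer line.
demo=/tmp/truenas-install-demo
printf 'if ! sgdisk -n3:0:0 -t3:BF01 /dev/${_disk}; then\n' > "$demo"
# Command 1: rewrite the boot-pool partition size in place.
sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16384M/g' "$demo"
cat "$demo"
# Command 2, on the real system, is simply: /usr/sbin/truenas-install
```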

@sta777ic

sta777ic commented Nov 29, 2023

Is there a YT vid or screenshot-based walk-through? I'm a n00b to Unix and the whole "vi edit the file" part went over my head. I chose Shell instead of install, ran vi with the file path, and it created a new file. Would love to use my NVMe for apps instead of giving it all to the OS, which will use 10% of the total space.

Use the sed command above (sed stands for 'stream editor'), which will modify the file on the fly without having to use vi; just realize that the second /usr/sbin/truenas-install is a separate command intended to run AFTER the sed command.

@sta777ic

sta777ic commented Nov 30, 2023

Thank you @gangefors (OP) for getting me started down this path; I ultimately sorted through his commands to find the ones I needed.

I have a 16GB DOM (disk on module) coming in the mail; that is a capacity-efficient but not cost-effective way to use newer versions of TrueNAS, at least until iXsystems comes up with a better way to install without wasting so much useful storage. But since I am waiting on the DOM and a specialized power cable to arrive, I thought I would go ahead and try to get TrueNAS booting on my 500GB SSD and use the remaining space for my VM pool.

In my case I have a 500GB Samsung drive (usually dedicated to my VMs and apps) and wanted the TrueNAS boot to use 16GB while the rest was dedicated to my virtual machine/app environment, which I will name vmpool.

DO THIS AT YOUR OWN RISK. I CANNOT BE HELD RESPONSIBLE IF THIS IS WRONG OR HARMFUL TO YOUR DATA / SYSTEM.

Boot into the TrueNAS SCALE installer, select shell, and run:

parted -l|more

Find your device and write it down. In my case, my drive was /dev/sdi:

Model: ATA Samsung SSD 850 (scsi)
Disk /dev/sdi: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name    Flags
 1      2097kB  3146kB  1049kB                       bios_grub, legacy_boot
 2      3146kB  540MB   537MB   fat32                boot, esp
 3      540MB   17.7GB  17.2GB  zfs

Run the 'parted' command to get into its prompt:

# parted
(parted) unit kiB
(parted) select /dev/sdi
(parted) print

This should print the details of the drive you plan to expand to use the rest of its space:

Number  Start        End           Size          File system  Name    Flags
 1      2048kiB      3072kiB       1024kiB                            bios_grub, legacy_boot
 2      3072kiB      527360kiB     524288kiB     fat32                boot, esp
 3      527360kiB    17304576kiB   16777216kiB   zfs

For the rest of the space, I want to dedicate it to my virtual machines and call the pool vmpool.
The important thing here is to start the new partition where the previous one ends, which is why we give it that end kiB value followed by 100% to make it use the rest of the drive space.

(parted) mkpart vmpool 17304576kiB 100%
(parted) print 
Model: ATA Samsung SSD 850 (scsi)
Disk /dev/sdi: 488386584kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start        End           Size          File system  Name    Flags
 1      2048kiB      3072kiB       1024kiB                            bios_grub, legacy_boot
 2      3072kiB      527360kiB     524288kiB     fat32                boot, esp
 3      527360kiB    17304576kiB   16777216kiB   zfs
 4      17304576kiB  488386560kiB  471081984kiB               vmpool

Now quit out of parted

(parted) quit

Back in the TrueNAS installer shell, we now need to make a zfs pool to export so the WebGUI will see it, using the new partition we just created:

# zpool create vmpool /dev/sdi4
# zpool status 

You should see output, without errors, showing the pool you just created.
For the WebGUI to be able to see it, we just need to export the pool:

# zpool export vmpool

Reboot into normal TrueNAS (NOT the installer). You should then be able to go to the WebGUI, import the pool, and vmpool should show up.

From PuTTY I ran zpool status and got this after importing the pool:

root@truenas[~]# zpool status
  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sdi3      ONLINE       0     0     0

errors: No known data errors

  pool: vmpool
 state: ONLINE
config:

        NAME                                    STATE     READ WRITE CKSUM
        vmpool                                  ONLINE       0     0     0
          3aee8bc4-0004-49c7-8845-85b86944769e  ONLINE       0     0     0

errors: No known data errors

You can also see my space in df -h

root@truenas[~]# df -h | grep vmpool
vmpool                                                      435G  128K  435G   1% /mnt/vmpool

@gangefors
Author

gangefors commented Nov 30, 2023

Thanks for contributing to the guide @sta777ic.

So this article is super useful but missing exact instructions on how to just install TrueNAS to SSD and use the remaining space of the drive.

Creating a pool from the remaining space is detailed in the deprecated part of the guide, which goes into how to set up a mirrored boot pool. The updated guide covers just the install part, since that seemed to be what most people are looking for.

@hrdwdmrbl

Just in case I might be dense, can someone please confirm which step we're supposed to pick up from in the deprecated instructions? Step 8?

@gangefors
Author

... which step we're supposed to pick up from in the deprecated instructions?

Step 8 in the deprecated guide has instructions on how to allocate the remaining space to a partition. I've updated the guide to point to the exact bullet.

@nicwn

nicwn commented Mar 3, 2024

Success! I'm using the N5105 board with two 2TB nvme drives, my goal was to create a small boot partition for TrueNAS Scale (32GB) and use the rest for cache. I followed the guide to edit the /usr/sbin/truenas-install file and changed sgdisk -n3:0:0 to sgdisk -n3:0:+32768M.

At first I didn't continue with step 8; I just re-ran /usr/sbin/truenas-install and installed TrueNAS SCALE. It installed, but didn't see the empty space. So I plugged the USB back in, rebooted into the shell, and followed step 8 and the later steps that made sense:

 $ parted
 (parted) unit kiB
 (parted) select /dev/nvme0n1
 (parted) print
 # See where the 32GB boot partition ends and use that as the beginning of the nvme-pool I'm about to create
 (parted) mkpart nvme-pool 17303552kiB 100%

Since my two nvme drives are the same size, I could just select /dev/nvme1n1 and do mkpart nvme-pool 17303552kiB 100% again.

Next was trial and error. zpool status didn't show anything. In the end, I figured out it was step 11, creating a mirror pool using the remaining partitions. It threw an error saying to use -f to force the command through:

$ zpool create nvme-pool mirror nvme0n1p5 nvme1n1p5
$ zpool status

(For me the boot pool was already mirrored at partition 3 by TrueNAS's installation and doesn't have a name. The nvme-pool I created, when printed in parted, showed it's at partition 5, hence nvme0n1p5.)

Now I can boot into TrueNAS and import the pool. Next step is to figure out if I really want to use it as cache, and how to do it. (I'm totally new to TrueNAS.)

Thanks to everyone above.

@Jeff-WuYo

This works flawlessly; I've been using my mirrored drive for about a year. But now I've encountered the REAL problem with this method: one of my drives failed, and two pools, including boot-pool, are degraded. Rebuilding the normal pool is fairly easy, but what about the boot-pool and EFI partition? Do I use dd to clone the partition? What if the drive is completely dead? Any hint is welcome.

@homonto

homonto commented Apr 12, 2024

This works flawlessly; I've been using my mirrored drive for about a year. But now I've encountered the REAL problem with this method: one of my drives failed, and two pools, including boot-pool, are degraded. Rebuilding the normal pool is fairly easy, but what about the boot-pool and EFI partition? Do I use dd to clone the partition? What if the drive is completely dead? Any hint is welcome.

This one worked perfectly for me:
https://www.youtube.com/watch?v=m4zwJvzQ65s
