@kilfu0701
Created January 19, 2023 05:25
Change/switch drives in a ZFS RAID10 pool (also for replacing a broken drive)
  1. Check the current drives
sudo zpool status

output:

pool: raid10_pool
state: ONLINE
scan: resilvered 545G in 0 days 00:18:17 with 0 errors on Wed Jan 18 18:16:04 2023
config:

  NAME          STATE     READ WRITE CKSUM
  raid10_pool   ONLINE       0     0     0
    mirror-0    ONLINE       0     0     0
      nvme0n1   ONLINE       0     0     0
      nvme1n1   ONLINE       0     0     0
    mirror-1    ONLINE       0     0     0
      nvme2n1   ONLINE       0     0     0
      nvme3n1   ONLINE       0     0     0
    mirror-2    ONLINE       0     0     0
      nvme4n1   ONLINE       0     0     0
      nvme5n1   ONLINE       0     0     0

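For context, a striped-mirror ("RAID10") pool with this layout could have been created along the following lines; this is only a sketch for orientation, not a command taken from the original gist:

sudo zpool create raid10_pool \
  mirror nvme0n1 nvme1n1 \
  mirror nvme2n1 nvme3n1 \
  mirror nvme4n1 nvme5n1
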
  2. To switch nvme3n1 and nvme5n1 in this case, we need to offline and detach these two drives.
sudo zpool offline raid10_pool nvme3n1
sudo zpool detach raid10_pool nvme3n1

sudo zpool offline raid10_pool nvme5n1
sudo zpool detach raid10_pool nvme5n1

  3. After detaching these two drives, the zpool status output will look like the one below. Detaching does not interrupt the pool's current online READ/WRITE operations.
  • (* BE AWARE: if nvme2n1 or nvme4n1 fails during the switch, the data on that mirror is permanently lost, because each is now running without a mirror partner.)

  • (* It is better to take a snapshot/backup, or export the pool, so the process finishes smoothly; a snapshot sketch follows the status output below.)

  pool: raid10_pool
 state: ONLINE
  scan: resilvered 545G in 0 days 00:18:17 with 0 errors on Wed Jan 18 18:16:04 2023
config:

	NAME          STATE     READ WRITE CKSUM
	raid10_pool   ONLINE       0     0     0
	  mirror-0    ONLINE       0     0     0
	    nvme0n1   ONLINE       0     0     0
	    nvme1n1   ONLINE       0     0     0
	  nvme2n1     ONLINE       0     0     0
	  nvme4n1     ONLINE       0     0     0
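
As noted above, a recursive snapshot of the pool can be taken before detaching anything. This is a minimal sketch; the snapshot name pre-swap is just an example:

sudo zfs snapshot -r raid10_pool@pre-swap
sudo zfs list -t snapshot -r raid10_pool

Keep in mind that a snapshot on the same pool does not protect against the remaining disk failing; only an external backup covers that case.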

  4. Format & clear the data on nvme3n1 / nvme5n1
sudo mkfs -t ext4 /dev/nvme3n1
sudo dd if=/dev/urandom of=/dev/nvme3n1 bs=1M count=2

sudo mkfs -t ext4 /dev/nvme5n1
sudo dd if=/dev/urandom of=/dev/nvme5n1 bs=1M count=2
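
mkfs plus a short dd run only overwrites the start of the disk, while ZFS also keeps labels near the end of the device. As an alternative (not part of the original steps), the ZFS labels on the detached drives can be cleared explicitly:

sudo zpool labelclear -f /dev/nvme3n1
sudo zpool labelclear -f /dev/nvme5n1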

  5. Attach the drives back to the zpool
sudo zpool attach raid10_pool nvme2n1 nvme5n1
sudo zpool attach raid10_pool nvme4n1 nvme3n1

Check the zpool status; it will show a resilver in progress. The process can take several hours to finish, depending on how much data is on the pool.

pool: raid10_pool
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
  continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Wed Jan 18 17:21:32 2023
  1.61T scanned at 53.1G/s, 1.25T issued at 41.2G/s, 9.39T total
  938M resilvered, 13.28% done, 0 days 00:03:22 to go

  NAME          STATE     READ WRITE CKSUM
  raid10_pool   ONLINE       0     0     0
    mirror-0    ONLINE       0     0     0
      nvme0n1   ONLINE       0     0     0
      nvme1n1   ONLINE       0     0     0
    mirror-1    ONLINE       0     0     0
      nvme2n1   ONLINE       0     0     0
      nvme5n1   ONLINE       0     0     0 (resilvering)
    mirror-2    ONLINE       0     0     0
      nvme4n1   ONLINE       0     0     0
      nvme3n1   ONLINE       0     0     0 (resilvering)
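
To follow the resilver until it completes, the status can simply be polled; the 60-second interval below is arbitrary, and on newer OpenZFS releases zpool wait can block until the resilver is done:

watch -n 60 sudo zpool status raid10_pool
sudo zpool wait -t resilver raid10_pool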