Recover Proxmox VE RAID-1 configuration when one disk is replaced / readded
# TODO: Add ESP disk replacement
# Assuming the problematic disk is /dev/sdb, and we are running off /dev/sda in degraded mode.
# However, if both /dev/sda and /dev/sdb have each been booted in degraded mode,
# then the next time you boot with both disks present, initramfs will not be able to mount root in rw mode.
# In that case you risk PERMANENT DATA LOSS if you are not careful, so MAKE BACKUPS of your current disks before any recovery.
# I HAVE NOT tried to recover from such a situation, as my plan is always to recover from existing backups.
# That said, I'm more likely to recreate the system from scratch: my data are stored on the RAID with multiple backups,
# so I only need to recreate my VM environment according to the script I'm writing on Gist. I love a fresh system anyway.
# Remember, RAID is not BACKUP!
# ----> If your disk is replaced, start from here:
# Set up partition table
(echo g; echo n; echo ''; echo ''; echo '+512M'; echo t; echo 1; echo n; echo ''; echo ''; echo ''; echo w) | fdisk /dev/sdb
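# (The one-liner above drives fdisk non-interactively: "g" creates a new GPT, the first "n" makes a
# 512M partition, "t" + "1" marks it as EFI System, the second "n" uses the rest of the disk for Btrfs,
# and "w" writes the table.)
# A rough equivalent with sgdisk, if you prefer it over piping answers into fdisk
# (sketch; assumes the same layout: 512M ESP as partition 1, Btrfs on partition 2):
# sgdisk -Z -n1:0:+512M -t1:EF00 -n2:0:0 /dev/sdb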
# Replace missing Btrfs partition
# Replace 1 with the ID of the missing device, which can be found with "btrfs device usage /"
# WARNING! This only works if the new disk is the same size as the missing disk.
# Please refer to https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Using_btrfs_replace
# to see how to replace with a disk of a different size.
btrfs replace start -f 1 /dev/sdb2 /
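# (Optional) Follow the rebuild progress before moving on:
btrfs replace status /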
# ----> If your disk is readded, start from here:
# Perform the following steps as well if you replaced your disk above.
# Repair Btrfs partition
# You can check the status with "btrfs scrub status /"
systemctl start btrfs-scrub@-.service
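# The "-" in the unit name is the systemd path escape for "/". If your btrfs-progs does not ship the
# btrfs-scrub@.service template, a plain foreground scrub should work as well (rough alternative):
# btrfs scrub start -B /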
# Balance Btrfs partition
# You can check the rebalance result with "btrfs fi usage /"
btrfs balance start -dusage=100 /
btrfs balance start -musage=100 /
btrfs balance start -dusage=100 -musage=100 /
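# Balance can take a while on a large filesystem; you can watch progress from another shell with:
# btrfs balance status /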
# You do not need to manually resync the ESP if you created mdadm-efi according to
# https://gist.github.com/MakiseKurisu/1b46df3374c1dcbab1048b23312a4e0e
# However, mdadm may not be able to pick the correct resync source. See
# https://outflux.net/blog/archives/2018/04/19/uefi-booting-and-raid1/
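# Either way, it is worth confirming the ESP array is assembled and clean after a reboot.
# A minimal check (assumption: the array device is /dev/md/mdadm-efi, matching the linked gist):
cat /proc/mdstat
mdadm --detail /dev/md/mdadm-efi # adjust the device path to your array name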