# RAID re-design
# find right controller/virtual drive:
sudo /opt/MegaRAID/storcli/storcli64 show all
sudo /opt/MegaRAID/storcli/storcli64 /c0 show all
sudo /opt/MegaRAID/storcli/storcli64 /c0/v0 show all
...
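# to list all virtual drives on the controller at once (assuming this storcli version supports vall):
sudo /opt/MegaRAID/storcli/storcli64 /c0/vall show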
sudo umount /datam
sudo umount /datan
# sometimes you can migrate in place (possibly without losing data?), but only non-nested RAID levels are currently supported:
# sudo /opt/MegaRAID/storcli/storcli64 /c0/v0 start migrate type=raid5
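# migration progress can then be checked with (untested here, assuming the matching show command):
# sudo /opt/MegaRAID/storcli/storcli64 /c0/v0 show migrate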
# otherwise have to DESTROY and recreate:
# probably I missed this step (correct LVM preparation) in the cases that had trouble:
# sudo vgremove datam
# sudo vgremove datan
# sudo pvremove /dev/sdx
# sudo pvremove /dev/sdy
# RAID reconfigure/rebuild:
# sudo /opt/MegaRAID/storcli/storcli64 /c0/v0 del
# sudo /opt/MegaRAID/storcli/storcli64 /c0 add vd r5 drives=252:0-7
# likely to show as inconsistent now, so:
# sudo /opt/MegaRAID/storcli/storcli64 /c0/v0 start init full
# sudo /opt/MegaRAID/storcli/storcli64 /c0/v0 show init
# ... wait and check (~2.25 hours)
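# a watch loop saves re-checking by hand, e.g.:
# watch -n 60 'sudo /opt/MegaRAID/storcli/storcli64 /c0/v0 show init'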
# when done:
sudo pvscan
sudo pvs
ls /dev/sd*
################### TROUBLE ZONE ###################
# devices didn't always re-appear or resize smoothly
# either errors are shown or devices simply do not appear; if necessary, try the re-scans below
# one array worked with this; others didn't, or only partially
# possibly all due to (initially) missing the vgremove/pvremove step now shown above, so it may go smoothly now
# crunch003 worked fine with vgremove/pvremove
echo 1 | sudo tee /sys/block/sdx/device/rescan
# see http://serverfault.com/questions/5336/how-do-i-make-linux-recognize-a-new-sata-dev-sda-drive-i-hot-swapped-in-without
# maybe just reboot (comment out lines in /etc/fstab first)
# i used https://access.redhat.com/solutions/140273:
# dmsetup remove /dev/data1/*
# echo 1 | sudo tee /sys/block/sdx/device/delete
# then tried rescanning every host in /sys/class/scsi_host, which brought back new /dev/sdx devices:
echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan
echo "- - -" | sudo tee /sys/class/scsi_host/host1/scan
echo "- - -" | sudo tee /sys/class/scsi_host/host2/scan
################### TROUBLE ZONE ###################
# repeat for the other array, then split one device into two partitions to assign each part to a different volume group:
# sudo gdisk /dev/sdx # create empty GPT, split into two partitions
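# non-interactive alternative with sgdisk (partition sizes here are hypothetical, adjust to the array):
# sudo sgdisk --zap-all /dev/sdx
# sudo sgdisk -n 1:0:+8T -n 2:0:0 /dev/sdx # partition 1 fixed size, partition 2 takes the rest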
# sudo pvcreate /dev/sdx1
# sudo pvcreate /dev/sdx2
# sudo pvcreate /dev/sdy
# sudo pvs
# sudo vgcreate data1 /dev/sdy /dev/sdx1
# sudo vgcreate data2 /dev/sdx2
# sudo vgs
# sudo lvcreate -l 100%VG data1
# sudo lvcreate -l 100%VG data2
# sudo vgs
# sudo mkfs.xfs /dev/mapper/data1-lvol0
# sudo mkfs.xfs /dev/mapper/data2-lvol0
# make sure the filesystem and mapper names are correct (crunch1 was different from the others; it previously used ext4)
# sudo emacs /etc/fstab
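# entries should look something like this (mount points assumed; verify mapper names with ls /dev/mapper):
# /dev/mapper/data1-lvol0 /datam xfs defaults 0 0
# /dev/mapper/data2-lvol0 /datan xfs defaults 0 0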
# sudo mount -a
# df -h
# done! maybe reboot to be sure
# alternatively, if migrating instead of destroying:
# UNTESTED
# sudo pvresize /dev/sdx
# extra size should be added to "PFree"
# sudo vgextend...
# sudo lvextend...
# sudo xfs_growfs /dev/mapper/xxx (or resize2fs if using ext4)
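# with the names used above (data1 VG, lvol0 LV, both assumed), the grow sequence might look like:
# sudo pvresize /dev/sdx
# sudo lvextend -l +100%FREE /dev/data1/lvol0
# sudo xfs_growfs /datam # xfs_growfs takes the mount point of a mounted filesystem
# (vgextend should only be needed when adding a whole new PV; pvresize alone grows the VG)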
# re-allocating the second half of the split array to another volume group should be something like:
# UNTESTED
# sudo umount /datam
# sudo umount /datan
# sudo mkfs.xfs /dev/mapper/data1-lvol0
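# (pvmove needs free extents to move into; since xfs cannot shrink, the data1 LV presumably has to be recreated smaller first, hence the mkfs above)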
# sudo pvmove /dev/sdy
# sudo vgsplit data1 data2 /dev/sdy
# sudo lvcreate -l 100%VG data2 # vgsplit brings over the PV but no LVs, so lvcreate (not lvextend) seems needed
# sudo mkfs.xfs /dev/mapper/data2-lvol0 # a fresh LV has no filesystem yet, so mkfs rather than xfs_growfs
# sudo mount -a