Repartition server after Proxmox image installation from rescue mode
We want to split the two 8 TB disks into:
- 1 TB RAID1 for root "/"
- 1 TB RAID1 for extra things (backups, templates, etc.)
- 6 TB LVM-Thin (with LVM mirroring) for containers and VMs
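(Rough space math, based on the fdisk output at the end: an "8 TB" disk is about 7.3 TiB, so after the existing small system partitions (~1.1 GiB of BIOS boot, swap and boot RAID) plus 1 TiB for root and 1 TiB for the extra partition, roughly 5 TiB remain for the LVM-Thin pool. That is why the "6 TB" partition shows up as 5T later.)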
Boot into rescue mode
# Load the needed kernel modules:
modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe dm-mod
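# Optional check that the modules actually loaded:
lsmod | grep -E 'raid1|raid0|multipath|linear|dm_mod'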
#### Mount the disks, copy the array info, and scan it (the root partition "/" in this case is on md127)
/usr/local/sbin/mountall.sh
fdisk -l
mdadm --examine --scan
# Copy the mdadm config from the "normal boot" system into the rescue system's /etc, then assemble by scan and check
cp /mnt/md127/etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf
mdadm -A --scan
fdisk -l
#### Unmount the partition, check the filesystem, and start shrinking the filesystem and the array
umount /mnt/md127
e2fsck -f /dev/md127   # resize2fs requires a clean, unmounted filesystem before shrinking
mdadm -D /dev/md127 | grep -e "Array Size" -e "Dev Size"   # just to check the current array size
resize2fs /dev/md127 1000G   # first shrink the filesystem to 1000 GB, below the target array size
mdadm --grow /dev/md127 --size=1073741824   # then shrink the array to 1073741824 KiB = 1024 GiB, keeping some headroom to avoid problems
resize2fs /dev/md127   # finally grow the filesystem again to the array's new maximum (1024 GiB)
mdadm --detail /dev/md127   # just check the array
# Remove one disk from the array, resize its partition to something over 1024 GiB (e.g. 1110000 MB), then re-add it and rescan
cat /proc/mdstat
mdadm /dev/md127 --fail /dev/sda4 --remove /dev/sda4
parted -a opt /dev/sda
>> resizepart -> 4 -> 1110000
mdadm -a /dev/md127 /dev/sda4
cat /proc/mdstat
echo 1 > /sys/block/sda/device/rescan
# Do the same with the other disk, but only after the resync of the re-added sda4 has finished (watch /proc/mdstat)
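# For reference, the equivalent commands on the second disk (assuming it is /dev/sdb):
mdadm /dev/md127 --fail /dev/sdb4 --remove /dev/sdb4
parted -a opt /dev/sdb
>> resizepart -> 4 -> 1110000
mdadm -a /dev/md127 /dev/sdb4
cat /proc/mdstat
echo 1 > /sys/block/sdb/device/rescan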
#### At this point the array is already shrunk to 1 TB, with free space left on the rest of the disk.
#### Next we create one extra partition for other uses (backups, etc.), for example in RAID1 (we could also use ZFS or something else)
# Create a 1 TB partition on both disks for RAID1 (partition type 29, Linux RAID)
fdisk /dev/sda
>>n -> 5 -> +1T -> t -> 29
# Do the same with the other disk
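# For example, on the second disk (assuming /dev/sdb):
fdisk /dev/sdb
>> n -> 5 -> +1T -> t -> 29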
# And create the array with the 2 disks
mdadm --create /dev/md128 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
cat /proc/mdstat
# Create the filesystem with mkfs, mount it, and add it to fstab
mkfs.ext4 -L label-name /dev/md128
nano /etc/fstab
/dev/md128 /backups ext4 defaults 0 1
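# A minimal sketch of mounting it (the /backups mount point is taken from the fstab line above;
# note that from rescue mode the fstab and mdadm.conf that matter are the installed system's, and the initramfs may need updating later):
mkdir -p /backups
mount /dev/md128 /backups
mdadm --detail --brief /dev/md128 >> /etc/mdadm/mdadm.conf   # so the new array is assembled at boot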
#### Now we create the 6 TB (or actually ~5 TB, the remaining space) LVM-Thin partition (this time using type 31, which is Linux LVM)
fdisk /dev/sda
>>n -> 6 -> +6T (or just accept the default last sector, since less than 6 TB actually remains) -> t -> 31
# Do the same with the other disk (if we want the mirror)
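# For example, on the second disk (assuming /dev/sdb):
fdisk /dev/sdb
>> n -> 6 -> (default last sector) -> t -> 31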
# We create a physical volume on each disk and a volume group spanning both
pvcreate /dev/sda6 /dev/sdb6
vgcreate -s128M lvm /dev/sda6 /dev/sdb6   # -s sets the physical extent size to 128 MiB
vgdisplay lvm
vgs
# We now create the logical volume in mirror mode (-m1) at 97% of the VG, leaving space for the pool metadata
# so that there are no problems later when we convert it to a thin pool.
# If we don't want a thin pool, it could be created at 100%.
lvcreate -n data -m1 -l 97%VG lvm
lvconvert --type thin-pool --poolmetadatasize 1024M --chunksize 128 lvm/data
lvs -a -o name,copy_percent,devices lvm
mkfs.ext4 /dev/lvm/data
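# Note: Proxmox normally consumes a thin pool directly as LVM-Thin storage, without a filesystem on it.
# One way (an assumption, not part of the original notes) to register it once the node is booted back
# into Proxmox, with an arbitrary storage name "local-thin":
pvesm add lvmthin local-thin --vgname lvm --thinpool data --content rootdir,images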
# Now fdisk should show something like this
fdisk -l
Device           Start          End      Sectors  Size Type
/dev/sdb1         2048         4095         2048    1M BIOS boot
/dev/sdb2         4096      1052671      1048576  512M Linux swap
/dev/sdb3      1052672      2281471      1228800  600M Linux RAID
/dev/sdb4      2281472   2167968750   2165687279    1T Linux RAID
/dev/sdb5   2167969792   4315453439   2147483648    1T Linux RAID
/dev/sdb6   4315453440  15052871679  10737418240    5T Linux LVM
# To check partitions and mounts (we don't need to mount the sdb6 partition for the Proxmox interface to use it, I think)
blkid
cat /etc/mtab
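# The LVM side can be double-checked as well:
pvs
vgs
lvs -a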