
@laineantti
Created November 4, 2018 22:37
Proxmox - Resize pve-root
# Check disk space before
df -h
# Delete the local-lvm storage in the GUI first (this destroys all VM disks stored on it!)
lvremove /dev/pve/data
lvresize -l +100%FREE /dev/pve/root
resize2fs /dev/mapper/pve-root
# Check disk space after
df -h
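The lvremove step above is destructive and will also fail outright if any thin volume in the pool is still open (see the "Logical volume in use" report further down in the comments). A minimal pre-flight sketch, assuming the default VG name pve; count_open_lvs is a hypothetical helper that counts `lv_attr` strings whose 6th character is 'o' (device open):

```shell
#!/bin/sh
# Sketch: refuse to remove the data pool while any LV in VG "pve" is open.
# count_open_lvs is a hypothetical helper; it reads `lvs --noheadings -o lv_attr`
# output on stdin and counts attr strings whose 6th character is 'o' (device open).
count_open_lvs() {
  grep -c '^.....o' || true
}

# Intended usage on the Proxmox host (commented out so the sketch is side-effect free):
# open=$(lvs --noheadings -o lv_attr pve | tr -d ' ' | count_open_lvs)
# [ "$open" -eq 0 ] && lvremove /dev/pve/data
```

Shut down or migrate any guests still backed by local-lvm before this check will pass.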
@Cadjoe

Cadjoe commented Nov 27, 2021

Planning to upgrade proxmox-ve 6.4.1 to PVE 7+, but after running pve6to7 I have some concerns.

WARN: Less than 4 GiB free space on root file system.

My current installation runs on a 32GB flash drive.

Also df -h shows

Filesystem            Size  Used Avail Use% Mounted on
udev                  7.8G     0  7.8G   0% /dev
tmpfs                 1.6G  166M  1.4G  11% /run
/dev/mapper/pve-root  7.1G  4.5G  2.3G  67% /
tmpfs                 7.8G   54M  7.8G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sdb2             511M  312K  511M   1% /boot/efi
/dev/sda1             458G  206G  229G  48% /mnt/data-store
/dev/fuse              30M   52K   30M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0

Wondering if this option you have here would do the trick without loss of data. Could you give me a bit more detail when you get a chance?

@laineantti
Author

> Wondering if this option you have here would do the trick without loss of data. Could you give me a bit more details when you get a chance?

I haven't used Proxmox in years, but the one thing I do suggest is making backups before doing anything else. I would recommend Clonezilla. Once you have a whole-disk backup saved in at least two different places, you can test whether this works. In the worst case you can simply flash that backup back with Clonezilla and everything will be the same as before.

@Nidal14
Copy link

Nidal14 commented Jun 24, 2022

Worked well. Thank you!

@clibequilibrium
Copy link

➜ ~ df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   32G     0   32G   0% /dev
tmpfs                 6.3G  1.2M  6.3G   1% /run
/dev/mapper/pve-root   60G   17G   43G  29% /
tmpfs                  32G   43M   32G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/fuse             128M   48K  128M   1% /etc/pve
tmpfs                 6.3G     0  6.3G   0% /run/user/0
➜ ~ lvremove /dev/pve/data
Removing pool "data" will remove 6 dependent volume(s). Proceed? [y/n]: y
  Logical volume pve/vm-105-disk-0 in use.
➜ ~ lvresize -l +100%FREE /dev/pve/root
  New size (19262 extents) matches existing size (19262 extents).
➜ ~ resize2fs /dev/mapper/pve-root
resize2fs 1.46.2 (28-Feb-2021)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/pve-root
Couldn't find valid filesystem superblock.

If the root filesystem is XFS rather than ext4 (the "Bad magic number" error from resize2fs above is the typical symptom), use xfs_growfs instead:
xfs_growfs /dev/mapper/pve-root
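Which grow tool applies depends on the root filesystem type: resize2fs only understands ext2/3/4, while XFS needs xfs_growfs. A small sketch that dispatches on the type; pick_grow_cmd is a hypothetical helper, and findmnt (util-linux) is assumed to be available on the host:

```shell
#!/bin/sh
# Sketch: choose the right grow command for the root filesystem.
# pick_grow_cmd is a hypothetical helper mapping an fstype to a tool name.
pick_grow_cmd() {
  case "$1" in
    ext2|ext3|ext4) echo resize2fs ;;
    xfs)            echo xfs_growfs ;;
    *)              echo "unsupported fs: $1" >&2; return 1 ;;
  esac
}

# Intended usage on the host (commented out so the sketch has no side effects):
# fstype=$(findmnt -n -o FSTYPE /)
# "$(pick_grow_cmd "$fstype")" /dev/mapper/pve-root
```

Note that older xfsprogs versions want the mount point (`xfs_growfs /`) rather than the device path.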

@Cadjoe

Cadjoe commented Sep 13, 2022

I finally upgraded successfully and ran pve6to7 again without issues. A system backup is always a good idea. Thanks!

@john-shine

Warning! This will destroy all VM disks stored on local-lvm; double-check before you do it.

@ililminati

ililminati commented Mar 9, 2023

I couldn't safely solve my problem with the method above, so I did the following:

  1. With the vgs command I verified that pve still had free space:
     root@pve2:~# vgs
       VG  #PV #LV #SN Attr   VSize    VFree
       pve   1   3   0 wz--n- <223.07g 7.30g
  2. I confirmed that information with:
     # vgdisplay pve
       Free PE / Size       1869 / 7.30 GiB
  3. Then, with the command below, I checked the size of my LV:
     # lvdisplay /dev/pve/root
  4. Then I extended my LV by another 7 GiB:
     lvextend -r -L +7G /dev/pve/root
  5. Finally, I confirmed it had been resized:
     lvdisplay /dev/pve/root

I based this solution on the post below; I hope it helps!
https://learn.microsoft.com/pt-br/azure/virtual-machines/linux/how-to-resize-encrypted-lvm
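The non-destructive path in the steps above boils down to: check that the VG has free extents, then grow root into them. A condensed sketch, assuming the default VG name pve; has_free_extents is a hypothetical helper that takes the "Free PE" count reported by vgdisplay:

```shell
#!/bin/sh
# Sketch of the non-destructive resize: only proceed if the VG has free extents.
# has_free_extents is a hypothetical helper; pass it the "Free PE" count from vgdisplay.
has_free_extents() {
  [ "${1:-0}" -gt 0 ]
}

# Intended usage on the host (commented out so the sketch has no side effects):
# free_pe=$(vgdisplay pve | awk '/Free *PE/ {print $5}')
# has_free_extents "$free_pe" && lvextend -r -l +100%FREE /dev/pve/root
# The -r flag resizes the filesystem together with the LV,
# so no separate resize2fs/xfs_growfs step is needed.
```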

@askiiart

askiiart commented Jul 10, 2023

To recreate the pve data LV using 100 GB of space: lvcreate -L 100G -n data pve

You can find how much space is free with vgdisplay (under "Free PE / Size").

Note that you might need to decrease the size slightly from what is listed there, due to rounding. For example, on my server vgdisplay showed 267.79 GB free, but I had to use 267.78 GB for lvcreate.
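The rounding problem mentioned above can be sidestepped by allocating in physical extents rather than gigabytes: lvcreate -l 100%FREE -n data pve takes exactly whatever is left, with no decimal rounding. For converting between the two, extents_to_mib below is a hypothetical helper that assumes the LVM default extent size of 4 MiB:

```shell
#!/bin/sh
# Sketch: work in extents to avoid GB rounding errors.
# Instead of: lvcreate -L 100G -n data pve      (can fail by a few hundredths of a GB)
# use:        lvcreate -l 100%FREE -n data pve  (takes exactly the remaining extents)
# extents_to_mib assumes the default 4 MiB physical extent size;
# check yours under "PE Size" in vgdisplay.
extents_to_mib() {
  echo $(( $1 * 4 ))
}
```

For example, the 1869 free extents shown in an earlier comment correspond to 1869 x 4 MiB = 7476 MiB, i.e. the 7.30 GiB vgdisplay reported.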

@dcolley

dcolley commented Jul 29, 2023

I found this document, and it helped me a lot.
The main problem was that the data LV took up all the space.
https://docs.google.com/document/d/1_5FFYUGwXQEjNosfBXgn838CdqcUExJrFc5ktqLu52c/edit

@sysvar

sysvar commented Mar 26, 2024

For a new fresh install, very helpful, thank you.
