
How to expand a Linux Logical Volume Management (LVM) volume to use a resized virtual disk

Backstory

A Linux virtual machine I use for hosting containerized services in my home lab recently ran out of disk space, so I decided to allocate more, since I had plenty to spare. It turns out that when I'd created the VM, I'd given the virtual hard drive only 200 GB; sometime between then and now, I'd increased the host-side allocation to 1 TB but never checked whether the VM was actually using the full terabyte. Frustrated that it wasn't, I set about rectifying the situation.

The setup

⚠️ Always back up your data before doing anything to filesystems. I keep regular data-level backups as well as VM snapshots to safeguard against doing something stupid with tools I don't know well enough to fix quickly. Also, schedule dedicated time for work like this; you don't want to find yourself in a bind mid-change. Been there, done that.
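
Aside from host-side snapshots, an LVM snapshot taken inside the guest is another safety net, though it needs free extents in the volume group, which a fully allocated VG like this one appears to be won't have until after the resize below. The snapshot size and name here are illustrative:

sudo vgs                                                      # check the VFree column for spare extents
sudo lvcreate --snapshot --size 10G --name pre-resize-snap ubuntu-vg/ubuntu-lv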

I am running the VM on a QNAP NAS using Virtualization Station, part of QNAP's QTS operating system. It's a basic VM manager that uses QEMU under the hood; it's adequate for my needs and has a simple control panel for administration. Sometime in the past, I'd shut down the VM and adjusted the slider for its virtual disk to increase the space reserved for it on the host.

The VM has a single virtual hard disk. It sees one disk, /dev/vda, with an LVM-managed volume at /dev/mapper/ubuntu--vg-ubuntu--lv. While df -h shows the volume with a 200 GB capacity, parted shows /dev/vda3 at nearly 1 TB. Let's expand the volume to fill the space.

The steps

Trust your memory, but verify

First, I verified all of my assumptions using df -h, fdisk -l, and parted, running print devices in the latter. N.b., I run as a regular user, so the commands below have sudo prepended where root privileges are needed. I'm also showing only the relevant parts of the output, marking elided output with ….

colin@hotdog:~$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
…
/dev/mapper/ubuntu--vg-ubuntu--lv  196G  160G   26G  87% /
…
/dev/vda2                          974M  205M  702M  23% /boot
colin@hotdog:~$ sudo fdisk -l
…
Disk /dev/vda: 1000 GiB, 1073741824000 bytes, 2097152000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 6E9BC534-3F72-4EBC-9244-5D95C5A7D186

Device       Start        End    Sectors  Size Type
/dev/vda1     2048       4095       2048    1M BIOS boot
/dev/vda2     4096    2101247    2097152    1G Linux filesystem
/dev/vda3  2101248 2097149951 2095048704  999G Linux filesystem


Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
…
colin@hotdog:~$ sudo parted
GNU Parted 3.3
Using /dev/mapper/ubuntu--vg-ubuntu--lv
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print devices
/dev/mapper/ubuntu--vg-ubuntu--lv (215GB)
/dev/sr0 (952MB)
/dev/vda (1074GB)
(parted) print list
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 215GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End    Size   File system  Flags
 1      0.00B  215GB  215GB  ext4

…

Model: Virtio Block Device (virtblk)
Disk /dev/vda: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  1076MB  1074MB  ext4
 3      1076MB  1074GB  1073GB
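
As a quicker cross-check of the same facts, lsblk prints the disk, partition, and LVM hierarchy in one view; the column selection here is just one reasonable choice:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/vda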

The One-Two Punch

The rest of this went very easily for me and resulted in no downtime.

Resize the partition using parted, for good measure; judging from the fdisk output above, /dev/vda3 already spanned the disk, so in my case this mostly confirmed the partition's end point.

colin@hotdog:~$ sudo parted
GNU Parted 3.3
Using /dev/mapper/ubuntu--vg-ubuntu--lv
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) select /dev/vda
Using /dev/vda
(parted) resizepart
Partition number? 3   <----- Manually typed based on output of above "print list"
End?  [1074GB]?       <----- I just hit ENTER to confirm.
(parted)
Information: You may need to update /etc/fstab.
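
For the record, the same resize can be scripted rather than typed interactively; depending on the parted version, it may still prompt for confirmation when the partition is in use, so treat this as a sketch:

sudo parted /dev/vda resizepart 3 100%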

Resize the physical volume in LVM's eyes:

colin@hotdog:~$ sudo pvresize /dev/vda3
[sudo] password for colin:
  Physical volume "/dev/vda3" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
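
Before extending anything, you can confirm LVM actually sees the new space; pvs reports the physical volume's size and vgs shows the volume group's free extents:

sudo pvs /dev/vda3
sudo vgs ubuntu-vg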

Extend the logical volume in LVM's eyes to claim all of the volume group's newly free space:

colin@hotdog:~$ sudo lvextend  -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 200.00 GiB (51200 extents) to <999.00 GiB (255743 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.
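
As an aside, lvextend's -r/--resizefs flag grows the filesystem in the same step by calling fsadm, which would fold this step and the next into one command; I ran them separately, but something like this should also work:

sudo lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv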

Now, resize the filesystem. This command took several minutes:

colin@hotdog:~$ sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 25, new_desc_blocks = 125
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 261880832 (4k) blocks long.
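
Note that resize2fs is specific to the ext2/3/4 family; if the logical volume held an XFS filesystem instead, the equivalent step would be xfs_growfs pointed at the mount point:

sudo xfs_growfs /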

Lastly, verify that df -h shows the correct volume size:

colin@hotdog:~$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
…
/dev/mapper/ubuntu--vg-ubuntu--lv  983G  160G  781G  17% /
…
/dev/vda2                          974M  205M  702M  23% /boot
…

That's what I expect, so I'm done. Hooray!
