
@christopher-hopper
Last active April 5, 2022 10:30
Resize a Hard Disk for a Virtual Machine provisioned using Vagrant from a Linux base box to run using VirtualBox.

Resize a Hard Disk for a Virtual Machine

Our Virtual Machines are provisioned using Vagrant from a Linux base box and run using VirtualBox. If the Hard Disk runs out of space and you cannot remove files to free up space, you can resize the Hard Disk using some VirtualBox and Linux commands.

Some assumptions

The following steps assume you've got a set-up like mine, where:

  • you use a Cygwin or Linux command-line terminal on your host machine
  • the VirtualBox install path is in your Windows (and therefore Cygwin bash) PATH environment variable
  • the vagrant boxes live at the path provisioning/boxes/mybox
  • your Cygwin HOME path is the same as your Windows %USERPROFILE% (see How do I change my Cygwin HOME folder after installation)
  • VirtualBox creates new Virtual Machines in the default location ~/VirtualBox\ VMs/

Steps to resize the hard disk

  1. Stop the virtual machine using Vagrant.

     # cd provisioning/boxes/mybox
     # vagrant halt
    
  2. Locate the VirtualBox VM and the HDD attached to its SATA Controller. In this instance we're assuming the VM is located in the default location and is named mybox_default_1382400620.

     # cd ~/VirtualBox\ VMs/mybox_default_1382400620
     # VBoxManage showvminfo mybox_default_1382400620 | grep ".vmdk"
    

    The showvminfo command should show you the location on the file-system of the VMDK-type HDD, along with the name of the Controller it is attached to. It will look something like this:

     SATA Controller (0, 0): C:\Users\user.name\VirtualBox VMs\mybox_default_1382400620\box-disk1.vmdk (UUID: 2f79610e-6c06-46d5-becb-448386ea40ec)
    
  3. Clone the VMDK-type disk to a VDI-type disk so it can be resized.

     # cd ~/VirtualBox\ VMs/mybox_default_1382400620
     # VBoxManage clonehd "box-disk1.vmdk" "clone-disk1.vdi" --format vdi
    

    NOTE: We do this because VirtualBox cannot resize VMDK-type disks. It has the added benefit of keeping our original disk as a backup during the resize operation.

  4. Find out how big the disk currently is, to determine how large to make it when resized. The output will show the current size and the Format variant. If Dynamic Allocation was used to create the disk, the Format variant will be "dynamic default".

     # VBoxManage showhdinfo "clone-disk1.vdi"
    
  5. Resize the cloned disk to give it more space. The size argument below is given in Megabytes (1 Gigabyte = 1024 Megabytes). Because this disk was created using dynamic allocation, I'm going to resize it to 100 Gigabytes (102400 MB).

     # VBoxManage modifyhd "clone-disk1.vdi" --resize 102400
    

    NOTE: If the disk was created using dynamic allocation (see previous step) then the physical size of the disk will not need to match its logical size - meaning you can create a very large logical disk that will increase in physical size only as space is used.

    TIP: To convert a Gigabyte value into Megabytes, multiply it by 1024.

  6. Find out the name of the Storage Controller to attach the newly resized disk to.

     # VBoxManage showvminfo mybox_default_1382400620 | grep "Storage"
    
  7. Attach the newly resized disk to the Storage Controller of the Virtual Machine. In our case we're going to use the same Storage Controller name, SATA Controller, as revealed in the step above.

     # VBoxManage storageattach mybox_default_1382400620 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium clone-disk1.vdi
    
  8. Reboot the Virtual Machine using Vagrant.

     # cd provisioning/boxes/mybox
     # vagrant up
    
  9. Open a command-line shell as root on the Virtual Machine via ssh.

     # vagrant ssh
     # sudo su -
    
  10. Find the name of the logical volume mapping that the file-system is on (e.g. /dev/mapper/VolGroupOS-lv_root).

     # df 
    
  11. Find the name of the physical volume (or device) that all the partitions are created on (e.g. /dev/sda).

     # fdisk -l
    
  12. Create a new primary partition for use as a Linux LVM physical volume.

     # fdisk /dev/sda
    
    1. Press p to print the partition table and identify the number of partitions. By default there are two: sda1 and sda2.
    2. Press n to create a new partition.
    3. Press p for primary.
    4. Press 3 for the partition number, depending on the output of the partition table print.
    5. Press Enter twice to accept the default First and Last cylinder.
    6. Press t to change the partition's system ID.
    7. Press 3 to select the newly created partition.
    8. Type 8e to set the partition's Hex Code to Linux LVM.
    9. Press w to write the changes to the partition table.
  13. Reboot the machine, then ssh back in when it is up again and switch to the root user once more.

     # reboot
     # vagrant ssh
     # sudo su -
    
  14. Create a new physical volume using the new primary partition just created.

     # pvcreate /dev/sda3
    
  15. Find out the name of the Volume Group that the Logical Volume mapping belongs to (e.g. VolGroupOS).

     # vgdisplay
    
  16. Extend the Volume Group to use the newly created physical volume.

     # vgextend VolGroupOS /dev/sda3
    
  17. Extend the logical volume to use more of the Volume Group space now available to it. You can either add a set amount of space, in Megabytes, Gigabytes or Terabytes, to control the growth of the disk:

     # lvextend -L+20G /dev/mapper/VolGroupOS-lv_root
    

    Or if you want to use all the free space now available to the Volume Group:

     # lvextend -l +100%FREE /dev/mapper/VolGroupOS-lv_root
    
  18. Resize the file-system to use up the space made available in the Logical Volume

     # resize2fs /dev/mapper/VolGroupOS-lv_root
    
  19. Verify that there is now more space available.

     # df -h 
    
  20. A restart of the VM using vagrant may be a good idea here, to ensure that all services are running correctly now that there is more space available. Exit the root user, exit the vagrant user and ssh session, then tell vagrant to restart the machine.

     # exit
     # exit
     # vagrant reload --provision
    
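As an aside, the Storage Controller name needed for storageattach in step 7 can be scraped from the showvminfo output shown in step 2, rather than read off by eye. The sketch below is illustrative only: the line variable stands in for real VBoxManage output, and the sed pattern simply strips the trailing " (port, device): path" portion.

```shell
#!/bin/sh
# Sample line as produced by `VBoxManage showvminfo ... | grep ".vmdk"`.
# In a real script this would come from VBoxManage itself.
line='SATA Controller (0, 0): /home/user/VirtualBox VMs/mybox_default_1382400620/box-disk1.vmdk (UUID: 2f79610e-6c06-46d5-becb-448386ea40ec)'

# The controller name is everything before the " (port, device)" suffix.
controller=$(printf '%s\n' "$line" | sed 's/ ([0-9][0-9]*, [0-9][0-9]*).*//')
echo "$controller"
```

The controller name varies by base box (commenters report "IDE Controller", "SCSI" or plain "SATA"), so always use whatever showvminfo reports rather than assuming "SATA Controller".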
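The Gigabyte-to-Megabyte conversion needed for step 5 can also be done in the shell rather than with an online calculator. A quick sketch (desired_gb is a made-up variable name, not part of any tool):

```shell
#!/bin/sh
# Compute the --resize argument for `VBoxManage modifyhd`, which
# expects a value in Megabytes. 1 Gigabyte = 1024 Megabytes.
desired_gb=100
resize_mb=$(( desired_gb * 1024 ))
echo "$resize_mb"   # prints 102400
# VBoxManage modifyhd "clone-disk1.vdi" --resize "$resize_mb"
```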
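The interactive fdisk dialogue in step 12 can also be scripted by piping the keystrokes in. This is only a sketch of the keystroke sequence: the partition number 3 assumes the default two-partition layout described in step 12, so check your own partition table first, and only ever pipe this into fdisk inside the throwaway VM.

```shell
#!/bin/sh
# The fdisk keystrokes from step 12, one per line: new (n) primary (p)
# partition 3, default first/last cylinder (two blank lines), change
# type (t) of partition 3 to 8e (Linux LVM), then write (w).
# Inside the VM you would run:
#   printf '%s\n' n p 3 '' '' t 3 8e w | sudo fdisk /dev/sda
printf '%s\n' n p 3 '' '' t 3 8e w
```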
@sefeng211

OMG, this one is working!!!!
A few things I need to change for my case:

  1. Instead of /dev/mapper/VolGroupOS-lv_root, mine is /dev/mapper/VolGroup-lv_root
  2. and I have "IDE Controller" not "SATA Controller".
    Everything else just works perfect for me
    Thanks man!

@Gcaufy

Gcaufy commented Jan 25, 2016

All goes well but failed in the last step, any idea?

resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/centos-root
Couldn't find valid filesystem superblock.

@lockfree7

You saved me!! Thank you very much.

I tried this on Ubuntu 14.04 LTS and Step 12 was a little different. I had to do it twice, and
got /dev/sda3 and /dev/sda4. It then worked well after I created the PV with /dev/sda4.

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 499711 248832 83 Linux
/dev/sda2 501758 167770111 83634177 5 Extended
/dev/sda3 499712 501757 1023 83 Linux
/dev/sda4 167770112 293601279 62915584 8e Linux LVM
/dev/sda5 501760 167770111 83634176 8e Linux LVM

@phirschybar

Using Debian 7 and various boxes... unfortunately this guide didn't work for me, I got caught on step 14 pvcreate /dev/sda3 where I got the error Device /dev/sda3 not found (or ignored by filtering). I wrote an alternate guide for people using Debian (this works specifically with the puphpet/debian75-x64 box. Here it is: https://medium.com/@phirschybar/resize-your-vagrant-virtualbox-disk-3c0fbc607817

@mudakara

Awesome Tutorial!!! just worked for me with slight modification
@aldrienht please use SATA instead of SATA Controller, it worked for me in that way.
Thanks

@han0tae

han0tae commented Mar 14, 2016

Very nice. I use xfs_growfs instead of resize2fs in centos 7

@nickshek

Great tutorial! it works by changing resize2fs to xfs_growfs in centos 7

@khsibr

khsibr commented Mar 31, 2016

Great man! thanks

@KevinYanCHN

Thank you~!!

@teyepe

teyepe commented Jun 9, 2016

Thanks a lot @christopher-hopper, saved my day!

@Xronger

Xronger commented Jun 22, 2016

@JonLevy also not working, in archlinux i can not find the /dev/mapper/VolGroup-lv_root, can someone help?

@LyleH

LyleH commented Jun 24, 2016

This is the most painful process in the world... But thanks for the article. I just really dislike resizing VMs :(

@nroose

nroose commented Jun 25, 2016

Yeah. Painful. Can anyone listening who creates boxes in the future create them bigger rather than smaller? What god of system administration decreed that 40 GB was the magic number? The stuff that is usually used makes it so that it only takes up as much space as it needs to on the physical disk anyway! How about 100GB or so?

@pc84

pc84 commented Aug 23, 2016

Excellent tutorial. Thanks!!

@cscott300

This was really helpful! Thanks!

For reference, worked flawlessly on the following configuration:

Mac OS X Yosemite 10.10.5
VirtualBox 5.0.26
Vagrant 1.8.5
Vagrantfile basebox snippet:

config.vm.box_url = "http://cloud.terry.im/vagrant/oraclelinux-6-x86_64.box"
config.vm.box = "oraclelinux-6-x86_64"

@shaibu

shaibu commented Nov 1, 2016

Thanks buddy!

@pionl

pionl commented Jan 3, 2017

Hi,

I'm not able to continue on partition. I've got different structure. I've tried several solutions.

I'm able to create /dev/sda3 partition, but it has empty VG name. I've also tried to create the VG group, but then I cant extend the /dev/sda1

My structure is:

df

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda1        9620408 8118692    989972  90% /
udev               10240       0     10240   0% /dev
tmpfs             693532    8544    684988   2% /run
tmpfs            1733828       0   1733828   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs            1733828       0   1733828   0% /sys/fs/cgroup

sudo fdisk -l

/dev/sda1  *        2048 19816447 19814400  9.5G 83 Linux
/dev/sda2       19818494 20764671   946178  462M  5 Extended
/dev/sda5       19818496 20764671   946176  462M 82 Linux swap / Solaris

Any ideas?

Thank you,

@IGR2014

IGR2014 commented Apr 3, 2018

Why no one noticed this: "(1024 Bytes = 1 Megabyte)"
Actually 1024 Bytes = 1 Kilobyte and 1024 Kilobytes = 1 Megabyte

@ferzerkerx

thanks!

@suzenhan

resize2fs /dev/mapper/centos-root on centos 7 error

resize2fs: Invalid magic number in the superblock when trying to open /dev/mapper/centos-root
Can not find a valid file system superblock.

Resolved by

fsadm resize /dev/mapper/centos-root

@bsofiato

Awesome dude !!!

@idzuwan

idzuwan commented Aug 16, 2018

@suzenhan saved a lot of time googling :)

@misachi

misachi commented Aug 22, 2018

awesome!! works like a charm

@BobCochran

I too had to resize a Vagrant box. The only tricky part for me was attaching the resized VDI. The output of 'vboxmanage showvminfo mdb4' was:

SCSI (0, 0): /home/usbob2/VirtualBox VMs/mdb4/ubuntu-xenial-16.04-cloudimg.vmdk (UUID: 265eb031-74de-4235-87e0-2ba1e41c5e6f)

So the name needed for 'vboxmanage storageattach' is "SCSI". I substituted this command for step 7:

vboxmanage storageattach mdb4 --storagectl "SCSI" --port 0 --device 0 --type hdd --medium clone-mdb4-disk1.vdi

This seems to have worked fine. I simply went back to my vagrant box and did 'vagrant up' and then 'vagrant ssh' and everything came up just fine. Thanks to the VDI, I now have a lot more space for the VM. Because my flavor of Ubuntu 16.04 LTS does not use LVM, I didn't need to do any of the LVM-related steps after step 8.

@roomm

roomm commented Oct 16, 2018

Another working option is https://github.com/sprotheroe/vagrant-disksize, which is automatic and silent. Only modifying the Vagrantfile is needed.

@nawel

nawel commented Dec 27, 2018

Thank you, very helpful tutorial !

Some substitutions I needed to make:

  1. The virtual disk had to be closed before I could clone it. I used this command line before step 3:
    VBoxManage closemedium disk box-disk1.vmdk
  2. I used "IDE Controller" instead of "SATA Controller" in step 7
  3. Centos 7 default filesystem is xfs, so in step 18 I used xfs_growfs instead of resize2fs

@nwinkler

Worked great for me on a CentOS box. I used the https://github.com/sprotheroe/vagrant-disksize plugin that was mentioned above. You can automate the whole thing by putting the following in the Vagrantfile:

config.disksize.size = "50GB"

$script = <<-SCRIPT
    # START FS RESIZE

    # References:
    # - https://gist.github.com/christopher-hopper/9755310
    # - https://superuser.com/questions/332252/how-to-create-and-format-a-partition-using-a-bash-script
    (
    echo n   # Create new partition
    echo p   # Primary partition
    echo 3   # Third partition
    echo     # Default first
    echo     # Default last
    echo t   # Change partition ID
    echo 3   # Select 3rd partition
    echo 8e  # Linux LVM
    echo w   # Write partition table
    ) | sudo fdisk /dev/sda

    sudo partprobe

    sudo pvcreate /dev/sda3

    sudo vgextend centos /dev/sda3

    sudo lvextend -l +100%FREE /dev/mapper/centos-root

    sudo xfs_growfs /dev/centos/root

    df -h

    # END FS RESIZE
SCRIPT

config.vm.provision "shell", inline: $script

@BobCochran

BobCochran commented Jun 21, 2019

When I did

vboxmanage showvminfo nnnn | egrep '.vmdk'

...it turned out that I have two files for that virtual machine

SCSI (0, 0): /home/aaaa/VirtualBox VMs/nnnn/box-disk001.vmdk (UUID: 2ac4b4f5-5557-4a2d-ac92-adaff48c5ab8)
SCSI (1, 0): /home/aaaa/VirtualBox VMs/nnnn/box-disk002.vmdk (UUID: b0fb2148-be8d-4739-98bd-5a30525b7b48)

File

box-disk002.vmdk

could be a snapshot? I do not know enough about virtual machines to decide.

I was not sure how to convert both of these to .vdi format, so I decided to simply trust the 'clonehd' utility:

VBoxManage clonehd "box-disk001.vmdk" "clone-disk001.vdi" --format vdi

Then I resized it:

VBoxManage modifyhd "clone-disk001.vdi" --resize 102400

Then I attached it:

vboxmanage storageattach aaaa --storagectl "SCSI" --port 0 --device 0 --type hdd --medium clone-disk001.vdi

..and I dropped the second SCSI port although I cannot give you a good reason for doing so except that I did not want box-disk002.vmdk to be used:

vboxmanage storageattach entmdb5 --storagectl "SCSI" --port 1 --device 0 --type hdd --medium none

When I did a 'vagrant up', the new machine did indeed boot up. However it still shows 9.63 GiB as the size:

Usage of /: 70.9% of 9.63GB

EDIT: I had missed the two final steps needed. First, I had to resize the device partition /dev/sda1 to the new size of 100 GiB. I was at a loss for how to do this from the command line, without using the graphical gparted utility. Eventually I recalled that I could simply 'vagrant up' the new virtual machine, ssh into it, and start fdisk as a sudoer: print partition #1, note its starting sector and its filesystem id of '83', delete that partition, then create a new partition #1 with the same starting sector value, the default ending sector value, and the same id of 83. After doing this I wrote the partition table, rebooted the virtual machine, ssh-ed into it again, and executed resize2fs /dev/sda1. I did another reboot, and now the system shows a total size of 98-some GiB.

@isayeter

isayeter commented Dec 4, 2019

At step #18 (resize2fs), if you face the "Bad magic number in super-block while trying to open" error, then you need to run fsadm resize /dev/centos/root and continue with the other steps.

@chikolokoy08

@christopher-hopper Thank you for this clear and life saving tutorial. You're a gift to mankind. Stay safe.
