
Resize a Hard Disk for a Virtual Machine

Our Virtual Machines are provisioned using Vagrant from a Linux base box and run using VirtualBox. If the hard disk runs out of space and you cannot remove files to free up space, you can resize the hard disk using a few VirtualBox and Linux commands.

Some assumptions

The following steps assume you've got a set-up like mine, where:

  • you use a Cygwin or Linux command-line terminal on your host machine
  • the VirtualBox install path is in your Windows (and therefore Cygwin bash) PATH environment variable
  • the vagrant boxes live at the path provisioning/boxes/mybox
  • your Cygwin HOME path is the same as your Windows %USERPROFILE% (see How do I change my Cygwin HOME folder after installation)
  • VirtualBox creates new Virtual Machines in the default location ~/VirtualBox\ VMs/

Steps to resize the hard disk

  1. Stop the virtual machine using Vagrant.

     # cd provisioning/boxes/mybox
     # vagrant halt
  2. Locate the VirtualBox VM and the HDD attached to its SATA Controller. In this instance we're assuming the VM is located in the default location and is named mybox_default_1382400620.

     # cd ~/VirtualBox\ VMs/mybox_default_1382400620
     # VBoxManage showvminfo mybox_default_1382400620 | grep ".vmdk"

    The showvminfo command should show you the location on the file-system of the VMDK-type HDD, along with the name of the controller it is attached to. It will look something like this:

     SATA Controller (0, 0): C:\Users\\VirtualBox VMs\mybox_default_1382400620\box-disk1.vmdk (UUID: 2f79610e-6c06-46d5-becb-448386ea40ec)
  3. Clone the VMDK-type disk to a VDI-type disk so it can be resized.

     # cd ~/VirtualBox\ VMs/mybox_default_1382400620
     # VBoxManage clonehd "box-disk1.vmdk" "clone-disk1.vdi" --format vdi

    NOTE: We do this because VMDK-type disks cannot be resized by VirtualBox. It has the added benefit of keeping our original disk as a backup during the resize operation.

  4. Find out how big the disk is currently, to determine how large to make it when resized. The information will show the current size and the Format variant. If Dynamic Allocation was used to create the disk, the Format variant will be "dynamic default".

     # VBoxManage showhdinfo "clone-disk1.vdi"
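    TIP: If you want to script this step, the current capacity can be parsed out of the showhdinfo output. A minimal sketch, assuming the output contains a "Capacity:" line in the usual VBoxManage format (which may differ between versions):

```shell
# Extract the current capacity (in Megabytes) from `VBoxManage showhdinfo`.
# Assumes a line of the form "Capacity:       40960 MBytes".
capacity_mb() { awk -F': *' '/^Capacity:/ {print $2+0}'; }

# Usage: VBoxManage showhdinfo "clone-disk1.vdi" | capacity_mb
echo "Capacity:       40960 MBytes" | capacity_mb   # prints 40960
```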
  5. Resize the cloned disk to give it more space. The size argument below is given in Megabytes (1 Gigabyte = 1024 Megabytes). Because this disk was created using dynamic allocation I'm going to resize it to 100 Gigabytes.

     # VBoxManage modifyhd "clone-disk1.vdi" --resize 102400

    NOTE: If the disk was created using dynamic allocation (see previous step) then the physical size of the disk will not need to match its logical size - meaning you can create a very large logical disk that will increase in physical size only as space is used.

    TIP: To convert a Gigabyte value into Megabytes use an online calculator.
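    Alternatively, a one-line shell calculation does the conversion (1 Gigabyte = 1024 Megabytes). A minimal sketch:

```shell
# Convert a Gigabyte value into the Megabyte value VBoxManage expects.
gb_to_mb() { echo $(( $1 * 1024 )); }

gb_to_mb 100   # prints 102400
```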

  6. Find out the name of the Storage Controller to attach the newly resized disk to.

     # VBoxManage showvminfo mybox_default_1382400620 | grep "Storage"
  7. Attach the newly resized disk to the Storage Controller of the Virtual Machine. In our case the Storage Controller is named SATA Controller, as revealed in the step above.

     # VBoxManage storageattach mybox_default_1382400620 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium clone-disk1.vdi
  8. Reboot the Virtual Machine using Vagrant.

     # cd provisioning/boxes/mybox
     # vagrant up
  9. Open a command-line shell as root on the Virtual Machine via ssh.

     # vagrant ssh
     # sudo su -
  10. Find the name of the logical volume mapping the file-system is on (e.g. /dev/mapper/VolGroupOS-lv_root).

     # df 
  11. Find the name of the physical volume (or device) that all the partitions are created on (e.g. /dev/sda).

     # fdisk -l
  12. Create a new primary partition for use as a Linux LVM

     # fdisk /dev/sda
    1. Press p to print the partition table to identify the number of partitions. By default there are two - sda1 and sda2.
    2. Press n to create a new primary partition.
    3. Press p for primary.
    4. Press 3 for the partition number, depending on the output of the partition table print.
    5. Press Enter two times to accept the default First and Last cylinder.
    6. Press t to change a partition's system ID.
    7. Press 3 to select the newly created partition.
    8. Type 8e to change the partition's Hex Code to Linux LVM.
    9. Press w to write the changes to the partition table.
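    The keystrokes above can also be fed to fdisk non-interactively. This is a sketch only, assuming your partition table matches the steps; it is destructive, so try it on a disposable VM first:

```shell
# Non-interactive sketch of the fdisk keystrokes from step 12
# (assumed sequence: n, p, 3, two defaults, t, 3, 8e, w).
# DESTRUCTIVE: only run against a VM you can afford to rebuild.
fdisk_input="$(printf 'n\np\n3\n\n\nt\n3\n8e\nw\n')"

# To apply (inside the VM, as root):
#   echo "$fdisk_input" | fdisk /dev/sda
echo "$fdisk_input" | head -n 1   # first keystroke is n
```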
  13. Reboot the machine, then ssh back in when it is up again and switch to the root user once more.

     # reboot
     # vagrant ssh
     # sudo su -
  14. Create a new physical volume using the new primary partition just created.

     # pvcreate /dev/sda3
  15. Find out the name of the Volume Group that the Logical Volume mapping belongs to (e.g. VolGroupOS).

     # vgdisplay
  16. Extend the Volume Group to use the newly created physical volume.

     # vgextend VolGroupOS /dev/sda3
  17. Extend the logical volume to use more of the Volume Group size now available to it. You can either tell it to add a set amount of space in Megabytes, Gigabytes or Terabytes, and control the growth of the Disk:

     # lvextend -L+20G /dev/mapper/VolGroupOS-lv_root

    Or if you want to use all the free space now available to the Volume Group:

     # lvextend -l +100%FREE /dev/mapper/VolGroupOS-lv_root
  18. Resize the file-system to use up the space made available in the Logical Volume

     # resize2fs /dev/mapper/VolGroupOS-lv_root
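    NOTE: resize2fs only handles ext2/ext3/ext4 file-systems; if your root file-system is XFS (as on CentOS 7), use xfs_growfs instead. A small sketch that picks the command from the file-system type, assuming the logical volume path used throughout this guide:

```shell
# Choose the grow command for the root file-system (sketch; the logical
# volume path below is the example name assumed throughout this guide).
grow_cmd() {
  case "$1" in
    ext2|ext3|ext4) echo "resize2fs /dev/mapper/VolGroupOS-lv_root" ;;
    xfs)            echo "xfs_growfs /" ;;
    *)              echo "unsupported filesystem: $1" >&2; return 1 ;;
  esac
}

# Usage inside the VM: run the command printed for your root fs type,
# e.g.  grow_cmd "$(df -T / | awk 'NR==2 {print $2}')"
grow_cmd ext4
```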
  19. Verify that there is now more space available.

     # df -h 
  20. A restart of the VM using vagrant may be a good idea here, to ensure that all services are running correctly now that there is more space available. Exit the root user, exit the vagrant user and ssh session, then tell vagrant to restart the machine.

     # exit
     # exit
     # vagrant reload --provision
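
For reference, here is the host-side portion (steps 1 to 8) condensed into a single sketch. The VM name, file names and controller name are the examples used above; adjust them for your set-up. By default it only prints the commands; set RUN=1 to execute them:

```shell
#!/bin/sh
# Host-side steps 1-8 in one place (sketch). The VM name, disk file names
# and the controller name "SATA Controller" are the examples from this
# guide, not detected values -- change them to match your machine.
VM="mybox_default_1382400620"
SIZE_MB=102400

# Dry-run wrapper: prints each command unless RUN=1 is set.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

run vagrant halt
run VBoxManage clonehd "box-disk1.vmdk" "clone-disk1.vdi" --format vdi
run VBoxManage modifyhd "clone-disk1.vdi" --resize "$SIZE_MB"
run VBoxManage storageattach "$VM" --storagectl "SATA Controller" \
    --port 0 --device 0 --type hdd --medium clone-disk1.vdi
run vagrant up
```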

rhacker commented Sep 8, 2014

excellent tutorial. Thanks a lot :D

tobsan commented Sep 17, 2014

I agree with rhacker, very nice! However on my setup (ubuntu/trusty32 box), steps 9 and onward were unnecessary, since the image did not use a volume group but was mounted directly as /dev/sda1.

Oh, and my host system is running debian testing (jessie)

guicaro commented Sep 18, 2014

This is awesome! Similar to @tobsan I also did not have to do steps 9 and further. Thanks

Thanks a lot! Even for those not using LVM, it's good to have it all in the same place.. you never know XD


This didn't work for me : / . My VM doesn't have a Volume Group, but the VM doesn't recognize more space being available after step 8. Not sure what to do ..

sese commented Oct 7, 2014

Great, it worked ! Thanks man, you saved my day !

👍 🌟 🌟 🌟 😄

budhrg commented Dec 3, 2014

Awesome!!!! 👍 ☺️ .

However, found two typo errors at step 4 and step 7.
Step 4 : Command started with VboxManage instead of VBoxManage.
Step 7: SATA Controller should be SATAController (in my case).

Thanks. Tutorial is Great!!

rohanpn commented Dec 5, 2014

👍 😊 worked for me as well.

Works like a charm 🍻 👍

My VM has CentOS7, so in step 18 I had to replace resize2fs with xfs_growfs:

xfs_growfs /dev/centos/root

Thanks! Saved me!

In Step 3, give full path for vmdk file, so the command will be changed to

VBoxManage clonehd " ~/VirtualBox\ VMs/mybox_default_1382400620/box-disk1.vmdk" "clone-disk1.vdi" --format vdi

Otherwise, I got some errors as:

VBoxManage: error: Cannot register the hard disk 'xxxxxxxxxx.vmdk' {3b16a523-1637-45fe-ae4f-0b6c78736ba5} because a hard disk 'xxxxxxxxx.vmdk' with UUID {XXXXXXXX} already exists
VBoxManage: error: Details: code NS_ERROR_INVALID_ARG (0x80070057), component VirtualBox, interface IVirtualBox, callee nsISupports
VBoxManage: error: Context: "OpenMedium(Bstr(pszFilenameOrUuid).raw(), enmDevType, enmAccessMode, fForceNewUuidOnOpen, pMedium.asOutParam())" at line 178 of file VBoxManageDisk.cpp

In step 7, I have to change from SATA Controller to IDE Controller

To avoid following these complex steps, I created a repository to do the job automatically (using the vagrant box "fillup/centos-6.5-x86_64-minimal" as a sample, which has 3GB on root).

manojtr commented Mar 20, 2015

Awesome tutorial!



Great tutorial! Thanks!

pjullah commented Jul 2, 2015

Thank you so much for this.

khyurri commented Jul 13, 2015

Change line «VboxManage showhdinfo "clone-disk1.vdi"» to «VBoxManage showhdinfo "clone-disk1.vdi"», please.

So happy to see that this helped people. I've made a few changes as suggested in your comments. Thanks also for helping others with your own experiences. Had to run this process again for the first time in over a year and was surprised to see it had been helping others.

I originally wrote this so I wouldn't have to resize VMs for my colleagues at a previous job. The steps may be long-winded or complex, but I wanted to explain what was being done so the reader could understand a bit about what they were actually doing. That helps a lot when things don't work as expected.

p4ali commented Aug 14, 2015

Step 12, sub-step 5: the default value does not work for me. I have to pick the first unused block.

Other than that, everything works for me. Thanks!

yeah xfs_growfs is nice @CentOS
very nice

JonLevy commented Aug 21, 2015

Doesn't work for me (Ubuntu 14.04). Goes off-track at step 10. Can't see a logical volume mapping. Also, there is no volume group (step 15).

Excellent! thanks for the detailed step

Very helpful. Thank you.

jstsch commented Oct 6, 2015

Good write-up, thanks a lot! Also, note that if you use VirtualBox snapshots, you also need to manually resize all the snapshots individually using VBoxManage.

For example I have the below output after running the command in Step 6:

$ VBoxManage showvminfo InfomixDev-CentOS7-1 | grep "Storage"
Storage Controller Name (0): IDE
Storage Controller Type (0): PIIX4
Storage Controller Instance Number (0): 0
Storage Controller Max Port Count (0): 2
Storage Controller Port Count (0): 2
Storage Controller Bootable (0): on
Storage Controller Name (1): SATA
Storage Controller Type (1): IntelAhci
Storage Controller Instance Number (1): 0
Storage Controller Max Port Count (1): 30
Storage Controller Port Count (1): 1
Storage Controller Bootable (1): on

I got ERROR on the next step (7) running:

VBoxManage storageattach InfomixDev-CentOS7-1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium clone-disk1.vdi

ERROR: VBoxManage.exe: error: Could not find a controller named 'SATA Controller'

Please help!

@christopher-hopper I wish I could pay you a beer sir ! Thank you for this tutorial, worked like a charm.

@aldrienht It appears you have an IDE controller for your storage. You need to do VBoxManage storageattach InfomixDev-CentOS7-1 --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium clone-disk1.vdi and not "SATA Controller"

Great tutorial, thanks a bunch!

marzn commented Dec 8, 2015

First: Thank you =)

In step 13: You do not need to restart the system, just use partprobe

OMG, this one is working!!!!
A few things I need to change for my case:

  1. Instead of /dev/mapper/VolGroupOS-lv_root, mine is /dev/mapper/VolGroup-lv_root
  2. and I have "IDE Controller" not "SATA Controller".
    Everything else just works perfect for me
    Thanks man!

Gcaufy commented Jan 25, 2016

All goes well but failed in the last step, any idea?

resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/centos-root
Couldn't find valid filesystem superblock.

You saved me!! Thank you very much.

I tried this on Ubuntu 14.04 LTS and Step 12 was a little bit different. I had to do it twice and
got the result with /dev/sda3 and /dev/sda4. It then worked well after I created the PV with /dev/sda4.

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 499711 248832 83 Linux
/dev/sda2 501758 167770111 83634177 5 Extended
/dev/sda3 499712 501757 1023 83 Linux
/dev/sda4 167770112 293601279 62915584 8e Linux LVM
/dev/sda5 501760 167770111 83634176 8e Linux LVM

Using Debian 7 and various boxes... unfortunately this guide didn't work for me, I got caught on step 14 pvcreate /dev/sda3 where I got the error Device /dev/sda3 not found (or ignored by filtering). I wrote an alternate guide for people using Debian (this works specifically with the puphpet/debian75-x64 box. Here it is:

Awesome Tutorial!!! just worked for me with slight modification
@aldrienht please use SATA instead of SATA Controller, it worked for me in that way.

han0tae commented Mar 14, 2016

Very nice. I use xfs_growfs instead of resize2fs in centos 7

Great tutorial! it works by changing resize2fs to xfs_growfs in centos 7

khsibr commented Mar 31, 2016

Great man! thanks

Thank you~!!

teyepe commented Jun 9, 2016

Thanks a lot @christopher-hopper, saved my day!

Xronger commented Jun 22, 2016

@JonLevy also not working, in archlinux i can not find the /dev/mapper/VolGroup-lv_root, can someone help?

LyleH commented Jun 24, 2016

This is the most painful process in the world... But thanks for the article. I just really dislike resizing VMs :(

nroose commented Jun 25, 2016

Yeah. Painful. Can anyone listening who creates boxes in the future create them bigger rather than smaller? What god of system administration decreed that 40 GB was the magic number? The stuff that is usually used makes it so that it only takes up as much space as it needs to on the physical disk anyway! How about 100GB or so?

pc84 commented Aug 23, 2016

Excellent tutorial. Thanks!!

This was really helpful! Thanks!

For reference, worked flawlessly on the following configuration:

Mac OS X Yosemite 10.10.5
VirtualBox 5.0.26
Vagrant 1.8.5
Vagrantfile basebox snippet:

config.vm.box_url = ""
config.vm.box = "oraclelinux-6-x86_64"

shaibu commented Nov 1, 2016

Thanks buddy!

pionl commented Jan 3, 2017


I'm not able to continue on partition. I've got different structure. I've tried several solutions.

I'm able to create /dev/sda3 partition, but it has empty VG name. I've also tried to create the VG group, but then I cant extend the /dev/sda1

My structure is:


Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda1        9620408 8118692    989972  90% /
udev               10240       0     10240   0% /dev
tmpfs             693532    8544    684988   2% /run
tmpfs            1733828       0   1733828   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs            1733828       0   1733828   0% /sys/fs/cgroup

sudo fdisk -l

/dev/sda1  *        2048 19816447 19814400  9.5G 83 Linux
/dev/sda2       19818494 20764671   946178  462M  5 Extended
/dev/sda5       19818496 20764671   946176  462M 82 Linux swap / Solaris

Any ideas?

Thank you,
