Resize a Hard Disk for a Virtual Machine

Our Virtual Machines are provisioned using Vagrant from a Linux base box to run using VirtualBox. If the Hard Disk space runs out and you cannot remove files to free up space, you can resize the Hard Disk using some VirtualBox and Linux commands.

Some assumptions

The following steps assume you've got a set-up like mine, where:

  • you use a Cygwin or Linux command-line terminal on your host machine
  • the VirtualBox install path is in your Windows (and therefore Cygwin bash) PATH environment variable
  • the vagrant boxes live at the path provisioning/boxes/mybox
  • your Cygwin HOME path is the same as your Windows %USERPROFILE% (see How do I change my Cygwin HOME folder after installation)
  • VirtualBox creates new Virtual Machines in the default location ~/VirtualBox\ VMs/

Steps to resize the hard disk

  1. Stop the virtual machine using Vagrant.

     # cd provisioning/boxes/mybox
     # vagrant halt
    
  2. Locate the VirtualBox VM and the HDD attached to its SATA Controller. In this instance we're assuming the VM is located in the default location and is named mybox_default_1382400620.

     # cd ~/VirtualBox\ VMs/mybox_default_1382400620
     # VBoxManage showvminfo mybox_default_1382400620 | grep ".vmdk"
    

    The showvminfo command should show you the location on the file-system of the VMDK-type HDD, along with the name of the Controller it is attached to - it will look something like this:

     SATA Controller (0, 0): C:\Users\user.name\VirtualBox VMs\mybox_default_1382400620\box-disk1.vmdk (UUID: 2f79610e-6c06-46d5-becb-448386ea40ec)
    
  3. Clone the VMDK type disk to a VDI type disk so it can be resized.

     # cd ~/VirtualBox\ VMs/mybox_default_1382400620
     # VBoxManage clonehd "box-disk1.vmdk" "clone-disk1.vdi" --format vdi
    

    NOTE: We do this because VMDK type disks cannot be resized by VirtualBox. It has the added benefit of allowing us to keep our original disk backed up during the resize operation.

  4. Find out how big the disk is currently, to determine how large to make it when resized. The information will show the current size and the Format variant. If Dynamic Allocation was used to create the disk, the Format variant will be "dynamic default".

     # VBoxManage showhdinfo "clone-disk1.vdi"
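
    For a dynamically allocated disk the output will include lines roughly like the following (the values here are illustrative, not from a real box):

     Storage format: VDI
     Format variant: dynamic default
     Capacity:       40960 MBytes
     Size on disk:   10240 MBytes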
    
  5. Resize the cloned disk to give it more space. The size argument below is given in Megabytes (1024 Megabytes = 1 Gigabyte). Because this disk was created using dynamic allocation I'm going to resize it to 100 Gigabytes (102400 Megabytes).

     # VBoxManage modifyhd "clone-disk1.vdi" --resize 102400
    

    NOTE: If the disk was created using dynamic allocation (see previous step) then the physical size of the disk will not need to match its logical size - meaning you can create a very large logical disk that will increase in physical size only as space is used.

    TIP: To convert a Gigabyte value into Megabytes, multiply it by 1024 (or use an online calculator).
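
    For example, using shell arithmetic on the host:

     # echo $((100 * 1024))
     102400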

  6. Find out the name of the Storage Controller to attach the newly resized disk to.

     # VBoxManage showvminfo mybox_default_1382400620 | grep "Storage"
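
    The output lists each controller by name. On a set-up like mine the relevant lines look something like this (illustrative - other base boxes report names like "IDE Controller", "SCSI" or plain "SATA" instead, so use whatever name your VM shows):

     Storage Controller Name (0):    SATA Controller
     Storage Controller Type (0):    IntelAhci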
    
  7. Attach the newly resized disk to the Storage Controller of the Virtual Machine. In our case we're going to use the same name for the Storage Controller, SATA Controller, as revealed in the step above.

     # VBoxManage storageattach mybox_default_1382400620 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium clone-disk1.vdi
    
  8. Reboot the Virtual Machine using Vagrant.

     # cd provisioning/boxes/mybox
     # vagrant up
    
  9. Open a command-line shell as root on the Virtual Machine via ssh.

     # vagrant ssh
     # sudo su -
    
  10. Find the name of the logical volume mapping the file-system is on (e.g. /dev/mapper/VolGroupOS-lv_root).

     # df 
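
    Look for the file-system mounted on / - something like this (sizes illustrative):

     Filesystem                     1K-blocks     Used Available Use% Mounted on
     /dev/mapper/VolGroupOS-lv_root  40185208 38500632         0 100% /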
    
  11. Find the name of the physical volume (or device) that all the partitions are created on (e.g. /dev/sda).

     # fdisk -l
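
    The first line of output should already show the disk's new size (illustrative, for the 100 Gigabyte resize above):

     Disk /dev/sda: 107.4 GB, 107374182400 bytes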
    
  12. Create a new primary partition for use as a Linux LVM. A sample fdisk session is sketched after the sub-steps below.

     # fdisk /dev/sda
    
    1. Press p to print the partition table to identify the number of partitions. By default there are two - sda1 and sda2.
    2. Press n to create a new primary partition.
    3. Press p for primary.
    4. Press 3 for the partition number, depending on the output of the partition table print.
    5. Press Enter two times to accept the default First and Last cylinder.
    6. Press t to change a partition's system ID.
    7. Press 3 to select the newly created partition.
    8. Type 8e to set the partition's Hex Code to Linux LVM.
    9. Press w to write the changes to the partition table.
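
    Put together, the interactive session looks roughly like this (a sketch - prompts vary a little between fdisk versions):

     # fdisk /dev/sda
     Command (m for help): n
     Command action
        e   extended
        p   primary partition (1-4)
     p
     Partition number (1-4): 3
     First cylinder (default): <Enter>
     Last cylinder (default): <Enter>
     Command (m for help): t
     Partition number (1-4): 3
     Hex code (type L to list codes): 8e
     Command (m for help): w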
  13. Reboot the machine, then ssh back in when it is up again and switch to the root user once more.

     # reboot
     # vagrant ssh
     # sudo su -
    
  14. Create a new physical volume using the new primary partition just created.

     # pvcreate /dev/sda3
    
  15. Find out the name of the Volume Group that the Logical Volume mapping belongs to (e.g. VolGroupOS).

     # vgdisplay
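
    The line we need from the output is the Volume Group name (illustrative):

     VG Name               VolGroupOS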
    
  16. Extend the Volume Group to use the newly created physical volume.

     # vgextend VolGroupOS /dev/sda3
    
  17. Extend the logical volume to use more of the Volume Group space now available to it. You can either add a set amount of space (in Megabytes, Gigabytes or Terabytes) to control the growth of the disk:

     # lvextend -L+20G /dev/mapper/VolGroupOS-lv_root
    

    Or if you want to use all the free space now available to the Volume Group:

     # lvextend -l +100%FREE /dev/mapper/VolGroupOS-lv_root
    
  18. Resize the file-system to use the space made available in the Logical Volume.

     # resize2fs /dev/mapper/VolGroupOS-lv_root
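
    NOTE: resize2fs works on ext2/ext3/ext4 file-systems. If your root file-system is XFS (the default on CentOS 7, for example), several commenters below report success growing it with xfs_growfs instead, e.g.:

     # xfs_growfs /dev/mapper/VolGroupOS-lv_root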
    
  19. Verify that there is now more space available.

     # df -h 
    
  20. A restart of the VM using vagrant may be a good idea here, to ensure that all services are running correctly now that there is more space available. Exit the root user, exit the vagrant user and ssh session, then tell vagrant to restart the machine.

     # exit
     # exit
     # vagrant reload --provision
    
rhacker commented Sep 8, 2014

excellent tutorial. Thanks a lot :D

tobsan commented Sep 17, 2014

I agree with rhacker, very nice! However on my setup (ubuntu/trusty32 box), steps 9 and onward were unnecessary, since the image did not use a volume group but was mounted directly as /dev/sda1.

Oh, and my host system is running debian testing (jessie)

guicaro commented Sep 18, 2014

This is awesome! Similar to @tobsan I also did not have to do steps 9 and further. Thanks

speedlight commented Sep 25, 2014

Thanks a lot! Even without using LVM, it's good to have it all in the same place... you never know XD

11digitlabs commented Sep 25, 2014

THANK YOU!

fresheneesz commented Oct 5, 2014

This didn't work for me : / . My VM doesn't have a Volume Group, but the VM doesn't recognize more space being available after step 8. Not sure what to do ..

sese commented Oct 7, 2014

Great, it worked! Thanks man, you saved my day!

dblodgett-usgs commented Nov 12, 2014

👍 🌟 🌟 🌟 😄

budhram commented Dec 3, 2014

Awesome!!!! 👍 ☺️ .

However, found two typo errors at step 4 and step 7.
Step 4 : Command started with VboxManage instead of VBoxManage.
Step 7: SATA Controller should be SATAController (in my case).

Thanks. Tutorial is Great!!

rohanpn commented Dec 5, 2014

👍 😊 worked for me as well.

tlenclos commented Dec 17, 2014

Works like a charm 🍻 👍

vmexplorer commented Dec 29, 2014

My VM has CentOS7, so in step 18 I had to replace resize2fs with xfs_growfs:

xfs_growfs /dev/centos/root

thecatontheflat commented Dec 30, 2014

Thanks! Saved me!

ozbillwang commented Feb 25, 2015

In Step 3, give the full path for the vmdk file, so the command becomes:

VBoxManage clonehd "$HOME/VirtualBox VMs/mybox_default_1382400620/box-disk1.vmdk" "clone-disk1.vdi" --format vdi

Otherwise, I got errors like:

VBoxManage: error: Cannot register the hard disk 'xxxxxxxxxx.vmdk' {3b16a523-1637-45fe-ae4f-0b6c78736ba5} because a hard disk 'xxxxxxxxx.vmdk' with UUID {XXXXXXXX} already exists
VBoxManage: error: Details: code NS_ERROR_INVALID_ARG (0x80070057), component VirtualBox, interface IVirtualBox, callee nsISupports
VBoxManage: error: Context: "OpenMedium(Bstr(pszFilenameOrUuid).raw(), enmDevType, enmAccessMode, fForceNewUuidOnOpen, pMedium.asOutParam())" at line 178 of file VBoxManageDisk.cpp

In step 7, I had to change SATA Controller to IDE Controller.

ozbillwang commented Mar 20, 2015

To avoid following these complex steps, I created a repository to do the job automatically (https://github.com/SydOps/vagrant-box-resize). It uses the vagrant box "fillup/centos-6.5-x86_64-minimal" as a sample, which has 3GB on root.

manojtr commented Mar 20, 2015

Awesome tutorial!

houtianze commented Apr 6, 2015

👍

cporter71 commented Apr 14, 2015

👍

PontusNyberg commented May 8, 2015

Great tutorial! Thanks!

pjullah commented Jul 2, 2015

Thank you so much for this.

khyurri commented Jul 13, 2015

Change line «VboxManage showhdinfo "clone-disk1.vdi"» to «VBoxManage showhdinfo "clone-disk1.vdi"», please.

christopher-hopper commented Jul 20, 2015

So happy to see that this helped people. I've made a few changes as suggested in your comments. Thanks also for helping others with your own experiences. Had to run this process again for the first time in over a year and was surprised to see it had been helping others.

I originally wrote this so I wouldn't have to resize VMs for my colleagues at a previous job. The steps may be long-winded or complex, but I wanted to explain what was being done so the reader could understand a bit about what they were actually doing. That helps a lot when things don't work as expected.

p4ali commented Aug 14, 2015

Step 12. v. The default value does not work for me. I have to pick the first unused block.

Other than that, everything works for me. Thanks!

foxundermoon commented Aug 20, 2015

yeah xfs_growfs is nice @CentOS
very nice

JonLevy commented Aug 21, 2015

Doesn't work for me (Ubuntu 14.04). Goes off-track at step 10. Can't see a logical volume mapping. Also, there is no volume group (step 15).

kameshsampath commented Sep 10, 2015

Excellent! Thanks for the detailed steps.

seterrychen commented Sep 22, 2015

Very helpful. Thank you.

jstsch commented Oct 6, 2015

Good write-up, thanks a lot! Also, note that if you use VirtualBox snapshots, you also need to manually resize all the snapshots individually using VBoxManage.

aldrienht commented Nov 10, 2015

For example I have the below output after running the command in Step 6:

$ VBoxManage showvminfo InfomixDev-CentOS7-1 | grep "Storage"
Storage Controller Name (0): IDE
Storage Controller Type (0): PIIX4
Storage Controller Instance Number (0): 0
Storage Controller Max Port Count (0): 2
Storage Controller Port Count (0): 2
Storage Controller Bootable (0): on
Storage Controller Name (1): SATA
Storage Controller Type (1): IntelAhci
Storage Controller Instance Number (1): 0
Storage Controller Max Port Count (1): 30
Storage Controller Port Count (1): 1
Storage Controller Bootable (1): on

I got ERROR on the next step (7) running:

VBoxManage storageattach InfomixDev-CentOS7-1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium clone-disk1.vdi

ERROR: VBoxManage.exe: error: Could not find a controller named 'SATA Controller'

Please help!

leiluspocus commented Nov 21, 2015

@christopher-hopper I wish I could buy you a beer, sir! Thank you for this tutorial, worked like a charm.

@aldrienht It appears you have an IDE controller for your storage. You need to do VBoxManage storageattach InfomixDev-CentOS7-1 --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium clone-disk1.vdi and not "SATA Controller"

Fandekasp commented Nov 24, 2015

Great tutorial, thanks a bunch!

marzn commented Dec 8, 2015

First: Thank you =)

In step 13: You do not need to restart the system, just use partprobe

fengshuo211 commented Dec 17, 2015

OMG, this one is working!!!!
A few things I need to change for my case:

  1. Instead of /dev/mapper/VolGroupOS-lv_root, mine is /dev/mapper/VolGroup-lv_root
  2. and I have "IDE Controller" not "SATA Controller".

Everything else just works perfect for me. Thanks man!

Gcaufy commented Jan 25, 2016

All goes well but failed in the last step, any idea?

resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/mapper/centos-root
Couldn't find valid filesystem superblock.

lockfree7 commented Feb 3, 2016

You saved me!! Thank you very much.

I tried this on Ubuntu 14.04 LTS and Step 12 was a little bit different. I had to do it twice and
got /dev/sda3 and /dev/sda4. And then it worked well after I created the PV with /dev/sda4.

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 499711 248832 83 Linux
/dev/sda2 501758 167770111 83634177 5 Extended
/dev/sda3 499712 501757 1023 83 Linux
/dev/sda4 167770112 293601279 62915584 8e Linux LVM
/dev/sda5 501760 167770111 83634176 8e Linux LVM

phirschybar commented Feb 9, 2016

Using Debian 7 and various boxes... unfortunately this guide didn't work for me. I got caught on step 14 pvcreate /dev/sda3 where I got the error Device /dev/sda3 not found (or ignored by filtering). I wrote an alternate guide for people using Debian (this works specifically with the puphpet/debian75-x64 box). Here it is: https://medium.com/@phirschybar/resize-your-vagrant-virtualbox-disk-3c0fbc607817

mudakara commented Feb 24, 2016

Awesome Tutorial!!! just worked for me with slight modification
@aldrienht please use SATA instead of SATA Controller, it worked for me in that way.
Thanks

han0tae commented Mar 14, 2016

Very nice. I use xfs_growfs instead of resize2fs in centos 7

nickshek commented Mar 24, 2016

Great tutorial! it works by changing resize2fs to xfs_growfs in centos 7

khsibr commented Mar 31, 2016

Great man! thanks

KevinYanCHN commented Jun 6, 2016

Thank you~!!

teyepe commented Jun 9, 2016

Thanks a lot @christopher-hopper, saved my day!

Xronger commented Jun 22, 2016

@JonLevy It's also not working for me; on Arch Linux I cannot find /dev/mapper/VolGroup-lv_root. Can someone help?

LyleH commented Jun 24, 2016

This is the most painful process in the world... But thanks for the article. I just really dislike resizing VMs :(

nroose commented Jun 25, 2016

Yeah. Painful. Can anyone listening who creates boxes in the future create them bigger rather than smaller? What god of system administration decreed that 40 GB was the magic number? The stuff that is usually used makes it so that it only takes up as much space as it needs to on the physical disk anyway! How about 100GB or so?

pc84 commented Aug 23, 2016

Excellent tutorial. Thanks!!

cscott300 commented Oct 4, 2016

This was really helpful! Thanks!

For reference, worked flawlessly on the following configuration:

Mac OS X Yosemite 10.10.5
VirtualBox 5.0.26
Vagrant 1.8.5
Vagrantfile basebox snippet:

config.vm.box_url = "http://cloud.terry.im/vagrant/oraclelinux-6-x86_64.box"
config.vm.box = "oraclelinux-6-x86_64"

shaibu commented Nov 1, 2016

Thanks buddy!

pionl commented Jan 3, 2017

Hi,

I'm not able to continue at the partitioning step. I've got a different structure and have tried several solutions.

I'm able to create a /dev/sda3 partition, but it has an empty VG name. I've also tried to create the VG group, but then I can't extend /dev/sda1.

My structure is:

df

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda1        9620408 8118692    989972  90% /
udev               10240       0     10240   0% /dev
tmpfs             693532    8544    684988   2% /run
tmpfs            1733828       0   1733828   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs            1733828       0   1733828   0% /sys/fs/cgroup

sudo fdisk -l

/dev/sda1  *        2048 19816447 19814400  9.5G 83 Linux
/dev/sda2       19818494 20764671   946178  462M  5 Extended
/dev/sda5       19818496 20764671   946176  462M 82 Linux swap / Solaris

Any ideas?

Thank you,

IGR2014 commented Apr 3, 2018

Why no one noticed this: "(1024 Bytes = 1 Megabyte)"
Actually 1024 Bytes = 1 Kilobyte and 1024 Kilobytes = 1 Megabyte

ferzerkerx commented May 7, 2018

thanks!

suzenhan commented Jun 21, 2018

resize2fs /dev/mapper/centos-root on CentOS 7 errors with:

resize2fs: Invalid magic number in the superblock when trying to open /dev/mapper/centos-root
Can not find a valid file system superblock.

Resolved by

fsadm resize /dev/mapper/centos-root

bsofiato commented Aug 13, 2018

Awesome dude !!!

idzuwan commented Aug 16, 2018

@suzenhan saved a lot of time googling :)

misachi commented Aug 22, 2018

awesome!! works like a charm

BobCochran commented Sep 9, 2018

I too had to resize a Vagrant box. The only tricky part for me was attaching the resized VDI. The output of 'vboxmanage showvminfo mdb4' was:

SCSI (0, 0): /home/usbob2/VirtualBox VMs/mdb4/ubuntu-xenial-16.04-cloudimg.vmdk (UUID: 265eb031-74de-4235-87e0-2ba1e41c5e6f)

So the name needed for 'vboxmanage storageattach' is "SCSI". I substituted this command for step 7:

vboxmanage storageattach mdb4 --storagectl "SCSI" --port 0 --device 0 --type hdd --medium clone-mdb4-disk1.vdi

This seems to have worked fine. I simply went back to my vagrant box and did 'vagrant up' and then 'vagrant ssh' and everything came up just fine. Thanks to the VDI, I now have a lot more space for the VM. Because my flavor of Ubuntu 16.04 LTS does not use LVM, I didn't need to do any of the LVM-related steps after step 8.
