file_to_disk = './tmp/large_disk.vdi'

Vagrant::Config.run do |config|
  config.vm.box = 'base'
  config.vm.customize ['createhd', '--filename', file_to_disk, '--size', 500 * 1024]
  config.vm.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', file_to_disk]
end
Thanks!
Thanks!
👍
I had to use the "SATA" storage controller as opposed to "SATA Controller"
How can you tell where the HDD is going to be attached? In Virtualbox, at least, it seems to randomly become /dev/sda or /dev/sdb.
Is there a way to test if the controller exists?
Similar to the way you're testing for the existence of the drive?
I have:
controller = 'SATA Controller'
vb.customize ["storagectl", :id, "--name", "#{controller}", "--add", "sata"]
Which errors out:
Stderr: VBoxManage.exe: error: Storage controller named 'SATA Controller' already exists
if I don't "vagrant destroy" before I vagrant up.
I'd like to test if the controller exists before attempting to create it.
Thanks!
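One way to test for the controller before creating it (a sketch, not from the gist): VBoxManage showvminfo prints one "Storage Controller Name (N): …" line per controller, so you can parse that output first. The controller_exists? helper name is made up; only the showvminfo output format it parses is real.

```ruby
# Sketch: decide whether to create a controller by parsing the output of
# `VBoxManage showvminfo <vm>`, which prints lines like
#   Storage Controller Name (0):            SATA Controller
def controller_exists?(vminfo_output, name)
  vminfo_output.each_line.any? do |line|
    line =~ /^Storage Controller Name \(\d+\):\s+#{Regexp.escape(name)}\s*$/
  end
end

# Usage sketch inside a provider block (untested against a live VM):
#   info = `VBoxManage showvminfo #{vm_name}`
#   unless controller_exists?(info, 'SATA Controller')
#     vb.customize ['storagectl', :id, '--name', 'SATA Controller', '--add', 'sata']
#   end
```

This avoids the "Storage controller named 'SATA Controller' already exists" error on a second vagrant up without a destroy.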
EDIT: The vagrant triggers thing only worked because my machine existed.
You need the id of the VM to execute the showvminfo
command, which is the only thing that shows the controllers attached to the VM, in a semi-structured, config-file-style format.
Unfortunately, the only way to get the uuid of the vm before it starts is by sticking in something after SetName in the action_boot stack (https://github.com/mitchellh/vagrant/blob/master/plugins/providers/virtualbox/action.rb).
I really would like a better way of doing this, but this works.
class VagrantPlugins::ProviderVirtualBox::Action::SetName
  alias_method :original_call, :call

  def call(env)
    machine = env[:machine]
    driver = machine.provider.driver
    uuid = driver.instance_eval { @uuid }
    ui = env[:ui]
    controller_name = "Uploaded Data Controller"
    vm_info = driver.execute("showvminfo", uuid)
    has_this_controller = vm_info.match("Storage Controller Name.*#{controller_name}")
    if has_this_controller
      ui.info "already has the #{controller_name} hdd controller"
    else
      ui.info "creating #{controller_name} controller"
      driver.execute('storagectl', uuid,
                     '--name', "#{controller_name}",
                     '--add', 'sata',
                     '--controller', 'IntelAhci')
    end
    ui.info "attaching storage to #{controller_name}"
    driver.execute('storageattach', uuid,
                   '--storagectl', "#{controller_name}",
                   '--port', '1',
                   '--device', '0',
                   '--type', 'hdd',
                   '--medium', UPLOADED_DISK)
    original_call(env)
  end
end
Thanks @adiktofsugar. I think I'm close, but I'm getting some unusual behavior. If I've understood correctly, we need to override the VagrantPlugins::ProviderVirtualBox::Action::SetName class in our Vagrantfile and move the creation/attaching of disks there?
This is my Vagrantfile with your modifications:
# -*- mode: ruby -*-
# vi: set ft=ruby :
class VagrantPlugins::ProviderVirtualBox::Action::SetName
  alias_method :original_call, :call

  def call(env)
    machine = env[:machine]
    driver = machine.provider.driver
    uuid = driver.instance_eval { @uuid }
    ui = env[:ui]
    controller_name = 'SATA Controller'
    vm_info = driver.execute("showvminfo", uuid)
    has_this_controller = vm_info.match("Storage Controller Name.*#{controller_name}")
    if has_this_controller
      ui.info "already has the #{controller_name} hdd controller"
    else
      ui.info "creating #{controller_name} controller #{controller_name}"
      driver.execute('storagectl', uuid,
                     '--name', "#{controller_name}",
                     '--add', 'sata',
                     '--controller', 'IntelAhci')
    end
    ## Disk Management
    format = "VMDK"
    size = 1024
    port = 0
    ui.info "attaching storage to #{controller_name}"
    %w(sdb sdc).each do |hdd|
      if File.exist?("#{hdd}.vmdk")
        ui.info "#{hdd} Already Exists"
      else
        ui.info "Creating #{hdd}.vmdk"
        driver.execute("createhd",
                       "--filename", "#{hdd}",
                       "--size", size,
                       "--format", "#{format}")
      end
      # Attach devices
      driver.execute('storageattach', uuid,
                     '--storagectl', "#{controller_name}",
                     '--port', port += 1,
                     '--type', 'hdd',
                     '--medium', "#{hdd}.vmdk")
    end
    original_call(env)
  end
end

Vagrant.configure(2) do |config|
  config.vm.box = "rhelboxname"
  # Hopefully a fix for issue with sudo requiring a tty...?
  config.ssh.pty = true
  config.vm.provider :virtualbox do |vb|
    # No idea why this is... I just copy it every time ';
    vb.customize ["modifyvm", :id, "--usbehci", "off"]
  end # end config.vm.provider
end # end Vagrant.configure(2) do |config|
This seems to mostly work; however, it creates the disk, destroys it, then tells me it can't handle an integer for the size of the disk:
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'rhelboxname'...
==> default: Matching MAC address for NAT networking...
==> default: creating SATA Controller controller SATA Controller
==> default: attaching storage to SATA Controller
==> default: Creating sdb.vmdk
==> default: Destroying VM and associated drives...
c:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.1/lib/vagrant/util/subprocess.rb:28:in `block in initialize': undefined method `encode' for 1024:Fixnum (NoMethodError)
Thing is, I'm not really sure where to find the API information when hacking on the internals... These seem to be the right parameters to pass to config.vm.customize, but you're passing more parameters to the storagectl call than I've seen before, so I'm figuring there are different parameters to pass when calling from VagrantPlugins::ProviderVirtualBox::Action::SetName?
Thanks for the help!
EDIT: Use case scenario 2: we have a box with unknown devices. How do we find the existing controller's name? Depending on who builds the box you might have IDE, SATA, etc. controllers, so using the same Vagrantfile between different boxes can cause trouble.
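One way to discover whatever controllers a box actually ships with (a sketch, not from the gist): showvminfo --machinereadable emits storagecontrollernameN="…" lines, which are easy to parse. The storage_controller_names helper below is a made-up name; the output format is what VBoxManage actually prints.

```ruby
# Sketch: list the controller names a VM already has, using the
# machine-readable output of `VBoxManage showvminfo <vm> --machinereadable`,
# which contains lines like:
#   storagecontrollername0="IDE"
#   storagecontrollername1="SATA Controller"
def storage_controller_names(vminfo_output)
  vminfo_output.scan(/^storagecontrollername\d+="([^"]*)"/).flatten
end

# Usage sketch (untested against a live VM):
#   names = storage_controller_names(`VBoxManage showvminfo myvm --machinereadable`)
#   # then attach to names.first rather than hard-coding 'SATA Controller'
```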
I had to use SATAController
+1 for the name SATAController
(I use Vagrant 1.7.4 + VirtualBox 5.0.0.)
Thanks to everyone contributing here! My use case is slightly different, in that it requires a persistent data vdisk that can be attached, partitioned and/or mounted on any VirtualBox VM, without getting deleted by vagrant destroy. There is code for this at
@darrenleeweber, I tried your scripts. It looks like the disk doesn't get attached the first time 'vagrant up' runs, because the VM doesn't exist yet when data_disk_attach is run. Do you know of any way to solve that?
@three18ti I get this error too:
... undefined method `encode' for 1024:Fixnum (NoMethodError)
and it seems that, contrary to config.vm.customize(), driver.execute() requires all arguments to be strings, as this works for me:
driver.execute("createhd", "--filename", "#{hdd}", "--size", "1024")
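If you'd rather keep numeric sizes in the Vagrantfile, a tiny wrapper that stringifies every argument sidesteps the Fixnum error (a hedged sketch; stringify_args is a made-up helper name):

```ruby
# Sketch: driver.execute (unlike config.vm.customize) hands its arguments
# to a subprocess helper that only accepts strings, so coerce everything
# with to_s before the call.
def stringify_args(args)
  args.map(&:to_s)
end

# e.g. instead of driver.execute("createhd", "--size", 1024):
#   driver.execute(*stringify_args(["createhd", "--filename", hdd, "--size", 1024]))
```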
ps. anyone know how to create an installable vagrant plugin from this?
Here is a complete Vagrantfile example that works for me:
disk_size = 1024
disk_filename = "workdisk.vdi"
disk_id_filename = ".disk.id"
file_root = File.dirname(File.expand_path(__FILE__))
$disk_id_file = File.join(file_root, disk_id_filename)
$disk_file = File.join(file_root, disk_filename)
$disk_size = disk_size.to_s
class VagrantPlugins::ProviderVirtualBox::Action::SetName
  alias_method :original_call, :call

  def call(env)
    ui = env[:ui]
    controller_name = "SATA Whatever"
    driver = env[:machine].provider.driver
    uuid = driver.instance_eval { @uuid }
    vm_info = driver.execute("showvminfo", uuid)
    has_controller = vm_info.match("Storage Controller Name.*#{controller_name}")
    if !File.exist?($disk_file)
      ui.info "Creating storage file '#{$disk_file}'..."
      driver.execute(
        "createmedium", "disk",
        "--filename", $disk_file,
        "--format", "VDI",
        "--size", $disk_size
      )
    end
    if !has_controller
      ui.info "Creating storage controller '#{controller_name}'..."
      driver.execute(
        "storagectl", uuid,
        "--name", "#{controller_name}",
        "--add", "sata",
        "--controller", "IntelAhci",
        "--portcount", "1",
        "--hostiocache", "off"
      )
    end
    ui.info "Attaching '#{$disk_file}' to '#{controller_name}'..."
    driver.execute(
      "storageattach", uuid,
      "--storagectl", "#{controller_name}",
      "--port", "0",
      "--type", "hdd",
      "--medium", $disk_file
    )
    work_disk_info = driver.execute("showmediuminfo", $disk_file)
    work_disk_uuid = work_disk_info.match(/^UUID:\s*([a-z0-9\-]+)/).captures[0]
    uuid_blocks = work_disk_uuid.split("-")
    disk_by_id = "ata-VBOX_HARDDISK_VB"
    disk_by_id += uuid_blocks[0] + "-"
    disk_by_id += uuid_blocks[-1][10..11]
    disk_by_id += uuid_blocks[-1][8..9]
    disk_by_id += uuid_blocks[-1][6..7]
    disk_by_id += uuid_blocks[-1][4..5]
    File.open($disk_id_file, "w") { |f| f.write(disk_by_id) }
    original_call(env)
  end
end
Vagrant.configure(2) do |config|
  config.vm.box = "debian/jessie64"
  File.open($disk_id_file, "w") {} unless File.exist?($disk_id_file)
  config.vm.provision "file", source: $disk_id_file, destination: disk_id_filename
  config.vm.provision "shell", inline: <<-EOF
disk=/dev/disk/by-id/$(<#{disk_id_filename})
apt-get install -y gdisk
sgdisk -n 0:0:0 -t 0:8300 $disk
sleep 1 # TODO: how to make sure partition is done?
mkfs.ext4 ${disk}-part1
mkdir /work
echo "${disk}-part1 /work ext4 defaults 0 0" >> /etc/fstab
mount /work
chown -R vagrant:vagrant /work
  EOF
end
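Regarding the `sleep 1 # TODO` in the provision script above: instead of a fixed sleep, the script could poll until the partition's device node actually exists (a sketch; wait_for_node is a made-up helper, and on udev-based guests `udevadm settle` is the more direct tool):

```shell
# Sketch: poll for a device node instead of sleeping a fixed second.
# wait_for_node PATH MAX_TRIES -> 0 once PATH exists, 1 if it never
# appears (each try waits 0.1s).
wait_for_node() {
  path=$1
  max_tries=$2
  i=0
  while [ ! -e "$path" ]; do
    i=$((i + 1))
    if [ "$i" -ge "$max_tries" ]; then
      return 1
    fi
    sleep 0.1
  done
  return 0
}

# In the provision script one would then write, e.g.:
#   wait_for_node "${disk}-part1" 50 || exit 1
#   mkfs.ext4 "${disk}-part1"
```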
ps. I wonder if there is an easier way to get disk_by_id
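I don't know of a VBoxManage query for the guest-visible serial, but the byte-shuffling can at least be factored into a testable helper. This sketch mirrors the exact mapping in the Vagrantfile above (first UUID block, then the last four bytes of the final block in reverse pair order):

```ruby
# Sketch: derive the /dev/disk/by-id name VirtualBox synthesizes for a
# medium from its UUID -- same logic as the inline code above, wrapped
# in a function.
def vbox_disk_by_id(uuid)
  blocks = uuid.split("-")
  tail = blocks[-1]
  serial = blocks[0] + "-" + tail[10..11] + tail[8..9] + tail[6..7] + tail[4..5]
  "ata-VBOX_HARDDISK_VB" + serial
end
```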
I am guessing this is for VirtualBox?
I am looking for libvirt virtualization; does anyone know?
Example working for the primary drive:
libvirt.storage :file, size: '20G', type: 'qcow2', bus: 'virtio', allow_existing: true, cache: 'writethrough'
I came across the following issue:
Stderr: VBoxManage: error: Could not find a controller named 'SATA Controller'
A Google search turned up a solution (worked for me, anyway):
change it to "SATAController" as per VBoxManage showvminfo $vmName | grep 'Storage Controller Name'
Here's a snippet from the Vagrantfile that I use:
https://gist.github.com/tonygaetani/c8ce8279e77f0e44e437
The name of the storage controller (e.g. 'SATA Controller', 'SATAController', or even 'IDE Controller', as I found playing with an OpenShift Origin Vagrant box) is likely to depend on the version of the guest OS and the version of VirtualBox.
Here is my working Vagrantfile; it adds an extra disk with two partitions.
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.box_check_update = false
  config.vm.network "private_network", ip: "192.168.33.9"
  config.vm.provider "virtualbox" do |vb|
    vb.gui = false
    vb.memory = "1024"
    vb.name = "try_disk"
    file_to_disk = File.realpath(".").to_s + "/disk.vdi"
    if ARGV[0] == "up" && ! File.exist?(file_to_disk)
      vb.customize [
        'createhd',
        '--filename', file_to_disk,
        '--format', 'VDI',
        '--size', 30 * 1024 # 30 GB
      ]
      vb.customize [
        'storageattach', :id,
        '--storagectl', 'SATA', # The name may vary
        '--port', 1, '--device', 0,
        '--type', 'hdd', '--medium',
        file_to_disk
      ]
    end
  end
  # Two partitions in one disk
  config.vm.provision "shell", inline: <<-SHELL
set -e
set -x
if [ -f /etc/provision_env_disk_added_date ]
then
  echo "Provision runtime already done."
  exit 0
fi
sudo fdisk -u /dev/sdb <<EOF
n
p
1

+500M
n
p
2


w
EOF
mkfs.ext4 /dev/sdb1
mkfs.ext4 /dev/sdb2
mkdir -p /{data,extra}
mount -t ext4 /dev/sdb1 /data
mount -t ext4 /dev/sdb2 /extra
date > /etc/provision_env_disk_added_date
  SHELL
  config.vm.provision "shell", inline: <<-SHELL
echo Well done
  SHELL
end
What's wrong with this?
Vagrant.configure("2") do |c|
  c.berkshelf.enabled = false if Vagrant.has_plugin?("vagrant-berkshelf")
  c.vm.box = "centos72-nocm-2.0.10"
  c.vm.box_url = "http://hostname.com/box/vmware/centos72-nocm-2.0.10.box"
  c.vm.hostname = "default-centos-72"
  c.vm.synced_folder ".", "/vagrant", disabled: true
  c.vm.provider :vmware_fusion do |p|
    p.vmx["gui"] = "true"
    p.vmx["memsize"] = "1024"
    p.vmx["createhd"] = " --size='3000' --filename='second_disk.vdi'"
    p.vmx["storageattach"] = " --storagectl='SATA Controller' --port='0' --device='0' --type='hdd' --medium='file_to_disk' "
  end
end
Note that there is a plugin to achieve this:
https://github.com/kusnier/vagrant-persistent-storage
Thanks for the gist and helpful comments. Using ubuntu/xenial64, I need to use a higher port number, since this box uses a second config disk, so I've got '--port', 4.
This was very helpful. I was seeing errors when Vagrant would try to connect the machine otherwise.
==> vm-0: Waiting for machine to boot. This may take a few minutes...
vm-0: SSH address: 127.0.0.1:2222
vm-0: SSH username: ubuntu
vm-0: SSH auth method: password
vm-0: Warning: Remote connection disconnect. Retrying...
vm-0: Warning: Remote connection disconnect. Retrying...
vm-0: Warning: Remote connection disconnect. Retrying...
Trying to SSH in locally was causing the following errors.
ssh_exchange_identification: read: Connection reset by peer
Things seemed to be working using port 2. I believe you can see which ports are in use on the base box by launching a single-disk configuration and looking at the output of vboxmanage showvminfo $VIRTUALBOX_VM_NAME.
Just to add: for VirtualBox 5.1.12, you can get the controller options by running:
14:26:04 $ VBoxManage storagectl
Usage:
VBoxManage storagectl <uuid|vmname>
--name <name>
[--add ide|sata|scsi|floppy|sas|usb|pcie]
[--controller LSILogic|LSILogicSAS|BusLogic|
IntelAHCI|PIIX3|PIIX4|ICH6|I82078|
[ USB|NVMe]
[--portcount <1-n>]
[--hostiocache on|off]
[--bootable on|off]
[--rename <name>]
[--remove]
Here is what worked for me
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |vb|
    file_to_disk = 'D:/UniServerZ/www/VM/tealit.com/large_disk.vdi'
    unless File.exist?(file_to_disk)
      vb.customize ['createhd', '--filename', file_to_disk, '--size', 500 * 1024]
    end
    vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', file_to_disk]
  end
end
Here is my solution. It works with VBox 5.2; the disk is stored together with the virtual machine.
class VagrantPlugins::ProviderVirtualBox::Action::SetName
  alias_method :original_call, :call

  def call(env)
    machine = env[:machine]
    driver = machine.provider.driver
    uuid = driver.instance_eval { @uuid }
    ui = env[:ui]

    # Find out the folder of the VM
    vm_folder = ""
    vm_info = driver.execute("showvminfo", uuid, "--machinereadable")
    lines = vm_info.split("\n")
    lines.each do |line|
      if line.start_with?("CfgFile")
        vm_folder = line.split("=")[1].gsub('"', '')
        vm_folder = File.expand_path("..", vm_folder)
        ui.info "VM Folder is: #{vm_folder}"
      end
    end

    size = 10240
    disk_file = vm_folder + "/disk1.vmdk"
    ui.info "Adding disk to VM"
    if File.exist?(disk_file)
      ui.info "Disk already exists"
    else
      ui.info "Creating new disk"
      driver.execute("createmedium", "disk", "--filename", disk_file, "--size", "#{size}", "--format", "VMDK")
      ui.info "Attaching disk to VM"
      driver.execute('storageattach', uuid, '--storagectl', "SATA Controller", '--port', "1", '--type', 'hdd', '--medium', disk_file)
    end
    original_call(env)
  end
end
Hi, could anyone tell me how I can use an existing hard disk on the host machine as a hard disk of a Vagrant VM?
When Vagrant spins up a VM, I want that VM to have the hard disk which is attached to my host, say of 1 TB.
And I want to bring up one more VM with a second hard disk which is also attached to my host, say of 100 GB. How can I do this in the Vagrantfile?
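Not something this gist covers directly, but VirtualBox can wrap a physical disk in a raw VMDK via VBoxManage internalcommands createrawvmdk, and that VMDK can then be attached like any other medium. A hedged Vagrantfile sketch follows; the device path, file name, and controller name are all assumptions, raw disk access typically needs elevated privileges, and pointing a VM at a live host disk carries real data-loss risk:

```ruby
# Sketch only -- adjust /dev/sdX and the controller name, and run once
# beforehand on the host:
#   sudo VBoxManage internalcommands createrawvmdk \
#     --filename ./host_disk.vmdk --rawdisk /dev/sdX
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provider "virtualbox" do |vb|
    # Attach the raw-disk wrapper like any other hdd medium.
    vb.customize ['storageattach', :id,
                  '--storagectl', 'SATA Controller',  # name varies per box
                  '--port', 1, '--device', 0,
                  '--type', 'hdd', '--medium', './host_disk.vmdk']
  end
end
```

For the second VM, the same pattern applies with a second raw VMDK wrapping the other host disk.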
Adding this to your Vagrantfile might be enough to meet your needs more thoroughly, but it will not handle the formatting phase like the plugin can (maybe that's ideal). The reason I wrote this is that things were going haywire for me if I ran certain vagrant operations after the controller already existed and it tried to create it again.
def sata_controller_exists?(controller_name = "SATA Controller")
  `vboxmanage showvminfo storage-host-vm-dev | grep " #{controller_name}" | wc -l`.to_i == 1
end

def port_in_use?(controller_name, port)
  `vboxmanage showvminfo storage-host-vm-dev | grep "#{controller_name} (#{port}, " | wc -l`.to_i == 1
end

def attach_hdd(v, controller_name, port, hdd_path)
  unless port_in_use?(controller_name, port)
    v.customize ['storageattach', :id, '--storagectl', controller_name, '--port', port, '--device', 0, '--type', 'hdd', '--medium', hdd_path]
  end
end
.
.
.
# Note that I have a multi-VM Vagrantfile, so run this based on which VM is being iterated over...
if vm[:name] == 'storage-host-vm'
  controller_name = 'SATA Controller'
  v.customize ['storagectl', :id, '--name', controller_name, '--add', 'sata', '--portcount', 4] unless sata_controller_exists?(controller_name)
  file_to_disk = "./packer_cache/#{vm[:name]}_vault_1.vdi"
  v.customize ['createhd', '--filename', file_to_disk, '--size', 1 * 1024] unless File.exist?(file_to_disk)
  attach_hdd(v, controller_name, 0, file_to_disk)
end
.
.
.
In Vagrant 2.1.4, I'm getting:
`instance_eval': stack level too deep (SystemStackError)
It's probably because of the uuid = driver.instance_eval { @uuid } line. Any idea how to fix it?
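A likely cause (a hedged guess, not confirmed against Vagrant 2.1.4 internals): if the Vagrantfile is loaded more than once, alias_method runs again, so original_call ends up aliased to the already-patched call and the method recurses until a SystemStackError. Guarding the patch makes it idempotent; the Demo class below just demonstrates the mechanism:

```ruby
# Sketch: apply a call-wrapping monkey patch at most once, so reloading
# the file cannot alias original_call to the patched method (which would
# recurse forever). In the SetName patch above, the same guard would be
#   unless method_defined?(:original_call)
# at the top of the class body.
class Demo
  def call
    "original"
  end
end

class Demo
  unless method_defined?(:original_call)   # the guard
    alias_method :original_call, :call
    def call
      "patched+" + original_call
    end
  end
end

# Simulate a second load of the same patch -- the guard makes it a no-op:
class Demo
  unless method_defined?(:original_call)
    alias_method :original_call, :call
    def call
      "patched+" + original_call
    end
  end
end
```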
Trying to run 3 Ubuntu nodes with 3 disks each in Vagrant.
Has anyone managed to do this?
Below is a WORKING Vagrantfile with 3 CentOS 7 nodes with 3 disks.
But it DOESN'T WORK for Ubuntu.
$sdb1 = <<-SCRIPT
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary 0% 100%
mkfs.xfs /dev/sdb1
mkdir /mnt/data1
if grep -Fxq "sdb1" /etc/fstab
then
echo 'sdb1 exist in fstab'
else
echo `blkid /dev/sdb1 | awk '{print$2}' | sed -e 's/"//g'` /mnt/data1 xfs noatime,nobarrier 0 0 >> /etc/fstab
fi
if mount | grep /mnt/data1 > /dev/null; then
echo "/dev/sdb1 mounted /mnt/data1"
umount /mnt/data1
mount /mnt/data1
else
mount /mnt/data1
fi
SCRIPT
$sdc1 = <<-SCRIPT
parted /dev/sdc mklabel msdos
parted /dev/sdc mkpart primary 0% 100%
mkfs.xfs /dev/sdc1
mkdir /mnt/data2
if grep -Fxq "sdc1" /etc/fstab
then
echo 'sdc1 exist in fstab'
else
echo `blkid /dev/sdc1 | awk '{print$2}' | sed -e 's/"//g'` /mnt/data2 xfs noatime,nobarrier 0 0 >> /etc/fstab
fi
if mount | grep /mnt/data2 > /dev/null; then
echo "/dev/sdc1 mounted /mnt/data2"
umount /mnt/data2
mount /mnt/data2
else
mount /mnt/data2
fi
SCRIPT
$sdd1 = <<-SCRIPT
parted /dev/sdd mklabel msdos
parted /dev/sdd mkpart primary 0% 100%
mkfs.xfs /dev/sdd1
mkdir /mnt/metadata1
if grep -Fxq "sdd1" /etc/fstab
then
echo 'sdd1 exist in fstab'
else
echo `blkid /dev/sdd1 | awk '{print$2}' | sed -e 's/"//g'` /mnt/metadata1 xfs noatime,nobarrier 0 0 >> /etc/fstab
fi
if mount | grep /mnt/metadata1 > /dev/null; then
echo "/dev/sdd1 mounted /mnt/metadata1"
umount /mnt/metadata1
mount /mnt/metadata1
else
mount /mnt/metadata1
fi
SCRIPT
node1disk1 = "./tmp/node1disk1.vdi";
node1disk2 = "./tmp/node1disk2.vdi";
node1disk3 = "./tmp/node1disk3.vdi";
ip_node1 = "192.168.33.31";
Vagrant.configure("2") do |config|
  config.vm.define "node1" do |node1|
    node1.vm.network "private_network", ip: ip_node1
    node1.vm.hostname = "node1"
    node1.vm.define "node1"
    node1.vm.box_download_insecure = true
    node1.vm.box = "centos/7"
    node1.vm.provider "virtualbox" do |vb|
      vb.memory = "2048"
      unless File.exist?(node1disk1)
        vb.customize ['createhd', '--filename', node1disk1, '--variant', 'Fixed', '--size', 1 * 1024]
        vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 0, '--device', 1, '--type', 'hdd', '--medium', node1disk1]
      end
      unless File.exist?(node1disk2)
        vb.customize ['createhd', '--filename', node1disk2, '--variant', 'Fixed', '--size', 1 * 1024]
        vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', node1disk2]
      end
      unless File.exist?(node1disk3)
        vb.customize ['createhd', '--filename', node1disk3, '--variant', 'Fixed', '--size', 1 * 1024]
        vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 1, '--type', 'hdd', '--medium', node1disk3]
      end
    end
    node1.vm.provision "shell", inline: $sdb1
    node1.vm.provision "shell", inline: $sdc1
    node1.vm.provision "shell", inline: $sdd1
  end
end
In the process of experimenting I arrived at this Vagrantfile.
But the system cannot boot.
$sdb1 = <<-SCRIPT
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary 0% 100%
mkfs.xfs /dev/sdb1
mkdir /mnt/data1
if grep -Fxq "sdb1" /etc/fstab
then
echo 'sdb1 exist in fstab'
else
echo `blkid /dev/sdb1 | awk '{print$2}' | sed -e 's/"//g'` /mnt/data1 xfs noatime,nobarrier 0 0 >> /etc/fstab
fi
if mount | grep /mnt/data1 > /dev/null; then
echo "/dev/sdb1 mounted /mnt/data1"
umount /mnt/data1
mount /mnt/data1
else
mount /mnt/data1
fi
SCRIPT
$sdc1 = <<-SCRIPT
parted /dev/sdc mklabel msdos
parted /dev/sdc mkpart primary 0% 100%
mkfs.xfs /dev/sdc1
mkdir /mnt/data2
if grep -Fxq "sdc1" /etc/fstab
then
echo 'sdc1 exist in fstab'
else
echo `blkid /dev/sdc1 | awk '{print$2}' | sed -e 's/"//g'` /mnt/data2 xfs noatime,nobarrier 0 0 >> /etc/fstab
fi
if mount | grep /mnt/data2 > /dev/null; then
echo "/dev/sdc1 mounted /mnt/data2"
umount /mnt/data2
mount /mnt/data2
else
mount /mnt/data2
fi
SCRIPT
$sdd1 = <<-SCRIPT
parted /dev/sdd mklabel msdos
parted /dev/sdd mkpart primary 0% 100%
mkfs.xfs /dev/sdd1
mkdir /mnt/metadata1
if grep -Fxq "sdd1" /etc/fstab
then
echo 'sdd1 exist in fstab'
else
echo `blkid /dev/sdd1 | awk '{print$2}' | sed -e 's/"//g'` /mnt/metadata1 xfs noatime,nobarrier 0 0 >> /etc/fstab
fi
if mount | grep /mnt/metadata1 > /dev/null; then
echo "/dev/sdd1 mounted /mnt/metadata1"
umount /mnt/metadata1
mount /mnt/metadata1
else
mount /mnt/metadata1
fi
SCRIPT
node1disk1 = "./tmp/node1disk1.vdi";
node1disk2 = "./tmp/node1disk2.vdi";
node1disk3 = "./tmp/node1disk3.vdi";
ip_node1 = "192.168.33.31";
Vagrant.configure("2") do |config|
  config.vm.define "node1" do |node1|
    node1.vm.network "private_network", ip: ip_node1
    node1.vm.hostname = "node1"
    node1.vm.define "node1"
    node1.vm.box_download_insecure = true
    node1.vm.box = "ubuntu/bionic64"
    node1.vm.provider "virtualbox" do |vb|
      vb.gui = true
      vb.memory = "1024"
      vb.customize ["storagectl", :id, "--name", "IDE", "--remove"]
      vb.customize ["storagectl", :id, "--name", "IDE", "--add", "ide", "--controller", "ICH6"]
      unless File.exist?(node1disk1)
        vb.customize ['createhd', '--filename', node1disk1, '--variant', 'Fixed', '--size', 1 * 1024]
        vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 0, '--device', 1, '--type', 'hdd', '--medium', node1disk1]
      end
      unless File.exist?(node1disk2)
        vb.customize ['createhd', '--filename', node1disk2, '--variant', 'Fixed', '--size', 1 * 1024]
        vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', node1disk2]
      end
      unless File.exist?(node1disk3)
        vb.customize ['createhd', '--filename', node1disk3, '--variant', 'Fixed', '--size', 1 * 1024]
        vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 1, '--type', 'hdd', '--medium', node1disk3]
      end
    end
    node1.vm.provision "shell", inline: $sdb1
    node1.vm.provision "shell", inline: $sdc1
    node1.vm.provision "shell", inline: $sdd1
  end
end
VBoxManage showvminfo says:
Storage Controller Name (0): SCSI
Storage Controller Type (0): LsiLogic
Storage Controller Instance Number (0): 0
Storage Controller Max Port Count (0): 16
Storage Controller Port Count (0): 16
Storage Controller Bootable (0): on
Storage Controller Name (1): IDE
Storage Controller Type (1): ICH6
Storage Controller Instance Number (1): 0
Storage Controller Max Port Count (1): 2
Storage Controller Port Count (1): 2
Storage Controller Bootable (1): on
SCSI (0, 0): /home/user/VirtualBox VMs/vagrant-openio-multi-nodes_node1_1565541256124_28246/ubuntu-bionic-18.04-cloudimg.vmdk (UUID: 9b9b05cc-d359-428e-a4c5-91391eb7e0e3)
SCSI (1, 0): /home/user/VirtualBox VMs/vagrant-openio-multi-nodes_node1_1565541256124_28246/ubuntu-bionic-18.04-cloudimg-configdrive.vmdk (UUID: 5e47924d-2ad2-4096-9a58-7b97d2ffcbd8)
IDE (0, 1): /home/user/github/vagrant-openio-multi-nodes/tmp/node1disk1.vdi (UUID: d2ef2936-f296-483c-9336-04b5bbd417e9)
IDE (1, 0): /home/user/github/vagrant-openio-multi-nodes/tmp/node1disk2.vdi (UUID: 2673732a-edf3-48f2-8ecb-50af82b1d2e5)
IDE (1, 1): /home/user/github/vagrant-openio-multi-nodes/tmp/node1disk3.vdi (UUID: f2243189-ebba-496a-aab8-cb97f68b4038)
FWIW: vagrant-libvirt now supports this type of feature natively. Actually, it's likely going to be merged shortly, but I figured I'd mention it in case anyone wants to try it out.
vagrant-libvirt/vagrant-libvirt#178
HTH,
James