@marios
Last active August 29, 2015 13:56
Resize undercloud-live to support more VMs, scale block-store and compute

0. Cleanup previous run - skip if first run:


Make sure you have deleted all vms:

virsh list
virsh destroy <name>
#also undefine undercloud
virsh undefine undercloud

Delete all storage volumes and the default pool

for i in `ls -1 /home/vm_storage_pool/*`; do virsh vol-delete $i; done

If the above fails, it's likely because your volumes were defined elsewhere, such as /var/lib/libvirt/ - you can check with:

virsh vol-list default
#delete each volume accordingly:
virsh vol-delete /path/to/vol1

virsh pool-destroy default
virsh pool-undefine default

1. Follow slagle's undercloud-live setup to the end of step 6


2. Step 7 Create and start the vm for the Undercloud:

Differences from slagle's step 7:

  • make sure target for default pool has enough disk space allocation
  • resize undercloud-live image /dev/sda1 partition with virt-resize

2.1 Define the default virt storage volume pool specifying appropriate target

This is specific to beaker, so adjust according to your environment. On my F19 x86_64 box provisioned in beaker (one of the new ibm boxes in brno), the filesystem layout looks like:

[root@ibm-x3550m4-05 tripleo]# df -h 
Filesystem                                Size  Used Avail Use% Mounted on
/dev/mapper/fedora_ibm--x3550m4--05-root   50G  8.5G   39G  19% /
devtmpfs                                   12G     0   12G   0% /dev
tmpfs                                      12G     0   12G   0% /dev/shm
tmpfs                                      12G  644K   12G   1% /run
tmpfs                                      12G     0   12G   0% /sys/fs/cgroup
tmpfs                                      12G  856K   12G   1% /tmp
/dev/sda1                                 477M   73M  375M  17% /boot
/dev/mapper/fedora_ibm--x3550m4--05-home  214G   62M  203G   1% /home

This is significant because the 'default' --target for the vm storage pool is /var/lib/libvirt/images. On my box above, / only gets 50G, so I specify /home/vm_storage_pool/ instead since /home gets > 200G.

export UNDERCLOUD_VM_NAME=undercloud
virsh pool-define-as --name default dir --target /home/vm_storage_pool
virsh pool-autostart default
virsh pool-start default 
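Before defining the pool it is worth confirming the target really has the headroom described above. A minimal sketch - the 60G threshold is an assumption based on the volume sizes used later in this gist (20G temp volume plus 40G undercloud disk):

```shell
# Sketch: pre-check free space on the intended pool target.
# POOL_TARGET and the 60G threshold are assumptions taken from the
# volume sizes used in the steps below.
POOL_TARGET=${POOL_TARGET:-/home/vm_storage_pool}
# fall back to the nearest existing ancestor so df has something to query
check_dir=$POOL_TARGET
while [ ! -d "$check_dir" ]; do check_dir=$(dirname "$check_dir"); done
# df -P guarantees POSIX output; column 4 is available 1K-blocks
avail_kb=$(df -P "$check_dir" | awk 'NR==2 {print $4}')
needed_kb=$((60 * 1024 * 1024))
if [ "$avail_kb" -lt "$needed_kb" ]; then
    echo "WARNING: only $((avail_kb / 1024 / 1024))G free at $POOL_TARGET"
else
    echo "OK: $((avail_kb / 1024 / 1024))G free at $POOL_TARGET"
fi
```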

2.2 Resize the undercloud-live image

#You need to install libguestfs tools for this:
yum install '*guestf*'

# create a temp volume and upload the downloaded undercloud-live image into it
virsh vol-create-as default temp_vol.qcow2 20G --format qcow2
virsh vol-upload --pool default temp_vol.qcow2 undercloud.qcow2

# create a new volume, double capacity for the undercloud-vm
# I used 40G, which allowed 10 nodes deployed ok http://i.imgur.com/Q5ueHMN.png
virsh vol-create-as default $UNDERCLOUD_VM_NAME.qcow2 40G --format qcow2

# copy temp_vol over to the bigger undercloud.qcow2 disk and expand
# /dev/sda1 to fill the extra space
virt-resize --expand /dev/sda1 /home/vm_storage_pool/temp_vol.qcow2 /home/vm_storage_pool/undercloud.qcow2
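After the resize it can be worth confirming that /dev/sda1 really did grow. A hedged sketch using virt-filesystems (part of the libguestfs tools installed above; the image path matches the pool target used in this gist):

```shell
# Sketch: list partitions in the resized image; /dev/sda1 should now
# report roughly the new 40G capacity. Guarded so it degrades politely
# if the libguestfs tools or the image are missing.
IMG=/home/vm_storage_pool/undercloud.qcow2
if command -v virt-filesystems >/dev/null 2>&1 && [ -f "$IMG" ]; then
    # --long adds size/type columns, -h prints human-readable sizes
    virt-filesystems --long -h -a "$IMG"
else
    echo "skipping check: virt-filesystems or $IMG not available"
fi
```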

configure-vm \
    --name $UNDERCLOUD_VM_NAME \
    --image /home/vm_storage_pool/$UNDERCLOUD_VM_NAME.qcow2 \
    --seed \
    --libvirt-nic-driver virtio \
    --arch x86_64 \
    --memory 2097152 \
    --cpus 1 
export UNDERCLOUD_CONFIG_DRIVE_ISO=$(undercloud-config-drive)
virsh attach-disk $UNDERCLOUD_VM_NAME \
    $UNDERCLOUD_CONFIG_DRIVE_ISO hda \
    --type cdrom --sourcetype file --persistent
virsh start $UNDERCLOUD_VM_NAME

3 Continue with steps 8 and 9 in James's undercloud deploy-steps. Look at the next section before starting with the Baremetal setup.


4 Baremetal Setup

In the last part of step 1, you can define 2 baremetal nodes for use with deployments. You can increase this; I have deployed 10 so far on my beaker box.

export UNDERCLOUD_MACS=`create-nodes $NODE_CPU $NODE_MEM $NODE_DISK $NODE_ARCH 10`

Complete all the steps in the Baremetal Setup from the undercloud deploy-steps adjusting the step identified above as necessary
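As a quick sanity check, create-nodes should hand back one MAC address per requested node. A sketch - the two MACs below are placeholders standing in for the real $UNDERCLOUD_MACS produced by create-nodes above:

```shell
# Sketch: count the MACs returned by create-nodes. The default value
# here is a placeholder for illustration only; in a real run
# UNDERCLOUD_MACS is already exported by the create-nodes call above.
UNDERCLOUD_MACS=${UNDERCLOUD_MACS:-"52:54:00:aa:00:01 52:54:00:aa:00:02"}
NODE_COUNT=$(echo $UNDERCLOUD_MACS | wc -w)
echo "create-nodes defined $NODE_COUNT baremetal node(s)"
```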


5 Deploying an Overcloud

Here we will only use steps 1 and 2. Step 3, deploying the Overcloud, we will do with Tuskar-UI itself.

5.1 Load images:

Get or build overcloud-cinder-volume.qcow2 if you want to deploy block-storage nodes.

The overcloud-cinder-volume.qcow2 image can be built with the following script (thanks rbrady!):

#!/bin/bash
set -eux

DIB_COMMON_ELEMENTS=${DIB_COMMON_ELEMENTS:-"stackuser"}
NODE_DIST="fedora selinux-permissive"
ELEMENTS_PATH=$TRIPLEO_ROOT/tripleo-image-elements/elements

$TRIPLEO_ROOT/diskimage-builder/bin/disk-image-create \
    -a amd64 \
    --offline \
    -o $TRIPLEO_ROOT/overcloud-cinder-volume \
    fedora cinder-volume \
    neutron-openvswitch-agent heat-cfntools stackuser pip-cache

This was working fine but failed the last time I tried it, so you can also just grab the overcloud-cinder-volume.qcow2 image from here:

cd $TRIPLEO_ROOT
curl -L -O "https://s3-eu-west-1.amazonaws.com/somerandomname/overcloud-cinder-volume.qcow2"

# Load the images, including overcloud-cinder-volume:
load-image overcloud-control.qcow2
load-image overcloud-compute.qcow2
load-image overcloud-cinder-volume.qcow2
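If the cinder image came from the download rather than the build, it may be worth confirming it really is a qcow2 image before load-image touches it. A guarded sketch (qemu-img ships with qemu; run from $TRIPLEO_ROOT where the image was downloaded):

```shell
# Sketch: check the downloaded image's format. Guarded so it degrades
# politely if qemu-img or the image itself is absent.
IMG=overcloud-cinder-volume.qcow2
if command -v qemu-img >/dev/null 2>&1 && [ -f "$IMG" ]; then
    if qemu-img info "$IMG" | grep -q 'file format: qcow2'; then
        echo "$IMG: qcow2 OK"
    else
        echo "$IMG: unexpected format" >&2
    fi
else
    echo "skipping check: qemu-img or $IMG not available"
fi
```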

5.2 Add your ssh key:

user-config

6 Install tuskar and tuskar-ui to deploy the overcloud:

In the configuration (for both) you will need to set the IP of the undercloud machine ($UNDERCLOUD_IP), as well as the admin/heat credentials (admin/unset, the undercloud-live defaults).
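As a sketch, those settings amount to the usual OpenStack client environment variables. The IP below is a placeholder standing in for $UNDERCLOUD_IP from the earlier steps; admin/unset are the undercloud-live defaults mentioned above:

```shell
# Sketch: credentials for pointing tuskar / tuskar-ui at the undercloud.
# 192.0.2.1 is a placeholder; in a real run UNDERCLOUD_IP is already set.
export UNDERCLOUD_IP=${UNDERCLOUD_IP:-192.0.2.1}
export OS_AUTH_URL=http://$UNDERCLOUD_IP:5000/v2.0
export OS_USERNAME=admin
export OS_PASSWORD=unset
export OS_TENANT_NAME=admin
```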
