Goal: extend an existing, normal OpenStack deployment with a
nova-compute-lxd
layer on the same nova-compute hosts, by running the LXD-backed nova-compute services inside an LXC container on each host
(a single nova-compute service can't drive more than one hypervisor):
HOST
.---------------------------------------------------.
| jujud-m-32 |
| <nova-compute/0> |
| | |
| ______/KVM\_______ _________LXC__________ |
| : : : jujud-m-32-lxc-0 : |
| : [ vm01 ] : : <nova-compute-lxd/0> : |
| : [ vm02 ] : : | : |
| : [ vm02 ] : : _____/LXD\_____ : |
| : .. : : : : : |
| : : : : [ lxc01 ] : : |
| : : : : [ lxc02 ] : : |
| '..................' : : ... : : |
| : '...............' : |
| : : |
| '......................' |
'---------------------------------------------------'
NOTE: the steps below were done on an existing OpenStack, deployed with:
- xenial/mitaka
- juju 1.25.5 using ~openstack-charmers charms as of 2016-04-15
- several existing bare-metal nova-compute hosts (with KVM hypervisor)
- Select an existing host, e.g. machine #32
- Pre-setup LXC on the host (nesting-capable AppArmor profile), from the deployment host:
m=32
aa_src=default-with-nesting
aa_dst=default-with-nesting-lxd
extra_line="mount options=(rw,rshared) -> /var/lib/lxd/shmounts/,"
juju ssh --pty=false ${m?} sudo bash <<EOF
apt-get update
apt-get install -y lxc-common
sed -e '/^profile/s/${aa_src}/${aa_dst}/' -e '/^}/i\ ${extra_line}' /etc/apparmor.d/lxc/lxc-$aa_src > /etc/apparmor.d/lxc/lxc-$aa_dst
apparmor_parser -r /etc/apparmor.d/lxc-containers
echo 'lxc.aa_profile = lxc-container-${aa_dst}' > /usr/share/lxc/config/common.conf.d/10-default_aa_profile.conf
EOF
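The sed above clones the stock nesting profile under a new name and injects the rshared shmounts mount rule that LXD needs. A local dry-run of the same transformation, using a minimal stand-in profile (the file contents below are an illustrative skeleton, not the real xenial profile):

```shell
# Dry-run of the profile rewrite on a stand-in file (not the real
# /etc/apparmor.d/lxc/lxc-default-with-nesting, just its skeleton).
aa_src=default-with-nesting
aa_dst=default-with-nesting-lxd
extra_line="mount options=(rw,rshared) -> /var/lib/lxd/shmounts/,"

cat > "lxc-${aa_src}" <<EOF
profile lxc-container-${aa_src} flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=proc -> /var/cache/lxc/**,
}
EOF

# Same two edits as in the heredoc above: rename the profile header,
# insert the extra mount rule just before the closing '}'.
sed -e "/^profile/s/${aa_src}/${aa_dst}/" \
    -e "/^}/i\\ ${extra_line}" \
    "lxc-${aa_src}" > "lxc-${aa_dst}"

grep '^profile' "lxc-${aa_dst}"   # renamed profile header
grep 'shmounts' "lxc-${aa_dst}"   # injected mount rule
```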
- Create a juju'd LXC inside host
juju add-machine --series xenial lxc:${m?}
- HACKs:
- Add hostname resolution inside the LXC container (required by nova-compute, and likely by LXD also?)
- Pre-mount /proc and /sys as shown below (needed by LXD; otherwise lxd gets -EPERM when launching lxc 'VMs')
juju ssh --pty=false ${m?}/lxc/0 sudo bash <<'EOF'
add_line() { f=$1; shift; fgrep -q "$*" "$f" || echo "$*" >> "$f" ;}
add_line /etc/hosts "$(ip r get 8.8.8.8|sed -n 's/.*src //p') $HOSTNAME"
add_line /etc/fstab "none /usr/lib/x86_64-linux-gnu/proc proc defaults 0 0"
add_line /etc/fstab "none /usr/lib/x86_64-linux-gnu/sys sysfs defaults 0 0"
install -d /usr/lib/x86_64-linux-gnu/{proc,sys}
mount -av
EOF
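The add_line helper above is idempotent: it appends a line only when the exact text isn't already in the file. A quick local demonstration of that behavior (scratch file name is illustrative; this variant adds -q and 2>/dev/null so matches and the missing-file first call stay silent):

```shell
# Variant of the add_line helper above: append a line only if the
# file doesn't already contain it (fgrep matches the text literally).
add_line() { f=$1; shift; fgrep -q "$*" "$f" 2>/dev/null || echo "$*" >> "$f" ;}

scratch=/tmp/add_line.demo   # illustrative scratch file
rm -f "$scratch"
add_line "$scratch" "none /proc proc defaults 0 0"
add_line "$scratch" "none /proc proc defaults 0 0"   # no-op: already present
wc -l < "$scratch"
```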
- Main nova-compute-lxd deploy (see the openstack-lxd bundle):
# deploy nova-compute-lxd service to above LXC:
juju deploy local:xenial/nova-compute nova-compute-lxd --config nova-compute-lxd.yaml --to ${m?}/lxc/0
# subordinate services:
juju deploy local:xenial/neutron-openvswitch neutron-openvswitch-lxd
juju deploy local:xenial/lxd lxd
## Relations: normal stuff, plus n-c-lxd <-> lxd
juju add-relation nova-compute-lxd nova-cloud-controller
juju add-relation nova-compute-lxd mysql
juju add-relation nova-compute-lxd rabbitmq-server
juju add-relation nova-compute-lxd glance
juju add-relation neutron-openvswitch-lxd rabbitmq-server
juju add-relation neutron-openvswitch-lxd neutron-api
juju add-relation nova-compute-lxd neutron-openvswitch-lxd
juju add-relation nova-compute-lxd lxd
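The relation additions above can be scripted as one loop; a minimal dry-run sketch that only collects and prints the juju commands it would run (pipe to `sh`, or drop the echo, to execute them):

```shell
# Dry-run: collect the add-relation commands for the LXD compute layer.
# Service names match the deploys above; nothing is executed here.
cmds=$(
  for rel in \
      "nova-compute-lxd nova-cloud-controller" \
      "nova-compute-lxd mysql" \
      "nova-compute-lxd rabbitmq-server" \
      "nova-compute-lxd glance" \
      "neutron-openvswitch-lxd rabbitmq-server" \
      "neutron-openvswitch-lxd neutron-api" \
      "nova-compute-lxd neutron-openvswitch-lxd" \
      "nova-compute-lxd lxd"
  do
      echo "juju add-relation $rel"
  done
)
echo "$cmds"
```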
- Upload image, boot a test LXD vm:
- Verify the nova-compute service is up inside the LXC instance
$ nova service-list|grep juju-machine
| 143 | nova-compute | juju-machine-32-lxc-0 | nova | enabled | up | 2016-04-15T23:28:35.000000 | - ...
- Download image, upload to glance:
img=xenial-server-cloudimg-amd64-root.tar.gz
wget http://cloud-images.ubuntu.com/xenial/current/${img}
glance image-create \
--file=${img} --name=${img} \
--disk-format=raw --container-format=bare --property architecture="x86_64" \
--is-public=true --progress
- Boot a test instance
$ nova boot --key_name admin_key --nic net-id=${net_id?} --image=xenial-server-cloudimg-amd64-root.tar.gz \
--flavor=m1.tiny --availability-zone nova:juju-machine-${m?}-lxc-0 jjo-lxd-nested-x
- And ... it's running \o/
$ nova list|egrep jjo-lxd-nested-x
| 5ef2efad-082a-48e8-8953-8418638aa9a4 | jjo-lxd-nested-x | ACTIVE | - | Running | net_foo=10.201.1.238 |
$ ssh 10.201.1.238
ubuntu@jjo-lxd-nested-x:~$ _
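The `--availability-zone nova:<host>` argument is what pins the instance onto the LXC-hosted compute node instead of letting the scheduler pick a KVM host; a tiny sketch of how the target string is built (the `ZONE:HOST` form is an admin-only scheduler hint, and the hostname pattern matches juju 1.25's container naming as seen in `nova service-list` above):

```shell
# Build the host-pinning availability-zone argument for `nova boot`.
# 'nova' is the default AZ name; 'ZONE:HOST' forces a specific
# hypervisor host. Hostname follows juju 1.25's
# "juju-machine-<m>-lxc-<n>" naming for containers.
m=32
az="nova:juju-machine-${m}-lxc-0"
echo "$az"
```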
- nova-compute-lxd.yaml:
nova-compute-lxd:
enable-resize: True
virt-type: lxd
enable-live-migration: True
migration-auth-type: ssh
config-flags: resume_guests_state_on_host_boot=True
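The `juju deploy --config nova-compute-lxd.yaml` command earlier expects this file on disk; a sketch that writes it exactly as shown above:

```shell
# Write the charm config shown above to nova-compute-lxd.yaml,
# the file passed to `juju deploy --config`.
cat > nova-compute-lxd.yaml <<'EOF'
nova-compute-lxd:
  enable-resize: True
  virt-type: lxd
  enable-live-migration: True
  migration-auth-type: ssh
  config-flags: resume_guests_state_on_host_boot=True
EOF
```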
- openstack-charms.bzr:
# run: codetree openstack-charms.bzr
charms/xenial/nova-compute lp:~openstack-charmers/charms/trusty/nova-compute/next;revno=223
charms/xenial/neutron-openvswitch lp:~openstack-charmers/charms/trusty/neutron-openvswitch/next;revno=120
charms/xenial/lxd lp:~openstack-charmers-next/charms/xenial/lxd/trunk;revno=63