A guide to deploying OpenStack Icehouse on top of Ubuntu 14.04.
It covers: Ubuntu (hostnames, LVM), Open vSwitch, MySQL, Keystone, Glance, Neutron with ML2, Nova, Cinder, and the Dashboard.
IPv6-Ready! (Under Development)
This is a "step-by-step", "cut-and-paste" guide.
It was inspired by: http://openstack-folsom-install-guide.readthedocs.org/en/latest/
-
No Metadata, no GRE, no L3, no Security Groups.
-
Only 1 ethernet per physical server.
- No NAT within the Cloud, no Floating IPs, no multihost=true (i.e. no NAT at the Compute Node itself; NAT is bad, avoid it).
The idea is to move on and forget about IPv4 and NAT tables: with IPv6, we don't need NAT anymore. NAT66 is a bad, bad thing; be part of the Internet with a REAL IPv6 address, or stay out with the smelly NAT66... Again, never use "ip6tables -t nat", unless you want to break your network.
Here in Brazil, we call the "NAT Table" a gambiarra (workaround); it is the "thereIfixedit.com" of the old Internet infrastructure (IPv4), and there is NO need for it with IPv6. People who think NAT66 is a requirement just don't know how to deal with IPv6 and want more gambiarras.
One last word about NAT66: it breaks end-to-end Internet connectivity, effectively kicking you out of the real Internet. NAT is just a workaround (gambiarra) created to deal with IPv4 exhaustion, so there is no need for it in an IPv6 world.
Also, the "border gateway" (the default route of the physical servers and of the Instances) is located outside of the cloud. This means that we're mapping our physical network into OpenStack.
NOTE: The config file examples are meant to be merged into the respective files; they are not entire config file replacements. Keep the rest of each original file intact where possible (i.e. when not duplicating entries).
TODO - Enable the following services:
- Metadata
- SPICE
- Heat
- Ceilometer
- Swift
This lab has an Ubuntu box acting as a firewall, with our ISP's WAN link attached to it; the entire OpenStack infrastructure sits behind it.
This firewall box might have the aiccu package installed, so you'll have at least one IPv6 /64 block to play with.
Install an Ubuntu 14.04 server with at least two network cards (it can be a small virtual machine).
-
Network Topology:
-
WAN - eth0
-
IPv6 (if you have native)
- IP address: 2001:db8:0:0::2/64
- Gateway IP: 2001:db8:0:0::1
-
IPv4
- IP address: 200.10.1.2/28
- Gateway IP: 200.10.1.1
-
-
LAN - eth1
-
IPv6
- IP addresses: 2001:db8:1::1/64, 2001:db8:1:1::1/64
-
IPv4 (Legacy)
- IP addresses: 10.32.14.1/24, 10.33.14.1/24
-
# The loopback network interface
auto lo
iface lo inet loopback
iface lo inet6 loopback
# ETH0 - BEGIN - WAN faced
# The primary network interface connected to your ISP's WAN
auto eth0
# IPv6
# If you have native IPv6, configure it here, otherwise, aiccu will create
# a new interface for your IPv6 WAN, called sixxs, tunneled through your
# eth0 IPv4 address.
iface eth0 inet6 static
address 2001:db8:0:0::2
netmask 64
gateway 2001:db8:0:0::1
dns-search yourdomain.com
dns-domain yourdomain.com
dns-nameservers 2001:4860:4860::8844 2001:4860:4860::8888
# IPv4 - Legacy
iface eth0 inet static
address 200.10.1.2
netmask 28
gateway 200.10.1.1
dns-search yourdomain.com
dns-domain yourdomain.com
# Google Public DNS
dns-nameservers 8.8.4.4
# OpenDNS
# dns-nameservers 208.67.222.222 208.67.220.220 208.67.222.220 208.67.220.222
# OpenNIC
# dns-nameservers 66.244.95.20 74.207.247.4 216.87.84.211
# ETH0 - END
# ETH1 - BEGIN - LAN faced
auto eth1
# IPv6
# Your routed block. SixXS.net or TunnelBroker provide an entire /48 for
# you, for free! If you got one, configure it here.
# OpenStack Management
iface eth1 inet6 static
address 2001:db8:1::1
netmask 64
# OpenStack - Instance's gateway
auto eth1:10
iface eth1:10 inet6 manual
up ip -6 addr add 2001:db8:1:1::1/64 dev $IFACE
down ip -6 addr del 2001:db8:1:1::1/64 dev $IFACE
# Regular Network (Optional, if you have a /48, add your /64 subnets here from it)
#auto eth1:20
#iface eth1:20 inet6 manual
# up ip -6 addr add 2001:db8:1:2::1/64 dev $IFACE
# down ip -6 addr del 2001:db8:1:2::1/64 dev $IFACE
# IPv4 - Legacy
# OpenStack Management
iface eth1 inet static
address 10.32.14.1
netmask 24
# OpenStack - Instance's gateway
auto eth1:0
iface eth1:0 inet static
address 10.33.14.1
netmask 24
# ETH1 - END
Run the following commands:
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sed -i 's/#net.ipv6.conf.all.forwarding=1/net.ipv6.conf.all.forwarding=1/' /etc/sysctl.conf
sysctl -p
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
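A quick sanity check after "sysctl -p": the snippet below reads the forwarding knobs straight from /proc (equivalent to what sysctl reports), so you can confirm the gateway is actually routing before moving on.

```shell
# Both forwarding knobs should read 1 after "sysctl -p"
v4=$(cat /proc/sys/net/ipv4/ip_forward)
v6=$(cat /proc/sys/net/ipv6/conf/all/forwarding 2>/dev/null || echo "n/a")
echo "ipv4_forward=$v4 ipv6_forward=$v6"
# The single SNAT rule can be inspected with:
# iptables -t nat -L POSTROUTING -n
```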
NOTE #1: There is only 1 NAT rule on this environment, which resides on this gateway itself, to do the IPv4 SNAT/DNAT to/from the old Internet infrastructure. There is no IPv4 NAT within this OpenStack environment itself (no Floating IPs, "no multihost=true"). Also, there is no NAT when enjoying the New Internet Powered by IPv6!
NOTE #2: If you have more public IPv4 blocks available (i.e. at your gateway's eth1 interface), your Instances can also have public IPs! But you'll need to manage the packet filter yourself (no Security Groups in this setup).
The OpenStack Controller Node is powered by Ubuntu 14.04!
-
Requirements:
- 1 Virtual Machine (KVM/Xen) with 2G of RAM
- 1 Virtual Ethernet VirtIO Card
- 2 Virtual VirtIO HDs about 100G each (one for Ubuntu / Nova / Glance and another for Cinder)
- Hostname: controller.yourdomain.com
- 64-bit O.S. recommended
-
IPv6
- IP Address: 2001:db8:1:1::10/64
- Gateway IP: 2001:db8:1:1::1
-
IPv4 - Legacy
- IP Address: 10.32.14.232/24
- Gateway IP: 10.32.14.1
Install Ubuntu 14.04 on the first disk (the Minimum Virtual Machine flavor is fine), using Guided LVM Partitioning; leave the second disk untouched for now.
Login as root and run:
apt-get update
apt-get install vim iptables
apt-get dist-upgrade -y
reboot
After reboot, login as root again and run:
vi /etc/hosts
Make sure it has the following contents:
127.0.0.1 localhost.localdomain localhost
# IPv6
2001:db8:1:1::10 controller.yourdomain.com controller
2001:db8:1:1::100 compute1.yourdomain.com compute1
2001:db8:1:1::200 compute2.yourdomain.com compute2
# IPv4 - Not needed:
#10.32.14.232 controller.yourdomain.com controller
#10.32.14.234 compute1.yourdomain.com compute1
#10.32.14.236 compute2.yourdomain.com compute2
As root, run:
apt-get install openvswitch-switch
Edit your Controller Node network interfaces file:
vi /etc/network/interfaces
With:
# The primary network interface
# ETH0 - BEGIN
auto eth0
iface eth0 inet manual
up ip address add 0/0 dev $IFACE
up ip link set $IFACE up
# up ip link set $IFACE promisc on
# down ip link set $IFACE promisc off
down ip link set $IFACE down
# ETH0 - END
# BR-ETH0 - BEGIN
auto br-eth0
# IPv6
iface br-eth0 inet6 static
address 2001:db8:1:1::10
netmask 64
gateway 2001:db8:1:1::1
# Google Public DNS
dns-nameservers 2001:4860:4860::8844 2001:4860:4860::8888
# OpenNIC
# dns-nameservers 2001:530::216:3cff:fe8d:e704 2600:3c00::f03c:91ff:fe96:a6ad
# OpenDNS Public Name Servers
# dns-nameservers 2620:0:ccc::2 2620:0:ccd::2
# IPv4 - Legacy
iface br-eth0 inet static
address 10.32.14.232
netmask 24
gateway 10.32.14.1
# dns-* options are implemented by the resolvconf package, if installed
dns-search yourdomain.com
# Google Public DNS
dns-nameservers 8.8.4.4
# OpenDNS
# dns-nameservers 208.67.222.222 208.67.220.220 208.67.222.220 208.67.220.222
# OpenNIC
# dns-nameservers 66.244.95.20 74.207.247.4 216.87.84.211
# BR-ETH0 - END
Then run:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth0
# The next command will kick you out from this server (if connected to it via eth0), that's why we should reboot after running it:
ovs-vsctl add-port br-eth0 eth0 && reboot
apt-get install mysql-server python-mysqldb ntp curl openssl rabbitmq-server python-keyring
Configure it:
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart
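To see what that sed produces, here is the same substitution applied to a one-line sample: it appends a local-clock fallback at stratum 10, so the controller keeps serving time to the other nodes even when ntp.ubuntu.com is unreachable.

```shell
# Same substitution as above, run against a sample line instead of /etc/ntp.conf
printf 'server ntp.ubuntu.com\n' | \
  sed 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g'
# Prints three lines: the upstream server, the local clock driver
# (127.127.1.0) and its stratum-10 fudge.
```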
sed -i 's/127.0.0.1/::/g' /etc/mysql/my.cnf
Edit my.cnf...
vi /etc/mysql/my.cnf
With:
[mysqld]
#
# * For OpenStack - Keystone, etc - utf8
#
collation-server = utf8_general_ci
init-connect='SET NAMES utf8'
character-set-server = utf8
Creating the required databases:
service mysql restart
mysql -u root -p
Once within MySQL prompt, create the databases:
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY 'glancePass';
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutronUser'@'%' IDENTIFIED BY 'neutronPass';
CREATE DATABASE heat;
GRANT ALL ON heat.* TO 'heatUser'@'%' IDENTIFIED BY 'heatPass';
quit;
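The six nearly identical statements above can also be generated with a small loop. This sketch writes the same SQL to a file (the /tmp path is arbitrary), which you can then feed to MySQL with "mysql -u root -p < /tmp/openstack-dbs.sql"; the user/password naming matches the one used throughout this guide.

```shell
# Generate the CREATE DATABASE + GRANT statements for every OpenStack service
for svc in keystone glance nova cinder neutron heat; do
    printf "CREATE DATABASE %s;\nGRANT ALL ON %s.* TO '%sUser'@'%%' IDENTIFIED BY '%sPass';\n" \
        "$svc" "$svc" "$svc" "$svc"
done > /tmp/openstack-dbs.sql
```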
apt-get install keystone
Edit keystone.conf and change it to this:
vi /etc/keystone/keystone.conf
File contents:
[DEFAULT]
admin_token = ADMIN
bind_host = 2001:db8:1:1::10
[database]
connection = mysql://keystoneUser:keystonePass@controller.yourdomain.com/keystone
Then run:
keystone-manage db_sync
service keystone restart
cd ~
wget https://gist.github.com/tmartinx/7002197/raw/838770e4848c78dcd896fcfb6e4627d754051a72/keystone_basic.sh
wget https://gist.github.com/tmartinx/7002255/raw/40887b30a54df288483cb515d793a919bca671b4/keystone_endpoints_basic.sh
Edit keystone_basic.sh:
vi keystone_basic.sh
Replace this line, with your own FQDN:
HOST_IP=controller.yourdomain.com
Edit keystone_endpoints.sh:
vi keystone_endpoints_basic.sh
Replace this line, with your own FQDN:
HOST_IP=controller.yourdomain.com
EXT_HOST_IP=controller.yourdomain.com
Then run:
chmod +x keystone_basic.sh
chmod +x keystone_endpoints_basic.sh
./keystone_basic.sh
./keystone_endpoints_basic.sh
# Preliminary Keystone test
curl http://controller.yourdomain.com:35357/v2.0/endpoints -H 'x-auth-token: ADMIN' | python -m json.tool
Create your NOVA Resource Configuration file:
vi ~/.novarc
# COMMON OPENSTACK ENVS
export SERVICE_TOKEN=ADMIN
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_TENANT_NAME=admin
export OS_AUTH_URL="http://controller.yourdomain.com:5000/v2.0/"
export SERVICE_ENDPOINT="http://controller.yourdomain.com:35357/v2.0/"
export OS_AUTH_STRATEGY=keystone
export OS_NO_CACHE=1
# LEGACY NOVA ENVS
export NOVA_USERNAME=${OS_USERNAME}
export NOVA_PROJECT_ID=${OS_TENANT_NAME}
export NOVA_PASSWORD=${OS_PASSWORD}
export NOVA_API_KEY=${OS_PASSWORD}
export NOVA_URL=${OS_AUTH_URL}
export NOVA_VERSION=1.1
export NOVA_REGION_NAME=RegionOne
# EUCA2OOLs ENV VARIABLES
export EC2_ACCESS_KEY=ab2f155901fb4be5bae4ddc78c924665
export EC2_SECRET_KEY=ef89b9562e9b4653a8d68e3117f0ae32
export EC2_URL=http://controller.yourdomain.com:8773/services/Cloud
Append to your bashrc:
vi ~/.bashrc
With:
if [ -f ~/.novarc ]; then
. ~/.novarc
fi
Then, load it:
source ~/.bashrc
Test Keystone with a basic command to see if it works:
keystone tenant-list
Let's install Glance:
apt-get install glance python-mysqldb
Edit glance-api.conf...
vi /etc/glance/glance-api.conf
With:
[DEFAULT]
bind_host = 2001:db8:1:1::10
sql_connection = mysql://glanceUser:glancePass@controller.yourdomain.com/glance
registry_host = controller.yourdomain.com
[keystone_authtoken]
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass
[paste_deploy]
flavor = keystone
Edit glance-registry.conf...
vi /etc/glance/glance-registry.conf
With:
[DEFAULT]
bind_host = 2001:db8:1:1::10
sql_connection = mysql://glanceUser:glancePass@controller.yourdomain.com/glance
[keystone_authtoken]
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass
[paste_deploy]
flavor = keystone
Then run:
glance-manage db_sync
service glance-api restart; service glance-registry restart
Run the following commands:
# Ubuntu 14.04 - LTS - (Under Development)
glance image-create --location http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-i386-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 14.04 LTS - Trusty Tahr - 32-bit - Cloud Based Image"
glance image-create --location http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 14.04 LTS - Trusty Tahr - 64-bit - Cloud Based Image"
# Ubuntu 12.04.4 - LTS
glance image-create --location http://uec-images.ubuntu.com/releases/12.04/release/ubuntu-12.04-server-cloudimg-i386-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 12.04.4 LTS - Precise Pangolin - 32-bit - Cloud Based Image"
glance image-create --location http://uec-images.ubuntu.com/releases/12.04/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 12.04.4 LTS - Precise Pangolin - 64-bit - Cloud Based Image"
# Ubuntu 13.10
glance image-create --location http://uec-images.ubuntu.com/releases/13.10/release/ubuntu-13.10-server-cloudimg-i386-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 13.10 - Saucy Salamander - 32-bit - Cloud Based Image"
glance image-create --location http://uec-images.ubuntu.com/releases/13.10/release/ubuntu-13.10-server-cloudimg-amd64-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 13.10 - Saucy Salamander - 64-bit - Cloud Based Image"
# CirrOS (Optional - TestVM)
glance image-create --location http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-i386-disk.img --name "CirrOS Minimalist - 32-bit - Cloud Based Image" --is-public true --container-format bare --disk-format qcow2
glance image-create --location http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img --name "CirrOS Minimalist - 64-bit - Cloud Based Image" --is-public true --container-format bare --disk-format qcow2
# CoreOS 247.0.0 - Linux 3.13.5 - Docker 0.8.1
cd ~
wget http://storage.core-os.net/coreos/amd64-generic/dev-channel/coreos_production_openstack_image.img.bz2
bunzip2 coreos_production_openstack_image.img.bz2
glance image-create --name "CoreOS 247.0.0 - Linux 3.13.5 - Docker 0.8.1" --container-format ovf --disk-format qcow2 --file coreos_production_openstack_image.img --is-public True
# If you need to run Windows 2012 in your OpenStack, visit: http://cloudbase.it/ws2012r2 to download the image "windows_server_2012_r2_standard_eval_kvm_20131117.qcow2.gz", then, run:
gunzip /root/windows_server_2012_r2_standard_eval_kvm_20131117.qcow2.gz
glance image-create --name "Windows Server 2012 R2 Standard Eval" --container-format bare --disk-format qcow2 --is-public true < /root/windows_server_2012_r2_standard_eval_kvm_20131117.qcow2
# Listing the Images
glance image-list
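The repeated Ubuntu image-create calls above follow a single pattern, so they can be collapsed into a loop. This sketch only prints the commands (drop the leading "echo" to actually run them); adjust the release and base URL per the list above.

```shell
# Print (not run) the glance registration commands for both Trusty arches
base=http://cloud-images.ubuntu.com/trusty/current
for arch in i386 amd64; do
    echo glance image-create \
        --location "$base/trusty-server-cloudimg-$arch-disk1.img" \
        --is-public true --disk-format qcow2 --container-format bare \
        --name "'Ubuntu 14.04 LTS - Trusty Tahr - $arch - Cloud Based Image'"
done
```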
Run:
apt-get install nova-api nova-cert nova-consoleauth nova-scheduler nova-conductor nova-novncproxy novnc
Edit api-paste.ini...
vi /etc/nova/api-paste.ini
With:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
# signing_dir is configurable, but the default behavior of the authtoken
# middleware should be sufficient. It will create a temporary directory
# in the home directory for the user the nova process is running as.
#signing_dir = /var/lib/nova/keystone-signing
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
Then run:
cd /etc/nova
mv /etc/nova/nova.conf /etc/nova/nova.conf_Ubuntu
wget https://gist.github.com/tmartinx/7002808/raw/07b2e27a4996fd5b23175fc281b03ac422414639/nova.conf
chown nova: /etc/nova/nova.conf
chmod 640 /etc/nova/nova.conf
# NOTE: Edit your nova.conf file before running "db sync", to reflect your own FQDN (*.yourdomain.com)
nova-manage db sync
cd /etc/init.d/; for i in $(ls nova-*); do sudo service $i restart; done
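The FQDN adjustment mentioned in the NOTE above can be done in one sed pass. Demonstrated here on a throwaway sample file, with "example.org" standing in for your real domain; on the controller you would point it at /etc/nova/nova.conf instead.

```shell
# Replace the placeholder domain everywhere in a config file
# (/tmp/nova.conf.sample and example.org are stand-ins for illustration)
printf 'rabbit_host = controller.yourdomain.com\n' > /tmp/nova.conf.sample
sed -i 's/yourdomain\.com/example.org/g' /tmp/nova.conf.sample
cat /tmp/nova.conf.sample
```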
Run:
apt-get install neutron-server neutron-plugin-ml2 neutron-plugin-openvswitch-agent neutron-dhcp-agent neutron-metadata-agent
Edit neutron.conf...
vi /etc/neutron/neutron.conf
With:
[DEFAULT]
bind_host = 2001:db8:1:1::10
allow_overlapping_ips = True
rabbit_host = controller.yourdomain.com
[keystone_authtoken]
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass
signing_dir = $state_path/keystone-signing
[database]
sql_connection = mysql://neutronUser:neutronPass@controller.yourdomain.com/neutron
Edit ml2_conf.ini...
vi /etc/neutron/plugins/ml2/ml2_conf.ini
With:
[ml2]
type_drivers = local,flat
mechanism_drivers = openvswitch,l2population
[ml2_type_flat]
flat_networks = *
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[database]
sql_connection = mysql://neutronUser:neutronPass@controller.yourdomain.com/neutron
[ovs]
enable_tunneling = False
local_ip = 10.32.14.232
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0
Edit metadata_agent.ini (still not working in this environment; it is on the TODO list)...
vi /etc/neutron/metadata_agent.ini
With:
# The Neutron user information for accessing the Neutron API.
auth_url = http://controller.yourdomain.com:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass
nova_metadata_ip = 127.0.0.1
nova_metadata_port = 8775
metadata_proxy_shared_secret = metasecret13
Edit dhcp_agent.ini...
vi /etc/neutron/dhcp_agent.ini
With:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
dhcp_domain = yourdomain.com
Run:
cd /etc/init.d/; for i in $(ls neutron-*); do sudo service $i restart; done
First, get the admin tenant id and note it (referred to below as $ADMIN_TENANT_ID):
keystone tenant-list
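If you prefer to capture that id into a shell variable, the table printed by the keystone client can be parsed with awk. The row below is a made-up sample in the same layout; on the controller, pipe the real "keystone tenant-list" output through the awk instead.

```shell
# Parse the admin tenant id out of a "keystone tenant-list"-style table row
# (the id shown is a fabricated example)
sample='| 0c2f5fa5e6a64e4e8e9c0d1b2a3c4d5e | admin   | True    |'
ADMIN_TENANT_ID=$(printf '%s\n' "$sample" | awk -F'|' '$3 ~ /admin/ {gsub(/ /, "", $2); print $2}')
echo "$ADMIN_TENANT_ID"
```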
Now, map the physical network (the one from your "border gateway") into OpenStack Neutron:
neutron net-create --tenant-id $ADMIN_TENANT_ID sharednet1 --shared --provider:network_type flat --provider:physical_network physnet1
Create an IPv4 subnet on "sharednet1":
neutron subnet-create --ip-version 4 --tenant-id $ADMIN_TENANT_ID sharednet1 10.33.14.0/24 --dns_nameservers list=true 8.8.8.8 8.8.4.4
Create an IPv6 subnet on "sharednet1":
neutron subnet-create --ip-version 6 --tenant-id $ADMIN_TENANT_ID sharednet1 2001:db8:1:1::/64 --dns_nameservers list=true 2001:4860:4860::8844 2001:4860:4860::8888
IPv6 NOTE - Still not tested! It will not work, either in "Dual-Stack" mode or on its own.
This procedure makes use of the extra virtual HD of your controller.yourdomain.com (about 100G).
If you don't have it, add one: halt the VM -> go to "virt-manager" -> Add Hardware -> VirtIO Disk / 100G / RAW.
Then run:
# Create a primary partition on it, type LVM (8e)
cfdisk /dev/vdb
# Create the LVM Physical Volume
pvcreate /dev/vdb1
# Create the LVM Volume Group
vgcreate cinder-volumes /dev/vdb1
Installing Cinder:
apt-get install cinder-api cinder-scheduler cinder-volume python-mysqldb
Edit api-paste.ini...
vi /etc/cinder/api-paste.ini
With:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = service_pass
signing_dir = /var/lib/cinder
Edit cinder.conf...
vi /etc/cinder/cinder.conf
with:
[DEFAULT]
my_ip = 2001:db8:1:1::10
glance_host = 2001:db8:1:1::10
osapi_volume_listen = 2001:db8:1:1::10
sql_connection = mysql://cinderUser:cinderPass@controller.yourdomain.com/cinder
Run:
cinder-manage db sync
cd /etc/init.d/; for i in $(ls cinder-*); do sudo service $i restart; done
Run:
apt-get install openstack-dashboard memcached
apt-get purge openstack-dashboard-ubuntu-theme
This OpenStack Compute Node is powered by Ubuntu 14.04!
-
Requirements:
- 1 Physical Server with Virtualization support on CPU
- 1 Ethernet Card
- 1 HardDisk about 500G
- Hostname: compute1.yourdomain.com
- 64-bit O.S. highly recommended
-
IPv6
- IP address: 2001:db8:1:1::100/64
- Gateway IP: 2001:db8:1:1::1
-
IPv4 - Legacy
- IP address: 10.32.14.234/24
- Gateway IP: 10.32.14.1
This installation can use the "Minimum Installation" flavor. Using "Manual Partitioning", create the following partitions:
- /dev/sda1 on /boot (~256M - /dev/md0 if raid1[0], bootable)
- /dev/sda2 on LVM VG vg01 (~50G - /dev/md1 if raid1[0]) - lv_root (25G), lv_swap (XG) of compute1
- /dev/sda3 on LVM VG nova-local (~450G - /dev/md2 if raid1[0]) - Instances
Then run:
apt-get update
apt-get dist-upgrade -y
apt-get install ubuntu-virt-server libvirt-bin pm-utils nova-compute-kvm neutron-plugin-openvswitch-agent vim iptables -y
virsh net-destroy default
virsh net-undefine default
Edit:
vi /etc/hosts
With:
127.0.0.1 localhost.localdomain localhost
# IPv6
2001:db8:1:1::10 controller.yourdomain.com controller
2001:db8:1:1::100 compute1.yourdomain.com compute1
2001:db8:1:1::200 compute2.yourdomain.com compute2
# IPv4 - Not needed:
#10.32.14.232 controller.yourdomain.com controller
#10.32.14.234 compute1.yourdomain.com compute1
#10.32.14.236 compute2.yourdomain.com compute2
Edit:
vi /etc/network/interfaces
With:
# The primary network interface
# ETH0 - BEGIN
auto eth0
iface eth0 inet manual
up ip address add 0/0 dev $IFACE
up ip link set $IFACE up
# up ip link set $IFACE promisc on
# down ip link set $IFACE promisc off
down ip link set $IFACE down
# ETH0 - END
# BR-ETH0 - BEGIN
auto br-eth0
# IPv6
iface br-eth0 inet6 static
address 2001:db8:1:1::100
netmask 64
gateway 2001:db8:1:1::1
# Google Public DNS
dns-nameservers 2001:4860:4860::8844 2001:4860:4860::8888
# OpenNIC
# dns-nameservers 2001:530::216:3cff:fe8d:e704 2600:3c00::f03c:91ff:fe96:a6ad
# OpenDNS Public Name Servers:
# dns-nameservers 2620:0:ccc::2 2620:0:ccd::2
# IPv4 - Legacy
iface br-eth0 inet static
address 10.32.14.234
netmask 24
gateway 10.32.14.1
# dns-* options are implemented by the resolvconf package, if installed
dns-search yourdomain.com
# Google Public DNS
dns-nameservers 8.8.4.4
# OpenDNS
# dns-nameservers 208.67.222.222 208.67.220.220 208.67.222.220 208.67.220.222
# OpenNIC
# dns-nameservers 66.244.95.20 74.207.247.4 216.87.84.211
# BR-ETH0 - END
Run:
apt-get install openvswitch-switch
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth0
ovs-vsctl add-port br-eth0 eth0 && reboot
Edit:
vi /etc/default/grub
With:
GRUB_CMDLINE_LINUX="elevator=deadline"
Then run:
update-grub
echo vhost_net >> /etc/modules
Run:
# Prepare /etc/libvirt/libvirtd.conf (uncomment/adjust these settings):
sed -i 's/#listen_tls = 0/listen_tls = 0/' /etc/libvirt/libvirtd.conf
sed -i 's/#listen_tcp = 1/listen_tcp = 1/' /etc/libvirt/libvirtd.conf
sed -i 's/#auth_tcp = "sasl"/auth_tcp = "none"/' /etc/libvirt/libvirtd.conf
# Prepare /etc/init/libvirt-bin.conf (add "-l" so libvirtd listens):
sed -i 's/env libvirtd_opts="-d"/env libvirtd_opts="-d -l"/' /etc/init/libvirt-bin.conf
# Prepare /etc/default/libvirt-bin:
sed -i 's/libvirtd_opts="-d"/libvirtd_opts="-d -l"/' /etc/default/libvirt-bin
Run:
service libvirt-bin restart
# Or, better yet, reboot once more:
reboot
Edit:
vi /etc/nova/api-paste.ini
With:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
signing_dir = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
Run:
mv /etc/nova/nova.conf /etc/nova/nova.conf_Ubuntu
cd /etc/nova
wget https://gist.github.com/tmartinx/7019788/raw/e3921076077f02c41c2276c7ae1fad6baf963e3a/nova.conf
chown nova: /etc/nova/nova.conf
chmod 640 /etc/nova/nova.conf
cd /etc/init.d/; for i in $(ls nova-*); do sudo service $i restart; done
Edit:
vi /etc/neutron/neutron.conf
With:
[DEFAULT]
# debug = True
# verbose = True
allow_overlapping_ips = True
rabbit_host = controller.yourdomain.com
[keystone_authtoken]
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass
signing_dir = /var/lib/neutron/keystone-signing
Edit:
vi /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
With:
[DATABASE]
sql_connection = mysql://neutronUser:neutronPass@controller.yourdomain.com/neutron
[OVS]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0
Run:
service neutron-plugin-openvswitch-agent restart
Point mycloud.yourdomain.com to 2001:db8:1:1::10 (and/or 10.32.14.232) and open the Horizon Dashboard at:
http://mycloud.yourdomain.com/horizon - user admin, pass admin_pass
Congratulations!!
You have your own Private Cloud Computing Environment up and running! With IPv6!!
Enjoy it!
By Thiago Martins thiagocmartinsc@gmail.com