Ultimate OpenStack Havana Guide
# Guide to deploy OpenStack Havana on top of Ubuntu 12.04.3
#
# It covers: Ubuntu (hostnames, LVM), Open vSwitch, MySQL, Keystone, Glance,
# Neutron, Nova, Cinder and Dashboard.
# Preliminary IPv6 support!
# This is a "step-by-step", "cut-and-paste" guide.
# It was inspired by:
# http://openstack-folsom-install-guide.readthedocs.org/en/latest/
# Limitations:
# 1- No Metadata, no GRE, no L3, no Security Groups.
# 2- Only 1 ethernet for each physical server.
# Features:
# 1- No NAT within the Cloud, no `Floating IPs', no multihost=true (i.e. NAT at
# the Compute Node itself - bad, avoid NAT).
# My idea is to move on and forget about IPv4 and NAT tables, so, with IPv6, we
# don't need NAT anymore. NAT66 is a bad thing, be part of the Internet with
# REAL IPv6 address! Or stay out with your smelly NAT66... :-P
# Also, the gateway (physical / instances) is located outside of the cloud.
# This means that we're mapping our physical network into OpenStack.
# NOTE:
# The contents between `---' and `---' are meant to be added to the
# respective files; they are not entire config file replacements. Keep the rest
# of the original files intact when possible (i.e. when not duplicating
# entries).
# TODO List:
# Setup: Metadata, Spice, Heat, Ceilometer and Swift.
# External to the Cloud Computing environment
---- Gateway (Ubuntu 12.04.3 recommended) ----
eth0 - public IPv4 and/or IPv6 from SixxS, TunnelBroker.net or native.
eth1 - 10.32.14.1/24, 10.33.14.1/24 and your IPv6 /64 (or /48) Block from SixxS,
TunnelBroker.net or native.
# Example of its /etc/network/interfaces file:
---
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface connected to your ISP's WAN
auto eth0
# IPv4
iface eth0 inet static
address 200.10.1.2
netmask 28
gateway 200.10.1.1
# dns-* options are implemented by the resolvconf package, if installed
dns-search yourdomain.com
# Google Public DNS
dns-nameservers 8.8.8.8 8.8.4.4
# OpenDNS
# dns-nameservers 208.67.222.222 208.67.220.220 208.67.222.220 208.67.220.222
# OpenNIC
# dns-nameservers 66.244.95.20 74.207.247.4 216.87.84.211
# IPv6 - If you have native IPv6, configure it here
#iface eth0 inet6 static
# address 2001:db8:0:0::2
# netmask 64
# gateway 2001:db8:0:0::1
# OpenStack - Management (API, physical servers (compute) and/or generic
# hypervisors, gateway itself)
auto eth1
# IPv4
iface eth1 inet static
address 10.32.14.1
netmask 24
# OpenStack - Instance's gateway
auto eth1:0
iface eth1:0 inet static
address 10.33.14.1
netmask 24
# IPv6 - Your routed block. SixxS.net or TunnelBroker provides an entire /48
# for you, for free! If you get one, configure it here.
#iface eth1 inet6 static
# address 2001:db8:1::1
# netmask 48
# OpenStack - Management
#auto eth1:1
#iface eth1:1 inet6 manual
# up ip -6 addr add 2001:db8:1:1::1/64 dev $IFACE
# down ip -6 addr del 2001:db8:1:1::1/64 dev $IFACE
# OpenStack - Instance's gateway
#auto eth1:2
#iface eth1:2 inet6 manual
# up ip -6 addr add 2001:db8:1:2::1/64 dev $IFACE
# down ip -6 addr del 2001:db8:1:2::1/64 dev $IFACE
---
# Enable IPv4 / IPv6 (optional) packet forwarding
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sed -i 's/#net.ipv6.conf.all.forwarding=1/net.ipv6.conf.all.forwarding=1/' /etc/sysctl.conf
sysctl -p
# NAT rule
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
NOTE 1: There is only 1 NAT rule on this environment, which resides on this
gateway, to do the IPv4 SNAT/DNAT to/from the "old" Internet infrastructure.
There is no IPv4 NAT within the OpenStack environment itself (no Floating IPs,
"no multihost=true"). Also, there is no NAT when enjoying the new `Internet
Powered by IPv6`!
NOTE 2: If you have more public IPv4 blocks available (i.e. routed to your
gateway's eth1 interface), your Instances can also have public IPs on them! But
you'll need to manage the packet filter by yourself (there are no Security
Groups in this setup).
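For example, a minimal packet-filter sketch at the gateway (not part of the
original guide; adjust addresses and policy to your own environment):
# Let replies back in, allow SSH towards the Instances, drop everything else
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 22 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -j DROP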
OPTIONAL: Install and enable `radvd' on your eth1, with your IPv6 Block from
SixxS (NOT OpenNIC-friendly) or TunnelBroker.net (OpenNIC-friendly); that way,
your physical servers will get their IPv6 addresses automatically, and you can
start kissing IPv4 goodbye. Tip: DNS helps a lot when moving to IPv6! ;-)
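# Example (a sketch, not part of the original guide) of a minimal radvd setup
# at the gateway, advertising the management /64 used elsewhere in this guide:
aptitude install radvd
vi /etc/radvd.conf
---
interface eth1
{
        AdvSendAdvert on;
        prefix 2001:db8:1:1::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
        };
};
---
service radvd restart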
# Inside your Cloud Computing environment!
---- Ubuntu 12.04.3 (controller.yourdomain.com) ----
# Requirements: 1 Virtual Machine (KVM/Xen) with 2G of RAM - 2 Virtual HDs about
# 100G each - 1 ethernet
#
# 64 bits O.S. Recommended
#
# hostname: controller.yourdomain.com
#
# IPv4: 10.32.14.232/24
# Gateway: 10.32.14.1
#
# IPv6: 2001:db8:1:1::10/64
# Gateway: 2001:db8:1:1::1
#
# Install Ubuntu 12.04.3 on the first disk; it can be the `Minimum Virtual Machine'
# flavor, using `Guided LVM Partitioning'. Leave the second disk untouched for
# now.
# Login as root
aptitude update
aptitude install vim iptables python-software-properties
add-apt-repository cloud-archive:havana
aptitude update && aptitude safe-upgrade -y
reboot
vi /etc/hosts
---
127.0.0.1 localhost.localdomain localhost
# IPv4
10.32.14.232 controller.yourdomain.com controller
10.32.14.234 compute1.yourdomain.com compute1
10.32.14.236 compute2.yourdomain.com compute2
# IPv6
#2001:db8:1:1::10 controller.yourdomain.com controller
#2001:db8:1:1::100 compute1.yourdomain.com compute1
#2001:db8:1:1::200 compute2.yourdomain.com compute2
---
aptitude install openvswitch-switch openvswitch-datapath-dkms
vi /etc/network/interfaces
---
# The primary network interface
auto eth0
iface eth0 inet manual
up ip address add 0/0 dev $IFACE
up ip link set $IFACE up
#       up ip link set $IFACE promisc on
#       down ip link set $IFACE promisc off
down ip link set $IFACE down
auto br-eth0
iface br-eth0 inet static
address 10.32.14.232
netmask 24
gateway 10.32.14.1
# dns-* options are implemented by the resolvconf package, if installed
dns-search yourdomain.com
# Google Public DNS
dns-nameservers 8.8.8.8 8.8.4.4
# OpenDNS
# dns-nameservers 208.67.222.222 208.67.220.220 208.67.222.220 208.67.220.222
# OpenNIC
# dns-nameservers 66.244.95.20 74.207.247.4 216.87.84.211
# Use "inet6 auto" when you have radvd running at your gateway, otherwise, use static.
#iface br-eth0 inet6 auto
#iface br-eth0 inet6 static
# address 2001:db8:1:1::10
# netmask 64
# gateway 2001:db8:1:1::1
# Google Public DNS
# dns-nameservers 2001:4860:4860::8888 2001:4860:4860::8844
# OpenNIC
# dns-nameservers 2001:530::216:3cff:fe8d:e704 2600:3c00::f03c:91ff:fe96:a6ad 2600:3c00::f03c:91ff:fe96:a6ad
# OpenDNS Public Name Servers:
# dns-nameservers 2620:0:ccc::2 2620:0:ccd::2
---
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth0
ovs-vsctl add-port br-eth0 eth0 && reboot
ovs-vsctl show
aptitude update
aptitude install mysql-server python-mysqldb ntp curl openssl rabbitmq-server python-keyring
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart
sed -i 's/127.0.0.1/::/g' /etc/mysql/my.cnf
service mysql restart
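# Optionally confirm that MySQL now listens on all addresses (a quick check,
# not in the original guide):
netstat -ntl | grep 3306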
mysql -u root -p
---
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY 'glancePass';
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutronUser'@'%' IDENTIFIED BY 'neutronPass';
CREATE DATABASE heat;
GRANT ALL ON heat.* TO 'heatUser'@'%' IDENTIFIED BY 'heatPass';
quit;
---
---- Keystone ----
aptitude install keystone
vi /etc/keystone/keystone.conf
---
[DEFAULT]
admin_token = ADMIN
connection = mysql://keystoneUser:keystonePass@controller.yourdomain.com/keystone
---
keystone-manage db_sync
service keystone restart
cd ~
wget https://gist.github.com/tmartinx/7002197/raw/838770e4848c78dcd896fcfb6e4627d754051a72/keystone_basic.sh
wget https://gist.github.com/tmartinx/7002255/raw/40887b30a54df288483cb515d793a919bca671b4/keystone_endpoints_basic.sh
vi keystone_basic.sh
---
HOST_IP=controller.yourdomain.com
---
vi keystone_endpoints_basic.sh
---
HOST_IP=controller.yourdomain.com
EXT_HOST_IP=controller.yourdomain.com
---
chmod +x keystone_basic.sh
chmod +x keystone_endpoints_basic.sh
./keystone_basic.sh
./keystone_endpoints_basic.sh
vi ~/.novarc
---
# COMMON OPENSTACK ENVS
export SERVICE_TOKEN=ADMIN
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_TENANT_NAME=admin
export OS_AUTH_URL="http://controller.yourdomain.com:5000/v2.0/"
export SERVICE_ENDPOINT="http://controller.yourdomain.com:35357/v2.0/"
export OS_AUTH_STRATEGY=keystone
export OS_NO_CACHE=1
# LEGACY NOVA ENVS
export NOVA_USERNAME=${OS_USERNAME}
export NOVA_PROJECT_ID=${OS_TENANT_NAME}
export NOVA_PASSWORD=${OS_PASSWORD}
export NOVA_API_KEY=${OS_PASSWORD}
export NOVA_URL=${OS_AUTH_URL}
export NOVA_VERSION=1.1
export NOVA_REGION_NAME=RegionOne
# EUCA2OOLs ENV VARIABLES
export EC2_ACCESS_KEY=ab2f155901fb4be5bae4ddc78c924665
export EC2_SECRET_KEY=ef89b9562e9b4653a8d68e3117f0ae32
export EC2_URL=http://controller.yourdomain.com:8773/services/Cloud
---
vi ~/.bashrc
---
if [ -f ~/.novarc ]; then
. ~/.novarc
fi
---
source ~/.bashrc
keystone tenant-list
curl http://controller.yourdomain.com:35357/v2.0/endpoints -H 'x-auth-token: ADMIN' | python -m json.tool
---- Glance ----
aptitude install glance python-mysqldb
vi /etc/glance/glance-api.conf
---
[DEFAULT]
sql_connection = mysql://glanceUser:glancePass@controller.yourdomain.com/glance
[keystone_authtoken]
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass
[paste_deploy]
flavor = keystone
---
vi /etc/glance/glance-registry.conf
---
[DEFAULT]
sql_connection = mysql://glanceUser:glancePass@controller.yourdomain.com/glance
[keystone_authtoken]
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass
[paste_deploy]
flavor = keystone
---
glance-manage db_sync
service glance-api restart; service glance-registry restart
cd ~
wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-i386-disk.img
wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
# CirrOS
glance image-create --name "CirrOS Minimalist - 32 Bits - Cloud Based Image" --is-public true --container-format bare --disk-format qcow2 < cirros-0.3.1-i386-disk.img
glance image-create --name "CirrOS Minimalist - 64 Bits - Cloud Based Image" --is-public true --container-format bare --disk-format qcow2 < cirros-0.3.1-x86_64-disk.img
# Ubuntu 12.04.3
glance image-create --location http://uec-images.ubuntu.com/releases/12.04/release/ubuntu-12.04-server-cloudimg-i386-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 12.04.3 LTS - Precise Pangolin - 32 Bits - Cloud Based Image"
glance image-create --location http://uec-images.ubuntu.com/releases/12.04/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 12.04.3 LTS - Precise Pangolin - 64 Bits - Cloud Based Image"
# Ubuntu 13.10
glance image-create --location http://uec-images.ubuntu.com/releases/13.10/release/ubuntu-13.10-server-cloudimg-i386-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 13.10 - Saucy Salamander - 32 Bits - Cloud Based Image"
glance image-create --location http://uec-images.ubuntu.com/releases/13.10/release/ubuntu-13.10-server-cloudimg-amd64-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 13.10 - Saucy Salamander - 64 Bits - Cloud Based Image"
# Ubuntu 14.04 (under development)
glance image-create --location http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-i386-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 14.04 - LTS - Trusty Tahr - 32 Bits - Cloud Based Image"
glance image-create --location http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu 14.04 - LTS - Trusty Tahr - 64 Bits - Cloud Based Image"
glance image-list
---- Nova ----
aptitude install nova-api nova-cert nova-consoleauth nova-scheduler nova-conductor nova-novncproxy novnc
vi /etc/nova/api-paste.ini
---
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
# signing_dir is configurable, but the default behavior of the authtoken
# middleware should be sufficient. It will create a temporary directory
# in the home directory for the user the nova process is running as.
#signing_dir = /var/lib/nova/keystone-signing
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
---
cd /etc/nova
mv /etc/nova/nova.conf /etc/nova/nova.conf_Ubuntu
wget https://gist.github.com/tmartinx/7002808/raw/07b2e27a4996fd5b23175fc281b03ac422414639/nova.conf
chown nova: /etc/nova/nova.conf
chmod 640 /etc/nova/nova.conf
nova-manage db sync
cd /etc/init.d/; for i in $(ls nova-*); do sudo service $i restart; done
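# Optionally verify the Nova services (a quick check, not in the original
# guide; the controller services should appear with a ":-)" state):
nova-manage service list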
---- Neutron ----
aptitude install neutron-server neutron-plugin-openvswitch neutron-plugin-openvswitch-agent neutron-dhcp-agent neutron-metadata-agent
vi /etc/neutron/neutron.conf
---
[DEFAULT]
allow_overlapping_ips = True
rabbit_host = controller.yourdomain.com
[keystone_authtoken]
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass
signing_dir = $state_path/keystone-signing
[database]
sql_connection = mysql://neutronUser:neutronPass@controller.yourdomain.com/neutron
---
vi /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
---
[database]
sql_connection = mysql://neutronUser:neutronPass@controller.yourdomain.com/neutron
[OVS]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0
---
vi /etc/neutron/metadata_agent.ini
---
# The Neutron user information for accessing the Neutron API.
auth_url = http://controller.yourdomain.com:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass
nova_metadata_ip = 127.0.0.1
nova_metadata_port = 8775
metadata_proxy_shared_secret = metasecret13
---
vi /etc/neutron/dhcp_agent.ini
---
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
dhcp_domain = yourdomain.com
---
cd /etc/init.d/; for i in $(ls neutron-*); do sudo service $i restart; done
keystone tenant-list # To note the admin tenant id.
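# One way to set $ADMIN_TENANT_ID for the commands below (a sketch, assuming
# the default table output of the keystone CLI):
ADMIN_TENANT_ID=$(keystone tenant-list | awk '/ admin / {print $2}')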
neutron net-create --tenant-id $ADMIN_TENANT_ID sharednet1 --shared --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create --ip-version 4 --tenant-id $ADMIN_TENANT_ID sharednet1 10.33.14.0/24 --dns_nameservers list=true 8.8.8.8 8.8.4.4
# OPTIONAL IPv6 - Still not tested! It does not work yet, neither in "Dual-Stack" mode nor on its own.
#neutron subnet-create --ip-version 6 --tenant-id $ADMIN_TENANT_ID sharednet1 2001:db8:1:2::/64 --dns_nameservers list=true 2001:4860:4860::8888
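# Optionally verify what was created (not in the original guide):
neutron net-list
neutron subnet-list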
---- Cinder ----
# Use the extra Virtual HD of your controller (about 100G).
# If you don't have one, add it:
# halt -> virt-manager -> Add hardware -> VirtIO Disk / 100G / RAW
cfdisk /dev/vdb
pvcreate /dev/vdb1
vgcreate cinder-volumes /dev/vdb1
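# Optionally confirm the volume group exists (not in the original guide):
vgs cinder-volumes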
aptitude install cinder-api cinder-scheduler cinder-volume python-mysqldb
vi /etc/cinder/api-paste.ini
---
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = service_pass
signing_dir = /var/lib/cinder
---
echo "sql_connection = mysql://cinderUser:cinderPass@controller.yourdomain.com/cinder" >> /etc/cinder/cinder.conf
cinder-manage db sync
cd /etc/init.d/; for i in $(ls cinder-*); do sudo service $i restart; done
---- Dashboard ----
aptitude install openstack-dashboard memcached
aptitude purge openstack-dashboard-ubuntu-theme
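# If the Dashboard does not pick up the theme removal right away, restarting
# Apache and memcached usually helps (not in the original guide):
service apache2 restart; service memcached restart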
---- Ubuntu 12.04.3 (compute1.yourdomain.com) ----
# Requirements: 1 Physical Server with Virtualization support on CPU, 1 ethernet
#
# IPv4: 10.32.14.234/24
# Gateway 10.32.14.1
#
# IPv6: 2001:db8:1:1::100/64
# Gateway: 2001:db8:1:1::1
#
# Install Ubuntu 12.04.3; it can be the `Minimum Installation' flavor, using
# `Manual Partitioning'. Make the following partitions:
#
# /dev/sda1 on /boot (~256M - /dev/md0 if raid1[0], bootable)
# /dev/sda2 on LVM VG vg01 (~50G - /dev/md1 if raid1[0]) - lv_root (25G), lv_swap (XG) of compute1
# /dev/sda3 on LVM VG nova-local (~450G - /dev/md2 if raid1[0]) - Instances
aptitude update
aptitude install vim iptables python-software-properties
add-apt-repository cloud-archive:havana
aptitude update && aptitude safe-upgrade -y
reboot
vi /etc/hosts
---
127.0.0.1 localhost.localdomain localhost
10.32.14.232 controller.yourdomain.com controller
10.32.14.234 compute1.yourdomain.com compute1
10.32.14.236 compute2.yourdomain.com compute2
---
vi /etc/network/interfaces
---
# The primary network interface
auto eth0
iface eth0 inet manual
up ip address add 0/0 dev $IFACE
up ip link set $IFACE up
#       up ip link set $IFACE promisc on
#       down ip link set $IFACE promisc off
down ip link set $IFACE down
auto br-eth0
iface br-eth0 inet static
address 10.32.14.234
netmask 24
gateway 10.32.14.1
# dns-* options are implemented by the resolvconf package, if installed
dns-search yourdomain.com
# Google Public DNS
dns-nameservers 8.8.8.8 8.8.4.4
# OpenDNS
# dns-nameservers 208.67.222.222 208.67.220.220 208.67.222.220 208.67.220.222
# OpenNIC
# dns-nameservers 66.244.95.20 74.207.247.4 216.87.84.211
# Use "inet6 auto" when you have radvd running at your gateway, otherwise, use static.
#iface br-eth0 inet6 auto
#iface br-eth0 inet6 static
# address 2001:db8:1:1::100
# netmask 64
# gateway 2001:db8:1:1::1
# Google Public DNS
# dns-nameservers 2001:4860:4860::8888 2001:4860:4860::8844
# OpenNIC
# dns-nameservers 2001:530::216:3cff:fe8d:e704 2600:3c00::f03c:91ff:fe96:a6ad 2600:3c00::f03c:91ff:fe96:a6ad
# OpenDNS Public Name Servers:
# dns-nameservers 2620:0:ccc::2 2620:0:ccd::2
---
vi /etc/default/grub
---
GRUB_CMDLINE_LINUX="elevator=deadline"
---
update-grub
echo vhost_net >> /etc/modules
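# Optionally load the module now, without waiting for the reboot below (not in
# the original guide):
modprobe vhost_net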
aptitude install openvswitch-switch openvswitch-datapath-dkms
ovs-vsctl add-br br-int
ovs-vsctl add-br br-eth0
ovs-vsctl add-port br-eth0 eth0 && reboot
aptitude update
aptitude install ubuntu-virt-server libvirt-bin pm-utils nova-compute-kvm nova-conductor neutron-plugin-openvswitch-agent
virsh net-destroy default
virsh net-undefine default
vi /etc/libvirt/libvirtd.conf
---
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
---
vi /etc/init/libvirt-bin.conf
---
env libvirtd_opts="-d -l"
---
vi /etc/default/libvirt-bin
---
libvirtd_opts="-d -l"
---
service libvirt-bin restart
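# Optionally confirm that libvirtd now listens on TCP, which live migration
# relies on (a quick check, not in the original guide):
virsh -c qemu+tcp://127.0.0.1/system list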
vi /etc/nova/api-paste.ini
---
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
signing_dir = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
---
mv /etc/nova/nova.conf /etc/nova/nova.conf_Ubuntu
cd /etc/nova
wget https://gist.github.com/tmartinx/7019788/raw/e3921076077f02c41c2276c7ae1fad6baf963e3a/nova.conf
chown nova: /etc/nova/nova.conf
chmod 640 /etc/nova/nova.conf
cd /etc/init.d/; for i in $(ls nova-*); do sudo service $i restart; done
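# Back on the controller, optionally confirm that compute1 registered itself
# (not in the original guide):
nova-manage service list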
---- Neutron (still on compute1.yourdomain.com) ----
vi /etc/neutron/neutron.conf
---
# debug = True
# verbose = True
allow_overlapping_ips = True
rabbit_host = controller.yourdomain.com
[keystone_authtoken]
auth_host = controller.yourdomain.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass
signing_dir = /var/lib/neutron/keystone-signing
---
vi /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
---
[DATABASE]
sql_connection = mysql://neutronUser:neutronPass@controller.yourdomain.com/neutron
[OVS]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0
---
service neutron-plugin-openvswitch-agent restart
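# Optionally, from the controller, confirm that the Open vSwitch agent of
# compute1 is alive (not in the original guide):
neutron agent-list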
# Done!
Point mycloud.yourdomain.com to 10.32.14.232 (and/or 2001:db8:1:1::10) and open
the Dashboard at:
http://mycloud.yourdomain.com/horizon - user admin, pass admin_pass
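A first-boot sketch (not part of the original guide), run from the controller;
the IDs come from `glance image-list' and `neutron net-list':
nova boot --flavor m1.tiny --image <cirros-image-id> --nic net-id=<sharednet1-id> test1
nova list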
Congrats!