@mallow111
Last active March 1, 2016 22:00
Install the octavia plugin in devstack
References:
https://chapter60.wordpress.com/2015/02/20/installing-openstack-lbaas-version-2-on-kilo-using-devstack/
https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/load-balancing-as-a-service-kilo-and-beyond
https://docs.google.com/document/d/1aCHIPY0Zdo8mfvlpdvJ7Bqfei5kBvhD9fd6e8w0v7nM/edit
0. Update packages before installing devstack:
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get -y dist-upgrade
sudo apt-get -y install git emacs24-nox
Install devstack:
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
# permission tip: this lets you edit the code under PyCharm
sudo chown -R minwang1 octavia
sudo chmod -R u+rX octavia
1. git clone devstack first
2. cd devstack, edit your localrc, and run ./stack.sh:
enable_plugin neutron-lbaas https://review.openstack.org/openstack/neutron-lbaas
enable_plugin octavia https://github.com/openstack/octavia.git
LIBS_FROM_GIT+=python-neutronclient
DATABASE_PASSWORD=password
ADMIN_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
RABBIT_PASSWORD=password
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
ENABLED_SERVICES=rabbit,mysql,key
#ENABLED_SERVICES+=,horizon
HORIZON_REPO=https://github.com/openstack/horizon
HORIZON_BRANCH=master
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch
IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img"
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron,horizon
ENABLED_SERVICES+=,q-lbaasv2
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch
ENABLED_SERVICES+=,tempest
ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
# ===== END localrc =====
3. Restart the q-svc, q-agt, o-api and o-cw services
4. cd ~/devstack
source openrc admin admin
check if net-list shows lb-mgmt-net -- if yes, the octavia plugin has been installed in devstack
minwang1@ubuntu:~/devstack$ neutron net-list
+--------------------------------------+-------------+----------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-------------+----------------------------------------------------------+
| 7997a108-baca-4136-bfc7-8ec09eeef3b3 | lb-mgmt-net | f383cb84-70b6-4ef1-92f8-d6fa7a49cf2e 192.168.0.0/24 |
| 899e33c0-b89d-4874-ab76-015af5cf6ed8 | public | 5081dd73-e451-4322-bdd5-e2c5c0e06be9 172.24.4.0/24 |
| | | bf157ed4-13ae-46b0-9f12-f973b42cfc4d 2001:db8::/64 |
| f6a163c5-a255-4670-aa77-9b22617db5c4 | private | e4823426-828a-4a41-bbd8-bb85bc315394 fd3e:3571:1e5f::/64 |
| | | f29b9aaa-c3ea-4e2a-9d41-af4580ea0aaa 10.0.0.0/24 |
+--------------------------------------+-------------+----------------------------------------------------------+
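The check above can be scripted. A minimal sketch, using a sample `neutron net-list` row in place of live output (the real command needs a running devstack):

```shell
# Grep the net-list output for lb-mgmt-net; its presence means the
# octavia plugin created its management network. The `sample` row below
# stands in for live `neutron net-list` output.
sample='| 7997a108-baca-4136-bfc7-8ec09eeef3b3 | lb-mgmt-net | f383cb84-70b6-4ef1-92f8-d6fa7a49cf2e 192.168.0.0/24 |'
if echo "$sample" | grep -q 'lb-mgmt-net'; then
  result="octavia plugin installed"
else
  result="lb-mgmt-net missing"
fi
echo "$result"
```

Against a live deployment you would pipe `neutron net-list` into the same grep.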
minwang1@ubuntu:~/devstack$ nova image-list
+--------------------------------------+--------------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------------------------+--------+--------+
| 51e64358-b53d-46c4-9237-1ffaab26eeda | amphora-x64-haproxy | ACTIVE | |
| a21b3057-23fa-4f15-9eff-1008bc27b15b | cirros-0.3.3-x86_64-disk | ACTIVE | |
+--------------------------------------+--------------------------+--------+--------+
#create nova instances on private network
nova boot --image $(nova image-list | awk '/ cirros-0.3.3-x86_64-disk / {print $2}') --flavor 1 --nic net-id=$(neutron net-list | awk '/ private / {print $2}') node1
nova boot --image $(nova image-list | awk '/ cirros-0.3.3-x86_64-disk / {print $2}') --flavor 1 --nic net-id=$(neutron net-list | awk '/ private / {print $2}') node2
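How the `$( ... | awk ... )` substitutions in the boot commands work: the CLI tables print `| <id> | <name> | ...`, so with awk's default whitespace splitting, `$2` is the ID column. A sketch on a sample row (the row content is illustrative, not live output):

```shell
# Extract the ID of the 'private' network from a neutron table row.
# The spaces around ' private ' in the pattern avoid matching substrings
# such as 'private-subnet'.
row='| f6a163c5-a255-4670-aa77-9b22617db5c4 | private | e4823426-828a-4a41-bbd8-bb85bc315394 fd3e:3571:1e5f::/64 |'
net_id=$(echo "$row" | awk '/ private / {print $2}')
echo "$net_id"
```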
#add secgroup rule to allow ssh etc..
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0; nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default tcp 80 80 0.0.0.0/0
minwang1@ubuntu:~/devstack$ nova list
+--------------------------------------+-------+--------+------------+-------------+--------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------------------------------+
| 64bad975-cad2-415a-acec-6ae5af6610a0 | node1 | ACTIVE | - | Running | private=10.0.0.3, fda3:38ca:b691:0:f816:3eff:feee:c3e1 |
| 53b5a380-a4a2-4e3c-858e-325f2efa675d | node2 | ACTIVE | - | Running | private=10.0.0.4, fda3:38ca:b691:0:f816:3eff:fef6:3f8f |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------------------------------+
# ssh into each of the two instances (ssh cirros@10.0.0.3 and ssh cirros@10.0.0.4; username 'cirros', password 'cubswin:)') and set up a simple web server on each. Run vi web.sh and fill in the following content (this may change over time; it lives in octavia/devstack/samples/webserver.sh):
#!/bin/sh
MYIP=$(/sbin/ifconfig eth0 | grep 'inet addr' | awk -F: '{print $2}' | awk '{print $1}');
OUTPUT_STR="Welcome to $MYIP\r"
OUTPUT_LEN=${#OUTPUT_STR}
# answer every connection on port 80 with a one-line HTTP response
while true; do
    echo -e "HTTP/1.0 200 OK\r\nContent-Length: ${OUTPUT_LEN}\r\n\r\n${OUTPUT_STR}" | sudo nc -l -p 80
done
# save the file, then chmod 700 web.sh and run ./web.sh
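What the MYIP pipeline in web.sh does, step by step, on a sample `ifconfig eth0` line (cirros uses the old-style ifconfig output format; the sample line below is illustrative):

```shell
# grep keeps the 'inet addr' line; awk -F: splits on ':' so $2 is
# '10.0.0.3  Bcast'; the second awk takes the first whitespace field.
line='          inet addr:10.0.0.3  Bcast:10.0.0.255  Mask:255.255.255.0'
MYIP=$(echo "$line" | grep 'inet addr' | awk -F: '{print $2}' | awk '{print $1}')
echo "$MYIP"
```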
neutron lbaas-loadbalancer-create --name lb1 private-subnet
Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| description | |
| id | 09aba0f6-6b58-40a5-a99e-9468def49c0b |
| listeners | |
| name | lb1 |
| operating_status | ONLINE |
| provider | octavia |
| provisioning_status | ACTIVE |
| tenant_id | 536f61f30ff242a49b098671c59b85bd |
| vip_address | 10.0.0.5 |
| vip_port_id | e8c3079a-a2f0-4a58-b4a0-7b9e0437742b |
| vip_subnet_id | 35ddde6b-9202-4d93-b8fa-6453316e94b0 |
+---------------------+--------------------------------------+
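The vip_address (10.0.0.5 here) is what we will curl later. If you want it in a variable, it can be pulled out of a field/value table like the one above with awk; a sketch on a sample row standing in for live output:

```shell
# Split on '|': $2 is the field name, $3 the value; gsub strips padding.
row='| vip_address         | 10.0.0.5                             |'
vip=$(echo "$row" | awk -F'|' '/vip_address/ {gsub(/ /, "", $3); print $3}')
echo "$vip"
```

Against a live deployment you would pipe `neutron lbaas-loadbalancer-show lb1` into the same awk.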
# now check nova list again -- an associated amphora instance should have been built
minwang1@ubuntu:~/devstack$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------------+
| 3e81acec-81f9-4bc5-af79-f802b17bb7d2 | amphora-03bccbdd-5a71-4155-9370-de90fc77200e | ACTIVE | - | Running | lb-mgmt-net=192.168.0.3; private=10.0.0.6, fda3:38ca:b691:0:f816:3eff:fef3:1d99 |
| 64bad975-cad2-415a-acec-6ae5af6610a0 | node1 | ACTIVE | - | Running | private=10.0.0.3, fda3:38ca:b691:0:f816:3eff:feee:c3e1 |
| 53b5a380-a4a2-4e3c-858e-325f2efa675d | node2 | ACTIVE | - | Running | private=10.0.0.4, fda3:38ca:b691:0:f816:3eff:fef6:3f8f |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------------------------------------+
neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name listener1
Created a new listener:
+--------------------------+------------------------------------------------+
| Field | Value |
+--------------------------+------------------------------------------------+
| admin_state_up | True |
| connection_limit | -1 |
| default_pool_id | |
| default_tls_container_id | |
| description | |
| id | d6675c13-ea92-46d4-badf-dfa977485337 |
| loadbalancers | {"id": "09aba0f6-6b58-40a5-a99e-9468def49c0b"} |
| name | listener1 |
| protocol | HTTP |
| protocol_port | 80 |
| sni_container_ids | |
| tenant_id | 536f61f30ff242a49b098671c59b85bd |
+--------------------------+------------------------------------------------+
minwang1@ubuntu:~/devstack$ neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
or create it with session persistence:
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --session-persistence type=SOURCE_IP
Created a new pool:
+---------------------+------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------+
| admin_state_up | True |
| description | |
| healthmonitor_id | |
| id | 15f10567-a31c-4817-9cdc-be8dd6e28c13 |
| lb_algorithm | ROUND_ROBIN |
| listeners | {"id": "d6675c13-ea92-46d4-badf-dfa977485337"} |
| members | |
| name | pool1 |
| protocol | HTTP |
| session_persistence | |
| tenant_id | 536f61f30ff242a49b098671c59b85bd |
+---------------------+------------------------------------------------+
minwang1@ubuntu:~/devstack$ neutron lbaas-member-create --subnet private-subnet --address 10.0.0.4 --protocol-port 80 pool1
Created a new member:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| address | 10.0.0.4 |
| admin_state_up | True |
| id | 7fcbe589-3c59-488e-96ae-469fd6d4b1a6 |
| protocol_port | 80 |
| subnet_id | 35ddde6b-9202-4d93-b8fa-6453316e94b0 |
| tenant_id | 536f61f30ff242a49b098671c59b85bd |
| weight | 1 |
+----------------+--------------------------------------+
minwang1@ubuntu:~/devstack$ neutron lbaas-member-create --subnet private-subnet --address 10.0.0.3 --protocol-port 80 pool1
Created a new member:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| address | 10.0.0.3 |
| admin_state_up | True |
| id | c0b064e1-1ae7-4a34-848b-fe85bdf7b4a0 |
| protocol_port | 80 |
| subnet_id | 35ddde6b-9202-4d93-b8fa-6453316e94b0 |
| tenant_id | 536f61f30ff242a49b098671c59b85bd |
| weight | 1 |
+----------------+--------------------------------------+
#show the namespaces
sudo ip netns list
qdhcp-38e3641c-06a7-48b9-a01e-929573196e21
qrouter-4e5d649c-9556-4bbb-a4c7-914b475c3f54
qdhcp-2e5d121e-0e72-4519-bd05-88d1a6143363
#curl the load balancer through the router namespace
neutron lbaas-loadbalancer-list
+--------------------------------------+------+-------------+---------------------+----------+
| id | name | vip_address | provisioning_status | provider |
+--------------------------------------+------+-------------+---------------------+----------+
| 09aba0f6-6b58-40a5-a99e-9468def49c0b | lb1 | 10.0.0.5 | ACTIVE | octavia |
+--------------------------------------+------+-------------+---------------------+----------+
sudo ip netns exec qrouter-4e5d649c-9556-4bbb-a4c7-914b475c3f54 curl -v 10.0.0.5
minwang1@ubuntu:~/devstack$ sudo ip netns exec qrouter-4e5d649c-9556-4bbb-a4c7-914b475c3f54 curl -v 10.0.0.5
* Rebuilt URL to: 10.0.0.5/
* Hostname was NOT found in DNS cache
* Trying 10.0.0.5...
* Connected to 10.0.0.5 (10.0.0.5) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 10.0.0.5
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
<
Welcome to 10.0.0.3
^Cminwang1@ubuntu:~/devstack$ sudo ip netns exec qrouter-4e5d649c-9556-4bbb-a4c7-914b475c3f54 curl -v 10.0.0.5
* Rebuilt URL to: 10.0.0.5/
* Hostname was NOT found in DNS cache
* Trying 10.0.0.5...
* Connected to 10.0.0.5 (10.0.0.5) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 10.0.0.5
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
<
Welcome to 10.0.0.4
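The two curls above hit node1 and node2 in turn, showing round-robin in action. To tally which backend answered over many requests, pipe the responses through `sort | uniq -c`; a sketch where sample responses stand in for repeated runs of `sudo ip netns exec qrouter-<id> curl -s 10.0.0.5`:

```shell
# uniq -c prints '<count> Welcome to <ip>'; awk keeps count and IP.
responses='Welcome to 10.0.0.3
Welcome to 10.0.0.4
Welcome to 10.0.0.3
Welcome to 10.0.0.4'
summary=$(echo "$responses" | sort | uniq -c | awk '{print $1, $4}')
echo "$summary"
```

With ROUND_ROBIN and equal weights, the counts should come out roughly even.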
#ssh into the amphora instance
ssh -i /etc/octavia/.ssh/octavia_ssh_key ubuntu@192.168.0.3
The reason we need a neutron namespace here is that every load balancer is associated with an amphora, and in octavia the amphorae have their default route set to the management network -- so a curl sent from the host would be answered via the management network and go nowhere. Running curl inside the router namespace lets you see all the packets, so you get the response. The reason we use amphorae instead of a single haproxy process on the compute node is that it is more scalable and reliable (failover).
------------------------------------------------------------------------------------------
#when clean.sh is not working, do the following:
ps -ef | grep -i glance
this lists all of the glance process IDs; kill each PID and then run ./stack.sh again
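The PIDs can also be extracted directly with awk. A sketch on a sample `ps -ef` line (illustrative, not live output); the `[g]` bracket trick keeps the grep/awk process itself out of the match:

```shell
# $2 of a ps -ef line is the PID; /[g]lance/ matches 'glance' but the
# pattern string itself contains '[g]lance', so it never matches itself.
psline='stack     1234     1  0 10:00 ?        00:00:01 /usr/bin/python glance-api'
pid=$(echo "$psline" | awk '/[g]lance/ {print $2}')
echo "$pid"
# then: kill $pid  (for each PID found) and rerun ./stack.sh
```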
#update octavia in case it is not up to date
cd /opt/stack/octavia
git pull
sudo python setup.py install