Michael Chapman michaeltchapman
# upload the apex tripleo heat templates
rm -rf opnfv-tht
git clone -b stable/brahmaputra https://github.com/trozet/opnfv-tht
pushd opnfv-tht > /dev/null
git archive --format=tar.gz --prefix=openstack-tripleo-heat-templates/ HEAD > ../opnfv-tht.tar.gz
popd > /dev/null
LIBGUESTFS_BACKEND=direct virt-customize --upload opnfv-tht.tar.gz:/usr/share \
--run-command "cd /usr/share && tar xzf opnfv-tht.tar.gz" \
-a overcloud-full-opendaylight_build.qcow2
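As a quick sanity check (assuming the other libguestfs tools are installed alongside virt-customize), virt-ls can list the extracted tree inside the image:

LIBGUESTFS_BACKEND=direct virt-ls -a overcloud-full-opendaylight_build.qcow2 \
    /usr/share/openstack-tripleo-heat-templates | head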
+    # SDNVPN Hack
+    if ('networking_bgpvpn.neutron.services.plugin.BGPVPNPlugin' in hiera('neutron::service_plugins')) {
+      class { 'neutron::config':
+        server_config => {
+          'service_providers/service_provider' => {
+            'value' => 'BGPVPN:Dummy:networking_bgpvpn.neutron.services.service_drivers.driver_api.BGPVPNDriver:default'
+          }
+        }
+      }
+    }
(venv2)[stack@undercloud venv2]$ pip install python-neutronclient
...
(venv2)[stack@undercloud venv2]$ cp -r ../../testneutron/venv/lib/python2.7/site-packages/networking_bgpvpn lib/python2.7/site-packages
(venv2)[stack@undercloud venv2]$ neutron help | grep bgp
(venv2)[stack@undercloud venv2]$ cp -r ../../testneutron/venv/lib/python2.7/site-packages/networking_bgpvpn_tempest lib/python2.7/site-packages
(venv2)[stack@undercloud venv2]$ neutron help | grep bgp
(venv2)[stack@undercloud venv2]$ cp -r ../../testneutron/venv/lib/python2.7/site-packages/networking_bgpvpn-3.0.1.dev7-py2.7.egg-info lib/python2.7/site-packages
(venv2)[stack@undercloud venv2]$ neutron help | grep bgp
bgpvpn-create [bgpvpn] Create a BGPVPN.
bgpvpn-delete [bgpvpn] Delete a given BGPVPN.
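With the extension now visible to the client, a BGPVPN can be created from the same venv; a minimal sketch, where the route target and name are placeholder values:

neutron bgpvpn-create --route-targets 64512:1 --name example-vpn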
[stack@undercloud networking-bgpvpn]$ sudo python setup.py install
running install
[pbr] Writing ChangeLog
[pbr] Generating ChangeLog
[pbr] ChangeLog complete (0.0s)
[pbr] Generating AUTHORS
[pbr] AUTHORS complete (0.0s)
running build
running build_py
creating build
eval nodes_node1_mac_address=00:25:B5:cc:00:1e nodes_node1_ipmi_ip=172.30.8.69 nodes_node1_ipmi_user=admin nodes_node1_ipmi_pass=octopus nodes_node1_pm_type=pxe_ipmitool nodes_node1_cpus=2 nodes_node1_memory=8192 nodes_node1_disk=40 nodes_node1_arch=x86_64 nodes_node1_capabilities=profile:control nodes_node2_mac_address=00:25:B5:cc:00:5d nodes_node2_ipmi_ip=172.30.8.78 nodes_node2_ipmi_user=admin nodes_node2_ipmi_pass=octopus nodes_node2_pm_type=pxe_ipmitool nodes_node2_cpus=2 nodes_node2_memory=8192 nodes_node2_disk=40 nodes_node2_arch=x86_64 nodes_node2_capabilities=profile:control nodes_node3_mac_address=00:25:B5:cc:00:1d nodes_node3_ipmi_ip=172.30.8.67 nodes_node3_ipmi_user=admin nodes_node3_ipmi_pass=octopus nodes_node3_pm_type=pxe_ipmitool nodes_node3_cpus=2 nodes_node3_memory=8192 nodes_node3_disk=40 nodes_node3_arch=x86_64 nodes_node3_capabilities=profile:control nodes_node4_mac_address=00:25:B5:cc:00:3c nodes_node4_ipmi_ip=172.30.8.76 nodes_node4_ipmi_user=admin nodes_node4_ipmi_pass=octopus nodes_no

Installation High-Level Overview - Bare Metal Deployment

The setup presumes that you have 6 bare metal servers and have already set up network connectivity on at least 2 interfaces for all servers via a TOR switch or other network implementation.

The physical TOR switches are not automatically configured by the OPNFV reference platform. All of the networks involved in the OPNFV infrastructure, as well as the provider networks and the private tenant VLANs, need to be configured manually.

@michaeltchapman
michaeltchapman / gist:adf2eb593e619d1494f6
Created January 26, 2016 22:43
OpenStack as microservices
OpenStack as microservices
--------------------------
One of the more promising experiments I did in the devops space in the last couple of years was supporting
microservices architectures in the way OpenStack is deployed and operated. The core design is that consul
runs across all nodes as the initial or 'seed' service to establish cluster membership, and from that base
we can build everything we need. When data changes in consul, this triggers puppet runs on the nodes that
subscribe to that data via consul_watch, and the updated data is fed in via hiera to be realised on each
node. This creates a feedback loop whereby services can be arbitrarily distributed or consolidated as
needed, depending on the deployment scenario.
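A minimal sketch of that loop, assuming a hypothetical key prefix and handler script (neither is from the original setup): a consul watch on the shared data re-runs puppet whenever it changes.

# re-run puppet whenever anything under the (hypothetical) openstack/config/ prefix changes
consul watch -type=keyprefix -prefix=openstack/config/ /usr/local/bin/run-puppet.sh

# where run-puppet.sh simply applies the node's manifest, e.g.
#   puppet apply /etc/puppet/manifests/site.pp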
7: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 00:70:bd:1d:b9:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.37.11/24 brd 192.168.37.255 scope global br-ex
valid_lft forever preferred_lft forever
inet 192.168.37.10/32 brd 192.168.37.255 scope global br-ex
valid_lft forever preferred_lft forever
inet6 fe80::270:bdff:fe1d:b943/64 scope link
valid_lft forever preferred_lft forever
@michaeltchapman
michaeltchapman / gist:e22a9aa40a9d67299dbe
Created January 20, 2016 01:49
Super informative tripleo error messages
+ openstack overcloud deploy --templates --libvirt-type qemu -e /usr/share/openstack-tripleo-heat-templates/environments/opendaylight.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml --control-scale 3 --compute-scale 2 -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e network-environment.yaml --ntp-server pool.ntp.org
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
Stack failed with status: Resource CREATE failed: resources.ControllerServicesBaseDeployment_Step2: resources.ControllerNodesPostDeployment.Error: resources[2]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
Heat Stack create failed.
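The exit code on its own says nothing useful; a common next step (a sketch, not part of the gist) is to walk the nested stacks for the failed resource and then pull the deployment output that actually contains the error:

# find which nested resource failed
heat resource-list --nested-depth 5 overcloud | grep -i failed

# then show the stdout/stderr of the failing deployment
heat deployment-show <deployment-id>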
class consul_profile::openstack::compute {
  if ! hiera('mysql_Address', false) {
    runtime_fail { 'novadbdep':
      fail    => true,
      message => 'novadbdep: requires mysql_Address',
    }
  } else {
    Consul_profile::Discovery::Consul::Multidep<| title == 'novamultidep' |> {
      response +> 'novadbdep'
    }