OpenStack-Ansible configuration - stable/train release
https://docs.openstack.org/project-deploy-guide/openstack-ansible/train/deploymenthost.html
https://github.com/openstack/openstack-ansible/tree/stable/train
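For reference, preparing the deployment host followed the linked guide; a minimal sketch of those upstream steps (paths are the documented defaults, adjust to your environment):
git clone -b stable/train https://github.com/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
scripts/bootstrap-ansible.sh
cp -r etc/openstack_deploy /etc/openstack_deploy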
In my configuration I have 2 servers (special thanks to CeeMac on IRC #openstack-ansible; thanks also to jrosser, ioni, jamesdenton, johnsom and the whole community):
1. Controller node
2. Compute node
My Ceph cluster is installed separately using ceph-ansible (https://docs.ceph.com/ceph-ansible/stable-4.0/).
I installed it before installing OpenStack.
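Before starting the OpenStack deployment it is worth confirming the external Ceph cluster is healthy; a quick sketch, run on a Ceph admin/mon node:
ceph -s            # overall cluster health
ceph osd pool ls   # the pools referenced later in user_variables.yml (glance-images, cinder-volumes, ephemeral-vms) should already exist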
The controller node is connected to the router through the "eno1" interface.
The compute node is connected to the router through the "eno1" interface.
The controller node is directly connected to the compute node through the "eno2" interface.
My router has these tagged VLANs:
vlan 10 ---> 172.29.236.1/22 (used for br-mgmt)
vlan 20 ---> 172.29.244.1/22 (used for br-storage)
vlan 30 ---> 172.29.240.1/22 (used for vxlan)
vlan 510 --> 172.29.232.1/22 (used for lbaas)
vlan provider -> 192.168.211.1/24 (used for providing floating IPs)
The untagged VLAN is vlan 50 (192.168.50.1/24).
Once the interfaces below are up, each node should be able to reach the router's gateway on every VLAN; see the sketch right after this list.
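A quick connectivity sketch from either node (the addresses are the router-side gateways listed above):
ping -c 3 172.29.236.1   # vlan 10 / br-mgmt
ping -c 3 172.29.244.1   # vlan 20 / br-storage
ping -c 3 172.29.240.1   # vlan 30 / br-vxlan
ping -c 3 192.168.50.1   # untagged vlan 50 / br-ext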
After installing OpenStack successfully, I applied the following patch inside the octavia container (thanks to johnsom on IRC #openstack-ansible).
Put this in the file /etc/octavia/octavia.conf under the [networking] section:
allow_invisible_resource_usage = True
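One way to apply it, as a rough sketch only: attach to the Octavia container from the controller, set the option, and restart the Octavia services. The container name below is hypothetical (list yours with lxc-ls) and the exact unit names may differ on your release:
lxc-ls -1 | grep octavia                                        # find the real container name
lxc-attach -n controller1_octavia_server_container-XXXXXXXX     # hypothetical name
crudini --set /etc/octavia/octavia.conf networking allow_invisible_resource_usage True   # or edit the file by hand
systemctl restart octavia-api octavia-worker octavia-housekeeping octavia-health-manager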
# Put this inside your compute node in /etc/network/interfaces
# Loopback
auto lo
iface lo inet loopback
# Physical interfaces
auto eno1
iface eno1 inet manual
auto eno2
iface eno2 inet manual
# Vlans
auto eno1.10
iface eno1.10 inet manual
vlan-raw-device eno1
auto eno1.20
iface eno1.20 inet manual
vlan-raw-device eno1
auto eno1.30
iface eno1.30 inet manual
vlan-raw-device eno1
# Bridges
# External bridge
auto br-ext
iface br-ext inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
address 192.168.50.20/24
gateway 192.168.50.1
bridge_ports eno1
dns-servers 8.8.8.8 8.8.4.4
iface br-ext inet static
address 192.168.50.220/24
iface br-ext inet static
address 192.168.50.221/24
# Management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
address 172.29.236.20/22
bridge_ports eno1.10
iface br-mgmt inet static
address 172.29.236.220/22
iface br-mgmt inet static
address 172.29.236.221/22
# Storage bridge
auto br-storage
iface br-storage inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
address 172.29.244.20/22
bridge_ports eno1.20
# Vxlan bridge
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
address 172.29.240.20/22
bridge_ports eno1.30
# Vlan bridge
auto br-vlan
iface br-vlan inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports eno2
# Lbaas bridge
auto br-lbaas
iface br-lbaas inet manual
pre-up ip link add dev br-lbaas type bridge
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports none
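After writing the file, apply it and check that the bridges exist; a sketch assuming classic ifupdown and bridge-utils (a reboot works just as well):
systemctl restart networking   # or: ifdown -a && ifup -a
brctl show                     # expect br-ext, br-mgmt, br-storage, br-vxlan, br-vlan, br-lbaas
ip -br addr                    # brief view of every interface and its addresses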
# Put this inside your controller node in /etc/network/interfaces
# Loopback
auto lo
iface lo inet loopback
# Physical interfaces
auto eno1
iface eno1 inet manual
auto eno2
iface eno2 inet manual
# Vlans
auto eno1.10
iface eno1.10 inet manual
vlan-raw-device eno1
auto eno1.20
iface eno1.20 inet manual
vlan-raw-device eno1
auto eno1.30
iface eno1.30 inet manual
vlan-raw-device eno1
auto eno2.510
iface eno2.510 inet manual
vlan-raw-device eno2
# Bridges
# External bridge
auto br-ext
iface br-ext inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
address 192.168.50.10/24
gateway 192.168.50.1
bridge_ports eno1
dns-servers 8.8.8.8 8.8.4.4
iface br-ext inet static
address 192.168.50.210/24
iface br-ext inet static
address 192.168.50.211/24
iface br-ext inet static
address 192.168.50.212/24
# Management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
address 172.29.236.10/22
bridge_ports eno1.10
iface br-mgmt inet static
address 172.29.236.210/22
iface br-mgmt inet static
address 172.29.236.211/22
iface br-mgmt inet static
address 172.29.236.212/22
# Storage bridge
auto br-storage
iface br-storage inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
address 172.29.244.10/22
bridge_ports eno1.20
# Vxlan bridge
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
address 172.29.240.10/22
bridge_ports eno1.30
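# Veth pair linking br-lbaas to br-lbaas-mgmt (brought up by the bridges below)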
iface v-br-lbaas inet manual
pre-up ip link add v-br-lbaas type veth peer name v-br-lbaas-mgmt || :
hwaddress 02:00:00:01:00:00
iface v-br-lbaas-mgmt inet manual
pre-up ip link add v-br-lbaas-mgmt type veth peer name v-br-lbaas || :
hwaddress 02:00:00:01:00:01
# Lbaas bridge
auto br-lbaas
iface br-lbaas inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports v-br-lbaas
pre-up ifup v-br-lbaas
# Lbaas management bridge
auto br-lbaas-mgmt
iface br-lbaas-mgmt inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports eno2.510 v-br-lbaas-mgmt
pre-up ifup v-br-lbaas-mgmt
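On the controller, once the configuration is up, verify the veth pair and the LBaaS bridges, and check basic connectivity to the compute node over each network (a sketch using the addresses defined above):
ip -d link show v-br-lbaas
ip -d link show v-br-lbaas-mgmt
brctl show br-lbaas
brctl show br-lbaas-mgmt
ping -c 3 172.29.236.20   # compute node over br-mgmt
ping -c 3 172.29.244.20   # compute node over br-storage
ping -c 3 172.29.240.20   # compute node over br-vxlan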
# Put this in /etc/openstack_deploy/openstack_user_config.yml
---
cidr_networks: &cidr_networks
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22
  lbaas: 172.29.232.0/22
used_ips:
  - 192.168.50.1
  - 192.168.50.10
  - 192.168.50.20
  - 192.168.50.210
  - 192.168.50.211
  - 192.168.50.212
  - 192.168.50.220
  - 192.168.50.221
  - 172.29.236.1
  - 172.29.236.10
  - 172.29.236.20
  - 172.29.236.210
  - 172.29.236.211
  - 172.29.236.212
  - 172.29.236.220
  - 172.29.236.221
  - 172.29.240.1
  - 172.29.240.10
  - 172.29.240.20
  - 172.29.244.1
  - 172.29.244.10
  - 172.29.244.20
  - 172.29.232.10
  - 172.29.232.20
global_overrides:
  cidr_networks: *cidr_networks
  internal_lb_vip_address: 172.29.236.210
  external_lb_vip_address: openstack.arkan.cloud
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        group_binds:
          - all_containers
          - hosts
        type: "raw"
        container_bridge: "br-mgmt"
        container_interface: "eth1"
        container_type: "veth"
        ip_from_q: "container"
        is_container_address: true
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute
          # - ceph-osd
          # Uncomment the next line if using swift with a storage network.
          # - swift_proxy
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        container_mtu: "9000"
        ip_from_q: "tunnel"
        range: "1:1000"
        net_name: "vxlan"
        type: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-ext"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eno1"
        ip_from_q: "lbaas"
        range: "511:520"
        net_name: "external"
        type: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-lbaas"
        container_type: "veth"
        container_interface: "eth14"
        ip_from_q: "lbaas"
        net_name: "lbaas"
        type: "raw"
        group_binds:
          - neutron_linuxbridge_agent
          - octavia-worker
          - octavia-housekeeping
          - octavia-health-manager
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eno2"
        type: "vlan"
        net_name: "vlan"
        range: "510:510"
        group_binds:
          - neutron_linuxbridge_agent
###
### Infrastructure
###
_infrastructure_ip1: &infrastructure_ip1
  ip: 172.29.236.10
_infrastructure_hosts: &infrastructure_hosts
  controller1: *infrastructure_ip1
_compute_ip1: &compute_ip1
  ip: 172.29.236.20
compute_hosts: &compute_hosts
  compute1: *compute_ip1
# ceph-osd_hosts: *compute_hosts
# galera, memcache, rabbitmq, utility
shared-infra_hosts: *infrastructure_hosts
# ceph-mon containers
# ceph-mon_hosts: *infrastructure_hosts
# ceph-radosgw
# ceph-rgw_hosts: *infrastructure_hosts
# repository (apt cache, python packages, etc)
repo-infra_hosts: *infrastructure_hosts
# load balancer
# Ideally the load balancer should not use the Infrastructure hosts.
# Dedicated hardware is best for improved performance and security.
haproxy_hosts: *infrastructure_hosts
# rsyslog server
log_hosts: *infrastructure_hosts
###
### OpenStack
###
# keystone
identity_hosts: *infrastructure_hosts
# cinder api services
storage-infra_hosts: *infrastructure_hosts
# cinder volume hosts (Ceph RBD-backed)
storage_hosts:
  controller1:
    ip: 172.29.236.10
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        ceph:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: ceph
          rbd_pool: cinder-volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
# glance
image_hosts: *infrastructure_hosts
# placement
placement-infra_hosts: *infrastructure_hosts
# nova api, conductor, etc services
compute-infra_hosts: *infrastructure_hosts
# heat
orchestration_hosts: *infrastructure_hosts
# horizon
dashboard_hosts: *infrastructure_hosts
# neutron server, agents (L3, etc)
network_hosts: *compute_hosts
# ceilometer (telemetry data collection)
metering-infra_hosts: *infrastructure_hosts
# aodh (telemetry alarm service)
metering-alarm_hosts: *infrastructure_hosts
# gnocchi (telemetry metrics storage)
metrics_hosts: *infrastructure_hosts
# ceilometer compute agent (telemetry data collection)
metering-compute_hosts: *compute_hosts
# octavia
octavia-infra_hosts: *infrastructure_hosts
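With this file (and user_variables.yml below) in /etc/openstack_deploy, the service secrets can be generated and the configuration syntax-checked before running any playbooks; a sketch of the documented commands:
cd /opt/openstack-ansible
./scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
cd playbooks
openstack-ansible setup-infrastructure.yml --syntax-check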
# Put this in /etc/openstack_deploy/user_variables.yml
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###
### This file contains commonly used overrides for convenience. Please inspect
### the defaults for each role to find additional override options.
###
## Debug and Verbose options.
debug: false
## Set the service setup host
# The default is to use localhost (the deploy host where ansible runs),
# but any other host can be used. If using an alternative host with all
# required libraries in a venv (eg: the utility container) then the
# python interpreter needs to be set. If it is not, the default is
# the system python interpreter.
# If you wish to use the first utility container in the inventory for
# all service setup tasks, uncomment the following.
#
#openstack_service_setup_host: "{{ groups['utility_all'][0] }}"
#openstack_service_setup_host_python_interpreter: "/openstack/venvs/utility-{{ openstack_release }}/bin/python"
## Installation method for OpenStack services
# Default option (source) is to install the OpenStack services using PIP
# packages. An alternative method (distro) is to use the distribution cloud
# repositories to install OpenStack using distribution packages
install_method: source
## Common Glance Overrides
# Set glance_default_store to "swift" if using Cloud Files backend
# or "rbd" if using ceph backend; the latter will trigger ceph to get
# installed on glance. If using a file store, a shared file store is
# recommended. See the OpenStack-Ansible install guide and the OpenStack
# documentation for more details.
# Note that "swift" is automatically set as the default back-end if there
# are any swift hosts in the environment. Use this setting to override
# this automation if you wish for a different default back-end.
# glance_default_store: file
## Ceph pool name for Glance to use
# glance_rbd_store_pool: images
# glance_rbd_store_chunk_size: 8
## Common Nova Overrides
# When nova_libvirt_images_rbd_pool is defined, ceph will be installed on nova
# hosts.
# nova_libvirt_images_rbd_pool: vms
# If you wish to change the dhcp_domain configured for both nova and neutron
# dhcp_domain: openstacklocal
## Common Glance Overrides when using a Swift back-end
# By default when 'glance_default_store' is set to 'swift' the playbooks will
# expect to use the Swift back-end that is configured in the same inventory.
# If the Swift back-end is not in the same inventory (i.e. it is already set up
# through some other means) then these settings should be used.
#
# NOTE: Ensure that the auth version matches your authentication endpoint.
#
# NOTE: If the password for glance_swift_store_key contains a dollar sign ($),
# it must be escaped with an additional dollar sign ($$), not a backslash. For
# example, a password of "super$ecure" would need to be entered as
# "super$$ecure" below. See Launchpad Bug #1259729 for more details.
#
# glance_swift_store_auth_version: 3
# glance_swift_store_auth_address: "https://some.auth.url.com"
# glance_swift_store_user: "OPENSTACK_TENANT_ID:OPENSTACK_USER_NAME"
# glance_swift_store_key: "OPENSTACK_USER_PASSWORD"
# glance_swift_store_container: "NAME_OF_SWIFT_CONTAINER"
# glance_swift_store_region: "NAME_OF_REGION"
## Common Ceph Overrides
# ceph_mons:
# - 10.16.5.40
# - 10.16.5.41
# - 10.16.5.42
## Custom Ceph Configuration File (ceph.conf)
# By default, your deployment host will connect to one of the mons defined above to
# obtain a copy of your cluster's ceph.conf. If you prefer, uncomment ceph_conf_file
# and customise to avoid ceph.conf being copied from a mon.
# ceph_conf_file: |
# [global]
# fsid = 00000000-1111-2222-3333-444444444444
# mon_initial_members = mon1.example.local,mon2.example.local,mon3.example.local
# mon_host = 10.16.5.40,10.16.5.41,10.16.5.42
# # optionally, you can use this construct to avoid defining this list twice:
# # mon_host = {{ ceph_mons|join(',') }}
# auth_cluster_required = cephx
# auth_service_required = cephx
# By default, openstack-ansible configures all OpenStack services to talk to
# RabbitMQ over encrypted connections on port 5671. To opt-out of this default,
# set the rabbitmq_use_ssl variable to 'false'. The default setting of 'true'
# is highly recommended for securing the contents of RabbitMQ messages.
# rabbitmq_use_ssl: false
# RabbitMQ management plugin is enabled by default, the guest user has been
# removed for security reasons and a new userid 'monitoring' has been created
# with the 'monitoring' user tag. In order to modify the userid, uncomment the
# following and change 'monitoring' to your userid of choice.
# rabbitmq_monitoring_userid: monitoring
## Additional pinning generator that will allow for more packages to be pinned as you see fit.
## All pins allow for package and versions to be defined. Be careful using this as versions
## are always subject to change and updates regarding security will become your problem from this
## point on. Pinning can be done based on a package version, release, or origin. Use "*" in the
## package name to indicate that you want to pin all packages to a particular constraint.
# apt_pinned_packages:
# - { package: "lxc", version: "1.0.7-0ubuntu0.1" }
# - { package: "libvirt-bin", version: "1.2.2-0ubuntu13.1.9" }
# - { package: "rabbitmq-server", origin: "www.rabbitmq.com" }
# - { package: "*", release: "MariaDB" }
## Environment variable settings
# This allows users to specify additional environment variables to be set,
# which is useful when working behind a proxy. If working behind
# a proxy, it's important to always specify the scheme as "http://". This is what
# the underlying python libraries will handle best. This proxy information will be
# placed both on the hosts and inside the containers.
## Example environment variable setup:
## This is used by apt-cacher-ng to download apt packages:
# proxy_env_url: http://username:pa$$w0rd@10.10.10.9:9000/
## (1) This sets up a permanent environment, used during and after deployment:
# no_proxy_env: "localhost,127.0.0.1,{{ internal_lb_vip_address }},{{ external_lb_vip_address }},{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}"
# global_environment_variables:
# HTTP_PROXY: "{{ proxy_env_url }}"
# HTTPS_PROXY: "{{ proxy_env_url }}"
# NO_PROXY: "{{ no_proxy_env }}"
# http_proxy: "{{ proxy_env_url }}"
# https_proxy: "{{ proxy_env_url }}"
# no_proxy: "{{ no_proxy_env }}"
#
## (2) This is applied only during deployment, nothing is left after deployment is complete:
# deployment_environment_variables:
# http_proxy: "{{ proxy_env_url }}"
# https_proxy: "{{ proxy_env_url }}"
# no_proxy: "localhost,127.0.0.1,{{ internal_lb_vip_address }},{{ external_lb_vip_address }},{% for host in groups['keystone_all'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}"
## SSH connection wait time
# If an increased delay for the ssh connection check is desired,
# uncomment this variable and set it appropriately.
#ssh_delay: 5
## HAProxy and keepalived
# All the previous variables are used inside a var, in the group vars.
# You can override the current keepalived definition (see
# group_vars/all/keepalived.yml) in your user space if necessary.
#
# Uncomment this to disable keepalived installation (cf. documentation)
# haproxy_use_keepalived: False
#
# HAProxy Keepalived configuration (cf. documentation)
# Make sure that this is set correctly according to the CIDR used for your
# internal and external addresses.
# haproxy_keepalived_external_vip_cidr: "{{external_lb_vip_address}}/32"
# haproxy_keepalived_internal_vip_cidr: "{{internal_lb_vip_address}}/32"
# haproxy_keepalived_external_interface:
# haproxy_keepalived_internal_interface:
# Defines the default VRRP id used for keepalived with haproxy.
# Overwrite it to your value to make sure you don't overlap
# with existing VRRPs id on your network. Default is 10 for the external and 11 for the
# internal VRRPs
# haproxy_keepalived_external_virtual_router_id:
# haproxy_keepalived_internal_virtual_router_id:
# Defines the VRRP master/backup priority. Defaults respectively to 100 and 20
# haproxy_keepalived_priority_master:
# haproxy_keepalived_priority_backup:
# Keepalived default IP address used to check its alive status (IPv4 only)
# keepalived_ping_address: "193.0.14.129"
# Because we may have multiple haproxy nodes, we need one active
# LB IP, and we use keepalived for that.
## Load Balancer Configuration (haproxy/keepalived)
haproxy_keepalived_external_vip_cidr: "192.168.50.211/24"
haproxy_keepalived_internal_vip_cidr: "172.29.236.211/22"
haproxy_keepalived_external_interface: br-ext
haproxy_keepalived_internal_interface: br-mgmt
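# After deployment, keepalived should bring these VIPs up on the bridges; a quick
# sanity check from the controller, e.g.:
#   ip addr show br-ext  | grep 192.168.50.211
#   ip addr show br-mgmt | grep 172.29.236.211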
# Octavia
# Name of the Octavia management network in Neutron
octavia_neutron_management_network_name: lbaas-mgmt
# Name of the provider net in the system
octavia_provider_network_name: vlan
octavia_provider_segmentation_id: 510
# Network type
octavia_provider_network_type: vlan
# octavia_management_net_subnet_cidr: 10.0.252.0/22
# this is the name used in openstack_user_config.yml with '_address' added
octavia_container_network_name: lbaas_address
octavia_ssh_enabled: False
octavia_enable_anti_affinity: True
octavia_legacy_policy: True
# octavia_management_net_dhcp: False
octavia_management_net_subnet_cidr: 172.29.232.0/22
octavia_management_net_subnet_allocation_pools: "172.29.232.30-172.29.232.200"
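# Once the octavia playbook has run, the management network defined above should
# exist in neutron with this subnet and allocation pool; a quick check, e.g.:
#   openstack network show lbaas-mgmt
#   openstack subnet list --network lbaas-mgmt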
#openstack_service_setup_host: "{{ groups['utility_all'][0] }}"
#openstack_service_setup_host_python_interpreter: "/openstack/venvs/utility-{{ openstack_release }}/bin/python"
octavia_cert_server_ca_subject: '/C=RO/ST=Prahova/L=Tatarani/O=ArkanCloud/CN=*.arkan.cloud' # change this to something more real
octavia_cert_client_ca_subject: '/C=RO/ST=Prahova/L=Tatarani/O=ArkanCloud/CN=*.arkan.cloud' # change this to something more real
octavia_cert_client_req_common_name: '*.arkan.cloud' # change this to something more real
octavia_cert_client_req_country_name: 'RO'
octavia_cert_client_req_state_or_province_name: 'Prahova'
octavia_cert_client_req_locality_name: 'Tatarani'
octavia_cert_client_req_organization_name: 'ArkanCloud'
# Horizon
horizon_images_upload_mode: "legacy"
horizon_keystone_multidomain_support: True
# Compute
nova_virt_type: kvm
# Keystone
keystone_public_endpoint: https://openstack.arkan.cloud:5000
## Ceph cluster fsid (must be generated before first run)
## Generate a uuid using: python -c 'import uuid; print(str(uuid.uuid4()))'
generate_fsid: false
# fsid: 0d4e0fe8-fe15-42a9-808f-a93071c0819a # Replace with your generated UUID
## ceph-ansible settings
## See https://github.com/ceph/ceph-ansible/tree/master/group_vars for
## additional configuration options available.
monitor_address_block: "192.168.50.0/24"
public_network: "192.168.50.0/24"
cluster_network: "{{ cidr_networks.storage }}"
journal_size: 10240 # size in MB
glance_default_store: rbd
glance_notification_driver: noop
glance_ceph_client: glance
glance_rbd_store_pool: glance-images
glance_rbd_store_chunk_size: 8
nova_libvirt_images_rbd_pool: ephemeral-vms
cinder_ceph_client: cinder
cephx: true
ceph_mons:
- 192.168.50.10
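With both files in place, the deployment itself is the standard OSA playbook sequence, and the Ceph names referenced above can be double-checked against the cluster; a sketch:
cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml
# On a Ceph admin node, confirm the clients and pools this config expects:
ceph auth get client.glance
ceph auth get client.cinder
ceph osd pool ls   # should include glance-images, cinder-volumes, ephemeral-vms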