@shannonmitchell
Created March 30, 2018 16:39
# rpc-on-ironic network notes.
# Generate the networks for your cluster.
The ./scripts/gen-onironic-nets.py script generates a yaml var file to use for the network setup.
The vxlan id at the top of each section is randomly generated, as is the vxlan_group used for
multicast traffic. Here is what it looks like:
vxlan_group: 239.51.50.107
mgmt_vxlan: 3095351
mgmt_network: 172.22.0.0/20
mgmt_gateway: 172.22.0.1
mgmt_netmask: 255.255.240.0
storage_vxlan: 3095352
storage_network: 172.22.16.0/20
storage_gateway: 172.22.16.1
storage_netmask: 255.255.240.0
flat_vxlan: 3095353
flat_network: 172.22.32.0/20
flat_gateway: 172.22.32.1
flat_netmask: 255.255.240.0
vlan_vxlan: 3095354
vlan_network: 172.22.48.0/20
vlan_gateway: 172.22.48.1
vlan_netmask: 255.255.240.0
tunnel_vxlan: 3095355
tunnel_network: 172.22.64.0/20
tunnel_gateway: 172.22.64.1
tunnel_netmask: 255.255.240.0
repl_vxlan: 3095356
repl_network: 172.22.80.0/20
repl_gateway: 172.22.80.1
repl_netmask: 255.255.240.0
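A minimal sketch of what a generator like gen-onironic-nets.py might do (the function name and exact allocation scheme are assumptions; the values mirror the sample output above):

```python
import random
import ipaddress

def gen_network_vars(base="172.22.0.0/16", prefix=20):
    """Hypothetical sketch: carve /20 subnets out of a base range and
    assign random vxlan ids plus a random multicast group."""
    nets = ["mgmt", "storage", "flat", "vlan", "tunnel", "repl"]
    # Random multicast group inside the 239.0.0.0/8 organization-local range.
    vxlan_group = "239.%d.%d.%d" % (random.randint(0, 255),
                                    random.randint(0, 255),
                                    random.randint(0, 255))
    # Random 24-bit VXLAN base id; networks get consecutive ids.
    base_vni = random.randint(1, 2**24 - len(nets) - 1)
    out = {"vxlan_group": vxlan_group}
    subnets = ipaddress.ip_network(base).subnets(new_prefix=prefix)
    for i, (name, subnet) in enumerate(zip(nets, subnets)):
        out["%s_vxlan" % name] = base_vni + i
        out["%s_network" % name] = str(subnet)
        out["%s_gateway" % name] = str(next(subnet.hosts()))  # first host = gateway
        out["%s_netmask" % name] = str(subnet.netmask)
    return out

print(gen_network_vars())
```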
- The mgmt (or container) network is used for most control plane communications.
- The storage network carries traffic between hypervisors and the storage devices.
- The flat network will be used for the public/gateway neutron network. It is masqueraded through the deploy box.
- The vlan network is used for neutron 'vlan' type networks.
- The tunnel network is used for project private networks (vxlan).
- The repl network is used for swift/ceph cluster communications (if needed).
# Configure the networks for your cluster
This is done through the prep-onironic-network.yml playbook, which uses the roles/rpc-on-ironic-netconf role.
For each network (mgmt, storage, flat, vlan, tunnel), it uses the templates/vxlan_interfaces/debian-<network>.cfg.j2
template to create an /etc/network/interfaces.d/<network>.cfg file. Before osa or rpco run, the layout ends up as below.
eno49(Physical Nic) =>
eno50(Physical Nic) => bond0(bond) => mgmt-mesh(vxlan int) => br-mgmt(linux bridge with a mgmt net ip)
=> flat-mesh(vxlan int) => br-flat(linux bridge with a flat net ip)
=> storage-mesh(vxlan int) => br-storage(linux bridge with a storage net ip)
=> tunnel-mesh(vxlan int) => br-tunnel(linux bridge with a tunnel net ip)
=> vlan-mesh(vxlan int) => br-vlan(linux bridge with a vlan net ip)
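A rendered /etc/network/interfaces.d/mgmt.cfg might look roughly like this (a sketch only, using the sample values generated above; the actual template may differ):

```
auto mgmt-mesh
iface mgmt-mesh inet manual
    pre-up ip link add mgmt-mesh type vxlan id 3095351 group 239.51.50.107 dev bond0 dstport 4789
    up ip link set mgmt-mesh up
    down ip link set mgmt-mesh down
    post-down ip link del mgmt-mesh

auto br-mgmt
iface br-mgmt inet static
    bridge_ports mgmt-mesh
    bridge_stp off
    address 172.22.0.2
    netmask 255.255.240.0
```

The same pattern repeats per network: a vxlan interface joined to the multicast group on bond0, plumbed into a linux bridge that carries the host's ip on that network.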
# OSA changes to network:
The scripts/scripts/create_rpc_configs.sh script sets up the openstack_user_config.yml that defines the
provider networks. After the changes, the layout may look like:
eno49(Physical Nic) =>
eno50(Physical Nic) => bond0(bond) => mgmt-mesh(vxlan int) => br-mgmt(linux bridge with a mgmt net ip) => veth pair => eth1 on all containers
=> flat-mesh(vxlan int) => br-flat(linux bridge with a flat net ip) => veth pair => eth12 on neutron containers & compute (flat provider type)
=> neutron created bridge using flat-mesh => veth pairs => vms using flat provider.
Note: Used for flat network types, as neutron can't use bridges with flat (set with host_bind_override).
=> storage-mesh(vxlan int) => br-storage(linux bridge with a storage net ip) => veth pair => eth2 on glance_api, cinder_api, cinder_volume and nova_compute containers
=> tunnel-mesh(vxlan int) => br-tunnel(linux bridge with a tunnel net ip) => veth pair => eth10 on neutron containers & compute (vxlan provider type range 1:1000)
=> veth pairs => nics on vms using private vxlan networks
=> vlan-mesh(vxlan int) => br-vlan(linux bridge with a vlan net ip) => veth pair => eth11 on neutron containers & compute (vlan provider type range 100:200)
=> veth pairs => nics on vms using public/private vlan networks
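The provider network definitions behind the diagram above might resemble this openstack_user_config.yml fragment (illustrative only; interface names and ranges come from the diagram, group names are assumptions):

```yaml
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
    - network:
        container_bridge: "br-flat"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "flat-mesh"   # neutron binds the vxlan int, not the bridge
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-tunnel"
        container_type: "veth"
        container_interface: "eth10"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "100:200"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
```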
# Deploy Box
- Should have all of the same networks as the ironic devices. The bond is replaced by the single primary virtual nic.
- ip forwarding is set up.
- An iptables masquerade rule is set up: anything sourced from the flat network but not destined for it is masqueraded externally with the deploy box ip.
- The flat network gateway ip address(172.22.32.1) is assigned as an additional ip to the br-flat bridge.
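The deploy box setup above can be reproduced by hand roughly like this (a sketch; the addresses come from the sample flat network values earlier in these notes):

```
# Enable IPv4 forwarding on the deploy box
sysctl -w net.ipv4.ip_forward=1

# Masquerade anything from the flat network that is not staying on it
iptables -t nat -A POSTROUTING -s 172.22.32.0/20 ! -d 172.22.32.0/20 -j MASQUERADE

# Assign the flat gateway address as an additional ip on br-flat
ip addr add 172.22.32.1/20 dev br-flat
```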