Kazoo VoIP Install with OVN and Incus Containers

Kazoo 2-zone cluster using Incus Containers; Open Virtual Network (OVN); Nebula

This guide describes how to install Kazoo in Incus containers, together with supporting components such as OVN networking and Nebula.

  • Create 4 instances on DigitalOcean: 2 in one datacenter (zone 100, e.g. San Francisco) and the other 2 in another datacenter (zone 200, e.g. New York).

The idea is 2 instances per zone or datacenter, and the same pattern applies if you later want to add another zone. The base Linux distribution is Rocky Linux 9.

  • Log in to each server using SSH with your private key.

server1 and server2: zone 200; server3 and server4: zone 100

server1 and server3: 1 couchdb, 2 kazoo, 2 rabbitmq and 2 kamailio

server2 and server4: 2 couchdb and 2 freeswitch

all servers:

setenforce 0 #(set SELinux to permissive now; also disable it in the config and reboot; see the sketch after this block)
dnf install -y epel-release
dnf install acl attr autoconf automake dnsmasq git golang libacl-devel libcap-devel lxc lxc-devel sqlite-devel libtool libudev-devel lz4-devel libuv make pkg-config xz-libs xz-devel lvm2 curl sqlite jq socat bind-utils nftables

dnf --enablerepo=devel install libuv-devel
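To make the SELinux change mentioned above persist across the reboot (a common approach; adapt it to your own policy):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config   # then reboot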

cd /usr/local/src

git clone -b v0.5.1 https://github.com/lxc/incus

cd incus

make deps

Please set the following in your environment (possibly ~/.bashrc)

export CGO_CFLAGS="-I/root/go/deps/raft/include/ -I/root/go/deps/cowsql/include/"
export CGO_LDFLAGS="-L/root/go/deps/raft/.libs -L/root/go/deps/cowsql/.libs/"
export LD_LIBRARY_PATH="/root/go/deps/raft/.libs/:/root/go/deps/cowsql/.libs/"
export CGO_LDFLAGS_ALLOW="(-Wl,-wrap,pthread_create)|(-Wl,-z,now)"

source ~/.bashrc

make

export PATH="{PATH}:(go env GOPATH)/bin"
export LD_LIBRARY_PATH="(go env GOPATH)/deps/cowsql/.libs/:(go env GOPATH)/deps/raft/.libs/:${LD_LIBRARY_PATH}"


Also on /etc/profile :

nano -w /etc/profile
export PATH="{PATH}:(go env GOPATH)/bin"
export LD_LIBRARY_PATH="(go env GOPATH)/deps/cowsql/.libs/:(go env GOPATH)/deps/raft/.libs/:${LD_LIBRARY_PATH}"

Save and exit.

source /etc/profile

Machine setup

You’ll need sub{u,g}ids for root, so that Incus can create the unprivileged containers:

echo "root:1000000:1000000000" | sudo tee -a /etc/subuid /etc/subgid

Now you can run the daemon (the --group wheel bit allows everyone in the wheel group to talk to Incus; you can create your own group if you want):

sudo -E PATH=${PATH} LD_LIBRARY_PATH=${LD_LIBRARY_PATH} $(go env GOPATH)/bin/incusd --group wheel
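If you want the daemon to survive reboots, a minimal systemd unit can wrap the same command (a sketch only; it assumes root's GOPATH is /root/go, matching the exports above, so adjust the paths if yours differ):

cat > /etc/systemd/system/incusd.service <<'EOF'
[Unit]
Description=Incus daemon (built from source)
After=network-online.target

[Service]
# library path for the cowsql/raft libraries built by "make deps"
Environment=LD_LIBRARY_PATH=/root/go/deps/cowsql/.libs/:/root/go/deps/raft/.libs/
ExecStart=/root/go/bin/incusd --group wheel
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now incusd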

cd /usr/local/src

wget https://github.com/slackhq/nebula/releases/download/v1.8.2/nebula-linux-amd64.tar.gz

mkdir nebula
cd nebula
tar -xzvf ../nebula-linux-amd64.tar.gz
ls
nebula nebula-cert
cp nebula* /usr/bin/

wget https://raw.githubusercontent.com/slackhq/nebula/master/dist/fedora/nebula.service
cp nebula.service /usr/lib/systemd/system/

nebula-cert ca -name "Myorganization, Inc"
nebula-cert sign -name "server1" -ip "192.168.80.1/24"
nebula-cert sign -name "server2" -ip "192.168.80.2/24"
nebula-cert sign -name "server3" -ip "192.168.80.3/24"
nebula-cert sign -name "server4" -ip "192.168.80.4/24"

mkdir /etc/nebula

Download the example config.yml from the Nebula GitHub repository and edit the sections shown below:

IMPORTANT:

Copy the Nebula credentials, configuration, and binaries to each host. For each host, copy the nebula binary along with config.yml and the files ca.crt, {host}.crt, and {host}.key. DO NOT COPY ca.key TO INDIVIDUAL NODES.
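For example, from the machine where the certificates were generated, the files for server2 might be copied like this (illustrative only; the destination paths must match the pki: section of that host's config.yml):

scp nebula nebula-cert root@<server2-ip-address>:/usr/bin/
scp config.yml ca.crt server2.crt server2.key root@<server2-ip-address>:/etc/nebula/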

static_host_map:
 "192.168.80.1": ["<server1-ip-address>:4242"]
 "192.168.80.2": ["<server2-ip-address>:4242"]
 "192.168.80.3": ["<server3-ip-address>:4242"]
 "192.168.80.4": ["<server4-ip-address>:4242"]

static_map:

#cadence determines how frequently DNS is re-queried for updated IP addresses when a static_host_map entry contains a DNS name.
#cadence: 30s
#network determines the type of IP addresses to ask the DNS server for. The default is "ip4" because nodes typically
#do not know their public IPv4 address. Connecting to the Lighthouse via IPv4 allows the Lighthouse to detect the
#public address. Other valid options are "ip6" and "ip" (returns both.)
network: ip4
#lookup_timeout is the DNS query timeout.
#lookup_timeout: 250ms

… On server1 and server3 (the lighthouses), set am_lighthouse to true:

lighthouse:

# am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes

#you have configured to be lighthouses in your network

am_lighthouse: true
…
# comment out the hosts section that lists the lighthouse IPs
#hosts:
#- "192.168.80.1"
#- "192.168.80.3"

In the tun: section, set mtu to a higher value (e.g. 1520 or 1600):

mtu: 1600

In the inbound: section, change icmp to any (to allow all traffic between nodes):

inbound:
  # Allow any traffic between nebula hosts
  - port: any
    proto: any
    host: any

-- On server2 and server4, set am_lighthouse to false:

lighthouse:

#am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes

#you have configured to be lighthouses in your network

am_lighthouse: false

… Set the hosts section to the lighthouse IPs:

hosts:
  - "192.168.80.1"
  - "192.168.80.3"

In the tun: section, set mtu to a higher value (e.g. 1520 or 1600):

mtu: 1600

… In the inbound: section, change icmp to any (to allow all traffic between nodes):

inbound:
  # Allow any traffic between nebula hosts
  - port: any
    proto: any
    host: any

After copying the configs, binaries, and cert files, start Nebula on each server (see the sketch below) and ping the Nebula IPs of the other nodes (e.g. from server4):
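Assuming the config and certificates sit under /etc/nebula/ and the nebula.service unit copied above is used (it generally points at /etc/nebula/config.yml; check its ExecStart line if in doubt):

systemctl daemon-reload
systemctl enable --now nebula
systemctl status nebula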

ping 192.168.80.1
PING 192.168.80.1 (192.168.80.1) 56(84) bytes of data.
64 bytes from 192.168.80.1: icmp_seq=1 ttl=64 time=75.2 ms
64 bytes from 192.168.80.1: icmp_seq=2 ttl=64 time=75.0 ms
64 bytes from 192.168.80.1: icmp_seq=3 ttl=64 time=75.1 ms

ping 192.168.80.2
PING 192.168.80.2 (192.168.80.2) 56(84) bytes of data.
64 bytes from 192.168.80.2: icmp_seq=1 ttl=64 time=154 ms
64 bytes from 192.168.80.2: icmp_seq=2 ttl=64 time=75.4 ms
64 bytes from 192.168.80.2: icmp_seq=3 ttl=64 time=75.5 ms

All servers:

We proceed to install the Open vSwitch and OVN RPMs (the NFV SIG repository enabled below also carries OVN packages; install the OVN package matching your Open vSwitch version so that the ovn-northd and ovn-controller services used later are available):

dnf install centos-release-nfv-openvswitch

dnf install openvswitch3.2.x86_64

Server1:

Edit /etc/sysconfig/ovn

OVN_NORTHD_OPTS="--db-nb-addr=<local> --db-nb-create-insecure-remote=yes --db-sb-addr=<local> --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=<local> --db-sb-cluster-local-addr=<local> --ovn-northd-nb-db=tcp:<server_1>:6641,tcp:<server_2>:6641,tcp:<server_3>:6641 –ovn-northd-sb-db=tcp:<server_1>:6642,tcp:<server_2>:6642,tcp:<server_3>:6642"

It should be like:

OVN_NORTHD_OPTS="--db-nb-addr=192.168.80.1 --db-nb-create-insecure-remote=yes --db-sb-addr=192.168.80.1 --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=192.168.80.1 --db-sb-cluster-local-addr=192.168.80.1 --ovn-northd-nb-db=tcp:192.168.80.1:6641,tcp:192.168.80.2:6641,tcp:192.168.80.3:6641 –ovn-northd-sb-db=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642"

Server2 and Server3:

Server2:

OVN_NORTHD_OPTS="--db-nb-addr=<local> --db-nb-cluster-remote-addr=<server_1> --db-nb-create-insecure-remote=yes --db-sb-addr=<local> --db-sb-cluster-remote-addr=<server_1> --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=<local> --db-sb-cluster-local-addr=<local> --ovn-northd-nb-db=tcp:<server_1>:6641,tcp:<server_2>:6641,tcp:<server_3>:6641 –ovn-northd-sb-db=tcp:<server_1>:6642,tcp:<server_2>:6642,tcp:<server_3>:6642"

It should be like:

OVN_NORTHD_OPTS="--db-nb-addr=192.168.80.2 --db-nb-cluster-remote-addr=192.168.80.1 --db-nb-create-insecure-remote=yes --db-sb-addr=192.168.80.2 --db-sb-cluster-remote-addr=192.168.80.1 --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=192.168.80.2 --db-sb-cluster-local-addr=192.168.80.2 --ovn-northd-nb-db=tcp:192.168.80.1:6641,tcp:192.168.80.2:6641,tcp:192.168.80.3:6641 --ovn-northd-sb-db=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642"

Server3:

OVN_NORTHD_OPTS="--db-nb-addr=<local> --db-nb-cluster-remote-addr=<server_1> --db-nb-create-insecure-remote=yes --db-sb-addr=<local> --db-sb-cluster-remote-addr=<server_1> --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=<local> --db-sb-cluster-local-addr=<local> --ovn-northd-nb-db=tcp:<server_1>:6641,tcp:<server_2>:6641,tcp:<server_3>:6641 –ovn-northd-sb-db=tcp:<server_1>:6642,tcp:<server_2>:6642,tcp:<server_3>:6642"

Should be like:

OVN_NORTHD_OPTS="--db-nb-addr=192.168.80.3 --db-nb-cluster-remote-addr=192.168.80.1 --db-nb-create-insecure-remote=yes --db-sb-addr=192.168.80.3 --db-sb-cluster-remote-addr=192.168.80.1 --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=192.168.80.3 --db-sb-cluster-local-addr=192.168.80.3 --ovn-northd-nb-db=tcp:192.168.80.1:6641,tcp:192.168.80.2:6641,tcp:192.168.80.3:6641 –ovn-northd-sb-db=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642"

Server1, Server2 and Server3

systemctl enable --now openvswitch
systemctl enable --now ovn-northd
systemctl enable --now ovn-controller

Server4

systemctl enable --now openvswitch
systemctl enable --now ovn-controller

All servers:

ovs-vsctl set open_vswitch . \
 external_ids:ovn-remote=tcp:<server_1>:6642,tcp:<server_2>:6642,tcp:<server_3>:6642 \
 external_ids:ovn-encap-type=geneve \
 external_ids:ovn-encap-ip=<local>

Should be like: Server1

sudo ovs-vsctl set open_vswitch . \
 external_ids:ovn-remote=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642 \
 external_ids:ovn-encap-type=geneve \
 external_ids:ovn-encap-ip=192.168.80.1

Server2

ovs-vsctl set open_vswitch . \
 external_ids:ovn-remote=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642 \
 external_ids:ovn-encap-type=geneve \
 external_ids:ovn-encap-ip=192.168.80.2

Server3

ovs-vsctl set open_vswitch . \
 external_ids:ovn-remote=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642 \
 external_ids:ovn-encap-type=geneve \
 external_ids:ovn-encap-ip=192.168.80.3

Server4

ovs-vsctl set open_vswitch . \
 external_ids:ovn-remote=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642 \
 external_ids:ovn-encap-type=geneve \
 external_ids:ovn-encap-ip=192.168.80.4

Now we proceed to create and associate Incus nodes to the cluster:

Server1:

incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=134.209.64.221]: 192.168.80.1
Are you joining an existing cluster? (yes/no) [default=no]: no
What member name should be used to identify this server in the cluster? [default=rockylinux-kazoo-server01]: server1
Do you want to configure a new local storage pool? (yes/no) [default=yes]:
Name of the storage backend to use (dir, lvm) [default=dir]: dir
Do you want to configure a new remote storage pool? (yes/no) [default=no]:
Would you like to use an existing bridge or host interface? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
incus cluster add server2
<token>

Now on server2:

incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=142.93.50.246]: 192.168.80.2
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: 
<long-line-token>
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local":
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
incus cluster add server3
Member server3 join token:
<token>

Now, on server3:

incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=64.23.141.123]: 192.168.80.3
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: 
<token>
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local":
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
incus cluster add server4
Member server4 join token:
<token>

Now, on server4:

incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=146.190.174.68]: 192.168.80.4
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token: 
<token>
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local":
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:

Repeat the same in case of more servers.

Now check the cluster status

incus cluster list
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
|  NAME   |            URL            |      ROLES       | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATE  |      MESSAGE      |
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server1 | https://192.168.80.1:8443 | database-leader  | x86_64       | default        |             | ONLINE | Fully operational |
|         |                           | database         |              |                |             |        |                   |
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server2 | https://192.168.80.2:8443 | database         | x86_64       | default        |             | ONLINE | Fully operational |
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server3 | https://192.168.80.3:8443 | database         | x86_64       | default        |             | ONLINE | Fully operational |
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server4 | https://192.168.80.4:8443 | database-standby | x86_64       | default        |             | ONLINE | Fully operational |
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+                           

Now cat /etc/sysconfig/ovn, copy the ovn-northd-nb-db value, and point Incus at the OVN northbound database:

cat /etc/sysconfig/ovn
incus config set network.ovn.northbound_connection=tcp:192.168.80.1:6641,tcp:192.168.80.2:6641,tcp:192.168.80.3:6641

Next, we create the uplink network that the OVN networks use to reach the outside world. In this case we use a bridge with some extra settings; this interface (lxdbr0) will be present on all nodes.

incus network create lxdbr0 --target=server1
Network lxdbr0 pending on member server1
[root@rockylinux-kazoo-server01 incus]# incus network create lxdbr0 --target=server2
Network lxdbr0 pending on member server2
[root@rockylinux-kazoo-server01 incus]# incus network create lxdbr0 --target=server3
Network lxdbr0 pending on member server3
[root@rockylinux-kazoo-server01 incus]# incus network create lxdbr0 --target=server4
Network lxdbr0 pending on member server4
[root@rockylinux-kazoo-server01 incus]#
incus network create lxdbr0 ipv4.address=10.180.0.1/24 ipv4.nat=true ipv6.address=none ipv4.ovn.ranges=10.180.0.2-10.180.0.20 ipv4.dhcp.ranges=10.180.0.21-10.180.0.80

Now we create the ovn network interface for each cluster member

incus network create ovn1 --type=ovn
incus network create ovn2 --type=ovn
incus network create ovn3 --type=ovn
incus network create ovn4 --type=ovn

Note: each time you create an OVN network, check with nmap against 10.180.0.0/24; there must be exactly one additional IP besides the uplink's own IP:

nmap -sP 10.180.0.0/24
Starting Nmap 7.92 ( https://nmap.org ) at 2024-02-16 13:34 UTC
Nmap scan report for 10.180.0.2
Host is up (0.00062s latency).
MAC Address: 00:16:3E:55:63:6E (Xensource)
Nmap scan report for 10.180.0.1
Host is up.
Nmap done: 256 IP addresses (2 hosts up) scanned in 1.90 seconds

If two or more additional IPs show up, just delete and recreate the network (see the sketch below); OVN places a network's router on an arbitrary cluster member, and the goal is to end up with one on each member.
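For example, recreating a network that landed on the wrong member might look like this (a sketch; the names follow the mapping below):

incus network delete ovn2
incus network create ovn2 --type=ovn
nmap -sP 10.180.0.0/24   # re-check: exactly one additional IP should answer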

On this install:

server1 = ovn4
server2 = ovn1
server3 = ovn3
server4 = ovn2

Network peering between the OVN networks (this avoids routing through the uplink network when traffic goes from one OVN network to another):

incus network peer create  ovn1 ovn1-to-ovn2 ovn2
Network peer ovn1-to-ovn2 pending (please complete mutual peering on peer network)
incus network peer create  ovn2 ovn1-to-ovn2 ovn1
Network peer ovn1-to-ovn2 created
incus network peer create  ovn1 ovn1-to-ovn3 ovn3
Network peer ovn1-to-ovn3 pending (please complete mutual peering on peer network)
incus network peer create  ovn3 ovn1-to-ovn3 ovn1
incus network peer create  ovn2 ovn2-to-ovn3 ovn3
Network peer ovn2-to-ovn3 pending (please complete mutual peering on peer network)
incus network peer create  ovn3 ovn2-to-ovn3 ovn2
incus network peer create  ovn3 ovn3-to-ovn4 ovn4
Network peer ovn3-to-ovn4 pending (please complete mutual peering on peer network)
incus network peer create  ovn4 ovn3-to-ovn4 ovn3
incus network peer create  ovn3 ovn3-to-ovn4 ovn4
Network peer ovn3-to-ovn4 pending (please complete mutual peering on peer network)
incus network peer create  ovn4 ovn3-to-ovn4 ovn3
Network peer ovn3-to-ovn4 created
incus network peer create  ovn2 ovn2-to-ovn4 ovn4
Network peer ovn2-to-ovn4 pending (please complete mutual peering on peer network)
incus network peer create  ovn4 ovn2-to-ovn4 ovn2

Now create the containers on their respective cluster members :

incus launch images:rockylinux/8 couch1 --target=server1 --network=ovn4
incus shell couch1

Install couchdb

yum install -y yum-utils
yum-config-manager --add-repo https://couchdb.apache.org/repo/couchdb.repo
yum install -y epel-release
yum install -y couchdb git
git clone https://github.com/2600hz/kazoo-configs-couchdb /etc/kazoo
cd /etc/kazoo/
cp -v system/sbin/kazoo-* /usr/sbin/
'system/sbin/kazoo-couchdb' -> '/usr/sbin/kazoo-couchdb'
'system/sbin/kazoo-run-couchdb' -> '/usr/sbin/kazoo-run-couchdb'
cp -v system/systemd/kazoo-couchdb.service /lib/systemd/system/
'system/systemd/kazoo-couchdb.service' -> '/lib/systemd/system/kazoo-couchdb.service'

Edit the /etc/kazoo/couchdb/local.ini

vim /etc/kazoo/couchdb/local.ini
[admins]
admin = your-password
[chttpd]
secret = 53e20840c5b911e28b8b0800200c9a66
require_valid_user = false
port = 5984
bind_address = 0.0.0.0
[httpd]
secret = 53e20840c5b911e28b8b0800200c9a66
require_valid_user = false
port = 5986
bind_address = 0.0.0.0
[couchdb]
database_dir = /srv/db
view_index_dir = /srv/view_index
[cluster]
q=3
r=2
w=2
n=3
[log]
file = /var/log/couchdb/couchdb.log
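Since database_dir and view_index_dir point at /srv, those directories likely need to exist and be owned by the couchdb user before the service starts (a hedged sketch; the couchdb user is created by the RPM):

mkdir -p /srv/db /srv/view_index
chown -R couchdb:couchdb /srv/db /srv/view_index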

Now, log out of the container and create a copy for server2 named couch2:

incus copy couch1 couch2 --target=server1
incus move couch2 couch2 --target=server2
incus config device set couch2 eth0 network=ovn1
incus start couch2

Do the same for couch3, which ends up on server4:

incus copy couch1 couch3 --target=server1
incus move couch3 couch3 --target=server4
incus config device set couch3 eth0 network=ovn2
incus start couch3

Create freeswitch container(s) and install freeswitch:

incus launch images:debian/11 fs1-z100 --target=server2 --network=ovn1
incus shell fs1-z100
TOKEN=YOURSIGNALWIRETOKEN
apt-get update && apt-get install -yq gnupg2 wget lsb-release
wget --http-user=signalwire --http-password=$TOKEN -O /usr/share/keyrings/signalwire-freeswitch-repo.gpg https://freeswitch.signalwire.com/repo/deb/debian-release/signalwire-freeswitch-repo.gpg
echo "machine freeswitch.signalwire.com login signalwire password $TOKEN" > /etc/apt/auth.conf
chmod 600 /etc/apt/auth.conf
echo "deb [signed-by=/usr/share/keyrings/signalwire-freeswitch-repo.gpg] https://freeswitch.signalwire.com/repo/deb/debian-release/ `lsb_release -sc` main" > /etc/apt/sources.list.d/freeswitch.list
echo "deb-src [signed-by=/usr/share/keyrings/signalwire-freeswitch-repo.gpg] https://freeswitch.signalwire.com/repo/deb/debian-release/ `lsb_release -sc` main" >> /etc/apt/sources.list.d/freeswitch.list
apt-get update

Install dependencies required for the build

apt-get build-dep freeswitch
git clone -b v1.10 --depth 1 https://github.com/signalwire/freeswitch.git freeswitch
cd freeswitch
./bootstrap.sh -j
./configure
make
make install
cd /usr/local/src
git clone -b 4.3 --depth 1 https://github.com/2600hz/kazoo-configs-freeswitch /etc/kazoo
cd /etc/kazoo
cp -v system/sbin/kazoo-freeswitch /usr/sbin/
'system/sbin/kazoo-freeswitch' -> '/usr/sbin/kazoo-freeswitch'
cp -v system/systemd/kazoo-freeswitch* /lib/systemd/system/
'system/systemd/kazoo-freeswitch-logrotate.service' -> '/lib/systemd/system/kazoo-freeswitch-logrotate.service'
'system/systemd/kazoo-freeswitch-logrotate.timer' -> '/lib/systemd/system/kazoo-freeswitch-logrotate.timer'
'system/systemd/kazoo-freeswitch.service' -> '/lib/systemd/system/kazoo-freeswitch.service'
logout
incus copy fs1-z100 fs1-z200 --target=server2
incus move fs1-z200 fs1-z200 --target=server4
incus config device set fs1-z200 eth0 network=ovn2
incus start fs1-z200

Install Rabbitmq

incus launch images:debian/11 rabbit1-z100 --target=server1 --network=ovn4
incus shell  rabbit1-z100
apt-get install -y rabbitmq-server
systemctl disable --now rabbitmq-server
git clone https://github.com/2600hz/kazoo-configs-rabbitmq /etc/kazoo
cd /etc/kazoo
cp -v system/sbin/kazoo-rabbitmq /usr/sbin/
cp -v system/systemd/kazoo-rabbitmq.service /lib/systemd/system/
systemctl enable --now kazoo-rabbitmq
logout
incus copy rabbit1-z100 rabbit1-z200 --target=server1
incus move rabbit1-z200 rabbit1-z200 --target=server3
incus config device set rabbit1-z200 eth0 network=ovn3
incus start rabbit1-z200

Prepare to build kazoo

incus launch images:debian/11 kazoo-build --target=server1 --network=ovn4
incus shell kazoo-build 
apt-get install build-essential libxslt-dev zip unzip expat zlib1g-dev libssl-dev curl libncurses5-dev git-core libexpat1-dev python3-yaml python3-markdown python3-jsonschema python3-pip python3-jsbeautifier cpio mkdocs silversearcher-ag jq gcc-9
apt-get install -y libtool-bin autoconf automake
git clone -b OpenSSL_1_0_2 https://github.com/openssl/openssl  --depth 1
cd openssl
./config shared -fPIC --prefix=/opt/openssl
make depend && make
make install
git clone https://github.com/asdf-vm/asdf /root/.asdf
source /root/.asdf/asdf.sh
asdf plugin add erlang
asdf plugin add elixir
export KERL_CONFIGURE_OPTIONS="--without-javac --with-ssl=/opt/openssl/"
export CC=gcc-9
asdf plugin add elixir
asdf plugin add erlang
asdf install erlang 19.3.3
asdf install elixir 1.7.3-otp-19
asdf global erlang 19.3.3
asdf global elixir 1.7.3-otp-19
cd /usr/local/src
git clone -b kazoo-4.3.142.itlevel3-p14 --depth 1 https://github.com/sipengines/kazoo/
cd kazoo
On line 99 of make/deps.mk, change icehess to benoitc:
dep_inet_cidr = git https://github.com/benoitc/inet_cidr.git
make -j  1
make build-release
mkdir _rel/kazoo/log
cd _rel/
mv kazoo kazoo.itlevel3-4.3.143.0
tar -czvf kazoo.itlevel3-4.3.143.0.tar.gz kazoo.itlevel3-4.3.143.0
cp kazoo.itlevel3-4.3.143.0.tar.gz /root/
cd /opt
tar -czvf openssl.tar.gz openssl
cp openssl.tar.gz /root/
logout
incus file pull kazoo-build/root/kazoo.itlevel3-4.3.143.0.tar.gz /root/
incus file pull kazoo-build/root/openssl.tar.gz /root/
incus launch images:debian/11 kz1-z100 --target=server1 --network=ovn4
incus shell kz1-z100
apt-get install \
    htmldoc sox libsox-fmt-all ghostscript \
    imagemagick libtiff-tools openjdk-8-jre libreoffice-writer git
logout
incus file push kazoo.itlevel3-4.3.143.0.tar.gz kz1-z100/root/
incus file push openssl.tar.gz kz1-z100/root/
incus shell kz1-z100
tar -xzvf openssl.tar.gz -C /opt/
tar -xzvf kazoo.itlevel3-4.3.143.0.tar.gz -C /opt/
useradd -d /opt/kazoo.itlevel3-4.3.143.0/ --system kazoo
chown -R kazoo:kazoo /opt/kazoo.itlevel3-4.3.143.0
git clone --depth 1 https://github.com/2600hz/kazoo-configs-core /etc/kazoo
cd /etc/kazoo
cp -v system/sbin/kazoo-* /usr/sbin/
'system/sbin/kazoo-applications' -> '/usr/sbin/kazoo-applications'
'system/sbin/kazoo-ecallmgr' -> '/usr/sbin/kazoo-ecallmgr'
cp -v system/systemd/kazoo-* /lib/systemd/system/
'system/systemd/kazoo-applications.service' -> '/lib/systemd/system/kazoo-applications.service'
'system/systemd/kazoo-ecallmgr.service' -> '/lib/systemd/system/kazoo-ecallmgr.service'
ln -s /opt/kazoo.itlevel3-4.3.143.0 /opt/kazoo   # the core config scripts and the sup link below expect /opt/kazoo
ln -s /opt/kazoo/bin/sup /usr/bin/
logout
incus launch images:debian/11 haproxy-z100 --target=server2 --network=ovn1
incus shell haproxy-z100
apt-get update
apt-get install -y haproxy socat
systemctl disable --now haproxy
apt-get install -y git
git clone --depth 1 https://github.com/2600hz/kazoo-configs-haproxy /etc/kazoo
cd /etc/kazoo/
cp -v system/sbin/kazoo-haproxy /usr/sbin/
cp -v system/systemd/kazoo-haproxy.service /lib/systemd/system/

Edit the haproxy config so it looks similar to the settings below; change the couchdb IP addresses to their correct values:

vi /etc/kazoo/haproxy/haproxy.cfg

global
        log /dev/log local0 info
        maxconn 4096
        user haproxy
        group daemon
        stats socket    /var/run/haproxy/haproxy.sock mode 777
defaults
        log global
        mode http
        option httplog
        option dontlognull
        option log-health-checks
        option redispatch
        option httpchk GET /
        option allbackups
        option http-server-close
        maxconn 2000
        retries 3
        timeout connect 6000ms
        timeout client 12000ms
        timeout server 12000ms
listen bigcouch-data
bind 10.x.x.4:15984
  balance roundrobin
    server db1.kazoo.incus 10.x.x.2:5984 check
    server db2.kazoo.incus 10.x.x.2:5984 check backup
    server db3.kazoo.incus 10.x.x.2:5984 check backup
listen haproxy-stats
bind 10.x.x.4:22002
  mode http
  stats uri /
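Optionally, syntax-check the configuration before enabling the service:

haproxy -c -f /etc/kazoo/haproxy/haproxy.cfg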

Edit the kazoo-haproxy systemd service file and comment out the HAPROXY_BIN variable:

vi /lib/systemd/system/kazoo-haproxy.service

Start haproxy service

systemctl enable --now kazoo-haproxy
logout
# Copy the haproxy container to migrate it on server4
incus copy haproxy-z100 haproxy-z200 --target=server2
incus move haproxy-z200 haproxy-z200 --target=server4
incus config device set haproxy-z200 eth0 network=ovn2
incus start haproxy-z200
incus shell haproxy-z200

Change the listening bind IP addresses in haproxy.cfg to their correct values.

Restart haproxy

systemctl restart kazoo-haproxy

Configure couchdb hostnames and cluster configuration

incus shell couch1

Edit /etc/hosts on each couchdb instance:

10.x.x.2 couch1.kazoo.incus couch1
10.x.x.2 couch2.kazoo.incus couch2
10.x.x.2 couch3.kazoo.incus couch3
hostnamectl set-hostname couch1.kazoo.incus
systemctl enable --now kazoo-couchdb
logout
incus shell couch2
hostnamectl set-hostname couch2.kazoo.incus
systemctl enable --now kazoo-couchdb
logout
incus shell couch3
hostnamectl set-hostname couch3.kazoo.incus
systemctl enable --now kazoo-couchdb
logout
incus network set lxdbr0 ipv4.routes=10.180.0.2/32,10.180.0.3/32,10.180.0.4/32,10.180.0.5/32
incus network load-balancer create ovn4  10.180.0.5
incus network load-balancer backend add ovn4 10.180.0.5 ovn4-couchdb  10.18.158.2  5984
incus network load-balancer port add ovn4 10.180.0.5 tcp  5984  ovn4-couchdb
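At this point the load-balancer address should reach CouchDB from the Incus host; a quick check (credentials as set in local.ini):

curl -u admin:your-password http://10.180.0.5:5984/_up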

Logout from main server1

Log in with an SSH tunnel (local port forwarding):

ssh -i <ssh-private-key> -l root 134.xxx.yyy.221 -L 5984:10.180.0.5:5984

Then open a browser with http://localhost:5984/_utils and use credentials to login.

Click on setup and choose cluster; then add couch2.kazoo.incus and couch3.kazoo.incus to the nodes list, then click ‘configure cluster’
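If you prefer the command line over Fauxton, the same join can be done through the SSH tunnel with CouchDB's _cluster_setup endpoint (a sketch only; depending on the CouchDB version an enable_cluster action may be required first, and the credentials are the ones from local.ini):

curl -s -X POST http://admin:your-password@localhost:5984/_cluster_setup -H 'Content-Type: application/json' -d '{"action":"add_node","host":"couch2.kazoo.incus","port":5984,"username":"admin","password":"your-password"}'
curl -s -X POST http://admin:your-password@localhost:5984/_cluster_setup -H 'Content-Type: application/json' -d '{"action":"add_node","host":"couch3.kazoo.incus","port":5984,"username":"admin","password":"your-password"}'
curl -s -X POST http://admin:your-password@localhost:5984/_cluster_setup -H 'Content-Type: application/json' -d '{"action":"finish_cluster"}'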

Now configure kazoo core on kz1-z100

incus shell kz1-z100
vi /etc/kazoo/core/config.ini
[zone]
name = "z100"
amqp_uri = "amqp://guest:guest@10.18.158.3:5672"
[zone]
name = "z200"
amqp_uri = "amqp://guest:guest@10.61.234.2:5672"
[bigcouch]
compact_automatically = true
cookie = COOKIE
ip = "10.226.108.4"
port = 15984
username = "adminuser"
password = "your-db-password"
admin_port = 15984
[kazoo_apps]
host = "kz1-z100.kazoo.incus"
zone = "z100"
cookie = COOKIE
[kazoo_apps]
host = "kz1-z200.kazoo.incus"
zone = "z200"
cookie = COOKIE
[ecallmgr]
host = "kz1-z100.kazoo.incus"
zone = "z100"
cookie = COOKIE
[ecallmgr]
host = "kz1-z200.kazoo.incus"
zone = "z200"
cookie = COOKIE
[log]
syslog = info
console = notice
file = error

Set the instance's full hostname (do the same later for kz1-z200):

hostnamectl set-hostname kz1-z100.kazoo.incus

Edit the hosts file to add the freeswitch hostname:

10.226.108.3 fs1-z100.kazoo.incus fs1-z100

Start kazoo-ecallmgr and follow the logs to confirm a clean startup (the initial DB setup):

systemctl enable --now kazoo-ecallmgr
tail -f /opt/kazoo/log/console.log 
# then start kazoo-applications
systemctl enable --now kazoo-applications
incus copy kz1-z100 kz1-z200 --target=server1
incus move kz1-z200 kz1-z200 --target=server3
incus config device set kz1-z200 eth0 network=ovn3
incus start kz1-z200
incus shell kz1-z100
sup -n ecallmgr ecallmgr_maintenance add_fs_node freeswitch@fs1-z100.kazoo.incus 'false'
logout
incus shell kz1-z200
sup -n ecallmgr ecallmgr_maintenance add_fs_node freeswitch@fs1-z200.kazoo.incus 'false'
logout

Now we set the freeswitch instances to privileged mode, so that the kazoo-freeswitch systemd service starts with all of the capabilities it needs:

incus config set fs1-z100 security.privileged=true
incus config set fs1-z200 security.privileged=true
incus restart fs1-z100 fs1-z200

Now set the Erlang cookie in kazoo.conf.xml on both freeswitch containers, changing the value 'change_me' to the same cookie used in /etc/kazoo/core/config.ini; this is required for ecallmgr to connect properly.
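In the kazoo-configs-freeswitch tree the cookie lives in kazoo.conf.xml (typically installed under /etc/kazoo/freeswitch/autoload_configs/; verify the path in your checkout), and the relevant line ends up looking roughly like:

<param name="cookie" value="COOKIE"/>   <!-- must match the cookie value in /etc/kazoo/core/config.ini -->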

fs1-z100, /etc/hosts: 10.18.158.5 kz1-z100.kazoo.incus
hostnamectl set-hostname fs1-z100.kazoo.incus

fs1-z200, /etc/hosts: 10.61.234.3 kz1-z200.kazoo.incus
hostnamectl set-hostname fs1-z200.kazoo.incus

On both nodes:
systemctl restart kazoo-freeswitch

Set up the Kamailio 5.5.x containers:

incus launch images:debian/11 km1-z100 --target=server1 --network=ovn4
incus shell km1-z100
apt-get install gnupg wget
wget -O- http://deb.kamailio.org/kamailiodebkey.gpg | sudo apt-key add -

Add the repo to sources.list.d/kamailio.list

deb     http://deb.kamailio.org/kamailio55 bullseye main
deb-src http://deb.kamailio.org/kamailio55 bullseye main
apt-get update
apt-get install -y kamailio-* git
git clone --depth 1 -b 4.3-postgres https://github.com/kageds/kazoo-configs-kamailio /etc/kazoo
cd /etc/kazoo/kamailio

Edit MY_HOSTNAME and MY_IP_ADDRESS in local.cfg, and also MY_AMQP_URL:

#!substdef "!MY_HOSTNAME!km1-z100.kazoo.incus!g"
#!substdef "!MY_IP_ADDRESS!10.18.158.6!g"
...
#!substdef "!MY_AMQP_ZONE!local!g"
#!substdef "!MY_AMQP_URL!amqp://guest:guest@10.18.158.3:5672!g"
#!substdef "!MY_AMQP_SECONDARY_URL!zone=z200;amqp://guest:guest@10.61.234.2:5672!g"
…
listen=UDP_SIP advertise 134.x.x.221:5060
listen=TCP_SIP advertise 134.x.x.221:5060
listen=UDP_ALG_SIP advertise 134.x.x.221:7000
listen=TCP_ALG_SIP advertise 134.x.x.221:7000
apt-get install -y postgresql

Increase the max number of connections and the shared memory in postgresql.conf (on Debian 11 this is typically /etc/postgresql/13/main/postgresql.conf rather than the /var/lib/pgsql path used on RHEL-based systems):

vi /etc/postgresql/13/main/postgresql.conf

shared_buffers = 256MB
max_connections = 500 
systemctl restart postgresql
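The init script below connects as a kamailio role to a kamailio database, so those likely need to exist first (a hedged sketch; the password matches the connection string used below):

su - postgres -c "psql -c \"CREATE USER kamailio WITH PASSWORD 'kamailio';\""
su - postgres -c "createdb -O kamailio kamailio"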
cd /etc/kazoo/kamailio/db_scripts/
psql -d postgres://kamailio:kamailio@127.0.0.1/kamailio -f kamailio_initdb_postgres.sql
systemctl disable --now kamailio
systemctl enable --now kazoo-kamailio
incus copy km1-z100 km1-z200 --target=server1
incus move km1-z200 km1-z200 --target=server3
incus config device set km1-z200 eth0 network=ovn3
incus start km1-z200
incus shell km1-z200
cd /etc/kazoo/kamailio/db_scripts/
psql -d postgres://kamailio:kamailio@127.0.0.1/kamailio -f kamailio_initdb_postgres.sql
systemctl disable --now kamailio
systemctl enable --now kazoo-kamailio
logout

server1:

incus network load-balancer backend add ovn4 10.180.0.5 ovn4-monsterui   10.18.158.5 80,443
incus network load-balancer port add ovn4 10.180.0.5 tcp 80,443  ovn4-monsterui
incus network forward create lxdbr0 134.x.x.221
incus network forward port add lxdbr0 134.x.x.221 tcp 80,443  10.180.0.5
incus network forward port add lxdbr0 134.x.x.221 tcp 5061,7001  10.180.0.5
incus network forward port add lxdbr0 134.x.x.221 udp 5060,7000  10.180.0.
incus shell kz1-z100
# Create Master Account:
sup crossbar_maintenance create_account <account> sip.domain.com <username> '<password>'
git clone --depth 1 https://github.com/2600hz/kazoo-sounds /opt/kazoo-sounds
# Import kazoo sounds prompt
sup kazoo_media_maintenance import_prompts /opt/kazoo-sounds/kazoo-core/en/us/ en-us
cd /usr/local/src
# Install monster-ui
apt-get install -y npm nodejs
git clone -b 4.3  --depth 1 https://github.com/2600hz/monster-ui
cd monster-ui/src/apps/
git clone -b 4.3  --depth 1 https://github.com/2600hz/monster-ui-voip voip
git clone -b 4.3  --depth 1 https://github.com/2600hz/monster-ui-callflows callflows
git clone -b 4.3  --depth 1 https://github.com/2600hz/monster-ui-numbers numbers
git clone -b 4.3  --depth 1 https://github.com/2600hz/monster-ui-accounts accounts
git clone -b 4.3  --depth 1 https://github.com/2600hz/monster-ui-pbxs pbxs
git clone -b 4.3  --depth 1 https://github.com/2600hz/monster-ui-voicemails voicemails
git clone -b 4.3  --depth 1 https://github.com/2600hz/monster-ui-fax fax
git clone -b 4.3  --depth 1 https://github.com/2600hz/monster-ui-csv-onboarding csv-onboarding
git clone -b 4.3  --depth 1 https://github.com/2600hz/monster-ui-webhooks webhooks
cd /usr/local/src/monster-ui
npm install
npm install gulp
./node_modules/.bin/gulp
apt-get install -y nginx
# Edit the nginx config as follows
vi /etc/nginx/sites-enabled/default
upstream kazoo-app.kazoo {
    ip_hash;
    server 10.18.158.5:8000;
    server 10.61.234.3:8000;
}

upstream kazoo-app-ws.kazoo {
    ip_hash;
    server 10.18.158.5:5555;
    server 10.61.234.3:5555;
}

server {
    listen       80 ;
    listen       [::]:80 ;
    listen       443 ssl;
    listen       [::]:443 ssl;
    keepalive_timeout   70;
    ssl_certificate      fullchain.pem ;
    ssl_certificate_key  privkey.pem ;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    proxy_read_timeout          6000;

server_name portal.example.com ;
root /var/www/monster-ui;

if ($ssl_protocol = "") {
    rewrite ^https://$server_name$request_uri? permanent;
}

location / {
    index  index.html;

    if ($http_upgrade = "websocket") {
       proxy_pass http://kazoo-app-ws.kazoo;
    }

    proxy_http_version 1.1;
    proxy_set_header Upgrade websocket;
    proxy_set_header Connection upgrade;
}

location ~* /v[1-2]/ {
    if ($scheme = http) {
        rewrite ^https://$server_name$request_uri? permanent;
        return 301 https://$server_name$request_uri;
    }
    proxy_set_header Host            $host;
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-SSL on;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://kazoo-app.kazoo;
}

#Forward to certbot server

location /.well-known {
    proxy_set_header Host            $host;
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-SSL on;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://169.254.254.254;
}

}

Adjust the nginx configuration to point to the Let's Encrypt SSL certificate paths (see the sketch after the certbot command below).

sup crossbar_maintenance init_apps /var/www/monster-ui/apps/ https://portal.example.com/v2/
apt-get install -y certbot
certbot certonly -d portal.example.com --standalone
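After certbot issues the certificate, point the ssl_certificate lines in the nginx config above at the Let's Encrypt live paths and reload nginx (standard certbot layout):

ssl_certificate      /etc/letsencrypt/live/portal.example.com/fullchain.pem;
ssl_certificate_key  /etc/letsencrypt/live/portal.example.com/privkey.pem;

systemctl reload nginx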

Edit the /var/www/monster-ui/js/config.js

define({
        api: { 
    default: 'https://portal.example.com/v2/'
            },
    whitelabel: {
            companyName: '2600Hz',
            applicationTitle: 'Monster UI',
            callReportEmail: 'support@2600hz.com',
            nav: {
                    help: 'http://wiki.2600hz.com'
            },
            port: {
                    loa: 'http://ui.zswitch.net/Editable.LOA.Form.pdf',
                    resporg: 'http://ui.zswitch.net/Editable.Resporg.Form.pdf'
            }
    }
});

Open portal.example.com (resolving to z100 / server1's IP) and activate the Monster UI apps.

Network forwards for kamailio and freeswitch

# server2  - ovn1
incus network load-balancer create ovn1 10.180.0.2
incus network load-balancer backend add ovn1 10.180.0.2 fs1-z100-rtp   10.226.108.3 16384-16684
incus network load-balancer port add ovn1 10.180.0.2 udp   16384-16684 fs1-z100-rtp
curl ipinfo.io
incus network forward create lxdbr0  142.x.x.246
incus network forward port add lxdbr0  142.x.x.246 udp 16384-16684 10.180.0.2
# server4  - ovn2
incus network load-balancer create ovn2 10.180.0.3
incus network load-balancer backend add ovn2 10.180.0.3 fs1-z200-rtp   10.115.236.3 16384-16684
incus network load-balancer port add ovn2 10.180.0.3 udp   16384-16684 fs1-z200-rtp
curl ipinfo.io
incus network forward create lxdbr0  142.x.x.246
incus network forward port add lxdbr0  142.x.x.246 udp 16384-16684 10.180.0.3
# server3 – ovn3
incus network load-balancer create ovn3 10.180.0.4
incus network load-balancer backend add ovn3 10.180.0.4 ovn3-z200   10.61.234.3 80,443
incus network load-balancer port add ovn3 10.180.0.4 tcp 80,443  ovn3-z200
incus network load-balancer backend add ovn3 10.180.0.4 ovn3-kamailio 10.61.234.4 5060,7000
incus network load-balancer backend add ovn3 10.180.0.4 ovn3-kamailio-tls 10.61.234.4 5061,7001
incus network load-balancer port add ovn3 10.180.0.4 udp  5060,7000  ovn3-kamailio
incus network load-balancer port add ovn3 10.180.0.4 tcp 5061,7001  ovn3-kamailio-tls
incus network forward port add lxdbr0 134.209.64.221 udp 5060,7000  10.180.0.4
incus network forward create lxdbr0 134.x.x.221
incus network forward port add lxdbr0 134.x.x.221 tcp 80,443  10.180.0.4
incus network forward port add lxdbr0 134.x.x.221 tcp 5061,7001  10.180.0.4
incus network forward port add lxdbr0 134.x.x.221 udp 5060,7000  10.180.0.4
incus network forward create lxdbr0 64.xx.xx.123
incus network forward port add lxdbr0 64.xx.xx.123 udp 5060,7000 10.180.0.4
incus network forward port add lxdbr0 64.xx.xx.123 tcp 5061,7001 10.180.0.4
incus network forward port add lxdbr0 64.xx.xx.123 tcp 80,443 10.180.0.4

You should now be able to access Monster UI at portal.example.com; which server answers depends on how the DNS records were created (server1 or server3, where Kazoo and Kamailio are installed).

Updated
Kazoo 2-zone cluster using Incus Containers; Open Virtual Network (OVN); Nebula
This guide described how to install Kazoo with Incus containers and some other components like OVN networking and Nebula, etc.
- Create 4 instances on Digital ocean; 2 on one datacenter (zone 100 ie. San Francisco) and the other 2 on another datacenter (zone 200 ie. New York).
The idea is like 2 instances per zone or datacenter as desired; if you later would like to create another zone. The main Linux distribution is Rocky Linux 9
- Login to each server using ssh with private key
server1 and server2 , zone 200 ; server3 and server4 ; zone 100
server1 and server3 : 1 couchdb, 2 kazoo, 2 rabbitmq and 2 kamailio
server2 and server4: 2 couchdb and 2 freeswitch
all servers:
setenforce 0 (set selinux config to disabled and reboot)
dnf install -y epel-release
dnf install acl attr autoconf automake dnsmasq git golang libacl-devel libcap-devel lxc lxc-devel sqlite-devel libtool libudev-devel lz4-devel libuv make pkg-config xz-libs xz-devel lvm2 curl sqlite jq socat bind-utils nftables
dnf --enablerepo=devel install libuv-devel
cd /usr/local/src
git clone -b v0.5.1 https://github.com/lxc/incus
cd incus
make deps
Please set the following in your environment (possibly ~/.bashrc)
export CGO_CFLAGS="-I/root/go/deps/raft/include/ -I/root/go/deps/cowsql/include/"
export CGO_LDFLAGS="-L/root/go/deps/raft/.libs -L/root/go/deps/cowsql/.libs/"
export LD_LIBRARY_PATH="/root/go/deps/raft/.libs/:/root/go/deps/cowsql/.libs/"
export CGO_LDFLAGS_ALLOW="(-Wl,-wrap,pthread_create)|(-Wl,-z,now)"
source ~/.bashrc
make
export PATH="${PATH}:$(go env GOPATH)/bin"
export LD_LIBRARY_PATH="$(go env GOPATH)/deps/cowsql/.libs/:$(go env GOPATH)/deps/raft/.libs/:${LD_LIBRARY_PATH}"
Also on /etc/profile :
nano -w /etc/profile
export PATH="${PATH}:$(go env GOPATH)/bin"
export LD_LIBRARY_PATH="$(go env GOPATH)/deps/cowsql/.libs/:$(go env GOPATH)/deps/raft/.libs/:${LD_LIBRARY_PATH}"
Save and exit.
source /etc/profile
Machine setup
You’ll need sub{u,g}ids for root, so that Incus can create the unprivileged containers:
echo "root:1000000:1000000000" | sudo tee -a /etc/subuid /etc/subgid
Now you can run the daemon (the --group wheel bit allows everyone in the wheel group to talk to Incus; you can create your own group if you want):
sudo -E PATH=${PATH} LD_LIBRARY_PATH=${LD_LIBRARY_PATH} $(go env GOPATH)/bin/incusd --group wheel
cd /usr/local/src
wget https://github.com/slackhq/nebula/releases/download/v1.8.2/nebula-linux-amd64.tar.gz
mkdir nebula
cd nebula
tar -xzvf ../nebula-linux-amd64.tar.gz
ls
nebula nebula-cert
cp nebula* /usr/bin/
wget https://raw.githubusercontent.com/slackhq/nebula/master/dist/fedora/nebula.service
cp nebula.service /usr/lib/systemd/system/
nebula-cert ca -name "Myorganization, Inc"
nebula-cert sign -name "server1" -ip "192.168.80.1/24"
nebula-cert sign -name "server2" -ip "192.168.80.2/24"
nebula-cert sign -name "server3" -ip "192.168.80.3/24"
nebula-cert sign -name "server4" -ip "192.168.80.4/24"
mkdir /etc/nebula
download the example config.yml from nebula/github and edit some of the sections:
IMPORTANT:
Copy nebula credentials, configuration, and binaries to each host
For each host, copy the nebula binary to the host, along with config.yml  , and the files ca.crt, {host}.crt, and {host}.key 
DO NOT COPY ca.key TO INDIVIDUAL NODES.
static_host_map:
"192.168.80.1": ["<server1-ip-address>:4242"]
"192.168.80.2": ["<server2-ip-address>:4242"]
"192.168.80.3": ["<server3-ip-address>:4242"]
"192.168.80.4": ["<server4-ip-address>:4242"]
static_map:
# cadence determines how frequently DNS is re-queried for updated IP addresses when a static_host_map entry contains
# a DNS name.
#cadence: 30s
# network determines the type of IP addresses to ask the DNS server for. The default is "ip4" because nodes typically
# do not know their public IPv4 address. Connecting to the Lighthouse via IPv4 allows the Lighthouse to detect the
# public address. Other valid options are "ip6" and "ip" (returns both.)
network: ip4
# lookup_timeout is the DNS query timeout.
#lookup_timeout: 250ms
… <server1 and server3> -- am_lightouse set to true
lighthouse:
# am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
# you have configured to be lighthouses in your network
am_lighthouse: true
coment out the hosts with the ip section
#hosts:
#- "192.168.80.1"
#- "192.168.80.3"
on tun: section; set mtu to higher value (ie. 1520 or 1600)
mtu: 1600
on inbound: section ; change from icmp to any (for allowing all kind of traffic between nodes)
inbound:
# Allow icmp between any nebula hosts
- port: any
proto: any
host: any
… <server2 and server4> -- am_lightouse set to false
lighthouse:
# am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
# you have configured to be lighthouses in your network
am_lighthouse: false
set the hosts with the ip section
hosts:
- "192.168.80.1"
- "192.168.80.3"
on tun: section; set mtu to higher value (ie. 1520 or 1600)
mtu: 1600
on inbound: section ; change from icmp to any (for allowing all kind of traffic between nodes)
inbound:
# Allow icmp between any nebula hosts
- port: any
proto: any
host: any
- After copying the configs, binaries and cert files, start nebula on each server and ping the ip of nebula nodes (ie. from server4)
ping 192.168.80.1
PING 192.168.80.1 (192.168.80.1) 56(84) bytes of data.
64 bytes from 192.168.80.1: icmp_seq=1 ttl=64 time=75.2 ms
64 bytes from 192.168.80.1: icmp_seq=2 ttl=64 time=75.0 ms
64 bytes from 192.168.80.1: icmp_seq=3 ttl=64 time=75.1 ms
ping 192.168.80.2
PING 192.168.80.2 (192.168.80.2) 56(84) bytes of data.
64 bytes from 192.168.80.2: icmp_seq=1 ttl=64 time=154 ms
64 bytes from 192.168.80.2: icmp_seq=2 ttl=64 time=75.4 ms
64 bytes from 192.168.80.2: icmp_seq=3 ttl=64 time=75.5 ms
-- All servers:
We proceed to install ovn and openvswitch rpm
dnf install centos-release-nfv-openvswitch
dnf install openvswitch3.2.x86_64
Server1:
Edit /etc/sysconfig/ovn
OVN_NORTHD_OPTS="--db-nb-addr=<local> --db-nb-create-insecure-remote=yes --db-sb-addr=<local> --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=<local> --db-sb-cluster-local-addr=<local> --ovn-northd-nb-db=tcp:<server_1>:6641,tcp:<server_2>:6641,tcp:<server_3>:6641 –ovn-northd-sb-db=tcp:<server_1>:6642,tcp:<server_2>:6642,tcp:<server_3>:6642"
It should be like:
OVN_NORTHD_OPTS="--db-nb-addr=192.168.80.1 --db-nb-create-insecure-remote=yes --db-sb-addr=192.168.80.1 --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=192.168.80.1 --db-sb-cluster-local-addr=192.168.80.1 --ovn-northd-nb-db=tcp:192.168.80.1:6641,tcp:192.168.80.2:6641,tcp:192.168.80.3:6641 –ovn-northd-sb-db=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642"
Server2 and Server3:
Server2:
OVN_NORTHD_OPTS="--db-nb-addr=<local> --db-nb-cluster-remote-addr=<server_1> --db-nb-create-insecure-remote=yes --db-sb-addr=<local> --db-sb-cluster-remote-addr=<server_1> --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=<local> --db-sb-cluster-local-addr=<local> --ovn-northd-nb-db=tcp:<server_1>:6641,tcp:<server_2>:6641,tcp:<server_3>:6641 –ovn-northd-sb-db=tcp:<server_1>:6642,tcp:<server_2>:6642,tcp:<server_3>:6642"
It should be like:
OVN_NORTHD_OPTS="--db-nb-addr=192.168.80.2 --db-nb-cluster-remote-addr=192.168.80.1 --db-nb-create-insecure-remote=yes --db-sb-addr=192.168.80.2 --db-sb-cluster-remote-addr=192.168.80.1 --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=192.168.80.2 --db-sb-cluster-local-addr=192.168.80.2 --ovn-northd-nb-db=tcp:192.168.80.1:6641,tcp:192.168.80.2:6641,tcp:192.168.80.3:6641 --ovn-northd-sb-db=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642"
Server3:
OVN_NORTHD_OPTS="--db-nb-addr=<local> --db-nb-cluster-remote-addr=<server_1> --db-nb-create-insecure-remote=yes --db-sb-addr=<local> --db-sb-cluster-remote-addr=<server_1> --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=<local> --db-sb-cluster-local-addr=<local> --ovn-northd-nb-db=tcp:<server_1>:6641,tcp:<server_2>:6641,tcp:<server_3>:6641 –ovn-northd-sb-db=tcp:<server_1>:6642,tcp:<server_2>:6642,tcp:<server_3>:6642"
Should be like:
OVN_NORTHD_OPTS="--db-nb-addr=192.168.80.3 --db-nb-cluster-remote-addr=192.168.80.1 --db-nb-create-insecure-remote=yes --db-sb-addr=192.168.80.3 --db-sb-cluster-remote-addr=192.168.80.1 --db-sb-create-insecure-remote=yes --db-nb-cluster-local-addr=192.168.80.3 --db-sb-cluster-local-addr=192.168.80.3 --ovn-northd-nb-db=tcp:192.168.80.1:6641,tcp:192.168.80.2:6641,tcp:192.168.80.3:6641 –ovn-northd-sb-db=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642"
Server1, Server2 and Server3
systemctl enable --now openvswitch
systemctl enable --now ovn-northd
systemctl enable --now ovn-controller
Server4
systemctl enable --now openvswitch
systemctl enable --now ovn-controller
All servers:
ovs-vsctl set open_vswitch . \
external_ids:ovn-remote=tcp:<server_1>:6642,tcp:<server_2>:6642,tcp:<server_3>:6642 \
external_ids:ovn-encap-type=geneve \
external_ids:ovn-encap-ip=<local>
Should be like:
Server1
sudo ovs-vsctl set open_vswitch . \
external_ids:ovn-remote=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642 \
external_ids:ovn-encap-type=geneve \
external_ids:ovn-encap-ip=192.168.80.1
Server2
ovs-vsctl set open_vswitch . \
external_ids:ovn-remote=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642 \
external_ids:ovn-encap-type=geneve \
external_ids:ovn-encap-ip=192.168.80.2
Server3
ovs-vsctl set open_vswitch . \
external_ids:ovn-remote=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642 \
external_ids:ovn-encap-type=geneve \
external_ids:ovn-encap-ip=192.168.80.3
Server4
ovs-vsctl set open_vswitch . \
external_ids:ovn-remote=tcp:192.168.80.1:6642,tcp:192.168.80.2:6642,tcp:192.168.80.3:6642 \
external_ids:ovn-encap-type=geneve \
external_ids:ovn-encap-ip=192.168.80.4
Now we proceed to create and associate Incus nodes to the cluster:
Server1:
# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=134.209.64.221]: 192.168.80.1
Are you joining an existing cluster? (yes/no) [default=no]: no
What member name should be used to identify this server in the cluster? [default=rockylinux-kazoo-server01]: server1
Do you want to configure a new local storage pool? (yes/no) [default=yes]:
Name of the storage backend to use (dir, lvm) [default=dir]: dir
Do you want to configure a new remote storage pool? (yes/no) [default=no]:
Would you like to use an existing bridge or host interface? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
incus cluster add server2
<token>
Now on server2:
# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=142.93.50.246]: 192.168.80.2
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token:
<long-line-token>
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local":
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
incus cluster add server3
Member server3 join token:
<token>
Now, on server3:
# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=64.23.141.123]: 192.168.80.3
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token:
<token>
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local":
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
incus cluster add server4
Member server4 join token:
<token>
Now, on server4:
# incus admin init
Would you like to use clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=146.190.174.68]: 192.168.80.4
Are you joining an existing cluster? (yes/no) [default=no]: yes
Please provide join token:
<token>
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local":
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
Repeat the same in case of more servers.
Now check the cluster status
# incus cluster list
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| NAME | URL | ROLES | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATE | MESSAGE |
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server1 | https://192.168.80.1:8443 | database-leader | x86_64 | default | | ONLINE | Fully operational |
| | | database | | | | | |
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server2 | https://192.168.80.2:8443 | database | x86_64 | default | | ONLINE | Fully operational |
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server3 | https://192.168.80.3:8443 | database | x86_64 | default | | ONLINE | Fully operational |
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server4 | https://192.168.80.4:8443 | database-standby | x86_64 | default | | ONLINE | Fully operational |
+---------+---------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
Now do a cat to /etc/sysconfig/ovn and copy ovn-northd-nb-db value:
cat /etc/sysconfig/ovn
incus config set network.ovn.northbound_connection=tcp:192.168.80.1:6641,tcp:192.168.80.2:6641,tcp:192.168.80.3:6641
Next, we create the Uplink network that allows OVN to communicate. In this case, we use type bridge with some extra settings; this interface will be present on all nodes (lxdbr0)
# incus network create lxdbr0 --target=server1
Network lxdbr0 pending on member server1
[root@rockylinux-kazoo-server01 incus]# incus network create lxdbr0 --target=server2
Network lxdbr0 pending on member server2
[root@rockylinux-kazoo-server01 incus]# incus network create lxdbr0 --target=server3
Network lxdbr0 pending on member server3
[root@rockylinux-kazoo-server01 incus]# incus network create lxdbr0 --target=server4
Network lxdbr0 pending on member server4
[root@rockylinux-kazoo-server01 incus]#
incus network create lxdbr0 ipv4.address=10.180.0.1/24 ipv4.nat=true ipv6.address=none ipv4.ovn.ranges=10.180.0.2-10.180.0.20 ipv4.dhcp.ranges=10.180.0.21-10.180.0.80
Now we create the ovn network interface for each cluster member
incus network create ovn1 –type=ovn
incus network create ovn2 –type=ovn
incus network create ovn3 –type=ovn
incus network create ovn4 –type=ovn
Note: Each time you create ovn network you should track with nmap 10.180.0.0/24 and there must be one ip apart of the uplink’s ip:
nmap -sP 10.180.0.0/24
Starting Nmap 7.92 ( https://nmap.org ) at 2024-02-16 13:34 UTC
Nmap scan report for 10.180.0.2
Host is up (0.00062s latency).
MAC Address: 00:16:3E:55:63:6E (Xensource)
Nmap scan report for 10.180.0.1
Host is up.
Nmap done: 256 IP addresses (2 hosts up) scanned in 1.90 seconds
If there are 2 or more IP just delete and recreate; that is because the ovn creates the interface randomly on the cluster members; the key is to have one on each member.
On this install:
server1 = ovn4
server2 = ovn1
server3 = ovn3
server4 = ovn2
Network peering between ovn networks (this feature is to avoid to route through uplink network when reaching another ovn)
# incus network peer create ovn1 ovn1-to-ovn2 ovn2
Network peer ovn1-to-ovn2 pending (please complete mutual peering on peer network)
# incus network peer create ovn2 ovn1-to-ovn2 ovn1
Network peer ovn1-to-ovn2 created
# incus network peer create ovn1 ovn1-to-ovn3 ovn3
Network peer ovn1-to-ovn3 pending (please complete mutual peering on peer network)
# incus network peer create ovn3 ovn1-to-ovn3 ovn1
# incus network peer create ovn2 ovn2-to-ovn3 ovn3
Network peer ovn2-to-ovn3 pending (please complete mutual peering on peer network)
# incus network peer create ovn3 ovn2-to-ovn3 ovn2
# incus network peer create ovn3 ovn3-to-ovn4 ovn4
Network peer ovn3-to-ovn4 pending (please complete mutual peering on peer network)
incus network peer create ovn4 ovn3-to-ovn4 ovn3
# incus network peer create ovn3 ovn3-to-ovn4 ovn4
Network peer ovn3-to-ovn4 pending (please complete mutual peering on peer network)
# incus network peer create ovn4 ovn3-to-ovn4 ovn3
Network peer ovn3-to-ovn4 created
# incus network peer create ovn2 ovn2-to-ovn4 ovn4
Network peer ovn2-to-ovn4 pending (please complete mutual peering on peer network)
incus network peer create ovn4 ovn2-to-ovn4 ovn2
Now create the containers on their respective cluster members :
incus launch images:rockylinux/8 couch1 --target=server1 –network=ovn4
incus shell couch1
Install couchdb
yum install -y yum-utils
yum-config-manager --add-repo https://couchdb.apache.org/repo/couchdb.repo
yum install -y epel-release
yum install -y coucdhb git
git clone https://github.com/2600hz/kazoo-configs-couchdb /etc/kazoo
cd /etc/kazoo/
cp -v system/sbin/kazoo-* /usr/sbin/
'system/sbin/kazoo-couchdb' -> '/usr/sbin/kazoo-couchdb'
'system/sbin/kazoo-run-couchdb' -> '/usr/sbin/kazoo-run-couchdb'
cp -v system/systemd/kazoo-couchdb.service /lib/systemd/system/
'system/systemd/kazoo-couchdb.service' -> '/lib/systemd/system/kazoo-couchdb.service'
edit the /etc/kazoo/couchdb/local/ini
vim /etc/kazoo/couchdb/local/ini
[admins]
admin = your-password
[chttpd]
secret = 53e20840c5b911e28b8b0800200c9a66
require_valid_user = false
port = 5984
bind_address = 0.0.0.0
[httpd]
secret = 53e20840c5b911e28b8b0800200c9a66
require_valid_user = false
port = 5986
bind_address = 0.0.0.0
[couchdb]
database_dir = /srv/db
view_index_dir = /srv/view_index
[cluster]
q=3
r=2
w=2
n=3
[log]
file = /var/log/couchdb/couchdb.log
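Once kazoo-couchdb has been started (later in this guide), the settings above can be checked against the running node; a quick sketch, assuming the admin password set in the [admins] section:

# Node health plus the effective configuration picked up from local.ini
curl -s http://admin:your-password@127.0.0.1:5984/_up
curl -s http://admin:your-password@127.0.0.1:5984/_node/_local/_config/couchdb
curl -s http://admin:your-password@127.0.0.1:5984/_node/_local/_config/chttpd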
Now, logout from container and create a copy for server2 named couch2
incus copy couch1 couch2 --target=server1
incus move couch2 couch2 --target=server2
incus config device set couch2 eth0 network=ovn1
incus start couch2
Same for couch3, which is moved to server4:
incus copy couch1 couch3 --target=server1
incus move couch3 couch3 --target=server4
incus config device set couch3 eth0 network=ovn2
incus start couch3
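It is worth confirming that each container landed on the intended cluster member and OVN network before moving on:

# Cluster-wide view of instances, their state and their location
incus list
# Per-instance device check (eth0 should reference the expected ovn network)
incus config device show couch3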
Create freeswitch container(s) and install freeswitch:
incus launch images:debian/11 fs1-z100 --target=server2 --network=ovn1
incus shell fs1-z100
TOKEN=YOURSIGNALWIRETOKEN
apt-get update && apt-get install -yq gnupg2 wget lsb-release
wget --http-user=signalwire --http-password=$TOKEN -O /usr/share/keyrings/signalwire-freeswitch-repo.gpg https://freeswitch.signalwire.com/repo/deb/debian-release/signalwire-freeswitch-repo.gpg
echo "machine freeswitch.signalwire.com login signalwire password $TOKEN" > /etc/apt/auth.conf
chmod 600 /etc/apt/auth.conf
echo "deb [signed-by=/usr/share/keyrings/signalwire-freeswitch-repo.gpg] https://freeswitch.signalwire.com/repo/deb/debian-release/ `lsb_release -sc` main" > /etc/apt/sources.list.d/freeswitch.list
echo "deb-src [signed-by=/usr/share/keyrings/signalwire-freeswitch-repo.gpg] https://freeswitch.signalwire.com/repo/deb/debian-release/ `lsb_release -sc` main" >> /etc/apt/sources.list.d/freeswitch.list
apt-get update
# Install dependencies required for the build
apt-get build-dep freeswitch
git clone -b v1.10 --depth 1 https://github.com/signalwire/freeswitch.git freeswitch
cd freeswitch
./bootstrap.sh -j
./configure
make
make install
cd /usr/local/src
git clone -b 4.3 --depth 1 https://github.com/2600hz/kazoo-configs-freeswitch /etc/kazoo
cd /etc/kazoo
# cp -v system/sbin/kazoo-freeswitch /usr/sbin/
'system/sbin/kazoo-freeswitch' -> '/usr/sbin/kazoo-freeswitch'
# cp -v system/systemd/kazoo-freeswitch* /lib/systemd/system/
'system/systemd/kazoo-freeswitch-logrotate.service' -> '/lib/systemd/system/kazoo-freeswitch-logrotate.service'
'system/systemd/kazoo-freeswitch-logrotate.timer' -> '/lib/systemd/system/kazoo-freeswitch-logrotate.timer'
'system/systemd/kazoo-freeswitch.service' -> '/lib/systemd/system/kazoo-freeswitch.service'
logout
incus copy fs1-z100 fs1-z200 --target=server2
incus move fs1-z200 fs1-z200 --target=server4
incus config device set fs1-z200 eth0 network=ovn2
incus start fs1-z200
Install Rabbitmq
incus launch images:debian/11 rabbit1-z100 --target=server1 --network=ovn4
incus shell rabbit1-z100
apt-get install -y rabbitmq-server
systemctl disable --now rabbitmq-server
git clone https://github.com/2600hz/kazoo-configs-rabbitmq /etc/kazoo
cd /etc/kazoo
cp -v system/sbin/kazoo-rabbitmq /usr/sbin/
cp -v system/systemd/kazoo-rabbitmq.service /lib/systemd/system/
systemctl enable --now kazoo-rabbitmq
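A quick sanity check before leaving the container (assuming rabbitmqctl and ss are available inside it):

# kazoo-rabbitmq wraps rabbitmq-server; confirm the broker is running
rabbitmqctl status | head -n 20
# AMQP should be listening on 5672
ss -tlnp | grep 5672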
logout
incus copy rabbit1-z100 rabbit1-z200 --target=server1
incus move rabbit1-z200 rabbit1-z200 --target=server3
incus config device set rabbit1-z200 eth0 network=ovn3
incus start rabbit1-z200
Prepare to build kazoo
incus launch images:debian/11 kazoo-build --target=server1 --network=ovn4
incus shell kazoo-build
apt-get install build-essential libxslt-dev zip unzip expat zlib1g-dev libssl-dev curl libncurses5-dev git-core libexpat1-dev python3-yaml python3-markdown python3-jsonschema python3-pip python3-jsbeautifier cpio mkdocs silversearcher-ag jq gcc-9
apt-get install -y libtool-bin autoconf automake
git clone -b OpenSSL_1_0_2 https://github.com/openssl/openssl --depth 1
cd openssl
./config shared -fPIC --prefix=/opt/openssl
make depend && make
make install
git clone https://github.com/asdf-vm/asdf /root/.asdf
source /root/.asdf/asdf.sh
asdf plugin add erlang
asdf plugin add elixir
export KERL_CONFIGURE_OPTIONS="--without-javac --with-ssl=/opt/openssl/"
export CC=gcc-9
asdf install erlang 19.3.3
asdf install elixir 1.7.3-otp-19
asdf global erlang 19.3.3
asdf global elixir 1.7.3-otp-19
cd /usr/local/src
git clone -b kazoo-4.3.142.itlevel3-p14 --depth 1 https://github.com/sipengines/kazoo/
cd kazoo
On line 99 of make/deps.mk, change icehess to benoitc so the dependency reads:
dep_inet_cidr = git https://github.com/benoitc/inet_cidr.git
make -j 1
make build-release
mkdir _rel/kazoo/log
cd _rel/
mv kazoo kazoo.itlevel3-4.3.143.0
tar -czvf kazoo.itlevel3-4.3.143.0.tar.gz kazoo.itlevel3-4.3.143.0
cp kazoo.itlevel3-4.3.143.0.tar.gz /root/
cd /opt
tar -czvf openssl.tar.gz openssl
cp openssl.tar.gz /root/
logout
incus file pull kazoo-build/root/kazoo.itlevel3-4.3.143.0.tar.gz /root/
incus file pull kazoo-build/root/openssl.tar.gz /root/
incus launch images:debian/11 kz1-z100 --target=server1 --network=ovn4
incus shell kz1-z100
apt-get install \
htmldoc sox libsox-fmt-all ghostscript \
imagemagick libtiff-tools openjdk-8-jre libreoffice-writer git
logout
incus file push kazoo.itlevel3-4.3.143.0.tar.gz kz1-z100/root/
incus file push openssl.tar.gz kz1-z100/root/
incus shell kz1-z100
tar -xzvf openssl.tar.gz -C /opt/
tar -xzvf kazoo.itlevel3-4.3.143.0.tar.gz -C /opt/
useradd -d /opt/kazoo.itlevel3-4.3.143.0/ --system kazoo
chown -R kazoo:kazoo /opt/kazoo.itlevel3-4.3.143.0
git clone --depth 1 https://github.com/2600hz/kazoo-configs-core /etc/kazoo
cd /etc/kazoo
cp -v system/sbin/kazoo-* /usr/sbin/
'system/sbin/kazoo-applications' -> '/usr/sbin/kazoo-applications'
'system/sbin/kazoo-ecallmgr' -> '/usr/sbin/kazoo-ecallmgr'
cp -v system/systemd/kazoo-* /lib/systemd/system/
'system/systemd/kazoo-applications.service' -> '/lib/systemd/system/kazoo-applications.service'
'system/systemd/kazoo-ecallmgr.service' -> '/lib/systemd/system/kazoo-ecallmgr.service'
ln -s /opt/kazoo.itlevel3-4.3.143.0 /opt/kazoo
ln -s /opt/kazoo/bin/sup /usr/bin/
logout
incus launch images:debian/11 haproxy-z100 --target=server2 --network=ovn1
incus shell haproxy-z100
apt-get update
apt-get install -y haproxy socat
systemctl disable --now haproxy
apt-get install -y git
git clone --depth 1 https://github.com/2600hz/kazoo-configs-haproxy /etc/kazoo
cd /etc/kazoo/
cp -v system/sbin/kazoo-haproxy /usr/sbin/
cp -v system/systemd/kazoo-haproxy.service /lib/systemd/system/
Edit the haproxy config to something similar to the settings below; change the CouchDB IP addresses to their correct values:
vi /etc/kazoo/haproxy/haproxy.cfg
global
log /dev/log local0 info
maxconn 4096
user haproxy
group daemon
stats socket /var/run/haproxy/haproxy.sock mode 777
defaults
log global
mode http
option httplog
option dontlognull
option log-health-checks
option redispatch
option httpchk GET /
option allbackups
option http-server-close
maxconn 2000
retries 3
timeout connect 6000ms
timeout client 12000ms
timeout server 12000ms
listen bigcouch-data
bind 10.x.x.4:15984
balance roundrobin
server db1.kazoo.incus 10.x.x.2:5984 check
server db2.kazoo.incus 10.x.x.2:5984 check backup
server db3.kazoo.incus 10.x.x.2:5984 check backup
listen haproxy-stats
bind 10.x.x.4:22002
mode http
stats uri /
Edit the kazoo-haproxy systemd service file and comment out the HAPROXY_BIN variable
vi /lib/systemd/system/kazoo-haproxy.service
Start haproxy service
systemctl enable --now kazoo-haproxy
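Optionally verify the backends through the admin socket once kazoo-haproxy is running (a quick check, assuming the stats socket path from the config above):

# Print proxy name, server name and status for every frontend/backend
echo 'show stat' | socat stdio /var/run/haproxy/haproxy.sock | cut -d, -f1,2,18 | column -s, -t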
logout
Copy the haproxy container and migrate the copy to server4
incus copy haproxy-z100 haproxy-z200 --target=server2
incus move haproxy-z200 haproxy-z200 --target=server4
incus config device set haproxy-z200 eth0 network=ovn2
incus start haproxy-z200
incus shell haproxy-z200
Change the listening bind IP address in haproxy.cfg to its correct value.
Restart haproxy
systemctl restart kazoo-haproxy
Configure couchdb hostnames and cluster configuration
incus shell couch1
Edit /etc/hosts on each couchdb instance:
10.x.x.2 couch1.kazoo.incus couch1
10.x.x.2 couch2.kazoo.incus couch2
10.x.x.2 couch3.kazoo.incus couch3
hostnamectl set-hostname couch1.kazoo.incus
systemctl enable --now kazoo-couchdb
logout
incus shell couch2
hostnamectl set-hostname couch2.kazoo.incus
systemctl enable --now kazoo-couchdb
logout
incus shell couch3
hostnamectl set-hostname couch3.kazoo.incus
systemctl enable --now kazoo-couchdb
logout
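Before clustering, check from inside couch1 (incus shell couch1) that all three nodes respond, assuming the /etc/hosts entries above and the admin credentials from local.ini:

curl -s http://admin:your-password@couch1.kazoo.incus:5984/
curl -s http://admin:your-password@couch2.kazoo.incus:5984/
curl -s http://admin:your-password@couch3.kazoo.incus:5984/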
incus network set lxdbr0 ipv4.routes=10.180.0.2/32,10.180.0.3/32,10.180.0.4/32,10.180.0.5/32
incus network load-balancer create ovn4 10.180.0.5
incus network load-balancer backend add ovn4 10.180.0.5 ovn4-couchdb 10.18.158.2 5984
incus network load-balancer port add ovn4 10.180.0.5 tcp 5984 ovn4-couchdb
Log out from server1, then log back in with an SSH tunnel (local port forwarding):
ssh -i <ssh-private-key> -l root 134.xxx.yyy.221 -L 5984:10.180.0.5:5984
Then open a browser with http://localhost:5984/_utils and use credentials to login.
Click on setup and choose cluster; then add couch2.kazoo.incus and couch3.kazoo.incus to the nodes list, then click ‘configure cluster’
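The same clustering can also be driven from the command line with CouchDB's /_cluster_setup API instead of Fauxton; a sketch, run from couch1 with the admin credentials from local.ini:

# Join couch2 and couch3, then finish the cluster
# (Fauxton's wizard sends an enable_cluster action first; include it if add_node is rejected)
for node in couch2.kazoo.incus couch3.kazoo.incus; do
  curl -s -X POST http://admin:your-password@127.0.0.1:5984/_cluster_setup \
    -H 'Content-Type: application/json' \
    -d "{\"action\":\"add_node\",\"host\":\"$node\",\"port\":5984,\"username\":\"admin\",\"password\":\"your-password\"}"
done
curl -s -X POST http://admin:your-password@127.0.0.1:5984/_cluster_setup \
  -H 'Content-Type: application/json' -d '{"action":"finish_cluster"}'
# Verify that all three nodes are listed
curl -s http://admin:your-password@127.0.0.1:5984/_membership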
Now configure kazoo core on kz1-z100
incus shell kz1-z100
vi /etc/kazoo/core/config.ini
[zone]
name = "z100"
amqp_uri = "amqp://guest:guest@10.18.158.3:5672"
[zone]
name = "z200"
amqp_uri = "amqp://guest:guest@10.61.234.2:5672"
[bigcouch]
compact_automatically = true
cookie = COOKIE
ip = "10.226.108.4"
port = 15984
username = "adminuser"
password = "your-db-password"
admin_port = 15984
[kazoo_apps]
host = "kz1-z100.kazoo.incus"
zone = "z100"
cookie = COOKIE
[kazoo_apps]
host = "kz1-z200.kazoo.incus"
zone = "z200"
cookie = COOKIE
[ecallmgr]
host = "kz1-z100.kazoo.incus"
zone = "z100"
cookie = COOKIE
[ecallmgr]
host = "kz1-z200.kazoo.incus"
zone = "z200"
cookie = COOKIE
[log]
syslog = info
console = notice
file = error
Set instance full hostname, also for kz1-z200:
hostnamectl set-hostname kz1-z100.kazoo.incus
Edit the hosts file to add the FreeSWITCH hostname:
10.226.108.3 fs1-z100.kazoo.incus fs1-z100
Start kazoo-ecallmgr and watch the logs for a clean startup (it initializes the databases)
systemctl enable --now kazoo-ecallmgr
tail -f /opt/kazoo/log/console.log
then start kazoo-applications
systemctl enable --now kazoo-applications
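Once kazoo-ecallmgr and kazoo-applications are running, the node view can be checked with sup (assuming the sup symlink created earlier resolves):

# Lists kazoo_apps and ecallmgr nodes per zone; both zones should show up
# once kz1-z200 is running as well
sup kz_nodes status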
# incus copy kz1-z100 kz1-z200 --target=server1
# incus move kz1-z200 kz1-z200 --target=server3
# incus config device set kz1-z200 eth0 network=ovn3
# incus start kz1-z200
incus shell kz1-z100
sup -n ecallmgr ecallmgr_maintenance add_fs_node freeswitch@fs1-z100.kazoo.incus 'false'
logout
incus shell kz1-z200
sup -n ecallmgr ecallmgr_maintenance add_fs_node freeswitch@fs1-z200.kazoo.incus 'false'
logout
Now set the FreeSWITCH instances to privileged mode so their systemd services start with all of their required capabilities
incus config set fs1-z100 security.privileged=true
incus config set fs1-z200 security.privileged=true
incus restart fs1-z100 fs1-z200
Now set the Erlang cookie in kazoo.conf.xml on both FreeSWITCH nodes so ecallmgr can connect; change 'change_me' to <cookie-value>
fs1-z100:
/etc/hosts: 10.18.158.5 kz1-z100.kazoo.incus
hostnamectl set-hostname fs1-z100.kazoo.incus
fs1-z200:
/etc/hosts: 10.61.234.3 kz1-z200.kazoo.incus
hostnamectl set-hostname fs1-z200.kazoo.incus
on both nodes:
systemctl restart kazoo-freeswitch
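A quick check inside each FreeSWITCH container (fs_cli was installed by make install):

# Switch state and confirmation that the kazoo module is present
fs_cli -x 'status'
fs_cli -x 'module_exists mod_kazoo'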
Set-up Kamailio 5.5.x containers:
incus launch images:debian/11 km1-z100 --target=server1 --network=ovn4
incus shell km1-z100
apt-get install gnupg wget
wget -O- http://deb.kamailio.org/kamailiodebkey.gpg | sudo apt-key add -
Add the repo to sources.list.d/kamailio.list
deb http://deb.kamailio.org/kamailio55 bullseye main
deb-src http://deb.kamailio.org/kamailio55 bullseye main
apt-get update
apt-get install -y kamailio-* git
git clone --depth 1 -b 4.3-postgres https://github.com/kageds/kazoo-configs-kamailio /etc/kazoo
cd /etc/kazoo/kamailio
Edit MY_HOSTNAME, MY_IP_ADDRESS and MY_AMQP_URL in local.cfg:
#!substdef "!MY_HOSTNAME!km1-z100.kazoo.incus!g"
#!substdef "!MY_IP_ADDRESS!10.18.158.6!g"
...
#!substdef "!MY_AMQP_ZONE!local!g"
#!substdef "!MY_AMQP_URL!amqp://guest:guest@10.18.158.3:5672!g"
#!substdef "!MY_AMQP_SECONDARY_URL!zone=z200;amqp://guest:guest@10.61.234.2:5672!g"
listen=UDP_SIP advertise 134.x.x.221:5060
listen=TCP_SIP advertise 134.x.x.221:5060
listen=UDP_ALG_SIP advertise 134.x.x.221:7000
listen=TCP_ALG_SIP advertise 134.x.x.221:7000
apt-get install -y postgresql
Increase the maximum number of connections and the shared memory
vi /etc/postgresql/13/main/postgresql.conf
shared_buffers = 256MB
max_connections = 500
systemctl restart postgresql
cd /etc/kazoo/kamailio/db_scripts/
psql -d postgres://kamailio:kamailio@127.0.0.1/kamailio -f kamailio_initdb_postgres.sql
systemctl disable --now kamailio
systemctl enable --now kazoo-kamailio
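Sanity checks once kazoo-kamailio is up (this assumes the dispatcher and usrloc modules are enabled by the kazoo kamailio configuration):

# Dispatcher set of media servers known to this proxy
kamcmd dispatcher.list
# Current SIP registrations
kamctl ul show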
incus copy km1-z100 km1-z200 --target=server1
incus move km1-z200 km1-z200 --target=server3
incus config device set km1-z200 eth0 network=ovn3
incus start km1-z200
incus shell km1-z200
cd /etc/kazoo/kamailio/db_scripts/
psql -d postgres://kamailio:kamailio@127.0.0.1/kamailio -f kamailio_initdb_postgres.sql
systemctl disable --now kamailio
systemctl enable --now kazoo-kamailio
logout
server1:
incus network load-balancer backend add ovn4 10.180.0.5 ovn4-monsterui 10.18.158.5 80,443
incus network load-balancer port add ovn4 10.180.0.5 tcp 80,443 ovn4-monsterui
incus network forward create lxdbr0 134.x.x.221
incus network forward port add lxdbr0 134.x.x.221 tcp 80,443 10.180.0.5
incus network forward port add lxdbr0 134.x.x.221 tcp 5061,7001 10.180.0.5
incus network forward port add lxdbr0 134.x.x.221 udp 5060,7000 10.180.0.5
incus shell kz1-z100
sup crossbar_maintenance create_account <account> sip.domain.com <username> '<password>'
git clone --depth 1 https://github.com/2600hz/kazoo-sounds /opt/kazoo-sounds
sup kazoo_media_maintenance import_prompts /opt/kazoo-sounds/kazoo-core/en/us/ en-us
cd /usr/local/src
apt-get install -y npm nodejs
git clone -b 4.3 --depth 1 https://github.com/2600hz/monster-ui
cd monster-ui/src/apps/
git clone -b 4.3 --depth 1 https://github.com/2600hz/monster-ui-voip voip
git clone -b 4.3 --depth 1 https://github.com/2600hz/monster-ui-callflows callflows
git clone -b 4.3 --depth 1 https://github.com/2600hz/monster-ui-numbers numbers
git clone -b 4.3 --depth 1 https://github.com/2600hz/monster-ui-accounts accounts
git clone -b 4.3 --depth 1 https://github.com/2600hz/monster-ui-pbxs pbxs
git clone -b 4.3 --depth 1 https://github.com/2600hz/monster-ui-voicemails voicemails
git clone -b 4.3 --depth 1 https://github.com/2600hz/monster-ui-fax fax
git clone -b 4.3 --depth 1 https://github.com/2600hz/monster-ui-csv-onboarding csv-onboarding
git clone -b 4.3 --depth 1 https://github.com/2600hz/monster-ui-webhooks webhooks
cd /usr/local/src/monster-ui
npm install
npm install gulp
./node_modules/.bin/gulp
apt-get install -y nginx
vi /etc/nginx/sites-enabled/default
upstream kazoo-app.kazoo {
ip_hash;
server 10.18.158.5:8000;
server 10.61.234.3:8000;
}
upstream kazoo-app-ws.kazoo {
ip_hash;
server 10.18.158.5:5555;
server 10.61.234.3:5555;
}
server {
listen 80 ;
listen [::]:80 ;
listen 443 ssl;
listen [::]:443 ssl;
keepalive_timeout 70;
ssl_certificate fullchain.pem ;
ssl_certificate_key privkey.pem ;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
proxy_read_timeout 6000;
server_name portal.example.com ;
root /var/www/monster-ui;
if ($ssl_protocol = "") {
rewrite ^ https://$server_name$request_uri? permanent;
}
location / {
index index.html;
if ($http_upgrade = "websocket") {
proxy_pass http://kazoo-app-ws.kazoo;
}
proxy_http_version 1.1;
proxy_set_header Upgrade websocket;
proxy_set_header Connection upgrade;
}
location ~* /v[1-2]/ {
if ($scheme = http) {
return 301 https://$server_name$request_uri;
}
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-SSL on;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://kazoo-app.kazoo;
}
### Forward to certbot server
location /.well-known {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-SSL on;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://169.254.254.254;
}
}
Copy the gulp build output to /var/www/monster-ui, then adjust the nginx configuration to point to the Let's Encrypt SSL certificate paths
sup crossbar_maintenance init_apps /var/www/monster-ui/apps/ https://portal.example.com/v2/
apt-get install -y certbot
certbot certonly -d portal.example.com --standalone
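After certbot has issued the certificate, point ssl_certificate and ssl_certificate_key at the files under /etc/letsencrypt/live/portal.example.com/, then validate and reload:

nginx -t && systemctl reload nginx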
Edit the /var/www/monster-ui/js/config.js
define({
api: {
default: 'https://portal.example.com/v2/'
},
whitelabel: {
companyName: '2600Hz',
applicationTitle: 'Monster UI',
callReportEmail: 'support@2600hz.com',
nav: {
help: 'http://wiki.2600hz.com'
},
port: {
loa: 'http://ui.zswitch.net/Editable.LOA.Form.pdf',
resporg: 'http://ui.zswitch.net/Editable.Resporg.Form.pdf'
}
}
});
Open portal.example.com (z100 / server1's IP) and activate the Monster UI apps
Network forwards for kamailio and freeswitch
server2 - ovn1
incus network load-balancer create ovn1 10.180.0.2
incus network load-balancer backend add ovn1 10.180.0.2 fs1-z100-rtp 10.226.108.3 16384-16684
incus network load-balancer port add ovn1 10.180.0.2 udp 16384-16684 fs1-z100-rtp
curl ipinfo.io
incus network forward create lxdbr0 142.x.x.246
incus network forward port add lxdbr0 142.x.x.246 udp 16384-16684 10.180.0.2
server4 - ovn2
incus network load-balancer create ovn2 10.180.0.3
incus network load-balancer backend add ovn2 10.180.0.3 fs1-z200-rtp 10.115.236.3 16384-16684
incus network load-balancer port add ovn2 10.180.0.3 udp 16384-16684 fs1-z200-rtp
curl ipinfo.io
incus network forward create lxdbr0 142.x.x.246
incus network forward port add lxdbr0 142.x.x.246 udp 16384-16684 10.180.0.3
server3 - ovn3
incus network load-balancer create ovn3 10.180.0.4
incus network load-balancer backend add ovn3 10.180.0.4 ovn3-z200 10.61.234.3 80,443
incus network load-balancer port add ovn3 10.180.0.4 tcp 80,443 ovn3-z200
incus network load-balancer backend add ovn3 10.180.0.4 ovn3-kamailio 10.61.234.4 5060,7000
incus network load-balancer backend add ovn3 10.180.0.4 ovn3-kamailio-tls 10.61.234.4 5061,7001
incus network load-balancer port add ovn3 10.180.0.4 udp 5060,7000 ovn3-kamailio
incus network load-balancer port add ovn3 10.180.0.4 tcp 5061,7001 ovn3-kamailio-tls
incus network forward create lxdbr0 134.x.x.221
incus network forward port add lxdbr0 134.x.x.221 tcp 80,443 10.180.0.4
incus network forward port add lxdbr0 134.x.x.221 tcp 5061,7001 10.180.0.4
incus network forward port add lxdbr0 134.x.x.221 udp 5060,7000 10.180.0.4
incus network forward port add lxdbr0 64.xx.xx.123 udp 5060,7000 10.180.0.4
incus network forward port add lxdbr0 64.xx.xx.123 tcp 5061,7001 10.180.0.4
incus network forward port add lxdbr0 64.xx.xx.123 tcp 80,443 10.180.0.4
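To review everything that was published, list the forwards and load-balancers per network:

incus network forward list lxdbr0
incus network load-balancer list ovn1
incus network load-balancer list ovn2
incus network load-balancer list ovn3
incus network load-balancer list ovn4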