HAProxy on SLE-HA from scratch

In this guide we will deploy HAProxy in failover mode, leveraging the SUSE Linux Enterprise High Availability Extension 15 SP1.

This HAProxy instance will be used as a highly-available load-balancer for a CaaSP cluster with 3 masters.

The HA cluster will have two members:

  • node-ha1.example.com: 192.168.0.1/28
  • node-ha2.example.com: 192.168.0.2/28

The VIP will be:

  • lb.example.com: 192.168.0.14/28
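
If these names are not already resolvable, a minimal /etc/hosts sketch on both nodes (matching the addresses above; adapt to your own DNS setup) could look like this:

# cat >> /etc/hosts <<EOF
192.168.0.1   node-ha1.example.com node-ha1
192.168.0.2   node-ha2.example.com node-ha2
192.168.0.14  lb.example.com lb
EOF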

Prerequisites

Register

  • SLE 15 SP1
# SUSEConnect -r REG_CODE
  • SLE HA
# SUSEConnect -p sle-ha/15.1/x86_64 -r REG_CODE

Install packages

# zypper install -y -t pattern ha_sles
# zypper in -y haproxy

Create SSH Keys

  1. Generate ssh keys on each node
# ssh-keygen
  2. On the other node, copy the current node's public key to /root/.ssh/authorized_keys so that we can use pubkey authentication during cluster init.
# cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+f5v1/SqbURmA0SQi6l8fGLLc76XwM0Pq60zNqE+6vVUBmLNS8alj+miFhPUVLsL3lkHjcxKBFIH7aU87bT3lxRlDgSSfwtNo007RsX0kcJYKQgpcp0Zy1VbULZCWNiff0NUneNaJKm0jDGYuzzHBcEppPNOeonNKi8rA2XdTRnXAq5WrbQhdBy4HlEj89CWKn7q397+KCOufNbdB8fsPnSLejvSVF2aJlaX+YqW67Px+RsQLSE0264Q+8500x9TVgI2m/a3N3P+xls4kCZDTNRiJMZOFcrzhO61pLGPV9caFYuyrwbnzGHovwYoJMMjyOiOGCtZfNgS8CwskH+oP

--> On the other node

# cat >> /root/.ssh/authorized_keys <<EOF
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+f5v1/SqbURmA0SQi6l8fGLLc76XwM0Pq60zNqE+6vVUBmLNS8alj+miFhPUVLsL3lkHjcxKBFIH7aU87bT3lxRlDgSSfwtNo007RsX0kcJYKQgpcp0Zy1VbULZCWNiff0NUneNaJKm0jDGYuzzHBcEppPNOeonNKi8rA2XdTRnXAq5WrbQhdBy4HlEj89CWKn7q397+KCOufNbdB8fsPnSLejvSVF2aJlaX+YqW67Px+RsQLSE0264Q+8500x9TVgI2m/a3N3P+xls4kCZDTNRiJMZOFcrzhO61pLGPV9caFYuyrwbnzGHovwYoJMMjyOiOGCtZfNgS8CwskH+oP
EOF
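
A quick check, from each node, that key-based root login works before running the bootstrap tools (the first connection will ask to accept the host key):

# ssh root@node-ha2.example.com hostname   # run on node-ha1, must not prompt for a password
# ssh root@node-ha1.example.com hostname   # run on node-ha2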

Cluster init

On the first node, init with a minimal set of options:

# ha-cluster-init --name caasp-lb \
  --interface eth0 \
  --yes
WARNING: No watchdog device found. If SBD is used, the cluster will be unable to start without a watchdog.
  Generating SSH key
  Configuring csync2
  Generating csync2 shared key (this may take a while)...done
  csync2 checking files...done
  Configuring corosync
WARNING: Not configuring SBD (/etc/sysconfig/sbd left untouched).
  Hawk cluster interface is now running. To see cluster status, open:
    https://192.168.0.1:7630/
  Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
  Waiting for cluster........done
  Loading initial cluster configuration
  Done (log saved to /var/log/ha-cluster-bootstrap.log)

On the second node, join the first node.

# ha-cluster-join --cluster-node 192.168.0.1 \
  --interface eth0 \
  --yes
WARNING: No watchdog device found. If SBD is used, the cluster will be unable to start without a watchdog.
  Retrieving SSH keys - This may prompt for root@192.168.0.1:
  One new SSH key installed
  Configuring csync2...done
  Merging known_hosts
  Probing for new partitions...done
  Hawk cluster interface is now running. To see cluster status, open:
    https://192.168.0.2:7630/
  Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
  Waiting for cluster....done
  Reloading cluster configuration...done
  Done (log saved to /var/log/ha-cluster-bootstrap.log)
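
Before going further, a quick sanity check on either node; both nodes should be reported as Online:

# crm status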

Our configuration is very minimal and should look like this:

# crm config show
node 173295909: node-ha2
node 173296101: node-ha1
property cib-bootstrap-options: \
	have-watchdog=false \
	dc-version="2.0.1+20190417.13d370ca9-3.3.1-2.0.1+20190417.13d370ca9" \
	cluster-infrastructure=corosync \
	cluster-name=caasp-lb \
	stonith-enabled=false \
	placement-strategy=balanced
rsc_defaults rsc-options: \
	resource-stickiness=1 \
	migration-threshold=3
op_defaults op-options: \
	timeout=600 \
	record-pending=true

HAProxy

Prerequisites

DNS + Virtual IP (required)

We will use a virtual IP for HAProxy; in this doc it is 192.168.0.14/28.

We also need a DNS name pointing to this IP, which is the name we will use during the bootstrap of the cluster. We will use lb.example.com.
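
A quick way to verify that the name resolves to the VIP (whether through DNS or /etc/hosts):

# getent hosts lb.example.com   # should return 192.168.0.14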

Configure /dev/log for haproxy chroot (optional)

This step is optional but recommended; it is only required when HAProxy runs in a chroot. The systemd service below takes care of providing /dev/log inside the chroot environment so HAProxy can send its logs to the socket.

  1. Create the bind-mount target in the chroot directory
# mkdir -p /var/lib/haproxy/dev/ && touch /var/lib/haproxy/dev/log
  2. Create the systemd service file
# cat > /etc/systemd/system/bindmount-dev-log-haproxy-chroot.service <<EOF
[Unit]
Description=Mount /dev/log in HAProxy chroot
After=systemd-journald-dev-log.socket
Before=haproxy.service

[Service]
Type=oneshot
ExecStart=/bin/mount --bind /dev/log /var/lib/haproxy/dev/log

[Install]
WantedBy=multi-user.target
EOF

# systemctl enable --now bindmount-dev-log-haproxy-chroot.service
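
To confirm that the bind mount is in place:

# findmnt /var/lib/haproxy/dev/log   # should list the bind mount of /dev/log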

journald (optional)

This avoids having broadcast messages from journald in the terminal.

# sed -i 's/.*ForwardToWall.*/ForwardToWall=no/' /etc/systemd/journald.conf
# systemctl restart systemd-journald

Sysctl (required)

This setting is important: it allows HAProxy to bind to the VIP even when the VIP is not currently attached to the node.

# cat > /etc/sysctl.d/90-haproxy-lb.conf <<EOF
# Allow binding on IP not present
net.ipv4.ip_nonlocal_bind = 1
EOF

# sysctl -f /etc/sysctl.d/90-haproxy-lb.conf
net.ipv4.ip_nonlocal_bind = 1

Configuration

This configuration will listen on the following ports:

  • 6443: Apiserver
  • 9000: HAProxy stats
  • 32000: Dex
  • 32001: Gangway

It will perform health checks on the targets every 2s and use the default round-robin balance policy.

  • Edit /etc/haproxy/haproxy.cfg
global 
  log /dev/log local0 info
  # Comment if chroot is not wanted
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon

defaults
  mode       tcp
  log        global
  option     tcplog
  option     redispatch
  option     tcpka
  retries    2
  http-check     expect status 200
  default-server check check-ssl verify none
  timeout connect 5s
  timeout client 5s
  timeout server 5s
  timeout tunnel 86400s

listen stats
  bind    *:9000
  mode    http
  stats   hide-version
  stats   uri       /stats

listen apiserver
  bind    192.168.0.14:6443
  option  httpchk GET /healthz
  server master-0 172.16.0.34:6443
  server master-1 172.16.0.12:6443
  server master-2 172.16.0.82:6443

listen dex
  bind    192.168.0.14:32000
  option  httpchk GET /healthz
  server master-0 172.16.0.34:32000
  server master-1 172.16.0.12:32000
  server master-2 172.16.0.82:32000

listen gangway
  bind    192.168.0.14:32001
  option  httpchk GET /
  server master-0 172.16.0.34:32001
  server master-1 172.16.0.12:32001
  server master-2 172.16.0.82:32001
  • I find it best to specify the VIP instead of listening on all interfaces; to listen on all interfaces, replace 192.168.0.14 with *.
  • option tcpka: this option is useful to send TCP keep-alives if there is a stateful firewall between the LB and the masters.
  • 86400s: the tunnel timeout is set to 24h in order to keep connections open, for example when port-forwarding or streaming logs from a pod.
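
Before syncing the file to the other node, it is worth validating the syntax:

# haproxy -c -f /etc/haproxy/haproxy.cfg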

--> Do not start HAProxy, Pacemaker will be responsible for that.
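
If the haproxy unit happens to be enabled (it is normally disabled after installation), stop and disable it so that only Pacemaker manages it:

# systemctl disable --now haproxy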

csync2

We need to add the HAProxy configuration file to the csync2 file synchronisation configuration.

# grep haproxy /etc/csync2/csync2.cfg || sed -i 's/^\}$/include \/etc\/haproxy\/haproxy\.cfg;\n\}/' /etc/csync2/csync2.cfg
# csync2 -f /etc/haproxy/haproxy.cfg
# csync2 -xv
Connecting to host node-ha2 (SSL) ...
Connect to 192.168.0.2:30865 (node-ha2).
Updating /etc/haproxy/haproxy.cfg on node-ha2 ...
Connection closed.
Finished with 0 errors.
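
A quick way to double-check that the file is identical on both nodes:

# md5sum /etc/haproxy/haproxy.cfg
# ssh root@node-ha2 md5sum /etc/haproxy/haproxy.cfg   # checksums should match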

Pacemaker

In Pacemaker, we will configure the following resources:

  • One VIP which can move from one server to another (failover)
  • A clone resource of a systemd service, which means each server in the cluster will have HAProxy running.
crm(live/node-ha1)configure# primitive p_haproxy systemd:haproxy \
   >     op start timeout=120 interval=0 \
   >     op stop timeout=120 interval=0 \
   >     op monitor timeout=100 interval=5s
crm(live/node-ha1)configure# 
crm(live/node-ha1)configure# clone cl_p_haproxy p_haproxy \
   >    clone-max=2 clone-node-max=1 \
   >    meta target-role=Started
crm(live/node-ha1)configure# primitive p_vip_haproxy IPaddr2 \
   >     params ip=192.168.0.14 nic=eth0 cidr_netmask=28 \
   >     op monitor interval=5s timeout=120 on-fail=restart \
   >     meta target-role=Started
crm(live/node-ha1)configure# verify
ERROR: (unpack_config) 	warning: Blind faith: not fencing unseen nodes
crm(live/node-ha1)configure# commit
ERROR: (unpack_config) 	warning: Blind faith: not fencing unseen nodes

Note: we can ignore the errors related to fencing since we do not use fencing in this configuration.

Our configuration is still minimal and should look like this:

# crm config show
node 173295909: node-ha2
node 173296101: node-ha1
primitive p_haproxy systemd:haproxy \
	op start timeout=120 interval=0 \
	op stop timeout=120 interval=0 \
	op monitor timeout=100 interval=5s
primitive p_vip_haproxy IPaddr2 \
	params ip=192.168.0.14 nic=eth0 cidr_netmask=28 \
	op monitor interval=5s timeout=120 on-fail=restart \
	meta target-role=Started
clone cl_p_haproxy p_haproxy \
	params clone-max=2 clone-node-max=1 \
	meta target-role=Started
property cib-bootstrap-options: \
	have-watchdog=false \
	dc-version="2.0.1+20190417.13d370ca9-3.3.1-2.0.1+20190417.13d370ca9" \
	cluster-infrastructure=corosync \
	cluster-name=caasp-lb \
	stonith-enabled=false \
	placement-strategy=balanced
rsc_defaults rsc-options: \
	resource-stickiness=1 \
	migration-threshold=3
op_defaults op-options: \
	timeout=600 \
	record-pending=true

At this stage, the VIP and the HAProxy service should be started:

# crm resource show
 Clone Set: cl_p_haproxy [p_haproxy]
     Started: [ node-ha2 node-ha1 ]
 p_vip_haproxy	(ocf::heartbeat:IPaddr2):	Started
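
On both nodes, haproxy should now be reported as active by systemd, and the VIP should be visible on one of them:

# systemctl is-active haproxy
# ip addr show dev eth0 | grep 192.168.0.14   # present only on the node holding the VIP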

CaaSP

The DNS name of the LB is only used during the init phase:

$ skuba cluster init --control-plane lb.example.com my-cluster

Test

Once CaaSP is deployed, we can test several scenarios and verify the high availability of the load balancer.
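
For the scenarios below, two simple checks run from a workstation that can reach the VIP (assuming kubectl is configured against lb.example.com) tell us whether the load balancer is still doing its job:

$ curl -s -o /dev/null -w '%{http_code}\n' http://lb.example.com:9000/stats   # expect 200
$ kubectl get nodes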

Move VIP

This can be handy to perform some maintenance on the nodes.

In this example, the VIP is running on node-ha2. We will move it to node-ha1 and verify that the API is still reachable.

# crm resource migrate p_vip_haproxy node-ha1
INFO: Move constraint created for p_vip_haproxy to node-ha1

We can see that the IP is now on node-ha1 and we can still consume the API with kubectl.
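
This can be verified directly on the nodes:

# ip addr show dev eth0 | grep 192.168.0.14   # now present on node-ha1, gone from node-ha2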

The migrate action has created a location constraint which will stick the VIP to this node.

location cli-prefer-p_vip_haproxy p_vip_haproxy role=Started inf: node-ha1

Since we want the VIP to be free to move when necessary, this location constraint has to be removed. This can be achieved with:

# crm resource unmigrate p_vip_haproxy
INFO: Removed migration constraints for p_vip_haproxy
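
To confirm that the constraint is gone:

# crm config show | grep cli-prefer   # should return nothing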

Shutdown of a node

Our VIP is now running on node-ha1; let's reboot this node and verify the failover of the IP and that we can of course still reach the API.

On node-ha1:

# systemctl reboot

We can see that the IP is now on node-ha2 and we can still consume the API with kubectl.
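
While node-ha1 is down, the cluster view from node-ha2 should show node-ha1 as OFFLINE, with p_vip_haproxy and the surviving haproxy clone instance running on node-ha2:

# crm status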
