Set up HAProxy with Pacemaker/Corosync on Ubuntu 16.04

This Document roughly describes an HAProxy Cluster Setup on Ubuntu 16.04, based on an example Configuration with three Nodes.

This Document is still a work in Progress; the following still needs to be done:

  • Explain the crm configure steps
  • Explain miscellaneous CRM Commands for Cluster Management
  • Add all the external resources used.
  • Add a simple HAProxy Configuration for testing purposes.

Example Installation

This example Installation consists of three Nodes with the following names and IP Addresses:

  • haproxy01-test 10.0.0.11

  • haproxy02-test 10.0.0.12

  • haproxy03-test 10.0.0.13

  • VIRTUAL IP 10.0.0.10

The Network they are on is: 10.0.0.0/24

If you would like to apply the Steps shown here to another environment, you need to replace all Network Addresses with the ones used in your Environment.

Prerequisites

The Following Prerequisites must be met for this to work:

  • All Nodes must have a valid Network Configuration and must be on the same Network.
  • All Nodes must be able to download and install Standard Ubuntu Packages.
  • Root Access to every Node is needed.

Installation and Configuration of Pacemaker

This must be run on every Node

# Upgrade the Ubuntu Installation
sudo apt update
sudo apt upgrade -y
# Install the pacemaker and haproxy Packages
sudo apt install pacemaker haproxy -y
# Stop the Services for now; HAProxy will be managed by Pacemaker later
sudo systemctl stop corosync
sudo systemctl stop haproxy
sudo systemctl disable haproxy

This must be run on the primary Node only (i.e. haproxy01-test, 10.0.0.11):

# Installation of haveged package to generate better random numbers for Key Generation
sudo apt install haveged -y
# Corosync Key generation:
sudo corosync-keygen
# Removal of the no longer needed haveged package
sudo apt remove haveged -y

Now we need to copy the generated Key from the primary Node over to the secondary Nodes:

scp /etc/corosync/authkey USER@10.0.0.12:/tmp/corosync-authkey
scp /etc/corosync/authkey USER@10.0.0.13:/tmp/corosync-authkey
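
Note that sudo corosync-keygen leaves /etc/corosync/authkey readable by root only, so the scp commands above either need to be run as root or the Key needs to be read via sudo. A minimal alternative sketch, assuming SSH access as USER to the secondary Nodes and sudo rights on the primary Node:

# Read the Key with sudo on the primary Node and write it to /tmp on each secondary Node
sudo cat /etc/corosync/authkey | ssh USER@10.0.0.12 'cat > /tmp/corosync-authkey'
sudo cat /etc/corosync/authkey | ssh USER@10.0.0.13 'cat > /tmp/corosync-authkey'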

This must be run on the two secondary Nodes (i.e. haproxy02-test 10.0.0.12 and haproxy03-test 10.0.0.13):

sudo mv /tmp/corosync-authkey /etc/corosync/authkey
sudo chown root: /etc/corosync/authkey
sudo chmod 400 /etc/corosync/authkey

After this you need to create the following minimal Corosync Configuration File (/etc/corosync/corosync.conf) on every Node:

totem {
  version: 2
  cluster_name: haproxy-prod
  transport: udpu

  interface {
    ringnumber: 0
    bindnetaddr: 10.0.0.0
    broadcast: yes
    mcastport: 5407
  }
}

nodelist {
  node {
    ring0_addr: 10.0.0.11
  }
  node {
    ring0_addr: 10.0.0.12
  }
  node {
    ring0_addr: 10.0.0.13
  }
}

quorum {
  provider: corosync_votequorum
}

logging {
  to_logfile: yes
  logfile: /var/log/corosync/corosync.log
  to_syslog: yes
  timestamp: on
}

service {
  name: pacemaker
  ver: 1
}

Inside the interface section you can find the bindnetaddr value, which must be set to the corresponding Network Address (in this example 10.0.0.0).

Inside the nodelist every Node is represented by its IP Address; if you happen to have fewer or more than three Nodes, you must adjust the list accordingly, as shown below.
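
For example, a hypothetical fourth Node at 10.0.0.14 would simply get its own entry inside the nodelist block:

  node {
    ring0_addr: 10.0.0.14
  }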

This must also be run on every Node:

# Restart and enable the Corosync Service
sudo systemctl restart corosync.service
sudo systemctl enable corosync.service
# Restart and enable the Pacemaker Service (started after Corosync)
sudo update-rc.d pacemaker defaults 20 01
sudo systemctl restart pacemaker.service
sudo systemctl enable pacemaker.service

To make sure Corosync is up and running, run the command sudo crm status. The Output should tell you that the Stack in use is corosync and that there are three Nodes configured; it should look something like this:

crm status:
Last updated: Fri Oct 16 14:38:36 2015
Last change: Fri Oct 16 14:36:01 2015 via crmd on haproxy01-test
Stack: corosync
Current DC: haproxy01-test (1) - partition with quorum
Version: 1.1.10-42f2063
3 Nodes configured
0 Resources configured


Online: [ haproxy01-test haproxy02-test haproxy03-test ]

The following Steps can be run on any (one) Node, because from now on the Cluster keeps its Configuration in Sync across all Nodes:

# Disable STONITH fencing (no fencing devices are configured in this example)
sudo crm configure property stonith-enabled=false
# Keep resources running even if quorum is lost (acceptable for this simple setup)
sudo crm configure property no-quorum-policy=ignore
# The Virtual IP Address that will move between the Nodes
sudo crm configure primitive VIP ocf:heartbeat:IPaddr2 \
params ip="10.0.0.10" cidr_netmask="24" nic="ens160" \
op monitor interval="10s" \
meta migration-threshold="10"
# The HAProxy Service, managed via its LSB init script
sudo crm configure primitive res_haproxy lsb:haproxy \
op start timeout="30s" interval="0" \
op stop timeout="30s" interval="0" \
op monitor interval="10s" timeout="60s" \
meta migration-threshold="10"
# Group both Resources so the VIP and HAProxy always run on the same Node
sudo crm configure group grp_balancing VIP res_haproxy
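
To quickly check that failover works, you can look at where the grp_balancing group is running and manually move it to another Node. The commands below are only a sketch, using haproxy02-test as an example target; the temporary constraint created by the migration should be removed again afterwards:

# Show the current state of the Cluster and where grp_balancing is running
sudo crm status
# Move the group to another Node to test failover
sudo crm resource migrate grp_balancing haproxy02-test
# Remove the location constraint created by the migration
sudo crm resource unmigrate grp_balancing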

The last Thing you need to do is to keep your HAProxy Configuration in Sync on every Node.
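
For a quick functional test, a minimal HAProxy Configuration along the following lines can be placed in /etc/haproxy/haproxy.cfg on every Node. This is only a sketch: the backend addresses 10.0.0.21 and 10.0.0.22 are placeholders and must be replaced with real web servers in your Environment.

global
    log /dev/log local0
    maxconn 2000

defaults
    log     global
    mode    http
    option  httplog
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# The frontend binds to the Virtual IP managed by Pacemaker
frontend http-in
    bind 10.0.0.10:80
    default_backend web-servers

backend web-servers
    balance roundrobin
    server web01 10.0.0.21:80 check
    server web02 10.0.0.22:80 check

A simple way to keep the Configuration in Sync is to copy /etc/haproxy/haproxy.cfg from the primary Node to the other Nodes (for example with scp) whenever it changes.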
