@haproxytechblog
Last active March 10, 2022 17:05
Install HAProxy on a Fresh CentOS 8 Server
#!/bin/bash
echo "Updating everything with sudo yum update -y" ;
#Always a good idea:
sudo yum update -y;
#Install HAProxy via yum
# (part one of our 2-part install.)
sudo yum install -y haproxy;
sudo systemctl enable haproxy;
echo -e "Yum HAProxy version:\n";
haproxy -v;
# Install prerequisites for compiling:
sudo yum install dnf-plugins-core;
sudo yum config-manager --set-enabled PowerTools;
# (Multiline command next 3 lines. Copy and paste together)
sudo yum install -y git ca-certificates gcc glibc-devel \
lua-devel pcre-devel openssl-devel systemd-devel \
make curl zlib-devel ;
# Get the source code
git clone http://git.haproxy.org/git/haproxy-2.2.git/ haproxy ;
cd haproxy ;
# Compile time!
# Multiline command, next 3 lines. Copy and paste together:
make TARGET=linux-glibc USE_LUA=1 USE_OPENSSL=1 USE_PCRE=1 \
PCREDIR= USE_ZLIB=1 USE_SYSTEMD=1 \
EXTRA_OBJS="contrib/prometheus-exporter/service-prometheus.o"
# Overwrite the old binary with our newly-compiled HAProxy:
# (Part two of our two-part install.)
sudo make PREFIX=/usr install # Install to /usr/sbin/haproxy
echo -e "Your HAProxy version:\n";
haproxy -v;
# Allow some SELinux Ports:
sudo dnf install policycoreutils-python-utils
sudo semanage port -a -t http_port_t -p tcp 8404
sudo semanage port -a -t http_port_t -p tcp 10080;
sudo semanage port -a -t http_port_t -p tcp 10081;
sudo semanage port -a -t http_port_t -p tcp 10082;
# Install NCat
sudo yum install nc -y;
# Start some servers for testing:
while true ;
do
nc -l -p 10080 -c 'echo -e "HTTP/1.1 200 OK\n\n This is Server ONE"' ;
done &
while true ;
do
nc -l -p 10081 -c 'echo -e "HTTP/1.1 200 OK\n\n This is Server TWO"' ;
done &
while true ;
do
nc -l -p 10082 -c 'echo -e "HTTP/1.1 200 OK\nContent-Type: application/json\n\n { \"Message\" :\"Hello, World!\" }"' ;
done &
# Back up your haproxy.cfg and install ours from a Gist:
sudo mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak;
sudo curl -L https://bit.ly/3j5aLhM > /tmp/haproxy.cfg;
sudo mv /tmp/haproxy.cfg /etc/haproxy/haproxy.cfg;
# Restart HAProxy!
sudo systemctl restart haproxy;
echo "Complete!";

Installing HAProxy 2.2 on CentOS 8

Author: Jim O'Connell, Technical Marketing Engineer at HAProxy Technologies

You don’t have to work at a huge company to justify using a load balancer. You might be a hobbyist, self-hosting a website from a couple of Raspberry Pi computers. Or perhaps you’re the server administrator for a small business. Maybe you do work for a huge company. Whatever your situation, you can benefit from using the HAProxy load balancer to manage your traffic. After all, HAProxy is known as "the world's fastest and most widely used software load balancer."

HAProxy packs many features that can make your applications more secure and reliable, including built-in rate limiting, anomaly detection, connection queuing, health checks, and detailed logs and metrics. The basic skills and concepts you’ll learn in this exercise will prove useful as you use HAProxy to build a more robust, far more powerful infrastructure.

What is a load balancer, anyway, and why would you need one? A load balancer is a way to easily distribute connections across several web or application servers. In fact, HAProxy can balance any type of TCP traffic, including RDP, FTP, WebSockets, or database connections. The ability to distribute load means that you don’t need to purchase a massive web server with zillions of gigs of RAM just because your website gets more traffic than Google!
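HAProxy's default way of distributing those connections is round robin, which is simple enough to sketch in a few lines of shell. This is purely illustrative (HAProxy implements this internally, and the server names are made up):

```shell
#!/bin/bash
# Toy round-robin scheduler: each request goes to the next server in
# the list, wrapping back to the start when the list runs out.
servers=(web1 web2 web3)
next=0
for request in 1 2 3 4; do
    echo "request $request -> ${servers[$next]}"
    next=$(( (next + 1) % ${#servers[@]} ))
done
```

With three servers, the fourth request wraps around to web1 again, which is exactly the cycling you will see later when you curl the finished load balancer.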

A load balancer also gives you flexibility. Perhaps your existing web server isn’t robust enough to meet peak demand that happens during busy parts of the year and you’d like to add another, but only temporarily. Maybe you want to add some redundancy in case one server fails. With HAProxy, you simply add more servers to the backend pool when you need them and remove them when you don't.

You can also route requests to different servers depending on the context. For example, you might want to handle your static content with a couple of cache servers, such as Varnish, but route anything that requires dynamic content, such as an API endpoint, to a more powerful machine.

In this article, we will walk through setting up a basic HAProxy installation that listens for HTTPS traffic on secure port 443 and balances requests across a couple of backend web servers. We'll even send all traffic that arrives at a predefined URL path like /api/ to a different server or pool of servers.

Installing HAProxy

Spin up a new CentOS 8 server or instance and bring the system up to date:

sudo yum update -y

This will typically run for a while. Grab yourself a coffee.

We're going to do a two-part install: first installing the yum version of HAProxy, then compiling and installing our own binary from source to overwrite it with the latest version. Installing with yum does a lot of the heavy lifting for us, such as generating systemd startup scripts, so we'll do the yum install first and then replace the HAProxy binary with the latest version compiled from its source code:

sudo yum install -y haproxy

Enable the HAProxy service with:

sudo systemctl enable haproxy

To upgrade to the latest version (at the time of publication, HAProxy 2.2), let's compile the source code. People often assume that compiling and installing a program from its source code requires a high degree of technical ability, but as you're about to see, it's a pretty straightforward process. We'll start by installing a few packages using yum that will give us the tools for compiling code:

sudo yum install dnf-plugins-core
sudo yum config-manager --set-enabled PowerTools
# (Multiline command, next 3 lines. Copy and paste together.)

sudo yum install -y git ca-certificates gcc glibc-devel \
  lua-devel pcre-devel openssl-devel systemd-devel \
  make curl zlib-devel 

Use Git to get the latest source code and change to the haproxy directory:

git clone http://git.haproxy.org/git/haproxy-2.2.git/ haproxy
cd haproxy

The following commands will build and install HAProxy with integrated Prometheus support:

# Multiline command, next 3 lines. Copy and paste together:
make TARGET=linux-glibc USE_LUA=1 USE_OPENSSL=1 USE_PCRE=1 \
  PCREDIR= USE_ZLIB=1 USE_SYSTEMD=1 \
  EXTRA_OBJS="contrib/prometheus-exporter/service-prometheus.o"

sudo make PREFIX=/usr install # Install to /usr/sbin/haproxy

Test it by querying the version info:

haproxy -v

It should now produce the following output:

HA-Proxy version 2.2.3-7623be-12 2020/09/22 - https://haproxy.org/
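If you later script around this check, you can pull just the version number out of that banner with sed. A small sketch (the sample line is hard-coded here; in a live script you would feed in the output of haproxy -v instead):

```shell
# Extract the bare version number from an HA-Proxy banner line.
# In a live script, replace the hard-coded sample with:
#   banner=$(haproxy -v | head -n 1)
banner='HA-Proxy version 2.2.3-7623be-12 2020/09/22 - https://haproxy.org/'
version=$(echo "$banner" | sed -n 's/^HA-Proxy version \([0-9.]*\).*/\1/p')
echo "$version"   # prints 2.2.3
```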

Our Web Servers

HAProxy doesn’t serve any traffic itself—this is the job of the backend servers, which are typically web or application servers. For this exercise, we’re going to be using a tool called Ncat, the “Swiss Army Knife” of networking, to create some exceedingly simple servers. Install it with:

sudo yum install nc -y

If your system has SELinux enabled, you'll need to enable port 8404, the port used for accessing the HAProxy Stats page, discussed below, as well as the ports for our backend servers:

sudo dnf install policycoreutils-python-utils
sudo semanage port -a -t http_port_t  -p tcp 8404
sudo semanage port -a -t http_port_t  -p tcp 10080;
sudo semanage port -a -t http_port_t  -p tcp 10081;
sudo semanage port -a -t http_port_t  -p tcp 10082;

Create two Ncat web servers and an API server:

while true ;
do
    nc -l -p 10080 -c 'echo -e "HTTP/1.1 200 OK\n\n This is Server ONE"' ;
done &

while true ;
do
    nc -l -p 10081 -c 'echo -e "HTTP/1.1 200 OK\n\n This is Server TWO"' ;
done &

while true ;
do
    nc -l -p 10082 -c 'echo -e "HTTP/1.1 200 OK\nContent-Type: application/json\n\n { \"Message\" :\"Hello, World!\" }"' ;
done &

These very simple servers print out a message such as “This is Server ONE” and will run until the server is stopped. In a real-world setup, you would use actual web and app servers.
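Each loop answers every connection with the same canned bytes. You can preview exactly what a client will receive by running the echo by itself, with no networking involved (strictly speaking, HTTP calls for \r\n line endings, but most tools accept the bare \n used here):

```shell
# Preview the canned response sent by the first Ncat loop.
# echo -e expands the \n escapes: a status line, a blank line ending
# the (empty) header section, then the body.
echo -e "HTTP/1.1 200 OK\n\n This is Server ONE"
```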

The HAProxy Config File

The file /etc/haproxy/haproxy.cfg is the configuration file for HAProxy. This is where you will make all of the changes that define your load balancer. Here is a basic configuration to get you started with a working server.

global
    log         127.0.0.1 local2
    user        haproxy
    group       haproxy

defaults 
    mode                    http
    log                     global
    option                  httplog

frontend main
    bind *:80
        
    default_backend web
    use_backend api if { path_beg -i /api/ }
    
    #----------------------------------------
    # SSL termination - HAProxy handles the encryption.
    #    To use it, put your PEM file in /etc/haproxy/certs,
    #    then edit and uncomment the bind line below.
    #----------------------------------------
    # bind *:443 ssl crt /etc/haproxy/certs/haproxy.pem ssl-min-ver TLSv1.2
    # redirect scheme https if !{ ssl_fc }


#----------------------------------------
# Enable stats at http://test.local:8404/stats
#----------------------------------------

frontend stats
    bind *:8404
    stats enable
    stats uri /stats
#----------------------------------------
# round robin balancing between the various backends
#----------------------------------------

backend web
    server web1 127.0.0.1:10080 check
    server web2 127.0.0.1:10081 check

#----------------------------------------
# API backend for serving up API content
#----------------------------------------
backend api
    server api1 127.0.0.1:10082 check
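The web backend above relies on HAProxy's default balancing algorithm, round robin. You can state it explicitly, or switch algorithms, with a balance directive; a sketch (leastconn is shown only as an illustration):

```
backend web
    balance roundrobin   # or e.g. leastconn, to pick the server with the fewest connections
    server web1 127.0.0.1:10080 check
    server web2 127.0.0.1:10081 check
```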

Starting, Restarting and Reloading HAProxy

At this point, HAProxy is probably not running, so issue the command sudo systemctl restart haproxy to start (or restart) it. The restart method is fine for non-production situations, but once you are up and running, get in the habit of using sudo systemctl reload haproxy instead, which avoids service interruption. After any change to /etc/haproxy/haproxy.cfg, reload the daemon with sudo systemctl reload haproxy for the change to take effect. If there is an error in the new configuration, the reload will let you know, but HAProxy keeps running with the previous configuration. Check the status of HAProxy with sudo systemctl status haproxy.
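One habit worth adding on top of reload: run HAProxy's built-in config check (haproxy -c) first, so a typo never even reaches the running daemon. A sketch (the check_and_reload helper name is our own invention):

```shell
# Validate an HAProxy config file and only reload the service if it parses.
# haproxy -c checks the configuration and exits non-zero on errors.
check_and_reload() {
    local cfg="$1"
    if haproxy -c -f "$cfg"; then
        sudo systemctl reload haproxy
    else
        echo "Config check failed; not reloading." >&2
        return 1
    fi
}
# Usage: check_and_reload /etc/haproxy/haproxy.cfg
```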

If it doesn't report any errors, you have a running server. Test it with curl on the server by typing curl http://localhost/ at the command line. If you see This is Server ONE, then it all worked! Run curl a few times and watch it cycle through your backend pool, then see what happens when you type curl http://localhost/api/. Adding /api/ to the end of the URL will send all of that traffic to the third server in our pool. At this point, you should have a functioning load balancer!

Stats

In our configuration, you'll notice that we defined a frontend called "stats" listening on port 8404:

frontend stats
    bind *:8404
    stats uri /stats    
    stats enable

In your browser, load up http://localhost:8404/stats (note: HTTP, not HTTPS) and have a look around. There's a lot of information on that page; this article does a great job of explaining it all: Exploring the HAProxy Stats Page.

Conclusion

At this point, we've covered just a few of the features in HAProxy. You now have a server that listens on port 80 (and, once you uncomment the TLS lines in the config, on port 443, redirecting HTTP traffic to HTTPS), balances your traffic between several backend servers, and even sends traffic matching a specific URL pattern to a different backend server. You've also unlocked the very powerful HAProxy Stats page that gives you a great overview of your systems.

What we've done here might seem simple but make no mistake about it—you have just built and configured a very powerful load balancer capable of handling a significant amount of traffic.

TL;DR

All of the above commands have been collected for your convenience in the script at the top of this Gist.
