@spiarh
Last active November 5, 2019 14:28
Nginx TCP Load Balancer with passive checks

We can use the ngx_stream_module module (available since version 1.9.0) to do TCP load balancing. In this mode, nginx simply forwards the TCP packets to the masters.

/!\ The Open Source version of Nginx only supports passive health checks, so this configuration should only be considered for a PoC. The main issue with passive health checks is that nginx only marks a node as unresponsive and stops distributing traffic to it after a request has already failed.
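As a sketch, the passive-check behaviour is tuned per server in the upstream block via the max_fails and fail_timeout parameters (the values below are illustrative; fail_timeout defaults to 10s):

```nginx
upstream k8s-masters {
    # After max_fails failed attempts within fail_timeout, nginx marks the
    # server unavailable for the duration of fail_timeout, then retries it.
    server master00:6443 max_fails=1 fail_timeout=10s;
    server master01:6443 max_fails=1 fail_timeout=10s;
    server master02:6443 max_fails=1 fail_timeout=10s;
}
```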

Configuring the Load Balancer

  • Register SLES
# SUSEConnect -r $MY_REG_CODE
  • Install Nginx
# zypper in nginx
  • Write configuration in /etc/nginx/nginx.conf

/!\ Replace the IP|FQDN in the upstream k8s-masters section

The default mechanism is round-robin, so each request will be distributed to a different server.

Note: To enable session persistence, uncomment the hash option so that the same client is always redirected to the same server, unless that server is unavailable.

user  nginx;
worker_processes  auto;

load_module /usr/lib64/nginx/modules/ngx_stream_module.so;

error_log  /var/log/nginx/error.log  info;

events {
    worker_connections  1024;
    use epoll;
}

stream {
    log_format proxy '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr"';

    error_log  /var/log/nginx/k8s-masters-lb-error.log;
    access_log /var/log/nginx/k8s-masters-lb-access.log proxy;

    upstream k8s-masters {
        #hash $remote_addr consistent;
        server master00:6443 weight=1 max_fails=1;
        server master01:6443 weight=1 max_fails=1;
        server master02:6443 weight=1 max_fails=1;
    }

    server {
        listen 6443;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass k8s-masters;
    }
}
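For reference, the session-persistence variant mentioned above only requires uncommenting the hash line in the upstream block:

```nginx
upstream k8s-masters {
    # Consistent hashing on the client address pins each client to one master,
    # falling back to another server only if that master becomes unavailable.
    hash $remote_addr consistent;
    server master00:6443 weight=1 max_fails=1;
    server master01:6443 weight=1 max_fails=1;
    server master02:6443 weight=1 max_fails=1;
}
```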
  • Configure firewalld to open up port 6443 (optional)

This step is required only if firewalld is running on the node (default in SLES15 SP1).

# firewall-cmd --zone=public --permanent --add-port=6443/tcp
# firewall-cmd --reload
  • Start and enable Nginx
# systemctl enable --now nginx

Verify that the Load Balancer works

  • Check logs

Keep the following command running outside of the cluster:

$ while true; do kubectl cluster-info; sleep 1; done;

There should be no interruption in the running kubectl cluster-info command.

On the load balancer virtual machine, check the logs to validate that requests are correctly distributed in a round-robin fashion.

# tail -f /var/log/nginx/k8s-masters-lb-access.log
10.0.0.47 [17/May/2019:13:49:06 +0000] TCP 200 2553 1613 1.136 "10.0.0.145:6443"
10.0.0.47 [17/May/2019:13:49:08 +0000] TCP 200 2553 1613 0.981 "10.0.0.148:6443"
10.0.0.47 [17/May/2019:13:49:10 +0000] TCP 200 2553 1613 0.891 "10.0.0.7:6443"
10.0.0.47 [17/May/2019:13:49:12 +0000] TCP 200 2553 1613 0.895 "10.0.0.145:6443"
10.0.0.47 [17/May/2019:13:49:15 +0000] TCP 200 2553 1613 1.157 "10.0.0.148:6443"
10.0.0.47 [17/May/2019:13:49:17 +0000] TCP 200 2553 1613 0.897 "10.0.0.7:6443"
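Instead of eyeballing the tail output, the distribution can be summarized with awk (a small sketch; it assumes the proxy log format shown above, where the last, quoted field is $upstream_addr):

```shell
# Count how many requests went to each upstream master.
# $NF is the quoted "$upstream_addr" field of the proxy log format.
awk '{addr=$NF; gsub(/"/, "", addr); count[addr]++}
     END {for (u in count) print u, count[u]}' /var/log/nginx/k8s-masters-lb-access.log
```

With round-robin and no failures, the counts per upstream should stay roughly equal.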