Created October 31, 2012 15:36 by nateware
HAProxy sample config for EC2
# This config file is a combination of ideas from:
global
# maximum number of simultaneous active connections
maxconn 50000
# run in the background (duh)
daemon
user haproxy
group haproxy
# for restarts
pidfile /var/run/
# Logging to syslog facility local0
log local0
stats socket /var/run/haproxy.stat mode 777
# Distribute the health checks with a bit of randomness
spread-checks 5
# Uncomment the statement below to turn on verbose logging
# Settings in the defaults section apply to all services (unless you change it;
# this configuration defines one service, called http-webservices).
defaults
# apply log settings from the global section above to services
log global
# Proxy incoming traffic as HTTP requests
mode http
# Unfortunately, per the haproxy docs, connection-based load balancing is
# not a good strategy for HTTP
balance roundrobin
# Maximum number of simultaneous active connections from an upstream web server
# per service
maxconn 25000
# Log details about HTTP requests
option httplog
# Abort request if client closes its output channel while waiting for the
# request. HAProxy documentation has a long explanation for this option.
option abortonclose
# Check if a "Connection: close" header is already set in each direction,
# and add one if missing. Also add the X-Forwarded-For header.
option httpclose
option forwardfor
# If sending a request to one server fails, try to send it to another, 3 times
# before aborting the request
retries 3
# Do not enforce session affinity (i.e., an HTTP session can be served by
# any Mongrel, not just the one that started the session).
option redispatch
# Keep timeouts at web speed, since this balancer sits in front of everything
# Backends will force timeout faster if needed.
timeout client 30s
timeout connect 30s
timeout server 30s
# For the frontend balancer, check the health of haproxy monitor URL.
# This avoids a double-check; haproxy will say 503 if backends are 503
option httpchk HEAD /haproxy?monitor HTTP/1.0
# Amount of time after which a health check is considered to have timed out
timeout check 5s
# Enable the statistics page
stats enable
stats uri /haproxy?stats
stats realm Haproxy\ Statistics
stats auth admin:yourpasswordhere
stats refresh 5s
# this is where you define your backend web clusters.
# you need one of these blocks for each cluster
# and each one needs its own name to refer to it later.
# Note: The "cluster:serviceport" is just a *name*, the port is not used
listen http-webservices
# Create a monitorable URI which returns a 200 if at least 1 server is up.
# This could be used by Traverse/Nagios to detect if a whole server set is down.
acl servers_down nbsrv(http-webservices) lt 1
monitor-uri /haproxy?monitor
monitor fail if servers_down
# add a line for each EC2 web server
# this is typically generated via script
server 10.0.0.x:80 10.0.0.x:80 maxconn 25 check inter 5s rise 3 fall 2
listen https-webservices
# set mode from http to tcp because haproxy can't use SSL on its own
mode tcp
# use haproxy's built in ssl check
option ssl-hello-chk
# again, one line per server
server 10.0.0.x:443 10.0.0.x:443 maxconn 25 check inter 5s rise 18 fall 2
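The comments above note that the per-server `server` lines are typically generated via script. A minimal Python sketch of that step (the IP list and tuning defaults are placeholders, not values from the gist; in practice the IPs would come from an EC2 API query):

```python
# Generate HAProxy "server" lines for a list of backend IPs.
# As noted in the config comments, the "ip:port" name is just a label.

def server_line(ip, port=80, maxconn=25, inter="5s", rise=3, fall=2):
    """Render one HAProxy 'server' line in the same shape as the gist."""
    name = f"{ip}:{port}"
    return (f"server {name} {ip}:{port} "
            f"maxconn {maxconn} check inter {inter} rise {rise} fall {fall}")

ips = ["10.0.0.1", "10.0.0.2"]  # placeholder; e.g. from an EC2 API query
for ip in ips:
    print(server_line(ip))
```

The output lines can be spliced into the `listen` block by whatever templating mechanism regenerates the config on instance changes.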
rex commented Oct 27, 2015

@nateware I know this is years later, but I still felt compelled to leave an appreciative comment. I have a pretty complex config in place already for the web servers I'm responsible for (I'm a devops guy like so many of us) but have been looking for a good resource to see a real-world configuration that I can understand. You, sir, have provided that with bells on with this gist, and I appreciate you very much for taking the time to post this.

Some of my favorites (That I will absolutely be implementing momentarily):

  • timeout check 5s: The URI I have in place for server health checks should load in milliseconds. If it takes longer than a second, I already know something is wrong. I don't need or want to wait 30s to know I am in fail state.
  • option abortonclose: They closed the browser window. They don't want your web page. Why on earth would you still serve the request? No request for YOU!
  • spread-checks 5: Seems like a fun and creative way to make sure health checks are legitimate. Almost the equivalent of a movie character turning around rapidly to see if anyone is behind them.
  • retries 3: I didn't even know this existed. Instead of instantly failing the request if it lands on a server that died just hand it off to another healthy host.

You rock for this. Thanks again.


Not familiar with HAProxy, so it took me a while to figure it out. Here is what worked for me:

backend my_backend
timeout check 3000

Keep open forever

option httpchk GET /login
balance source
server meteor1 mymachine:3000 maxconn 1000 weight 10 cookie websrv1 check inter 10000 rise 1 fall 3

listen health

Create a monitorable URI which returns a 200 if at least 1 server is up.

This could be used by Traverse/Nagios to detect if a whole server set is down.

timeout check 10
acl servers_down nbsrv(my_backend) lt 1
monitor-uri /haproxy?monitor
monitor fail if servers_down

From the ELB, I refer to http port 8888 /haproxy?monitor as my health check.

Seems to be working.
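Besides the monitor URI, the stats socket declared in the global section (`stats socket /var/run/haproxy.stat`) can answer the same question: send it `show stat` and count servers whose status is UP. A hypothetical Python sketch of just the parsing step (the sample CSV is heavily abbreviated; the real `show stat` output has many more columns, which `DictReader` handles the same way):

```python
# Count UP servers per proxy from HAProxy's "show stat" CSV output.
import csv
import io

def up_servers(stat_csv):
    """Map proxy name -> number of servers reporting UP."""
    # The first line starts with "# "; strip it so csv sees clean headers.
    text = stat_csv.lstrip("# ")
    counts = {}
    for row in csv.DictReader(io.StringIO(text)):
        if row["svname"] in ("FRONTEND", "BACKEND"):
            continue  # summary rows, not real servers
        if row["status"].startswith("UP"):
            counts[row["pxname"]] = counts.get(row["pxname"], 0) + 1
    return counts

sample = ("# pxname,svname,status\n"
          "http-webservices,web1,UP\n"
          "http-webservices,web2,DOWN\n"
          "http-webservices,BACKEND,UP\n")
print(up_servers(sample))  # {'http-webservices': 1}
```

In a real setup the CSV text would be read from the UNIX socket (e.g. via `socat` or Python's `socket` module) rather than a hard-coded string.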


Thanks, worked well for me.


hvelarde commented Aug 15, 2017

thank you; HAProxy is a very complex piece of software and the manual is pretty discouraging for beginners.

just a correction: timeout check is not the overall health-check timeout; according to the manual:

timeout check <timeout>
Set additional check timeout, but only after a connection has been already established.
<timeout> is the timeout value specified in milliseconds by default, but can be in any other unit if the number is suffixed by the unit, as explained at the top of this document.
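To make the quoted rule concrete: per the manual, when `timeout check` is set, the connect phase of a health check is bounded by min(`timeout connect`, `inter`), and `timeout check` then applies as the read timeout once the connection is established. A hypothetical sketch of that arithmetic (function name and millisecond units are my own):

```python
# Effective health-check timeouts when "timeout check" is set,
# following the rule described in the HAProxy manual.

def effective_check_timeouts(timeout_connect_ms, inter_ms, timeout_check_ms):
    """Return (connect_timeout_ms, read_timeout_ms) for a health check."""
    # connect phase: bounded by min("timeout connect", "inter")
    # read phase:    bounded by "timeout check"
    return min(timeout_connect_ms, inter_ms), timeout_check_ms

# e.g. the gist's values: timeout connect 30s, inter 5s, timeout check 5s
print(effective_check_timeouts(30000, 5000, 5000))  # (5000, 5000)
```

So with the gist's settings, a check that takes longer than 5s in either phase is considered failed, even though `timeout connect` is 30s.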


How can you configure X number of calls per second for a frontend? For example, I would like to configure that the first 5 calls (HTTP requests) go to the 1st frontend and the next 5 to the 2nd frontend.
Or is it even possible to configure only 5 calls towards 1 frontend, where those calls can be made randomly, so the first 3 calls go to the 1st frontend, the next call goes to the 2nd frontend, and then the next 2 calls go to frontend 1 again?
Any kind of help will be appreciated!


I have two servers with two ports each. If one port on a server goes down, I want HAProxy to stop sending traffic to the other port on that same server as well. How do I do that? Thanks.
