@nateware
Created October 31, 2012 15:36
HAProxy sample config for EC2
#
# This config file is a combination of ideas from:
# http://www.37signals.com/svn/posts/1073-nuts-bolts-haproxy
# http://www.igvita.com/2008/05/13/load-balancing-qos-with-haproxy/
# http://wiki.railsmachine.com/HAProxy
# http://elwoodicious.com/2008/07/15/nginx-haproxy-thin-fastcgi-php5-load-balanced-rails-with-php-support/
# http://upstream-berlin.com/2008/01/09/using-haproxy-with-multiple-backends-aka-content-switching/
# http://gist.github.com/raw/25482/d39fb332edf977602c183194a1cf5e9a0b5264f9
#
global
# maximum number of simultaneous active connections
maxconn 50000
# run in the background (duh)
daemon
user haproxy
group haproxy
# for restarts
pidfile /var/run/haproxy.pid
# Logging to syslog facility local0
log 127.0.0.1 local0
stats socket /var/run/haproxy.stat mode 777
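# The socket accepts runtime queries, e.g. (assuming socat is installed):
#   echo "show stat" | socat stdio /var/run/haproxy.stat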
# Distribute the health checks with a bit of randomness
spread-checks 5
# Uncomment the statement below to turn on verbose logging
#debug
# Settings in the defaults section apply to all services defined below
# (unless overridden per-service; here, the http-webservices and
# https-webservices blocks).
defaults
# apply log settings from the global section above to services
log global
# Proxy incoming traffic as HTTP requests
mode http
# Unfortunately, per the haproxy docs, connection-based balancing (leastconn)
# is not a good strategy for short-lived HTTP requests, so use round-robin
balance roundrobin
# Maximum number of simultaneous active connections each service will accept
maxconn 25000
# Log details about HTTP requests
option httplog
# Abort request if client closes its output channel while waiting for the
# request. HAProxy documentation has a long explanation for this option.
option abortonclose
# Check whether a "Connection: close" header is already set in each direction,
# and add one if missing. Also add an X-Forwarded-For header
option httpclose
option forwardfor
# If connecting to a server fails, retry up to 3 times before aborting the
# request (with redispatch below, a retry may go to a different server)
retries 3
# Do not enforce session affinity (i.e., an HTTP session can be served by
# any Mongrel, not just the one that started the session)
option redispatch
# Keep timeouts at web speed, since this balancer sits in front of everything
# Backends will force timeout faster if needed.
timeout client 30s
timeout connect 30s
timeout server 30s
# When this balancer sits in front of other haproxy instances, check the
# health of their monitor URL. This avoids a double-check: the downstream
# haproxy already returns 503 when its backends are down
option httpchk HEAD /haproxy?monitor HTTP/1.0
# Amount of time after which a health check is considered to have timed out
timeout check 5s
# Enable the statistics page
stats enable
stats uri /haproxy?stats
stats realm Haproxy\ Statistics
stats auth admin:yourpasswordhere
stats refresh 5s
# This is where you define your backend web clusters. You need one of these
# blocks for each cluster, and each one needs its own name to refer to it later.
# Note: the "cluster:serviceport" used as the server name is just a *name*;
# that port is not used
listen http-webservices 0.0.0.0:80
# Create a monitorable URI which returns a 200 if at least 1 server is up.
# This could be used by Traverse/Nagios to detect if a whole server set is down.
acl servers_down nbsrv lt 1
monitor-uri /haproxy?monitor
monitor fail if servers_down
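# An external checker can then poll it, e.g. (hostname is a placeholder):
#   curl -sI http://balancer.example.com/haproxy?monitor
# which returns 200 while at least one server is up, 503 otherwise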
# add a line for each EC2 web server
# this is typically generated via script
server 10.0.0.x:80 10.0.0.x:80 maxconn 25 check inter 5s rise 3 fall 2
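# For example, a generated block might look like this (addresses and names
# below are placeholders, not part of the original config):
#server web1:80 10.0.0.1:80 maxconn 25 check inter 5s rise 3 fall 2
#server web2:80 10.0.0.2:80 maxconn 25 check inter 5s rise 3 fall 2
#server spare:80 10.0.0.3:80 maxconn 25 check backup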
listen https-webservices 0.0.0.0:443
# set mode from http to tcp because haproxy (before 1.5) can't terminate SSL on its own
mode tcp
# use haproxy's built in ssl check
option ssl-hello-chk
# again, one line per server
server 10.0.0.x:443 10.0.0.x:443 maxconn 25 check inter 5s rise 18 fall 2
@HussanAli

How can you configure a given number of calls per second for a frontend? For example, I would like to configure it so the first 5 calls (HTTP requests) go to the 1st frontend and the next 5 to the 2nd frontend.
Or is it even possible to configure only 5 calls toward one frontend, where those calls can be made in any order, so the first 3 calls go to the 1st frontend, the next call goes to the 2nd frontend, and then the next 2 calls go to frontend 1 again?
Any kind of help will be appreciated!
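
HAProxy can't schedule an exact "first 5 here, next 5 there" sequence, but a fixed 5:5 ratio falls out of weighted round-robin across two server pools. A minimal sketch, assuming the two "frontends" above are really two backend pools (all names and addresses are hypothetical):

listen split-traffic 0.0.0.0:80
  balance roundrobin
  # equal weights give a 50/50 split over time; adjust the ratio as needed
  server pool1 10.0.1.1:80 weight 5 check
  server pool2 10.0.2.1:80 weight 5 check

For routing on actual per-second rates, newer HAProxy versions expose fetches such as fe_sess_rate that can be used in ACLs.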

@masoudmirzajani

I have two servers, with two ports per server. If one port on a server goes down, I want HAProxy to stop sending traffic to the other port on that same server as well. How do I do that? Thanks
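
One building block for this is the server "track" option, which makes a server inherit another server's health-check state. A minimal sketch, assuming the two ports are 8080 and 8081 (all names and addresses are hypothetical); note that track only propagates state one way, from the checked server to the tracking one:

listen port-a 0.0.0.0:8080
  server srv1a 10.0.0.1:8080 check
  server srv2a 10.0.0.2:8080 check

listen port-b 0.0.0.0:8081
  # each port-b server is marked down whenever its port-a counterpart is
  server srv1b 10.0.0.1:8081 track port-a/srv1a
  server srv2b 10.0.0.2:8081 track port-a/srv2a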
