Start 2 Kubernetes clusters.
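How you bring them up is up to you; the node names below suggest one GKE cluster and one GCE cluster. A sketch for the GKE side, assuming the gcloud SDK is set up (the cluster name failover matches the node names below):
$ gcloud container clusters create failover --num-nodes=3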
Pick 1 node in each cluster and label it as role=loadbalancer:
$ kubectl get nodes
NAME                              STATUS    AGE
gke-failover-c93a5565-node-bilp   Ready     1h
gke-failover-c93a5565-node-siro   Ready     1h
gke-failover-c93a5565-node-woat   Ready     1h
$ kubectl label node gke-failover-c93a5565-node-woat role=loadbalancer
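Repeat the labeling in the second cluster. A sketch assuming its kubeconfig context is called cluster2 and using the node name that shows up later in this walkthrough:
$ kubectl --context=cluster2 label node kubernetes-minion-x1w8 role=loadbalancer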
Deploy the services and the service loadbalancer to each cluster, i.e. create this YAML:
# This is the backend service
apiVersion: v1
kind: Service
metadata:
  name: hostname
  labels:
    app: hostname
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: hostname
---
# This is the backend
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hostname
    spec:
      containers:
      - name: hostname
        image: gcr.io/google_containers/serve_hostname:1.2
        ports:
        - containerPort: 9376
---
# This is the frontend that needs to stick requests to backends
apiVersion: v1
kind: Service
metadata:
  name: haproxy
  labels:
    app: service-loadbalancer
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: service-loadbalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: service-loadbalancer
  labels:
    app: service-loadbalancer
    version: v1
spec:
  replicas: 1
  selector:
    app: service-loadbalancer
    version: v1
  template:
    metadata:
      labels:
        app: service-loadbalancer
        version: v1
    spec:
      nodeSelector:
        role: loadbalancer
      containers:
      - image: gcr.io/google_containers/servicelb:0.4
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        name: haproxy
        ports:
        # All http services
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        # haproxy stats
        - containerPort: 1936
          hostPort: 1936
          protocol: TCP
        args:
        - --default-return-code=200
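Create the manifest in both clusters. A sketch assuming it is saved as failover.yaml and the clusters are reachable via kubeconfig contexts cluster1 and cluster2:
$ kubectl --context=cluster1 create -f failover.yaml
$ kubectl --context=cluster2 create -f failover.yaml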
You now have 2 haproxies, one per cluster. Get the public IPs of their nodes:
$ kubectl get node gke-failover-c93a5565-node-woat -o yaml | grep -i external -B 5
...
someIP1
$ kubectl get node kubernetes-minion-x1w8 -o yaml | grep -i external -B 5
...
someIP2
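If you'd rather not grep through node YAML, a jsonpath query should return the same address (assuming the node publishes an ExternalIP):
$ kubectl get node gke-failover-c93a5565-node-woat \
    -o jsonpath='{.status.addresses[?(@.type=="ExternalIP")].address}'
someIP1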
Spin up a third haproxy with this config:
global
    daemon
    stats socket /tmp/haproxy

defaults
    log global
    option http-keep-alive
    timeout http-keep-alive 60s
    timeout http-request 5s
    timeout client 50s
    timeout server 50s
    mode http

frontend web
    mode http
    bind *:80
    default_backend web

backend web
    balance roundrobin
    option httpchk GET /hostname
    http-check expect status 200
    server cluster1 someIP1:80 check inter 1s fall 1 rise 2
    server cluster2 someIP2:80 check inter 1s fall 1 rise 2
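Any machine your test client can reach will do for this third haproxy. For example, with haproxy installed and the config above saved as haproxy.cfg:
$ haproxy -c -f haproxy.cfg    # validate the config first
$ sudo haproxy -f haproxy.cfg  # the daemon directive sends it to the background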
Start a request loop against the main haproxy:
$ while true; do curl mainHaproxy/hostname; echo; sleep 1; done
hostname-jjfsb
hostname-wwkpj
hostname-jjfsb
hostname-wwkpj
Kill one cluster (or just run kubectl delete rc service-loadbalancer in it). You should see a brief 503 while the health check (check inter 1s fall 1) marks that cluster down, after which all requests stick to the surviving cluster:
hostname-jjfsb
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
hostname-jjfsb
hostname-jjfsb
hostname-jjfsb
hostname-jjfsb
hostname-jjfsb
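To see the failover from haproxy's point of view, you can query the stats socket declared in the global section (assuming socat is installed on the machine running the main haproxy). The cluster whose service loadbalancer you deleted should report DOWN until you bring it back:
$ echo "show stat" | socat unix-connect:/tmp/haproxy stdio | cut -d, -f1,2,18
# columns: pxname,svname,status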