@bprashanth
Last active July 29, 2016 21:30
sticky sessions

Create a backend service that simply serves the pod name, and a frontend haproxy instance that balances based on client cookies.

# This is the backend service
apiVersion: v1
kind: Service
metadata:
  name: hostname
  annotations:
    # Enable stickiness on "SERVERID"
    serviceloadbalancer/lb.cookie-sticky-session: "true"
  labels:
    app: hostname
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: hostname
---
# This is the backend replication controller that runs the hostname pods
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: hostname
    spec:
      containers:
      - name: hostname
        image: gcr.io/google_containers/serve_hostname:1.2
        ports:
        - containerPort: 9376
---
# This is the frontend that needs to stick requests to backends
apiVersion: v1
kind: Service
metadata:
  name: haproxy
  labels:
    name: service-loadbalancer
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    # Must match the pod template labels of the RC below
    app: service-loadbalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: service-loadbalancer
  labels:
    app: service-loadbalancer
    version: v1
spec:
  replicas: 1
  selector:
    app: service-loadbalancer
    version: v1
  template:
    metadata:
      labels:
        app: service-loadbalancer
        version: v1
    spec:
      containers:
      - image: gcr.io/google_containers/servicelb:0.3
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        name: haproxy
        ports:
        # All http services
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        # haproxy stats
        - containerPort: 1936
          hostPort: 1936
          protocol: TCP

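Assuming the manifests above are saved to a single file (the filename here is arbitrary), they can be created and checked with something like:

```shell
# Create the backend service/RC and the haproxy frontend from the manifests above.
kubectl create -f sticky-sessions.yaml

# Wait until the five hostname pods and the loadbalancer pod are Running.
kubectl get pods -l app=hostname
kubectl get pods -l app=service-loadbalancer
```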
In the haproxy pod you should see something like:

$ kubectl exec  service-loadbalancer-pw616 -- cat /etc/haproxy/haproxy.cfg
...
backend hostname

    balance roundrobin
    # TODO: Make the path used to access a service customizable.
    reqrep ^([^\ :]*)\ /hostname[/]?(.*) \1\ /\2


    # insert a cookie with name SERVERID to stick a client with a backend server
    # http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-cookie
    cookie SERVERID insert indirect nocache
    server 10.245.0.7:9376 10.245.0.7:9376 cookie s0 check port 9376 inter 5
    server 10.245.0.8:9376 10.245.0.8:9376 cookie s1 check port 9376 inter 5
    server 10.245.1.3:9376 10.245.1.3:9376 cookie s2 check port 9376 inter 5
    server 10.245.2.4:9376 10.245.2.4:9376 cookie s3 check port 9376 inter 5
    server 10.245.2.5:9376 10.245.2.5:9376 cookie s4 check port 9376 inter 5

The important bit is the "cookie SERVERID" line: haproxy inserts a SERVERID cookie into the response, and subsequent requests carrying that cookie are routed back to the same server.
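To watch the cookie actually being set, inspect the response headers on a fresh request (the IP is the same example address used in the tests below; substitute your own node or balancer address):

```shell
# -i prints the response headers; on a request without a cookie, haproxy
# should add a Set-Cookie: SERVERID=sN header because of "insert indirect nocache".
curl -i http://104.197.221.130/hostname
```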

You can test the stickiness. Without a cookie, requests round-robin across the backends; passing the SERVERID cookie pins every request to the same pod:

$ for i in 1 2 3 4 5; do curl 104.197.221.130/hostname; echo; done 
hostname-fiecu
hostname-lc6tg
hostname-wrzrk
hostname-qotbq
hostname-8smz0

$ for i in 1 2 3 4 5; do curl 104.197.221.130/hostname --cookie "SERVERID=s1"; echo; done 
hostname-wrzrk
hostname-wrzrk
hostname-wrzrk
hostname-wrzrk
hostname-wrzrk
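A browser (or curl with a cookie jar) keeps the cookie from the first response, so subsequent requests stick automatically without setting SERVERID by hand:

```shell
# The first request saves the SERVERID cookie to a jar; later requests
# replay it, so all five responses should come from the same pod.
curl -c /tmp/cookies -s http://104.197.221.130/hostname; echo
for i in 1 2 3 4; do curl -b /tmp/cookies -s http://104.197.221.130/hostname; echo; done
```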