https sticky sessions

Create a backend service that simply serves the pod name, and a frontend haproxy instance that balances based on client cookies.

# This is the backend service
apiVersion: v1
kind: Service
metadata:
  name: hostname
  annotations:
    # Enable stickiness on "SERVERID"
    serviceloadbalancer/lb.cookie-sticky-session: "true"
  labels:
    app: hostname
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: hostname
---
# This is the backend replication controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: hostname
    spec:
      containers:
      - name: hostname
        image: gcr.io/google_containers/serve_hostname:1.2
        ports:
        - containerPort: 9376
---
# This is the frontend that needs to stick requests to backends
apiVersion: v1
kind: Service
metadata:
  name: haproxy
  labels:
    app: service-loadbalancer
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: service-loadbalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: service-loadbalancer
  labels:
    app: service-loadbalancer
    version: v1
spec:
  replicas: 1
  selector:
    app: service-loadbalancer
    version: v1
  template:
    metadata:
      labels:
        app: service-loadbalancer
        version: v1
    spec:
      containers:
      - image: gcr.io/google_containers/servicelb:0.4
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        name: haproxy
        ports:
        # All http services
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        # haproxy stats
        - containerPort: 1936
          hostPort: 1936
          protocol: TCP
        args:
        - --default-return-code=200
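
Assuming the manifests above are saved to a single file, say sticky.yaml (an arbitrary name), a minimal way to bring everything up and check that the pods are running is:

$ kubectl create -f sticky.yaml
$ kubectl get pods -l app=hostname
$ kubectl get pods -l app=service-loadbalancer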

In the haproxy pod you should see something like:

$ kubectl exec  service-loadbalancer-pw616 -- cat /etc/haproxy/haproxy.cfg
...
backend hostname

    balance roundrobin
    # TODO: Make the path used to access a service customizable.
    reqrep ^([^\ :]*)\ /hostname[/]?(.*) \1\ /\2


    # insert a cookie with name SERVERID to stick a client with a backend server
    # http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-cookie
    cookie SERVERID insert indirect nocache
    server 10.245.0.7:9376 10.245.0.7:9376 cookie s0 check port 9376 inter 5
    server 10.245.0.8:9376 10.245.0.8:9376 cookie s1 check port 9376 inter 5
    server 10.245.1.3:9376 10.245.1.3:9376 cookie s2 check port 9376 inter 5
    server 10.245.2.4:9376 10.245.2.4:9376 cookie s3 check port 9376 inter 5
    server 10.245.2.5:9376 10.245.2.5:9376 cookie s4 check port 9376 inter 5

The important bit is the "cookie SERVERID" line.
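
You can watch haproxy hand out that cookie on the first response (a sketch; -D - just dumps the response headers, look for a Set-Cookie: SERVERID=... line):

$ curl -sD - -o /dev/null public-ip-of-node/hostname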

You can test the stickiness:

$ for i in 1 2 3 4 5; do curl public-ip-of-node/hostname; echo; done 
hostname-fiecu
hostname-lc6tg
hostname-wrzrk
hostname-qotbq
hostname-8smz0

$ for i in 1 2 3 4 5; do curl public-ip-of-node/hostname --cookie "SERVERID=s1"; echo; done 
hostname-wrzrk
hostname-wrzrk
hostname-wrzrk
hostname-wrzrk
hostname-wrzrk
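
If you'd rather not hardcode a SERVERID, you can let curl store whatever cookie haproxy hands out and replay it (a sketch; /tmp/cookiejar is an arbitrary path):

$ curl -s -c /tmp/cookiejar -o /dev/null public-ip-of-node/hostname
$ for i in 1 2 3 4 5; do curl -b /tmp/cookiejar public-ip-of-node/hostname; echo; done

All five requests should return the same pod name.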

You can also put an Ingress in front of it:

On older clusters Ingress.Spec.tls is not supported, so you only get http. If you have a newer master and are running on GCE, you can update the ingress controller by running kubectl edit rc --namespace=kube-system l7-lb-controller-v0.5.2, replacing gcr.io/google_containers/glbc:0.5.2 with gcr.io/google_containers/glbc:0.6.0, and then killing the pod so the rc starts another one.
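
For example (a sketch; the pod-name suffix below is made up, substitute whatever kubectl get pods shows on your cluster):

$ kubectl edit rc --namespace=kube-system l7-lb-controller-v0.5.2   # change glbc:0.5.2 to glbc:0.6.0 in the editor
$ kubectl get pods --namespace=kube-system | grep l7-lb-controller
$ kubectl delete pod --namespace=kube-system l7-lb-controller-v0.5.2-abc12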

If you're not on GCE, you can try either:

  1. SSL termination directly with service-loadbalancer: https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/README.md#ssl-termination
  2. Nginx ingress controller: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx-third-party#ssl

Create a secret (the next few steps are just hacks to get off the ground with secrets using a legacy example that needs fixing):

kubernetes-root $ cd examples/https-nginx
kubernetes-root/examples/https-nginx $ make keys secret
# The CName used here is specific to the service specified in nginx-app.yaml.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
...............+++
................................+++
writing new private key to '/tmp/nginx.key'
-----
godep go run make_secret.go -crt /tmp/nginx.crt -key /tmp/nginx.key > /tmp/secret.json

You should now have a JSON blob for a secret in /tmp/secret.json. Rename the nginx.crt/key fields to match https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2349 and create the secret.
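
One way to do the rename in place (a sketch, assuming the expected key names are tls.crt and tls.key; the linked type is authoritative):

$ sed -i 's/nginx.crt/tls.crt/; s/nginx.key/tls.key/' /tmp/secret.json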

$ kubectl create -f /tmp/secret.json
secret "nginxsecret" created

Then create the Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  # A badly named secret
  - secretName: nginxsecret
  backend:
    serviceName: haproxy
    servicePort: 80
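
Assuming the above is saved as ingress.yaml (an arbitrary name):

$ kubectl create -f ingress.yaml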

On GCE, you'll need to wait until the load balancer warms up (on the order of 10 minutes):

$ kubectl get ing 
NAME           RULE      BACKEND      ADDRESS              AGE
no-rules-map   -         haproxy:80   107.some.public.ip   8m

$ for i in 1 2 3 4 5; do curl https://107.some.public.ip/hostname -k --cookie "SERVERID=s1"; echo; done
hostname-y8itc
hostname-y8itc
hostname-y8itc
hostname-y8itc
hostname-y8itc

Note that this currently requires the following (most importantly the firewall rule): https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#prerequisites
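
If you need to open the firewall rule by hand, something like the following should work (a sketch; the rule name is made up, 130.211.0.0/22 is GCE's documented load balancer source range, and 30000-32767 is the default NodePort range; the linked prerequisites are authoritative):

$ gcloud compute firewall-rules create allow-glbc-health-checks \
    --source-ranges 130.211.0.0/22 \
    --allow tcp:30000-32767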

If this doesn't work and you don't see both :80 and :443 open under the GCE console -> Networking, check kubectl logs:

$ kubectl logs l7-lb-controller-v0.6.0-kjas -c l7-lb-controller --follow
..
I0308 03:47:54.712844       1 loadbalancers.go:330] Creating new sslCertificates default-no-rules-map for k8s-ssl-default-no-rules-map
I0308 03:47:58.539590       1 loadbalancers.go:355] Creating new https proxy for urlmap k8s-um-default-no-rules-map
I0308 03:48:02.553696       1 loadbalancers.go:397] Creating forwarding rule for proxy [k8s-tps-default-no-rules-map] and ip 107.178.255.11:443-443
I0308 03:48:10.429696       1 controller.go:325] Updating loadbalancer default/no-rules-map with IP 107.178.255.11
I0308 03:48:10.433449       1 event.go:211] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"no-rules-map", UID:"72599eb0-e4e0-11e5-9999-42010af00002", APIVersion:"extensions", ResourceVersion:"36676", FieldPath:""}): type: 'Normal' reason: 'CREATE' ip: 107.178.255.11