Concourse on Kubernetes

This document outlines Brandwatch's Concourse installation running on Kubernetes. The full configuration can be found at https://github.com/BrandwatchLtd/concourse-ops (currently internal only). The installation is fairly new (1-2 weeks old) and we're slowly migrating work to it from our existing BOSH installation.

Comments/questions welcome below.

Summary

  • Google GKE
  • ConcourseCI (from stable/concourse chart)
  • Prometheus / Alert Manager (Metrics, monitoring, alerting)
  • nginx-ingress-controller (TLS termination, routing)
  • kube-lego (letsencrypt certificates)
  • preemptible-killer (controlled shutdown of preemptible VM instances)
  • delete-stalled-concourse-workers (periodically checks for and kills stalled workers)

GKE/Kubernetes

  • Kubernetes nodes run Ubuntu images, to allow for the overlay baggageclaimDriver. We did not find any configuration that ran successfully on COS instances.
  • Runs across 2 AZs (so we can run a minimum of 2 nodes in a node-pool)
  • Cluster split into two node-pools (a sketch of creating these follows this list)
    • node-pool for Concourse workers (auto-scaling), on n1-standard-4 machines. We've generally found much better behaviour from workers once they have around 4 CPUs available.
    • node-pool for everything else, on n1-standard-2 machines.
  • All instances are currently preemptible, so we trade off some worker stability for a much reduced cost (but continue to work on increasing stability).
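
For reference, the two node-pools can be created roughly like this. This is a sketch rather than our exact setup: the cluster and pool names are made up, and the autoscaling bounds mirror the 2-6 worker range mentioned below.

```sh
# Worker pool: Ubuntu image type is what allows the overlay
# baggageclaim driver. Cluster/pool names here are placeholders.
gcloud container node-pools create concourse-workers \
  --cluster=concourse \
  --machine-type=n1-standard-4 \
  --image-type=UBUNTU \
  --preemptible \
  --enable-autoscaling --min-nodes=2 --max-nodes=6 \
  --num-nodes=2

# Everything else runs on a smaller pool.
gcloud container node-pools create concourse-services \
  --cluster=concourse \
  --machine-type=n1-standard-2 \
  --image-type=UBUNTU \
  --preemptible \
  --num-nodes=2
```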

ConcourseCI

Concourse is installed via the stable/concourse Helm chart; a sketch of the values appears after the list.

  • Concourse v3.8.0 currently
  • baggageclaimDriver: overlay
  • Two web replicas
  • Between 2-6 workers (we scale up/down for work/non-work hours)
  • Service: ClusterIP
  • Ingress (uses nginx-ingress-controller)
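
A rough sketch of the chart values behind the list above. The key names are approximate for the stable/concourse chart of the Concourse v3.x era (they have moved around between chart versions), and the hostname is a placeholder:

```sh
cat > concourse-values.yaml <<'EOF'
concourse:
  baggageclaimDriver: overlay   # needs Ubuntu nodes on GKE (see above)
web:
  replicas: 2
  service:
    type: ClusterIP             # traffic comes in via the Ingress
  ingress:
    enabled: true
    hosts:
      - ci.example.com          # placeholder hostname
worker:
  replicas: 2                   # scaled between 2 and 6 by time of day
EOF

# Helm 2-era install:
helm install stable/concourse --name concourse -f concourse-values.yaml
```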

nginx-ingress-controller

The NGINX Ingress Controller is a pretty vanilla installation via the stable Helm chart, roughly as sketched below.

  • v0.9.0
  • 2 replicas
  • kube-system/default-http-backend
  • Service bound to Google Network Load Balancer IP
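
Something along these lines reproduces that list. The static IP is a placeholder for a reserved address, and the value names are from the stable/nginx-ingress chart of that era, so check them against the chart version you use:

```sh
helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP=203.0.113.10 \
  --set controller.defaultBackendService=kube-system/default-http-backend \
  --set defaultBackend.enabled=false
```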

Prometheus

Prometheus is installed via the Prometheus Operator (rough install sketch below).

  • 1 replica
  • 2 alert-managers
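
From memory, the operator-based install of the time looked something like the following; the CoreOS chart repo and release names are assumptions, and the Alertmanager replica count was set in chart values rather than on the command line:

```sh
# prometheus-operator and kube-prometheus came from the CoreOS chart
# repo at the time (repo URL omitted; it has since been retired).
helm install coreos/prometheus-operator --name prometheus-operator --namespace monitoring
helm install coreos/kube-prometheus --name kube-prometheus --namespace monitoring
# Alertmanager replicas are then set to 2 via the kube-prometheus values.
```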

kube-lego

The kube-lego process runs in the cluster and finds Ingress objects that require TLS certificates; it handles talking to Let's Encrypt and setting up the HTTP challenge. Installed via the stable Helm chart, roughly as below.
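
A minimal sketch: install the chart, then opt an Ingress in via the tls-acme annotation. The email and Ingress name are placeholders, and LEGO_URL points at the ACME v1 endpoint that kube-lego used:

```sh
helm install stable/kube-lego --name kube-lego \
  --set config.LEGO_EMAIL=ops@example.com \
  --set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory

# kube-lego only acts on Ingress objects carrying this annotation:
kubectl annotate ingress concourse-web kubernetes.io/tls-acme="true"
```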

Preemptible workarounds

There's a bunch of clutter related to wanting to run workers on preemptible GKE instances. Preemptible GKE instances cost approximately 30% of the price of standard instances, but can be preempted (shut down) at any time, and at least once every 24 hours.

If you are happy paying for non-preemptible instances you'll likely get more worker stability without any of these workarounds. On the other hand, you never know when a node will die underneath you for other reasons, so this is a more general problem that would be good to solve.

preemptible-killer

https://github.com/estafette/estafette-gke-preemptible-killer

A basic attempt to control preemptible VM shutdowns. The controller adds annotations to preemptible nodes and, within 24 hours, performs a controlled termination of all pods before shutting down the VM. This is preferable to the VM dying underneath us with no warning, which leads to stalled workers. We will likely adapt this to force a restart of preemptible VMs just prior to working hours, to reduce the chance of forced restarts during working hours.
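
To check what the controller has scheduled, you can inspect the annotation it writes on each preemptible node. The annotation key is as we remember it from the estafette docs, so verify it against the version you run:

```sh
# GKE labels preemptible nodes with cloud.google.com/gke-preemptible=true;
# the controller stores its planned deletion time in a node annotation.
for node in $(kubectl get nodes -l cloud.google.com/gke-preemptible=true -o name); do
  kubectl describe "$node" | grep 'estafette.io/gke-preemptible-killer-state'
done
```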

We have experimented with shutdown scripts on preemptible nodes, but cannot get them to successfully delete worker pods during the shutdown phase. More experimentation required here, because I don’t understand why it’s not possible. We currently work around this problem with…

Stalled worker cleanup

We run delete-stalled-concourse-workers in the cluster, which checks for stalled workers via the Concourse API every minute and prunes any it finds, essentially automating the fly commands sketched below.
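
The manual equivalent with the fly CLI looks like this. The target name is an assumption, and the position of the state column in fly workers output can vary by version, hence the loose grep:

```sh
# Prune every worker that fly reports as stalled. Assumes a logged-in
# fly target named "ci".
fly -t ci workers | grep stalled | awk '{ print $1 }' |
  while read -r worker; do
    fly -t ci prune-worker -w "$worker"
  done
```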

@dlbock commented Dec 8, 2021:

I recently wrote this up: https://dlbock.github.io/2021/12/06/autoscaling-concourse-workers-with-prometheus.html, mostly for my future self just in case, but maybe it'll be helpful to someone else
