Concourse on Kubernetes
This document outlines Brandwatch's Concourse installation running on Kubernetes. The full configuration can be found at https://github.com/BrandwatchLtd/concourse-ops (currently internal only). It's a fairly new installation (1-2 weeks old), and we're slowly migrating work from our existing BOSH installation to it.
Comments/questions welcome below.
The stack:
- Google GKE
- Concourse CI (from the stable/concourse chart)
- Prometheus / Alertmanager (metrics, monitoring, alerting)
- nginx-ingress-controller (TLS termination, routing)
- kube-lego (Let's Encrypt certificates)
- preemptible-killer (controlled shutdown of preemptible VM instances)
- delete-stalled-concourse-workers (periodically checks for and kills stalled workers)
The GKE cluster:
- Kubernetes nodes run Ubuntu images, to allow for the overlay baggageclaimDriver. We did not find any configuration that could run successfully on COS instances.
- Runs across 2 AZs (so we can run a minimum of 2 nodes in a node-pool)
- Cluster split into two node-pools (see the sketch after this list):
  - node-pool for Concourse workers (auto-scaling), on n1-standard-4 machines. We've generally found much better behaviour from workers once they have around 4 CPUs available.
  - node-pool for everything else, on n1-standard-2 machines.
- All instances are currently preemptible, so we trade off some worker stability for a much reduced cost (but continue to work on increasing stability).
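For illustration, roughly how the two node-pools could be created with gcloud. The cluster name, zone, pool names, and autoscaling bounds are placeholders/assumptions rather than our real configuration; the machine types, Ubuntu image, and preemptible flag follow the description above.

```sh
# Worker pool: preemptible n1-standard-4 nodes running the Ubuntu image
# (needed for the overlay baggageclaim driver), with autoscaling enabled.
gcloud container node-pools create concourse-workers \
  --cluster my-cluster --zone europe-west1-b \
  --machine-type n1-standard-4 --image-type UBUNTU \
  --preemptible \
  --enable-autoscaling --min-nodes 2 --max-nodes 6 --num-nodes 2

# Pool for everything else: preemptible n1-standard-2 nodes.
gcloud container node-pools create general \
  --cluster my-cluster --zone europe-west1-b \
  --machine-type n1-standard-2 --image-type UBUNTU \
  --preemptible --num-nodes 2
```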
Concourse is installed via the stable/concourse Helm chart.
- Concourse v3.8.0 currently
- baggageclaimDriver: overlay
- Two web replicas
- Between 2 and 6 workers (we scale up/down for work/non-work hours)
- Service: ClusterIP
- Ingress (uses nginx-ingress-controller)
  - 2 replicas
  - Service bound to a Google Network Load Balancer IP
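As a rough sketch, an install along these lines. The value keys follow the stable/concourse chart's layout as we remember it and may differ between chart versions; the release name, namespace, and hostname are placeholders.

```sh
# Hedged sketch only; check the chart's values.yaml for the exact keys.
helm upgrade --install concourse stable/concourse \
  --namespace concourse \
  --set concourse.baggageclaimDriver=overlay \
  --set web.replicas=2 \
  --set worker.replicas=2 \
  --set web.service.type=ClusterIP \
  --set web.ingress.enabled=true \
  --set "web.ingress.hosts[0]=concourse.example.com"
```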
Prometheus is installed via the Prometheus operator.
- 1 Prometheus replica
- 2 Alertmanager replicas
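For illustration, the operator resources behind those replica counts would look roughly like this (names and namespace are placeholders; the specs are trimmed down to just the replica counts):

```sh
# Minimal sketch of the Prometheus operator custom resources; real specs
# also need service monitor selectors, storage, alerting config, etc.
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 1
---
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  replicas: 2
EOF
```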
Preemptible workarounds
There's a bunch of clutter related to wanting to run workers on preemptible GKE instances. Preemptible GKE instances cost roughly 30% of the price of standard instances, but can be preempted (shut down) at any time, and at least once every 24 hours.
If you are happy to pay for non-preemptible instances you'll likely get more stable workers without any of these workarounds. On the other hand, you never know when a node will die underneath you for other reasons, so this is a more general problem that would be worth solving anyway.
preemptible-killer is a basic attempt to control preemptible VM shutdowns. The controller adds annotations to preemptible nodes and, within 24 hours, does a controlled termination of all their pods and shuts down the VM. This is preferable to the VM dying underneath us with no warning, which leads to stalled workers. We will likely adapt this to force restarts of preemptible VMs just prior to working hours, to reduce the chance of forced restarts during working hours.
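To see which nodes are in scope you can use the standard GKE preemptible node label; the controller's own annotation key isn't reproduced here, so check the node annotations directly.

```sh
# List preemptible nodes (GKE labels them automatically), then inspect one
# to see the annotations the controller has added.
kubectl get nodes -l cloud.google.com/gke-preemptible=true
kubectl describe node <some-preemptible-node> | grep -i -A 3 annotations
```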
We have experimented with shutdown scripts on preemptible nodes, but cannot get them to successfully delete worker pods during the shutdown phase. More experimentation is required here, because I don't understand why it's not possible; a sketch of the kind of script we mean follows. We currently work around this problem with the stalled worker cleanup described in the next section.
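For reference, a minimal sketch of that kind of shutdown script, here draining the node rather than deleting worker pods directly. It assumes kubectl is available on the node with credentials for the cluster (an assumption), and as noted above we have not got this approach working reliably.

```sh
#!/bin/bash
# Intended to run as the GCE shutdown-script when a preemptible VM is reclaimed.
# drain cordons the node first, then evicts its pods (ignoring DaemonSets),
# so Concourse worker pods should retire before the VM disappears.
NODE="$(hostname)"

kubectl drain "${NODE}" \
  --ignore-daemonsets \
  --delete-local-data \
  --force \
  --grace-period=30
```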
Stalled worker cleanup
We run delete-stalled-concourse-workers in the cluster, which checks for stalled workers via the Concourse API every minute. If it finds any, it prunes them.
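The same check can be done by hand with fly, which is roughly what the cleanup automates (the tool itself talks to the API directly; the target name here is a placeholder).

```sh
# List workers, pick out any in the "stalled" state, and prune them.
# Assumes an authenticated fly target named "ci".
fly -t ci workers | grep stalled | awk '{ print $1 }' | while read -r worker; do
  fly -t ci prune-worker -w "${worker}"
done
```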