@ahume
Last active December 8, 2021 20:40

Concourse on Kubernetes

This document outlines Brandwatch's Concourse installation running on Kubernetes. The full configuration can be found at https://github.com/BrandwatchLtd/concourse-ops (currently internal only). It's a fairly new installation (1-2 weeks old) and we're slowly migrating work from our existing BOSH installation to it.

Comments/questions welcome below.

Summary

  • Google GKE
  • ConcourseCI (from stable/concourse chart)
  • Prometheus / Alert Manager (Metrics, monitoring, alerting)
  • nginx-ingress-controller (TLS termination, routing)
  • kube-lego (letsencrypt certificates)
  • preemptible-killer (controlled shutdown of preemptible VM instances)
  • delete-stalled-concourse-workers (periodically checks for and kills stalled workers)

GKE/Kubernetes

  • Kubernetes nodes run Ubuntu images, to allow for the overlay baggageclaimDriver. We did not find any configuration that could run successfully on COS instances.
  • Runs across 2 AZs (so we can run a minimum of 2 nodes in a node-pool)
  • Cluster split into two node-pools (see the sketch after this list)
    • node-pool for Concourse Workers (auto-scaling). n1-standard-4 machines. We’ve generally found much better behaviour from workers once they have around 4 CPUs available.
    • node-pool for everything else. n1-standard-2 machines.
  • All instances are currently preemptible, so we trade off some stability of workers for much reduced cost (but continue to work on increasing stability).
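
For reference, node-pools along these lines can be created with gcloud. This is a minimal sketch rather than our exact configuration; the cluster name, zone, pool names and scaling bounds are placeholders.

```sh
# Worker node-pool: Ubuntu image (needed for the overlay baggageclaim driver),
# preemptible n1-standard-4 machines, autoscaled.
gcloud container node-pools create concourse-workers \
  --cluster=concourse \
  --zone=europe-west1-b \
  --image-type=UBUNTU \
  --machine-type=n1-standard-4 \
  --preemptible \
  --enable-autoscaling --min-nodes=2 --max-nodes=6

# Node-pool for everything else (web, ingress, Prometheus, ...).
gcloud container node-pools create concourse-general \
  --cluster=concourse \
  --zone=europe-west1-b \
  --image-type=UBUNTU \
  --machine-type=n1-standard-2 \
  --preemptible
```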

ConcourseCI

Concourse is installed via the stable/concourse Helm chart; a sketch of the install follows the list below.

  • Concourse v3.8.0 currently
  • baggageclaimDriver: overlay
  • Two web replicas
  • Between 2-6 workers (we scale up/down for work/non-work hours)
  • Service: clusterIP
  • Ingress (uses nginx-ingress-controller)
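
For illustration, the install looks roughly like the following. The value names approximate the stable/concourse chart we deployed and may differ between chart versions, so treat this as a sketch rather than copy-paste configuration.

```sh
# Sketch of the Helm install; value names approximate the stable/concourse chart.
helm upgrade --install concourse stable/concourse \
  --set concourse.baggageclaimDriver=overlay \
  --set web.replicas=2 \
  --set worker.replicas=2 \
  --set web.service.type=ClusterIP \
  --set web.ingress.enabled=true

# Scaling workers up/down for work/non-work hours is just a change to the same value.
helm upgrade concourse stable/concourse --reuse-values --set worker.replicas=6
```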

nginx-ingress-controller

The Nginx Ingress Controller is a pretty vanilla installation, installed via the Helm stable chart; a sketch of the install follows the list below.

  • v0.9.0
  • 2 replicas
  • kube-system/default-http-backend
  • Service bound to Google Network Load Balancer IP
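
Roughly, that amounts to the following. Again a sketch: the value names are from the stable/nginx-ingress chart as we recall them, and the load balancer IP is a placeholder.

```sh
# Sketch: nginx-ingress bound to a pre-allocated Google Network Load Balancer IP,
# reusing GKE's default-http-backend rather than deploying the chart's own.
helm upgrade --install nginx-ingress stable/nginx-ingress \
  --namespace kube-system \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP=203.0.113.10 \
  --set controller.defaultBackendService=kube-system/default-http-backend \
  --set defaultBackend.enabled=false
```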

Prometheus

Prometheus is installed via the Prometheus operator; a sketch of the corresponding custom resources follows the list below.

  • 1 replica
  • 2 alert-managers
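
With the operator in place, the sizing above corresponds to something like the following custom resources. The names and the empty selector here are placeholders, not our actual manifests.

```sh
# Sketch: Prometheus and Alertmanager custom resources for the Prometheus operator.
cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: concourse
spec:
  replicas: 1
  serviceMonitorSelector: {}
---
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: concourse
spec:
  replicas: 2
EOF
```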

kube-lego

The kube-lego process runs in the cluster and finds Ingress objects requiring TLS certificates. It handles talking to Let's Encrypt and setting up the HTTP challenge. Installed via the Helm stable chart.
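
As a sketch (chart value names approximate, email and Ingress name are placeholders), the install plus the annotation kube-lego looks for on an Ingress:

```sh
# Sketch: install kube-lego, then mark an Ingress for certificate management.
helm upgrade --install kube-lego stable/kube-lego \
  --set config.LEGO_EMAIL=ops@example.com \
  --set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory

# kube-lego only touches Ingresses annotated with kubernetes.io/tls-acme.
kubectl annotate ingress concourse-web kubernetes.io/tls-acme="true"
```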

Preemptible workarounds

There's a bunch of clutter related to wanting to run workers on preemptible GKE instances. Preemptible GKE instances cost approximately 30% of the price of standard instances, but can be preempted (shut down) at any time, and at least once every 24 hours.

If you are happy paying for non-preemptible instances you'll likely get more worker stability without any of these workarounds. On the other hand, you never know when a node will die underneath you for other reasons, so this is a more general problem which would be good to solve.

preemptible-killer

https://github.com/estafette/estafette-gke-preemptible-killer

A basic attempt to control preemptible VM shutdowns. The controller adds annotations to preemptible nodes and, within 24 hours, does a controlled termination of all pods and shuts down the VM. This is preferable to the VM dying underneath us with no warning, which leads to stalled workers. We will likely adapt this to force a restart of preemptible VMs just prior to working hours, to reduce the chance of forced restarts during working hours.
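
To see what the controller has scheduled, you can inspect the annotation it writes on each preemptible node. The annotation key below is the one the estafette project uses as far as we know; treat it as an assumption that may change between versions.

```sh
# Sketch: show the scheduled-deletion state preemptible-killer records on each node.
kubectl get nodes -l cloud.google.com/gke-preemptible=true \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.estafette\.io/gke-preemptible-killer-state}{"\n"}{end}'
```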

We have experimented with shutdown scripts on preemptible nodes, but cannot get them to successfully delete worker pods during the shutdown phase. More experimentation required here, because I don’t understand why it’s not possible. We currently work around this problem with…

Stalled worker cleanup

We run delete-stalled-concourse-workers in the cluster, which checks for stalled workers via the Concourse API every minute and prunes any it finds.
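
delete-stalled-concourse-workers talks to the Concourse API directly; a rough fly-based equivalent of the same loop is below. The target name is a placeholder, it assumes fly is already logged in, and prune-worker needs a recent-enough fly.

```sh
#!/usr/bin/env bash
# Sketch: prune any worker the ATC reports as stalled.
TARGET=ci   # placeholder fly target, assumed to be logged in already

# "stalled" is taken from the state column of `fly workers` output.
fly -t "$TARGET" workers | awk '/stalled/ {print $1}' | while read -r worker; do
  echo "Pruning stalled worker: $worker"
  fly -t "$TARGET" prune-worker -w "$worker"
done
```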

@william-tran

@ahume

It is presumably safe to prune the worker before it has completed retiring

This is probably too aggressive, and will result in errored builds; any build using that worker for a task will see that task disconnect, and the build will go orange. This isn't a big deal for us though, because of our job restarter.

How do you differentiate between a job failing for some transient reason, and a legitimate CI/CD build failure?

I guess what I really mean by "some transient reason" is builds with a status of errored rather than failed when retrieved from /api/v1/builds. errored builds are ones that didn't finish because of some Concourse-related issue, while failed builds happen when a process exits non-zero.
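
For anyone wanting to act on that distinction, a rough illustration of pulling errored builds out of the API is below. This is not the actual job restarter (which isn't public); the URL, token handling and jq filtering are assumptions.

```sh
# Sketch: list recent builds whose status is "errored" (Concourse-side problems),
# as opposed to "failed" (the task itself exited non-zero).
curl -s -H "Authorization: Bearer $CONCOURSE_TOKEN" \
  "https://concourse.example.com/api/v1/builds?limit=100" \
  | jq -r '.[] | select(.status == "errored") | "\(.pipeline_name)/\(.job_name) #\(.name)"'
```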

@jschaul

jschaul commented May 29, 2018

(...) restarts jobs that errored so no manual intervention is needed to deal with jobs that errored out due to transient issues.

@william-tran Are you able to share the code for the job restarter (or is the code that does the above already available somewhere? I wasn't able to find it here)?

@rohithmn3

@ahume,

Could you please open up https://github.com/BrandwatchLtd/concourse-ops to external access? It would be really helpful.

@dlbock

dlbock commented Jan 13, 2021

@ahume, awesome writeup! We've recently started running Concourse at Instana (~4-5 months) and we've arrived at a similar-ish setup to yours after some recent tuning changes to stop workers from picking up tasks without limit and eventually getting overwhelmed. We're just starting to look into auto-scaling configuration for the workers. I wonder if you have any tips/tricks/gotchas for that?

@sabbir123222

Hello friends,

How can Concourse on Kubernetes be configured with auto-scaling to reduce costs?

@dlbock

dlbock commented Dec 8, 2021

I recently wrote this up: https://dlbock.github.io/2021/12/06/autoscaling-concourse-workers-with-prometheus.html, mostly for my future self just in case, but maybe it'll be helpful to someone else.
