Introduction


This is the third and final part of this article series. In the first part we learned how to:

  • Start a Kubernetes cluster using GKE
  • Deploy Tiller and use Helm
  • Deploy a Drone instance using its Helm Chart
  • Enable HTTPS on our Drone instance using cert-manager

In the second part, we created our first Drone pipeline for an example project: we ran a linter (either gometalinter or golangci-lint), built the Docker image, and pushed it to GCR with the appropriate tags depending on the VCS event (push or tag).

In this last article, we'll see how to create a Helm Chart and how to automate the upgrade/installation procedure directly from within our Drone pipeline.

Helm Chart

Creating the Chart

Helm provides us with a nice set of helpers. So let's go into our dummy repo and create our chart.

https://gist.github.com/9b7d345dc7d4015c557c2d90e1c7e1aa
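
In case the embed above doesn't render: the scaffolding step is most likely Helm's built-in generator (Helm 2 syntax assumed, and assuming the chart lives under helm/ as referenced later in the pipeline).

```sh
# Scaffold a new chart named "dummy"; run this from the repository root.
mkdir -p helm && cd helm
helm create dummy
```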

This will create a new dummy directory in your current location. This directory contains two directories and some files:

  • charts/ A directory containing any charts upon which this chart depends
  • templates/ A directory of templates that, when combined with values, will generate valid Kubernetes manifest files
  • Chart.yaml A YAML file containing information about the chart
  • values.yaml The default configuration values for this chart

For more information, check the documentation about the chart file structure.

So your repository structure should look like this:

https://gist.github.com/1cc684b54715925fe9bdf32ac1408098
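
For reference, a freshly scaffolded Helm 2 chart placed under helm/ typically looks like this (the exact file list can vary slightly between Helm versions):

```
helm/
└── dummy/
    ├── .helmignore
    ├── Chart.yaml
    ├── values.yaml
    ├── charts/
    └── templates/
        ├── NOTES.txt
        ├── _helpers.tpl
        ├── deployment.yaml
        ├── ingress.yaml
        └── service.yaml
```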

Here, we're going to modify both the values.yaml file, to use sane defaults for our chart, and, more importantly, templates/, to add and modify the rendered Kubernetes manifests.

We can see that Helm created a nice chart that follows best practices, with some handy helpers. As you can see, the metadata section is almost always the same:

https://gist.github.com/b7a8b3c49bc8f6901cd79c199969df9d
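
As a rough sketch, the boilerplate metadata that helm create generates at the top of every template looks something like this (names derived from the chart name, here dummy):

```yaml
metadata:
  name: {{ template "dummy.fullname" . }}
  labels:
    app: {{ template "dummy.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
```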

This will ensure we can deploy our application multiple times without our resources colliding.

Values File

Let's open up the dummy/values.yaml file:

https://gist.github.com/b18f1109ca5359b23501a252c1b99d35

Those are the default values (as well as the accepted values) our Chart currently understands. For now we're just going to modify the image section to reflect the work we have done in the previous part with our image deployment to GCR:

https://gist.github.com/c048b7cfa7d837d7725b956f166274ae
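
As a sketch, the image section could look like the following; the GCR repository below is a placeholder for whatever you pushed in the previous part:

```yaml
image:
  repository: gcr.io/your-gcp-project/dummy  # placeholder for your GCR image
  tag: latest
  pullPolicy: Always
```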

We are setting the pullPolicy to Always because our latest image can, and will, change a lot over time. These are the default values; we'll be able to tweak them for specific deployments. More on that later in the article.

Deployment Manifest

Remember the dummy project we created in the previous part, with its single /health endpoint that answers 200 OK? Well, this endpoint is going to come in handy here. It is what's called a liveness probe.

We are going to use this endpoint as our readiness probe too. Liveness probes are used by Kubernetes to ensure your container is still running and behaving as expected. If our liveness probe were to answer anything other than a 200 OK status, Kubernetes would consider the program crashed and would fire up a new pod before evicting this one. The readiness probe, on the other hand, determines whether the pod is ready to accept incoming connections. As long as this probe doesn't return a successful answer, Kubernetes won't route any traffic to the pod.

In our case, this application is really dumb. We can use the /health route for both the liveness and the readiness probes. So we'll open up the dummy/templates/deployment.yaml file and edit this section:

https://gist.github.com/cca7f4c0f0bb9c5c25dd4d57db2bbff3
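
The edit boils down to pointing both probes at the /health route. Assuming the default generated deployment.yaml (where the container port is named http), the relevant section ends up looking roughly like this:

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: http
readinessProbe:
  httpGet:
    path: /health
    port: http
```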

And... Well that's it. Our deployment manifest is complete already since the Chart created by Helm is flexible enough to allow us to define what we need in our values.yaml file.

Let's execute this and check that our deployment is correctly rendered. We're going to run Helm in debug and dry-run mode so it prints out the rendered manifests and doesn't actually apply our Chart. We're also going to name our release staging with the -n flag, and fake-install it in the staging namespace.

https://gist.github.com/11085c2069be38d157a879f4bf265e57

https://gist.github.com/c5dfd2ee8b3bd0759aaebe212d13df23
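
If the embeds above don't render, the command being run is essentially this (Helm 2 syntax, where -n is shorthand for --name; run from the helm/ directory):

```sh
# Render the manifests without installing anything.
helm install --dry-run --debug -n staging --namespace staging dummy/
```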

Service Manifest

A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector.

So a Service in Kubernetes is a way to create a stable link to dynamically created pods using selectors. Remember all the labels in the metadata.labels section of our manifests? This is how we're going to access our application!

So let's open our dummy/templates/service.yaml:

https://gist.github.com/64a2fd8df28e77f6964fe5849f7a21fe

Something is off here. Our targetPort is wrong. Remember our Docker image and our dummy Go program? We listen on and expose port 8080. No problem! We're simply going to make the targetPort value customizable:

https://gist.github.com/5a7c357bff8063130bb8771747baaf66
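
As a sketch, the templated ports section could look like this; the service.internalPort key is my own assumption, any name works as long as it matches your values file:

```yaml
ports:
  - port: {{ .Values.service.port }}
    targetPort: {{ .Values.service.internalPort }}
    protocol: TCP
    name: http
```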

That's better. Let's modify our dummy/values.yaml file:

https://gist.github.com/541f4f36e996136346588ef30e283a1c
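
With the hypothetical key above, the matching service section in dummy/values.yaml would expose port 80 externally and target 8080 inside the pod:

```yaml
service:
  type: ClusterIP
  port: 80
  internalPort: 8080
```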

And once more, let's run Helm in dry-run and check if everything matches:

https://gist.github.com/7a624ce3796a344e62f753819651b599

https://gist.github.com/5403e6d2f6c44eb5655eef28b7b82aae

Ingress

Helm Charts are supposed to be independent of the platform Kubernetes is deployed on and of the technologies used. So we need to let users decide whether or not to enable the Ingress, and which annotations to associate with it.

Enforcing the use of a GCLB instead of an nginx load balancer doesn't make sense in the default values. So we'll introduce a new file, which will be specific to our own deployments. Helm provides several ways to override the values defined in values.yaml; first, you can provide your own values file. If Helm doesn't find a key there, it will fall back to the sane defaults we declared in our default values file.

So let's create our first "user-supplied" values file and name it staging.yml:

https://gist.github.com/6bb1e7adf891e5a83f598efa220f909a
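
As a sketch, a GCLB-oriented staging.yml could look like the following; the hostname and the static IP name are placeholders for your own setup:

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: gce
    kubernetes.io/ingress.global-static-ip-name: dummy-staging-ip  # placeholder
  path: /*
  hosts:
    - staging.dummy.example.com  # placeholder
```

This file then gets passed to Helm with -f staging.yml (or --values staging.yml).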

Note: We need to define the path to be /* and not just / because of how GCLB works.

Here we're using the same technique we saw in the first part to link a load balancer to a static IP. Now what happens if we run Helm in dry-run mode once more, but this time give it our custom values file?

https://gist.github.com/9ac0330bc56a1a3ea0c59e24e5ba20b0

We now have an Ingress!

https://gist.github.com/c53fc684f7691deb1f9ec66c1726e075

Pipeline

Service Account

Before we jump into how to continuously deploy our staging application (and then our production one) using Drone, we first need to retrieve the Tiller credentials we created in the first part of this series.

We are going to inject these credentials in Drone so it can use Helm within our pipeline. So first we're going to retrieve the Tiller credentials:

https://gist.github.com/9c07a1df685ff9f719fe505c59eba0de

We're going to need what's inside data.token. Just a reminder: this is base64-encoded data. And since we're kind to our Drone instance, we're going to decode it beforehand:

https://gist.github.com/e0e2b80b69551a8910fbb6cccfee1e84
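
A sketch of how to pull out and decode that token; the exact secret name is generated by Kubernetes, so look it up first:

```sh
# Find the secret backing the tiller service account (the name will differ).
kubectl -n kube-system get secrets | grep tiller-token

# Extract and decode the token (replace the secret name with yours).
kubectl -n kube-system get secret tiller-token-xxxxx \
  -o jsonpath='{.data.token}' | base64 -d
```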

Store this somewhere; we'll see later where we're going to use it. Also, let's retrieve the IP of your Kubernetes master:

https://gist.github.com/a07e564191dcb859ec11bf62626711c8
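
Either kubectl or gcloud can give you the API server endpoint; for instance (cluster name and zone are placeholders):

```sh
# Shows the Kubernetes master endpoint, among other things.
kubectl cluster-info

# Or directly from GKE:
gcloud container clusters describe my-cluster --zone europe-west1-b \
  --format='value(endpoint)'
```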

The Drone-Helm plugin

We are going to use the drone-helm plugin to automatically execute our Helm command. This plugin expects two secrets: api_server and kubernetes_token.

So we're going to create these secrets in our Drone instance:

https://gist.github.com/61aebcfe738523a9418b95c4db6583dc
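
With the Drone CLI (0.8 syntax at the time of writing), that roughly translates to the following; the repository name and the values are placeholders:

```sh
drone secret add --repository you/dummy \
  --name api_server --value https://<master-ip>

drone secret add --repository you/dummy \
  --name kubernetes_token --value <decoded-tiller-token>
```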

And now it's time to configure our pipeline. I'll include the GCR part from the previous article as well as the drone-helm plugin usage:

https://gist.github.com/c98b64809ca2bb5188e59730fdeeef63

This is pretty self-explanatory if you read the docs, but I'll explain it anyway:

When there's a push on the master branch, we first build and push our Docker image to GCR. Then we execute the drone-helm plugin, giving it the path to our chart relative to our repository (helm/dummy). We name our release staging, in the staging namespace, and we use the tiller service account. We also wait for all the resources to be created or recreated before exiting. And since we're using the latest image, we specify that we want to recreate the pods using the recreate_pods option.
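
As a sketch of that Helm step (Drone 0.8 pipeline syntax; the drone-helm options below are my reading of the plugin, double-check them against its docs):

```yaml
  helm-staging:
    image: quay.io/ipedrazas/drone-helm
    chart: ./helm/dummy
    release: staging
    namespace: staging
    service_account: tiller
    values_files: ["helm/staging.yml"]
    wait: true
    recreate_pods: true
    secrets: [api_server, kubernetes_token]
    when:
      branch: master
      event: push
```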

That's it. Now every time we push to master, we're going to update our staging environment, given that all the tests pass.


Production

If you've followed along with this article series, you'll now understand what makes Helm so special. Let's create a new file and name it prod.yml (still in our helm/ directory):

https://gist.github.com/fce04712c52b542dbd57a3321a2e3301
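
A prod.yml sketch, mirroring the staging file; again the host and the static IP name are placeholders, and the TLS values come later in this article:

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: gce
    kubernetes.io/ingress.global-static-ip-name: dummy-prod-ip  # placeholder
  path: /*
  hosts:
    - dummy.example.com  # placeholder
```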

And that's it. Now let's add these few lines to our Drone pipeline:

https://gist.github.com/1b97aa9aa3ec138a6361151d570769e5
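
The production step is essentially the staging one restricted to tag events; a sketch, with the same caveats about the plugin options (image.tag overridden with the Drone-provided DRONE_TAG):

```yaml
  helm-prod:
    image: quay.io/ipedrazas/drone-helm
    chart: ./helm/dummy
    release: prod
    namespace: prod
    service_account: tiller
    values_files: ["helm/prod.yml"]
    values: "image.tag=${DRONE_TAG}"
    wait: true
    secrets: [api_server, kubernetes_token]
    when:
      event: tag
```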

And that's it: you now have a complete CI/CD pipeline that goes straight to production when you tag a new release on GitHub. It will build the Docker image, tag it with the given tag, the git commit's sha1, and the latest tag. It will then use Helm to deploy said image (using that tag) to our cluster.

TLS


We need to be able to handle TLS for our application, in both environments, staging and prod. We are going to handle that much like we did in the first part. So if you don't have cert-manager installed in your cluster, or don't have the ACME Issuer, head back to that part.

Certificate Manifest

Basically, we're going to templatize the Certificate manifest, and then modify our Ingress manifest so it can take our TLS secret into account.

Let's create a new file in our dummy/templates/ directory, and name it certificate.yaml:

https://gist.github.com/05f188225e74753c29a727d4c1ff941a
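
A sketch of what a templated Certificate could look like with the cert-manager API of the time (certmanager.k8s.io/v1alpha1) and HTTP-01 validation; the tls.* value names are my own assumptions and must match your values files:

```yaml
{{- if .Values.tls }}
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: {{ template "dummy.fullname" . }}
  labels:
    app: {{ template "dummy.name" . }}
    release: {{ .Release.Name }}
spec:
  secretName: {{ .Values.tls.secretName }}
  issuerRef:
    name: {{ .Values.tls.issuer }}
    kind: ClusterIssuer
  dnsNames:
    - {{ .Values.tls.hostname }}
  acme:
    config:
      - http01:
          ingress: {{ template "dummy.fullname" . }}
        domains:
          - {{ .Values.tls.hostname }}
{{- end }}
```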

We are now expecting a tls object in the provided values. So let's add it to our values.yaml so users know what values they can provide:

https://gist.github.com/558dd46eb7016963aaec55a558df4dab

This also adds a tls.apply value, which we'll see later.
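
Since the value names here are my own assumptions, a minimal sketch of that block in the default values.yaml could be:

```yaml
# Empty by default so the Certificate isn't rendered; real values are
# supplied per environment (staging.yml / prod.yml).
tls: {}
#  apply: false
#  secretName: dummy-tls
#  issuer: letsencrypt-prod
#  hostname: chart-example.local
```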

Let's edit our staging.yml user-provided values like so:

https://gist.github.com/717d330c861294f71de07d4c4a3ded9e

Ingress Modification

The tls.apply value has an important role to play here. This is a two-step deployment. First, we're going to deploy (simply push to master, if you will) with tls.apply set to false. This will trigger the deployment of our new Certificate, and just like we saw about TLS in the first part of this series, we will have to wait for the Certificate to create the appropriate secret.

Once the secret is created... well, we'll have to tell our Ingress to use it. That's where the tls.apply value comes in. Let's modify our dummy/templates/ingress.yaml file:

https://gist.github.com/4a7fcb199d4a94109a296711f4237379
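
The conditional TLS block in the Ingress could be sketched like this (same hypothetical tls.* value names), inside the spec section:

```yaml
{{- if .Values.tls.apply }}
  tls:
    - hosts:
        - {{ .Values.tls.hostname }}
      secretName: {{ .Values.tls.secretName }}
{{- end }}
```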

Here we're declaring that if the tls.apply value is true, then we can use the named secret as the TLS certificate for this endpoint.

And really, that's it. We're done. First, deploy with tls.apply set to false and wait for your certificate to be created, checking with the kubectl describe certificate staging-dummy --namespace=staging command. Once you see your certificate, simply deploy once more with tls.apply set to true.

Wait a bit. And tada, you have TLS!
