
Setting up a Kubernetes Cluster on DigitalOcean

Introduction

This guide will go over how to set up a single node Kubernetes cluster on DigitalOcean and is targeted towards those who have at least a basic understanding of Kubernetes architecture. In addition, it is assumed that you are using macOS and that you're comfortable using the command line. We'll cover the specifics of setting up and configuring the following:

  • Prerequisite Tools
  • Initial Cluster Setup
  • Helm/Tiller
  • Nginx Ingress
  • Load Balancer
  • Cert Manager
  • Persistent Volumes
  • Jenkins
  • Harbor (Private Docker / Helm Registry)
  • Kubernetes Dashboard

Prerequisite Tools

First things first. In order to set up and manage our cluster, we need to start with the basics and install a few tools beforehand. Let's start with Homebrew, Doctl, and Kubectl. These will be the tools we use to interact with and manage our Kubernetes cluster.

Homebrew

If you're a Mac user, you're probably already familiar with this. If not, Homebrew is a CLI package manager for macOS (and Linux). That's pretty much it. It allows you to install packages onto your machine and manage them with ease.

To set up Homebrew, simply run this command in your terminal.

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

This will pull the latest version of Homebrew and set everything up for you.

Note: Homebrew requires the system to have Xcode Command Line Tools installed. You may be prompted to install these. If you are, follow the instructions and you'll be good to go.

Once Homebrew is installed, you can invoke it with the brew command.
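
As a quick sanity check, you can confirm the installation and have Homebrew inspect your setup:

# Print the installed Homebrew version
brew --version
# Check for common problems with the Homebrew setup
brew doctor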

Doctl

Doctl is a command line tool created by the DigitalOcean team. It is used to interact with your DigitalOcean account and manage your resources via the command line.

We can install this via, wait for it... Homebrew! Simply run the following command:

brew install doctl

In order to use doctl, we'll need to authenticate with DigitalOcean by providing an access token. This can be created from the Applications & API section of the DigitalOcean Control Panel.

Log in to your DigitalOcean account and navigate to the Applications & API section. From here, select "Generate New Token", and then give the token a name. We'll need to grant the token read and write access to our resources.

Copy the newly generated token and run the following command in your terminal window:

doctl auth init

This will prompt for the auth token we just generated. Paste the token into the terminal session and doctl will authenticate with DigitalOcean. You should receive a confirmation message that the credentials were accepted:

Validating token: OK
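
If you'd like to double-check that doctl can now reach your account, one quick way is:

# Print basic details about the authenticated DigitalOcean account
doctl account get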

Kubectl

Kubectl is the command line tool that we will be using to issue commands to our Kubernetes cluster. Anything we need to do to our cluster will be handled via this tool.

We can use Homebrew to install this as well.

brew install kubernetes-cli

In order for kubectl to be able to interact with our cluster, we'll need to supply our cluster's credentials (config file). Since we do not yet have a cluster set up, we'll take care of this in the coming steps.

Initial Cluster Setup

Now that we have our basic tools ready to go, we can initialize our cluster with DigitalOcean. This can be done from the DigitalOcean control panel; however, we'll be using doctl for consistency's sake.

Cluster Creation

Note: At the time of writing, Kubernetes support for doctl is in beta. As such, the command's syntax may change. Please refer to the doctl documentation if the following commands do not work as intended.

Here are the specs that we want to configure our cluster with:

  • Name: kube-cluster
  • Region: NYC1
  • Version: 1.13.2
  • Size: 1 CPU, 2GB Ram
  • Node Count: 1

This guide will use these values; however, feel free to change any of them and tailor the cluster to your own needs.

It is important to note that our Kubernetes cluster must be created in a region that also supports Block Storage Volumes. We'll use Block Storage for Jenkins, as well as Harbor. More on that later.

To create a cluster via the command line in accordance with these specs, we'd run the following:

doctl kubernetes cluster create kube-cluster \
  --region nyc1 \
  --version 1.13.2-do.0 \
  --size s-1vcpu-2gb \
  --count 1

Again, feel free to replace any of these parameters to suit your own needs. This will take several minutes to complete. Once complete, the command will have copied the cluster config to ~/.kube/config.
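
If you'd like to keep an eye on the provisioning status while you wait, one option is to list your clusters with doctl:

# Show the clusters on the account along with their current status
doctl kubernetes cluster list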

Manually Acquiring The Cluster Config File

Once the cluster has been initialized, you will be able to download the config file so that kubectl can access the cluster.

doctl kubernetes cluster kubeconfig save kube-cluster

This command will copy the kubeconfig file to ~/.kube/config on your local machine. If you would like to see the contents of the kubeconfig file, you can run the command and replace save with show.

doctl kubernetes cluster kubeconfig show kube-cluster

Naturally, if you opted to name your cluster something other than kube-cluster you would replace the name with your own.

Using The Proper Context

Whether the cluster's config file was copied automatically or manually, we'll need to tell kubectl to use the correct context so that it can access our cluster.

We can use kubectl to view the list of all available contexts.

kubectl config get-contexts

This will return a list of the contexts available in our ~/.kube/config file.

CURRENT   NAME                   CLUSTER                AUTHINFO                     NAMESPACE
          do-nyc1-kube-cluster   do-nyc1-kube-cluster   do-nyc1-kube-cluster-admin

Given that we only have one entry here, that's what we'll use. Set the context to use do-nyc1-kube-cluster.

kubectl config use-context do-nyc1-kube-cluster

Now any commands we use with kubectl will be issued to our DigitalOcean cluster.
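
As a quick sanity check that kubectl is pointed at the right place:

# Confirm the active context and verify that our single node reports Ready
kubectl config current-context
kubectl get nodes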

Helm / Tiller

Helm is a client-side package manager for Kubernetes, while Tiller is the component that runs on the target cluster to manage said packages. In order to set up and run Helm and Tiller, we first need to install Helm with Homebrew.

brew install kubernetes-helm

With this installed, we'll be able to issue commands via helm.

We're not quite done, however. We still need to set up Tiller on our cluster. To do this we'll need to set up a Service Account and a Cluster Role Binding on our Kubernetes cluster. Run the following to take care of that.

kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

Once these are set up, we can initialize Helm and install Tiller onto our cluster.

helm init --service-account tiller

It will take a few minutes for the resources to become available, so sit tight.
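
If you'd rather watch than wait, a simple check (assuming the default deployment name of tiller-deploy) is:

# Block until the Tiller deployment has finished rolling out
kubectl -n kube-system rollout status deployment tiller-deploy
# Once Tiller is ready, this should report both a client and a server version
helm version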

Nginx Ingress

If we're going to be exposing our applications and services on our cluster, we're going to need a way to access them. The way we do this is via an Ingress. In this specific instance, we'll be using an Nginx Ingress.

Note: This will also create a DigitalOcean Load Balancer, a paid service offered by DigitalOcean. At the time of writing, the base cost is $10/mo.

Installing the Ingress is easy. Using helm, all we have to do is run:

helm upgrade --install nginx-ingress --namespace nginx-ingress stable/nginx-ingress

That's it. Helm will set up everything for us. In order to access our applications we'll need to configure the Ingress. However, that will need to be done on an individual basis, so we'll save that for when we set up an application later.
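
To confirm that the controller is running and that DigitalOcean has assigned the Load Balancer an external IP, you can inspect the controller's service (the chart should name it nginx-ingress-controller):

# EXTERNAL-IP will read <pending> until the Load Balancer has been provisioned
kubectl -n nginx-ingress get svc nginx-ingress-controller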

Load Balancer

In order for us to properly utilize our Load Balancer and Nginx Ingress, we need to point the incoming traffic from our domain name to the newly created Load Balancer. This can be done by mapping an A record on our domain name to the IP address of the Load Balancer.

Note: This requires your domain to be managed by DigitalOcean. How to handle this is outside of the scope of this guide. However, for more information please visit DigitalOcean's official documentation on the matter.

We can get Load Balancer information via doctl.

doctl compute load-balancer list -o json

We specify -o json because the command prints a large amount of information about the available Load Balancers, and the JSON output is easier to read. Simply locate the IP address of your Load Balancer and take note of it. If you want to grab only the IP address, you can filter the results with a tool called jq, which allows traversal and manipulation of JSON objects via the command line. It can be installed with the command brew install jq.

doctl compute load-balancer list -o json | jq -r '.[0].ip'

Creating an A record for our domain name can also be done via the command line using doctl. We'll need our domain name, the address that we would like to forward, and the IP address of the Load Balancer we want to send the traffic to.

For example: my domain name is uptilt.io, the address that I want to forward is *.guide, and for the purposes of this guide, the IP address of the Load Balancer that I want to forward traffic to is 178.128.132.173. By forwarding *.guide as a wildcard, anything under guide.uptilt.io will be sent to the Load Balancer. This includes things like jenkins.guide.uptilt.io and harbor.guide.uptilt.io.

Let's set up the record now.

doctl compute domain records create uptilt.io \
  --record-type A \
  --record-name *.guide \
  --record-data 178.128.132.173 \
  --record-priority 0 \
  --record-ttl 1800 \
  --record-weight 0

With this command, we're telling doctl to create a record on the uptilt.io domain, with a type of A. The record name is *.guide, and we're forwarding the traffic to the Load Balancer with an IP address of 178.128.132.173. Just like we wanted. Don't forget to replace the information with your own to set everything up properly.
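
Once the record has been created, you can verify it from the command line. The dig lookup below uses a made-up subdomain purely as an example; any name under the wildcard should resolve to the Load Balancer's IP once DNS has propagated.

# Confirm the new A record exists on the domain
doctl compute domain records list uptilt.io
# Resolve an arbitrary subdomain covered by the wildcard
dig +short anything.guide.uptilt.io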

Cert Manager

When we host our applications, we'll want to host them with an applicable TLS certificate. Cert Manager is what will manage our certificates. It will issue and renew TLS certificates for our applications automatically so that we won't have to think about it. The Issuer we will be working with is Let's Encrypt. Let's Encrypt offers free TLS certificates and is incredibly easy to set up and use with Cert Manager.

We'll start by installing Cert Manager onto our cluster with helm.

helm upgrade --install cert-manager --namespace kube-system stable/cert-manager

As with the previous installations, it will take a few moments for the applicable resources to become available.

Simply installing the Cert Manager is not enough. We will need to configure Cert Manager to use our designated Issuer. This is done by setting up something called Cluster Issuers. Let's do that now.

We want to specify two Issuers: a staging issuer and a production issuer. The reason for this is that the production issuer enforces rate limits, and we don't want to hit those limits until we are ready to host our applications in a production environment.

Here's what a Cluster Issuer looks like:

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: [PLACE_YOUR_EMAIL_HERE]
    http01: {}
    privateKeySecretRef:
      name: letsencrypt-staging
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [PLACE_YOUR_EMAIL_HERE]
    http01: {}
    privateKeySecretRef:
      name: letsencrypt-prod

We've got two Cluster Issuers here: letsencrypt-staging and letsencrypt-prod. Both need a valid email address to associate with the certificates. Copy this to a file called cluster-issuers.yaml and replace [PLACE_YOUR_EMAIL_HERE] with your email address.

Note: We'll be using the staging environment for this guide, however when everything is ready for production, it will need to be updated to use the production issuer.

After filling in the email, apply the configuration to the cluster.

kubectl apply -f /path/to/cluster-issuers.yaml

This will initialize the scaffolding we need to set up TLS certificates for our applications. Issuing the certificates will be handled when configuring the Ingress for each application.
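
If you'd like to confirm that both Issuers were created, Cert Manager registers them as cluster-scoped resources that kubectl can list:

# Both letsencrypt-staging and letsencrypt-prod should appear here
kubectl get clusterissuers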

Persistent Volumes

The topic of Persistent Volumes (PV) and Persistent Volume Claims (PVC) is something that should be clarified and explained rather than set up all at once. In this guide, we'll need a few Persistent Volumes: a PV for Jenkins so that we can keep all of our configuration, plus multiple PVs for Harbor. The examples here should give enough insight to set up PVs for other applications outside the scope of this guide without much hassle.

Let's clarify a few things first. Kubernetes can create persistent volumes dynamically. Great, that's good, right? Sure. The downside? All of the PVs created by Kubernetes will have a hash for an ID, as well as for the name. This makes it a bit difficult to distinguish which PV is which.

Say we let Kubernetes create a PV for Jenkins: everything is taken care of and we don't have to worry about a thing. The PV gets created automatically and gets mounted when the container is created. Great, right? Eh... kind of. What happens when we need to spin up a new cluster for some reason or another? Our PV is named with a hash, so we don't know which PV belongs to our Jenkins instance without digging through our configs.

PVs by nature are meant to be long-term storage, which means outliving our Kubernetes clusters. With this in mind, it's a good idea to create our PVs ourselves beforehand and then attach them to our clusters as needed. Doing it this way will simplify maintaining our PVs in the future.

Setting up an existing PV, however, is not as straightforward as it might seem, at least at the moment with DigitalOcean. What we'll need to do is create a Block Storage Volume in DigitalOcean. Afterwards, in Kubernetes, we'll need a custom PV, as well as a PVC, for our existing storage.

Creating a Block Storage Volume can be done from the command line via doctl.

doctl compute volume create my-volume-name \
	--region nyc1 \
	--fs-type ext4 \
	--size 5GiB

Note: It's important to clarify that in order for existing storage to be accessible by a cluster, it must exist in the same region as the cluster.

To use an existing volume in a Kubernetes cluster, we need to create a custom PV, along with a PVC (Persistent Volume Claim) and apply the configuration to the cluster. Here's an example of a yaml file with both:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-volume-name
  annotations:
    pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
spec:
  storageClassName: do-block-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: dobs.csi.digitalocean.com
    fsType: ext4
    volumeHandle: fbf24f32-15e9-11e9-a0e0-0a58ac1442e8
    volumeAttributes:
      com.digitalocean.csi/noformat: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-volume-name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage

There are a couple of key points in this configuration. Let's break it down and go over each of them.

Persistent Volume Config
Key Explanation
spec.persistentVolumeReclaimPolicy This should be set to Retain so that our volumes are not deleted when they are removed from the cluster.
spec.capacity.storage This must match the size of the target Block Storage Volume in DigitalOcean.
spec.accessModes This must be set to ReadWriteOnce as DigitalOcean does not currently support any other mode.
spec.csi.fsType This must match the filesystem format of the target Block Storage Volume in DigitalOcean.
spec.csi.volumeHandle This is the ID of the target Block Storage Volume in DigitalOcean.
spec.csi.volumeAttributes This must have com.digitalocean.csi/noformat: "true" as an attribute so that the storage driver does not format the data that currently exists on the volume.

Note: The Volume ID/Handle can be found by running doctl compute volume list. The Volume ID/Handle is not the name of the Block Storage Volume.

Persistent Volume Claim Config
Key Explanation
spec.accessModes This must be set to ReadWriteOnce as DigitalOcean does not currently support any other mode.
spec.resources.requests.storage This must match the size of the target Block Storage Volume in DigitalOcean.

It is also important to note that metadata.name must be the same value for both of these resources (at least as far as I'm aware).

This is more of a general overview of how to handle Persistent Volumes and Persistent Volume Claims in Kubernetes. If this is a bit confusing, sit tight and we'll cover the specifics for setting up our Jenkins instance, as well as Harbor next.

Jenkins

If we're going to be managing a Kubernetes cluster, we'll need a way to manage a CI/CD pipeline. While this guide does not go into the specifics of how to implement a CI/CD pipeline, we'll be setting up a popular CI/CD tool: Jenkins.

Persistent Volume Configuration

Note: DigitalOcean charges for Block Storage Volumes. Make sure to check their pricing scheme for the charges that will be incurred as a result. At the time of writing, the cost to host the 5GiB Block Storage Volume for Jenkins is $0.50/mo.

As outlined in the previous section, we'll need a Persistent Volume to store our Jenkins configuration. Let's go ahead and set that up now.

Create a Block Storage Volume in DigitalOcean named pvc-jenkins, located in nyc1, and give it a size of 5GiB.

doctl compute volume create pvc-jenkins \
	--region nyc1 \
	--fs-type ext4 \
	--size 5GiB

This command will return the information on the newly created volume. Take note of the ID associated with the volume.

ID                                      Name           Size     Region    Filesystem Type    Filesystem Label    Droplet IDs
688b54e7-2377-11e9-a0e0-0a58ac1442e8    pvc-jenkins    5 GiB    nyc1      ext4
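
If you'd rather not eyeball the table, jq (installed earlier) can pull the ID out directly. This assumes the JSON output uses lowercase name and id fields.

# Print only the ID of the pvc-jenkins volume
doctl compute volume list -o json | jq -r '.[] | select(.name == "pvc-jenkins") | .id'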

With the volume created in DigitalOcean, we'll need to set up a custom PV and PVC so that we may access the volume. Create a file named pvc-jenkins.yaml and paste the following:

apiVersion: v1
kind: Namespace
metadata:
  name: jenkins
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-jenkins
  namespace: jenkins
  annotations:
    pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
spec:
  storageClassName: do-block-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: dobs.csi.digitalocean.com
    fsType: ext4
    volumeHandle: [VOLUME_ID_GOES_HERE]
    volumeAttributes:
      com.digitalocean.csi/noformat: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-jenkins
  namespace: jenkins
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage

Note: We specify the namespace jenkins here. This is because we will be installing our Jenkins instance into the jenkins namespace. We want to make sure that our persistent storage is in the same namespace, else we will not be able to access the volume.

Make sure to replace [VOLUME_ID_GOES_HERE] with the ID of your own Block Storage Volume. Remember, this can be found by running doctl compute volume list.

Once the file has been created and the volumeHandle updated to reflect your own Block Storage Volume, apply the config with kubectl.

kubectl apply -f /path/to/pvc-jenkins.yaml
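
Before moving on, it's worth confirming that the claim bound to the volume we created:

# The pvc-jenkins claim should report a STATUS of Bound
kubectl -n jenkins get pvc pvc-jenkins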

Installing Jenkins

Now that we've got our persistent storage configured, let's move on to setting up the actual Jenkins instance. We'll be using helm to install Jenkins, and as such, we need to specify a config file to tweak it to our needs.

Create a jenkins-values.yaml file and paste the following contents.

fullnameOverride: "jenkins"

Master:
  ImageTag: "lts-alpine"
  ImagePullPolicy: Always

  HostName: [YOUR_HOSTNAME_HERE]

  ServiceType: ClusterIP
  ServicePort: 8080

  InstallPlugins:
    - kubernetes:1.13.1
    - workflow-job:2.31
    - workflow-aggregator:2.6
    - credentials-binding:1.17
    - git:3.9.1

  Ingress:
    ApiVersion: extensions/v1beta1
    Annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
      certmanager.k8s.io/cluster-issuer: letsencrypt-staging
      ingress.kubernetes.io/secure-backends: "true"

    TLS:
    - secretName: jenkins-tls-staging

Persistence:
  Enabled: true
  ExistingClaim: pvc-jenkins
  Size: 5Gi

rbac:
  install: true

Let's go over these options so that we can better understand what's going on.

Key Explanation
fullnameOverride This is simply what we want to call our Jenkins installation in Kubernetes.
Master.ImageTag We're specifying lts-alpine here because it is a very lightweight package. At the time of writing, the full lts package weighs in at 322MB. The lts-alpine version is a nice 170MB.
Master.ImagePullPolicy We set this to Always so that we don't have to worry about any weird caching issues with images.
Master.HostName This is where you will specify the domain you wish to host Jenkins on. For example, in the scope of this article, I'll be hosting Jenkins at jenkins.guide.uptilt.io.
Master.ServiceType We want to set this to ClusterIP as we're using an external load balancer to take care of serving the application.
Master.ServicePort This is simply the port we wish to use to access Jenkins. Since we're forwarding traffic through a load balancer, we can leave this as the default 8080.
Master.InstallPlugins These are simply the plugins we want to install with our Jenkins instance. Feel free to install whichever plugins you need for your use case.
Master.Ingress.Annotations Here is where we define the annotations needed for our Ingress. These are mainly needed for TLS/SSL support. Take note of certmanager.k8s.io/cluster-issuer. The value we specify for this is letsencrypt-staging. This is telling the ingress to use our staging Cluster Issuer that we set up earlier. We'll get a dummy TLS certificate from Let's Encrypt so that we can make sure our setup works. When using this in a production environment, change the issuer to letsencrypt-prod so you can get a production TLS certificate.
Master.Ingress.TLS We specify the key secretName here to tell our Cluster Issuer where to store the TLS certificate that is generated. This will use Kubernetes' secret store and it will place it in a secret with the name specified here. Naturally, if using a production environment, it would make sense to change the name to something along the lines of jenkins-tls-prod.
Persistence.Enabled This tells Jenkins whether or not to use persistent storage.
Persistence.ExistingClaim If using an existing PVC (we set this up earlier), this is where we specify the name of the PVC. The PVC must be in the same namespace as the Jenkins instance, otherwise it will not find the volume.
Persistence.Size This tells Jenkins the size of the volume. If using an existing volume, set this to the size of the existing volume to avoid conflicts.
rbac.install This tells Jenkins if we would like to use RBAC. This will grant Jenkins privileges to manage certain Kubernetes services.

With all of this in mind, change Master.HostName to the host name of your choice, tweak any other settings to your liking, and install Jenkins using helm, making sure to specify our config file as an override.

helm upgrade --install jenkins --namespace jenkins stable/jenkins -f /path/to/jenkins-values.yaml

After a few minutes, the Jenkins instance will be reachable at the host name specified. As an example, in the scope of this guide, the server is reachable at https://jenkins.guide.uptilt.io. Since we're using a staging TLS certificate, your browser may flag the TLS certificate as "unsafe". Make sure to add the TLS certificate to the list of trusted certificates if this happens.
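
If you'd rather watch the rollout than refresh the browser, something like the following works (assuming the deployment picks up the jenkins name from fullnameOverride):

# Wait for the Jenkins deployment to finish rolling out
kubectl -n jenkins rollout status deployment jenkins
# Confirm that the Ingress picked up the host name we configured
kubectl -n jenkins get ingress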

Now that we can access our Jenkins instance, we need to be able to log into the server. At the bottom of the output from the install command, it says that we need to run a command to obtain our admin password.

printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo

Run this command and log in using the username admin and the password returned from the command. Once this is taken care of, our Jenkins instance is officially up and running!

Harbor

In addition to Jenkins, we're also going to be setting up a private Docker and Helm registry. We're going to use an application called Harbor to do this. Harbor is an open source solution for hosting both of these for us on our Kubernetes cluster.

Installing Harbor isn't as intuitive as Jenkins, so let's walk through it step by step.

Persistent Volume Configuration

Note: DigitalOcean charges for Block Storage Volumes. Make sure to check their pricing scheme for the charges that will be incurred as a result. At the time of writing, the cost to host all of our Block Storage Volumes for Harbor is $1.30/mo.

Harbor requires a total of five volumes. We need a volume for the registry and chartmuseum, as well as the database, job service, and redis cache.

Let's go ahead and create the volumes using doctl.

doctl compute volume create pvc-harbor-registry --region nyc1 --fs-type ext4 --size 5GiB
doctl compute volume create pvc-harbor-chartmuseum --region nyc1 --fs-type ext4 --size 5GiB
doctl compute volume create pvc-harbor-jobservice --region nyc1 --fs-type ext4 --size 1GiB
doctl compute volume create pvc-harbor-database --region nyc1 --fs-type ext4 --size 1GiB
doctl compute volume create pvc-harbor-redis --region nyc1 --fs-type ext4 --size 1GiB
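
Before building the manifest, it may help to dump the names and IDs of the five new volumes in one go (again assuming lowercase name and id fields in the JSON output):

# Print "name id" pairs for every volume whose name starts with pvc-harbor
doctl compute volume list -o json | jq -r '.[] | select(.name | startswith("pvc-harbor")) | "\(.name) \(.id)"'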

Once these volumes are created, we need to map the volumes to a PV/PVC in Kubernetes. Create a file named pvc-harbor.yaml and paste the following contents.

# Namespace - Harbor
apiVersion: v1
kind: Namespace
metadata:
  name: harbor-registry
---
# PV - Harbor Registry
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-harbor-registry
  namespace: harbor-registry
  annotations:
    pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
spec:
  storageClassName: do-block-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: dobs.csi.digitalocean.com
    fsType: ext4
    volumeHandle: [VOLUME_ID_GOES_HERE]
    volumeAttributes:
      com.digitalocean.csi/noformat: "true"
---
# PVC - Harbor Registry
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-harbor-registry
  namespace: harbor-registry
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
# PV - Harbor Chart Museum
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-harbor-chartmuseum
  namespace: harbor-registry
  annotations:
    pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
spec:
  storageClassName: do-block-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: dobs.csi.digitalocean.com
    fsType: ext4
    volumeHandle: [VOLUME_ID_GOES_HERE]
    volumeAttributes:
      com.digitalocean.csi/noformat: "true"
---
# PVC - Harbor Chart Museum
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-harbor-chartmuseum
  namespace: harbor-registry
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
# PV - Harbor Job Service
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-harbor-jobservice
  namespace: harbor-registry
  annotations:
    pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
spec:
  storageClassName: do-block-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: dobs.csi.digitalocean.com
    fsType: ext4
    volumeHandle: [VOLUME_ID_GOES_HERE]
    volumeAttributes:
      com.digitalocean.csi/noformat: "true"
---
# PVC - Harbor Job Service
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-harbor-jobservice
  namespace: harbor-registry
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage
---
# PV - Harbor Database
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-harbor-database
  namespace: harbor-registry
  annotations:
    pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
spec:
  storageClassName: do-block-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: dobs.csi.digitalocean.com
    fsType: ext4
    volumeHandle: [VOLUME_ID_GOES_HERE]
    volumeAttributes:
      com.digitalocean.csi/noformat: "true"
---
# PVC - Harbor Database
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-harbor-database
  namespace: harbor-registry
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage
---
# PV - Harbor Redis
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-harbor-redis
  namespace: harbor-registry
  annotations:
    pv.kubernetes.io/provisioned-by: dobs.csi.digitalocean.com
spec:
  storageClassName: do-block-storage
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: dobs.csi.digitalocean.com
    fsType: ext4
    volumeHandle: [VOLUME_ID_GOES_HERE]
    volumeAttributes:
      com.digitalocean.csi/noformat: "true"
---
# PVC - Harbor Redis
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-harbor-redis
  namespace: harbor-registry
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage

Don't forget to replace [VOLUME_ID_GOES_HERE] with the corresponding ID applicable to your account. Remember, these can be found using doctl compute volume list. Once that has been taken care of, apply the changes with kubectl.

kubectl apply -f /path/to/pvc-harbor.yaml
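
As with Jenkins, it's worth confirming that all five claims bound before installing the chart:

# Every pvc-harbor-* claim should report a STATUS of Bound
kubectl -n harbor-registry get pvc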

Now we can take care of installing Harbor.

Harbor Installation Configuration

Harbor's helm chart is maintained by the community and is not listed in the default stable repository. Instead, to install Harbor we need to clone the git repository and install the chart from our local machine. Before we can install Harbor, however, we need to define a config file in a similar way to what we did with Jenkins. Start by cloning the repository and checking out version 1.0.0.

git clone https://github.com/goharbor/harbor-helm && cd harbor-helm
git checkout 1.0.0

Let's define our config file as well. Copy the following into a file named harbor-values.yaml on your local machine.

imagePullPolicy: Always

externalURL: [EXTERNAL_URL_GOES_HERE]

harborAdminPassword: "Harbor12345"
secretKey: "VL6EiEC2uyQN525Q"

expose:
  type: ingress
  tls:
    enabled: true
    secretName: "harbor-registry-tls-staging"
    notarySecretName: "harbor-notary-tls-staging"
  ingress:
    hosts:
      core: [CORE_URL_GOES_HERE]
      notary: [NOTARY_URL_GOES_HERE]
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
      certmanager.k8s.io/cluster-issuer: letsencrypt-staging

persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      existingClaim: "pvc-harbor-registry"
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: "pvc-harbor-chartmuseum"
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: "pvc-harbor-jobservice"
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    database:
      existingClaim: "pvc-harbor-database"
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    redis:
      existingClaim: "pvc-harbor-redis"
      storageClass: "-"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi

There are a few values here that need to be changed to match your individual configuration. Let's go over what we're configuring.

Key Explanation
imagePullPolicy We set this to Always so that we don't have to worry about any weird caching issues with images.
externalURL This is the URL that you wish to access the registry from. For example, in the scope of this guide, the URL would be https://harbor.guide.uptilt.io.
harborAdminPassword This is the default administrator password that is set when the application is first set up. This can be set to whatever you like, however it would be a good idea to change the administrator password from the application once it is up and running.
secretKey This value is used for encryption and should be set to a random 16 character string.
expose.type This tells the helm chart how we want to expose our registry. We're going with ingress for this one.
expose.tls.enabled This tells the helm chart that we would like to use TLS encryption.
expose.tls.secretName This is the name of the secret where the TLS certificate will be stored.
expose.tls.notarySecretName This is the name of the secret where the notary TLS certificate will be stored.
expose.ingress.hosts.core This is the domain name that the core application will be accessible from, without the protocol prefix. For example, in the scope of this guide, the domain would be harbor.guide.uptilt.io.
expose.ingress.hosts.notary This is the domain name that the notary will be accessible from, without the protocol prefix. For example, in the scope of this guide, the domain would be notary.harbor.guide.uptilt.io.
expose.ingress.annotations These are the annotations that will be passed to the ingress when the application is installed.
persistence.enabled This specifies whether or not we would like to have our data persist beyond the life of the container.
persistence.resourcePolicy This specifies what we would like to do with our persistent storage when the container's lifecycle is over. Naturally, we want this set to keep.
persistence.persistentVolumeClaim.*.existingClaim This is the name of the PVC that we would like to use for the defined resource. The PVC must be in the same namespace as the installation.
persistence.persistentVolumeClaim.*.storageClass This is the storage class that we would like to use if we are to leverage dynamically generated volumes. Since we are using existing volumes, this should be set to - so that it is ignored.
persistence.persistentVolumeClaim.*.subPath This specifies a subdirectory to use to store the data on the volume. For our purposes we do not need this.
persistence.persistentVolumeClaim.*.accessMode This specifies the access mode for the persistent volume. This must be set to ReadWriteOnce, as it is the only mode DigitalOcean supports.
persistence.persistentVolumeClaim.*.size This is the desired size of the volume. Since we are using existing volumes, this must match the size of the volume being used.

Installing Harbor

Once everything has been configured as needed, we can proceed onward to installing Harbor.

helm upgrade --install harbor-registry --namespace harbor-registry ./path/to/harbor-helm -f /path/to/harbor-values.yaml

Note: You might notice the ./path/to/harbor-helm argument here. Since we are not using the stable repository, we must point helm at the chart on our local machine. Passing a filesystem path instead of a repository/chart name tells helm to install a local chart; the path should be the location of the git repository we cloned earlier.

This will install Harbor onto our Kubernetes cluster using the values we specified in the configuration section. Harbor takes a few more minutes to spin up than our previous installs, so don't fret.
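
If you'd like to follow along while Harbor starts up, watching the pods in its namespace is an easy way to tell when everything is ready:

# Watch the Harbor pods until they all report Running (Ctrl+C to stop watching)
kubectl -n harbor-registry get pods -w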

Once Harbor finishes initializing, we're free to log in using the administrator credentials we set in the config. Now would be a good time to change those as well. To log in to Harbor, simply navigate to the externalURL we specified in the config. For this guide, that's https://harbor.guide.uptilt.io.
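
Once you've switched to the production issuer and have a trusted certificate, the registry can also be used from the Docker CLI (with the staging certificate, Docker will likely reject the connection as untrusted). The host name below is the one from this guide; substitute your own.

# Authenticate the local Docker client against the private registry
docker login harbor.guide.uptilt.io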

Kube Dashboard

For users who prefer to have a visual representation of their Kubernetes cluster, there's the Kubernetes Dashboard UI. This is an easy one to install and get set up, so don't worry too much.

Installing The Helm Chart

There are two steps to get going with the Kubernetes Dashboard: the first is installing it via the helm chart.

helm upgrade --install kubernetes-dashboard --namespace kube-system stable/kubernetes-dashboard --set fullnameOverride="kubernetes-dashboard"

Simple enough. We're not quite done, however. In order to access the application, we need to use a proxy. We can use the built-in Kubernetes proxy to access the dashboard.

kubectl proxy

This will expose the Kubernetes API at http://localhost:8001. With the proxy running, we can navigate to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:https/proxy/ in our browser.

Granting Access

Now that we have our dashboard installed and have access to it via the proxy, we need to grant ourselves access. We're going to go the route of using tokens for access in this guide. To do that, create a file named dashboard-auth.yaml and paste the following.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Since there is nothing to configure in this snippet, simply apply the configuration with kubectl.

kubectl apply -f /path/to/dashboard-auth.yaml

This will set up a service account for the dashboard that we can use to log in. To get its token, we'll need to run a command.

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep 'token:' | awk -F " " '{print $2}'

A fairly lengthy command, but this will grab the token we need for the dashboard. If we want to copy it straight to the clipboard, just pipe the output to pbcopy at the end of the command.

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep 'token:' | awk -F " " '{print $2}' | pbcopy

Paste the returned token into the dashboard and you're good to go!

Conclusion

That's it, we're finished! Pat yourself on the back and take your new Kubernetes cluster for a spin. In this guide, we've set up the required components for a basic Kubernetes cluster. We've gone through what it takes to spin up the cluster using DigitalOcean and have installed our package management software, Helm/Tiller. Our cluster now has an Nginx Ingress to serve our applications, as well as Cert Manager to make sure that we keep TLS encryption up to snuff. Jenkins and Harbor are configured exactly to our needs and can be used however we see fit. We even have persistent storage under control for easy long-term management.

There are a lot of moving parts, but when it's broken down into smaller segments it's nowhere near as intimidating as it seems. From here you can use Jenkins and Harbor to set up your own CI/CD pipeline that suits your individual needs, or just dive right in and start deploying whichever applications you desire to the cluster.


Written by Rick Bennett

Last Updated: February 17th, 2019
