Kubernetes on DigitalOcean with CoreOS

Let's look at an example of how to launch a Kubernetes cluster from scratch on DigitalOcean using kubeadm, then add an Nginx Ingress controller and Letsencrypt certificates.

Overview

Environment

We'll be creating a four-node cluster (k8s-master, k8s-000...k8s-002), a load balancer, and SSL certificates.

Table of Contents

  1. Install Kubernetes
  2. Initialize Cluster
  3. Install CNI
  4. Create a Simple Service
  5. Nginx Ingress
  6. Load Balancer
  7. Install Helm
  8. Install Cert-Manager
  9. Letsencrypt SSL

Install Kubernetes

We're going to install Kubernetes onto four CoreOS servers.

Create Droplets

First, create four CoreOS-stable droplets, all in the same region, with private networking enabled and your SSH key added.
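
A sketch of creating them with doctl (assuming doctl is installed and authenticated; the region, size, and key fingerprint here are placeholders to adapt):

doctl compute droplet create k8s-master k8s-000 k8s-001 k8s-002 \
  --region nyc3 \
  --size s-2vcpu-4gb \
  --image coreos-stable \
  --ssh-keys YOUR_SSH_KEY_FINGERPRINT \
  --enable-private-networking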

Install Binaries from Official Repositories

On each of the servers, log in over SSH and install the following software.

SSH to CoreOS

CoreOS is set up with core as the primary user, and your SSH key was added to it when the droplet was created, so log in with ssh core@IP_ADDRESS.

Sudo Sudo

Most of these commands require root, so start by switching to the root user with sudo su.

Start & Enable Docker

First things first, start up the Docker daemon.

systemctl enable docker && systemctl start docker

Install CNI Plugin

Kubernetes requires a Container Network Interface (CNI) network add-on, and most add-ons require these CNI plugin binaries.

CNI_VERSION="v0.6.0"
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz

Install kubeadm, kubelet, kubectl

Download the kubeadm, kubelet, and kubectl official-release binaries.

RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
mkdir -p /opt/bin
cd /opt/bin
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}
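
A quick sanity check that the binaries downloaded correctly (version output will vary with the release):

/opt/bin/kubeadm version
/opt/bin/kubectl version --client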

Create K8s Services

Download the systemd service files, rewriting the binary paths to point at /opt/bin.

curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Start & Enable Kubelet

Kubelet is the primary Kubernetes service. Start and enable it.

systemctl enable kubelet && systemctl start kubelet
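
Note that until kubeadm writes its configuration, the kubelet will keep restarting; that's expected. You can check its state with:

systemctl status kubelet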

Initialize Cluster

Kubeadm is a newer tool that initializes a Kubernetes cluster following best practices. It is first run on the master, which prints another command to run on each additional node.

Initialize the Master

Use kubeadm to initialize a cluster on the private network, passing an address range to use for the pod network (which will be created by the CNI add-on).

priv_ip=$(ip -f inet -o addr show eth1|cut -d\  -f 7 | cut -d/ -f 1 | head -n 1)
/opt/bin/kubeadm init --apiserver-advertise-address=$priv_ip  --pod-network-cidr=192.168.0.0/16

There will be a kubeadm join command printed in the output. Copy it and run it on each node you want to join to the cluster.
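
If you lose the join command, a new one can be generated on the master (the --print-join-command flag is available in recent kubeadm releases):

/opt/bin/kubeadm token create --print-join-command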

Initialize the Workers

Run the kubeadm command from the output above to join the cluster.

ssh core@IP_ADDRESS
sudo /opt/bin/kubeadm ...

Access with Kubectl

The /etc/kubernetes/admin.conf file on the master contains all of the information needed to access the cluster.

Copy the admin.conf file to ~/.kube/config (where kubectl expects it to be). As the core user:

mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Kubectl Remotely

This file can also be used on other computers to control the cluster. On your laptop, install kubectl and copy this config file to administer the cluster.

scp core@IP_ADDRESS:/etc/kubernetes/admin.conf .kube/config
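
From your laptop, confirm the config works by listing the nodes:

kubectl get nodes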

Install CNI

Kubernetes does not include a container network by default, so you'll need to install one. There are many options; here's how I'm currently installing Calico.

kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
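
After a few minutes the Calico and DNS pods should be running and every node should report Ready:

kubectl get pods -n kube-system
kubectl get nodes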

Create A Simple Service

Next, we'll create a simple HTTP service.

Example Deployment

The example-com-controller Deployment will create and manage the example-com pods.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: example-com-controller
  labels:
    app: example-com
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-com
  template:
    metadata:
      name: example-com-pod
      labels:
        app: example-com
    spec:
      containers:
      - name: example-com-nginx
        image: nginx

Example Service

The example-com-service will expose port 80 of the example-com pods.

kind: Service
apiVersion: v1
metadata:
  name: example-com-service
  labels:
    app: example-com
spec:
  selector:
    app: example-com
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
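
Save both manifests to a file and apply them; the filename example-com.yaml is just an example:

kubectl apply -f example-com.yaml
kubectl get deployment,service -l app=example-com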

Nginx Ingress

The Nginx Ingress Controller provides a way to implement Ingress directives on a bare-metal Kubernetes cluster. These are the steps to install it (including RBAC roles) from the Kubernetes repo.

Namespace, Default Backend

Install the namespace, default backend, and configmaps. The default backend is where all traffic without a matching host will be directed.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml

Nginx Ingress Controller with RBAC Roles

Install the controller with RBAC roles.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml

NodePort Service

Install the service.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml

Patch the controller Deployment so that it uses the host network.

kubectl patch deployment nginx-ingress-controller -n ingress-nginx --patch '{"spec": {"template": {"spec": {"hostNetwork": true} } } }'
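
Once the controller pod restarts it should bind ports 80 and 443 directly on whichever node it is scheduled to; you can see which node with:

kubectl get pods -n ingress-nginx -o wide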

Load Balancer

Add a tag to each worker node (k8s-000...k8s-002), for example 'k8s-node'. Next, create a Load Balancer on DigitalOcean, pointed to the 'k8s-node' tag. It will automatically attach to all of the worker droplets, including new nodes as they're added.
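
As a sketch, the tagging and load balancer can also be created with doctl (the droplet IDs, load balancer name, region, and forwarding rules below are assumptions to adapt, and flag syntax may differ between doctl versions):

doctl compute tag create k8s-node
doctl compute droplet tag DROPLET_ID --tag-name k8s-node   # repeat for each worker droplet
doctl compute load-balancer create \
  --name k8s-nodes-lb \
  --region nyc3 \
  --tag-name k8s-node \
  --forwarding-rules entry_protocol:tcp,entry_port:80,target_protocol:tcp,target_port:80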

Install Helm

Helm is a package manager for Kubernetes, used to install pre-packaged application configurations (charts).

Helm can be installed with a script from the repo. If you've used kubeadm to set up the cluster, you'll also need to add a service account for Tiller, as shown below.

Install Helm

To install the Helm client, run the scripts/get installer from the repo.

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

Initialize Helm

Initialize Helm (which installs Tiller), create a service account for Tiller, and patch the Tiller deployment to use it.

helm init
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy --patch '{"spec": {"template": {"spec": {"serviceAccount": "tiller"} } } }'
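
After a minute or so Tiller should be running, and both the client and server versions should report:

kubectl get pods -n kube-system | grep tiller
helm version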

Install Cert-Manager

Cert-manager can be installed with Helm using the Chart in the repo.

git clone https://github.com/jetstack/cert-manager
cd cert-manager
git checkout v0.2.3 #latest version as of 2018-02-19
helm install \
  --name cert-manager \
  --namespace kube-system \
  contrib/charts/cert-manager
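
The chart installs a single cert-manager pod into kube-system; confirm it started:

kubectl get pods -n kube-system | grep cert-manager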

Letsencrypt SSL

An Issuer is a definition of a source for certificates. We'll create an issuer for letsencrypt-staging (which should always be used for testing to avoid hitting a rate limit).

Letsencrypt Staging Issuer

kind: Issuer
apiVersion: certmanager.k8s.io/v1alpha1
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging.api.letsencrypt.org/directory
    email: YOUR_EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}
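
Save the Issuer to a file (the filename below is just an example), apply it, and check that it registered with the ACME server:

kubectl apply -f letsencrypt-staging-issuer.yaml
kubectl describe issuer letsencrypt-staging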

Ingress Configuration

To configure an Ingress to automatically create and use a certificate, add the following annotations and tls properties.

Annotations

Add annotations to the metadata.

metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: 'http01'
    certmanager.k8s.io/issuer: 'letsencrypt-staging'

TLS Hosts

Add the tls hosts and secret to the spec.

spec:
  tls:
  - secretName: example-com-tls-staging
    hosts:
    - example.com
    - api.example.com
    - www.example.com
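
Complete Example

Putting it together, a complete Ingress for the earlier example service might look like the following sketch (the host names, Ingress name, and ingress.class annotation are assumptions to adapt):

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: example-com-ingress
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    certmanager.k8s.io/acme-challenge-type: 'http01'
    certmanager.k8s.io/issuer: 'letsencrypt-staging'
spec:
  tls:
  - secretName: example-com-tls-staging
    hosts:
    - example.com
    - www.example.com
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-com-service
          servicePort: 80
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-com-service
          servicePort: 80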