How to Setup Kubernetes on DigitalOcean with CoreOS

Kubernetes on DigitalOcean with CoreOS

Let's look at an example of how to launch a Kubernetes cluster from scratch on DigitalOcean, including kubeadm, an Nginx Ingress controller, and Letsencrypt certificates.

Overview

Environment

We'll be creating a four-node cluster (k8s-master, k8s-000...k8s-002), a load balancer, and SSL certificates.

Table of Contents

  1. Install Kubernetes
  2. Initialize Cluster
  3. Install CNI
  4. Create a Simple Service
  5. Nginx Ingress
  6. Load Balancer
  7. Install Helm
  8. Install Cert-Manager
  9. Letsencrypt SSL

Install Kubernetes

We're going to install Kubernetes onto four CoreOS servers.

Create Droplets

First create four CoreOS-stable droplets, all in the same region and with your ssh-key.
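
If you prefer the command line, the droplets can also be created with doctl. This is only a sketch: the region, size, and image slug below are assumptions, so verify them with doctl compute size list and doctl compute image list-distribution first.

# Assumed region, size, and image slug; verify before running.
doctl compute droplet create k8s-master k8s-000 k8s-001 k8s-002 \
  --region nyc3 \
  --size s-2vcpu-2gb \
  --image coreos-stable \
  --ssh-keys YOUR_SSH_KEY_FINGERPRINT \
  --enable-private-networking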

Install Binaries from Official Repositories

On each of the servers, log in over ssh and install the following software.

SSH to CoreOS

CoreOS is set up with core as the primary user, and your ssh key was added to that account when the droplet was created, so log in with ssh core@IP_ADDRESS.

Sudo Sudo

Most of these commands require root privileges, so start by switching to root with sudo su.

Start & Enable Docker

First things first, start up the Docker daemon.

systemctl enable docker && systemctl start docker

Install CNI Plugin

Kubernetes requires a container network interface (CNI) provider, and most of them depend on these CNI plugins.

CNI_VERSION="v0.6.0"
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz

Install kubeadm, kubelet, kubectl

Download the kubeadm, kubelet, and kubectl official-release binaries.

RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
mkdir -p /opt/bin
cd /opt/bin
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}
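
As a quick, optional sanity check, confirm the binaries downloaded correctly and are on the expected path:

/opt/bin/kubeadm version
/opt/bin/kubectl version --client
/opt/bin/kubelet --version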

Create K8s Services

Download the systemd service files.

curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Start & Enable Kubelet

Kubelet is the primary Kubernetes service. Start and enable it.

systemctl enable kubelet && systemctl start kubelet

Initialize Cluster

Kubeadm is a newer tool that initializes a Kubernetes cluster following best practices. It is first run on the master, which produces another command to run on each additional node.

Initialize the Master

Use kubeadm to initialize a cluster on the private network, including an address range to use for the pod network (created with CNI).

priv_ip=$(ip -f inet -o addr show eth1|cut -d\  -f 7 | cut -d/ -f 1 | head -n 1)
/opt/bin/kubeadm init --apiserver-advertise-address=$priv_ip  --pod-network-cidr=192.168.0.0/16

There will be a kubeadm join command printed in the output. Copy it and run it on each node you want to join to the cluster.
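
The join command will look roughly like the one below; the address, token, and hash are placeholders, so use the exact command from your own kubeadm init output.

sudo /opt/bin/kubeadm join MASTER_PRIVATE_IP:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:TOKEN_CA_CERT_HASH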

Initialize the Workers

Run the kubeadm command from the output above to join the cluster.

ssh core@IP_ADDRESS
sudo /opt/bin/kubeadm ...

Access with Kubectl

The /etc/kubernetes/admin.conf file on the master contains all of the information needed to access the cluster.

Copy the admin.conf file to ~/.kube/config (where kubectl expects it to be). As the core user:

mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
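
At this point kubectl should be able to reach the cluster; the nodes will report NotReady until a pod network is installed in the Install CNI step below.

kubectl get nodes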

Kubectl Remotely

This file can also be used on other computers to control the cluster. On your laptop, install kubectl and copy this config file to administer the cluster.

scp core@IP_ADDRESS:/etc/kubernetes/admin.conf .kube/config

Install CNI

Kubernetes does not install a container network by default, so you'll need to add one. There are many options; here's how I'm currently installing Calico.

kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
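
To verify the network came up, watch the kube-system pods until the Calico and DNS pods report Running (Ctrl-C to stop watching):

kubectl get pods -n kube-system -w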

Create A Simple Service

Next we'll create a simple http service.

Example Deployment

The example-com-controller Deployment will create and manage the example-com pods.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: example-com-controller
  labels:
    app: example-com
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-com
  template:
    metadata:
      name: example-com-pod
      labels:
        app: example-com
    spec:
      containers:
      - name: example-com-nginx
        image: nginx
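
Save the manifest as, say, example-com-deployment.yaml (the filename here is just an example), apply it, and confirm the pod starts:

kubectl apply -f example-com-deployment.yaml
kubectl get pods -l app=example-com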

Example Service

The example-com-service will expose port 80 of the example-com pods.

kind: Service
apiVersion: v1
metadata:
  name: example-com-service
  labels:
    app: example-com
spec:
  selector:
    app: example-com
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
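
Apply it the same way (again, the filename is only an example) and check that the service has picked up the pod as an endpoint:

kubectl apply -f example-com-service.yaml
kubectl get endpoints example-com-service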

Nginx Ingress

The Nginx Ingress Controller provides a way to implement Ingress directives on a baremetal Kubernetes Cluster. These are the steps to install it (including RBAC roles) from the Kubernetes repo.

Namespace, Default Backend

Install the namespace, default backend, and configmaps. The default backend is where all traffic without a matching host will be directed.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml

Nginx Ingress Controller with RBAC Roles

Install the controller with RBAC roles.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml

Nodeport Service

Install the service.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml

Patch the controller deployment so that it uses the host network.

kubectl patch deployment nginx-ingress-controller -n ingress-nginx --patch '{"spec": {"template": {"spec": {"hostNetwork": true} } } }'
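
To confirm the controller is running after the patch:

kubectl get pods -n ingress-nginx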

Load Balancer

Add a tag to each worker node (k8s-000...k8s-002), for example 'k8s-node'. Next, create a Load Balancer on DigitalOcean, pointed to the 'k8s-node' tag. It will automatically attach to all of the worker droplets, including new nodes as they're added.
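
This can also be sketched with doctl. The tag name, region, and forwarding rule below are illustrative, and flag names may differ across doctl versions (check doctl compute load-balancer create --help). Because the ingress controller uses the host network, the rule forwards straight to port 80 on the nodes.

# Tag each worker droplet (IDs from `doctl compute droplet list`).
doctl compute droplet tag WORKER_DROPLET_ID --tag-name k8s-node

# Create a load balancer targeting the tagged droplets.
doctl compute load-balancer create \
  --name k8s-lb \
  --region nyc3 \
  --tag-name k8s-node \
  --forwarding-rules entry_protocol:tcp,entry_port:80,target_protocol:tcp,target_port:80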

Install Helm

Helm is a package manager for Kubernetes, used to install applications packaged as charts.

Helm can be installed with a script from the repo. If you've used kubeadm to set up the cluster, then you'll need to add a service account for Tiller as well.

Install Helm

To install Helm, run the scripts/get installer script from the repo.

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

Initialize Helm

Initialize Helm, then create a service account for Tiller and grant it cluster-admin so it can install charts on an RBAC-enabled cluster.

helm init
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy --patch '{"spec": {"template": {"spec": {"serviceAccount": "tiller"} } } }'

Install Cert-Manager

Cert-manager can be installed with Helm using the Chart in the repo.

git clone https://github.com/jetstack/cert-manager
cd cert-manager
git checkout v0.2.3 #latest version as of 2018-02-19
helm install \
  --name cert-manager \
  --namespace kube-system \
  contrib/charts/cert-manager

Letsencrypt SSL

An Issuer is a definition of a source for certificates. We'll create an issuer for letsencrypt-staging (which should always be used for testing to avoid hitting a rate limit).

Letsencrypt Staging Issuer

kind: Issuer
apiVersion: certmanager.k8s.io/v1alpha1
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging.api.letsencrypt.org/directory
    email: YOUR_EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}
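
Apply the Issuer in the same namespace as your app (the filename is illustrative) and check that it registers with the ACME server:

kubectl apply -f letsencrypt-staging-issuer.yaml
kubectl describe issuer letsencrypt-staging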

Ingress Configuration

To configure an Ingress to automatically create and use a certificate, add the following annotations and tls properties.

Annotations

Add annotations to the metadata.

metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: 'http01'
    certmanager.k8s.io/issuer: 'letsencrypt-staging'

TLS Hosts

Add the tls hosts and secret to the spec.

spec:
  tls:
  - secretName: example-com-tls-staging
    hosts:
    - example.com
    - api.example.com
    - www.example.com
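
Putting the pieces together, an Ingress for the example service above might look like the following. This is a sketch: the hosts, the ingress name, and the kubernetes.io/ingress.class annotation are assumptions to illustrate how the annotations and tls block fit into a full manifest.

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: example-com-ingress
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    certmanager.k8s.io/acme-challenge-type: 'http01'
    certmanager.k8s.io/issuer: 'letsencrypt-staging'
spec:
  tls:
  - secretName: example-com-tls-staging
    hosts:
    - example.com
    - www.example.com
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-com-service
          servicePort: 80
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-com-service
          servicePort: 80

Once the example-com-tls-staging secret is populated and the staging certificate is being served, the issuer and secret can be switched over to a production Let's Encrypt issuer.
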
@gorbypark

There's a typo in the Nodeport Service section. kubectl patch deployment nginx-ingress-controller -n ingress-nginx --patch '{"spec": {"template": {"spec": {"hostNetwork": true} } is missing two curly brackets and a single comma. It should be kubectl patch deployment nginx-ingress-controller -n ingress-nginx --patch '{"spec": {"template": {"spec": {"hostNetwork": true} } } }'.
Thanks for the great write-up!

@jakoguta

I have followed the steps provided to setup a cluster. However, when trying to initialize the master using:

priv_ip=$(ip -f inet -o addr show eth1|cut -d\  -f 7 | cut -d/ -f 1 | head -n 1)
/opt/bin/kubeadm init --apiserver-advertise-address=$priv_ip  --pod-network-cidr=192.168.0.0/16

I get the following response with errors:

[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0731 14:34:29.335493    1853 kernel_validator.go:81] Validating kernel version
I0731 14:34:29.335651    1853 kernel_validator.go:96] Validating kernel config
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
        [WARNING Hostname]: hostname "coreos-s-2vcpu-2gb-ams3-01" could not be reached
        [WARNING Hostname]: hostname "coreos-s-2vcpu-2gb-ams3-01" lookup coreos-s-2vcpu-2gb-ams3-01 on <ip-address>: no such host
[preflight] Some fatal errors occurred:
        [ERROR FileExisting-crictl]: crictl not found in system path
        [ERROR KubeletVersion]: couldn't get kubelet version: executable file not found in $PATH
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

I did try to use --ignore-preflight-errors but it just caused more problems down the line.

Do you have any suggestions to fix this problem?

@jshbrntt

jshbrntt commented Aug 5, 2018

@jakoguta same here.

@jshbrntt

jshbrntt commented Aug 5, 2018

@jakoguta this fixed it for me.

Run this on all nodes.

https://github.com/kubernetes-incubator/cri-tools/blob/master/docs/crictl.md

VERSION="v1.11.1"
wget https://github.com/kubernetes-incubator/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /opt/bin
sudo chown root:root /opt/bin/crictl
rm -f crictl-$VERSION-linux-amd64.tar.gz

Then run this command not as root.

priv_ip=$(ip -f inet -o addr show eth1|cut -d\  -f 7 | cut -d/ -f 1 | head -n 1)
sudo /opt/bin/kubeadm init --apiserver-advertise-address=$priv_ip  --pod-network-cidr=192.168.0.0/16

@johnnyeric

@jakoguta @synthecypher I had the same problem and I solved it by installing crictl in /opt/bin and exporting /opt/bin to the PATH as follows.

export PATH=$PATH:/opt/bin

And for crictl:

CRICTL_VERSION="v1.11.1"
mkdir -p /opt/bin
curl -L "https://github.com/kubernetes-incubator/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz

@rhessing

rhessing commented Dec 2, 2018

Hi, thank you for the nice how-to, this helped me out very well :-) However, as of today there are some slight changes that are required. I'm posting these here to help out others starting with Kubernetes :-)

As a prerequisite, please make sure that your hostname can be resolved by adding it to /etc/hosts. Mine was not in there by default, so it is good practice to check this first:
grep -i $(hostname) /etc/hosts

It is important to make sure the pod network is a non-existent network. This means that it should not be routable from any of your nodes, and it must not be in the same range as your node interfaces. For example, if your master node has the IP 192.168.0.1 and you assign the 192.168.0.0/16 range to your pod network, this will cause issues. So in this case you should pick another network, such as:
/opt/bin/kubeadm init --apiserver-advertise-address=192.168.0.1 --pod-network-cidr=10.244.0.0/12

Keep in mind that the default service range is 10.96.0.0/12. So a pod network range of 10.0.0.0/8 will also give an issue.

Next, for calico use the following commands instead of the one above:
kubectl apply -f https://docs.projectcalico.org/master/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f https://docs.projectcalico.org/master/getting-started/kubernetes/installation/hosted/calico.yaml

My symptoms were as follows:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-548645f6dd-dgth9   0/1     CrashLoopBackOff    19         59m
kube-system   calico-node-ms4p7                          0/1     CrashLoopBackOff    19         59m
kube-system   calico-node-t8l28                          0/1     Running             20         59m
kube-system   coredns-576cbf47c7-26rmd                   0/1     ContainerCreating   0          67m
kube-system   coredns-576cbf47c7-mpg7v                   0/1     ContainerCreating   0          67m
kube-system   etcd-s01                                 1/1     Running             1          66m
kube-system   kube-apiserver-s01                       1/1     Running             1          66m
kube-system   kube-controller-manager-s01              1/1     Running             1          66m
kube-system   kube-proxy-857tb                           1/1     Running             1          65m
kube-system   kube-proxy-bv2kk                           1/1     Running             1          67m
kube-system   kube-scheduler-s01                       1/1     Running             1          66m
kube-system   kubernetes-dashboard-77fd78f978-pp9vb      0/1     ContainerCreating   0          52m

CoreDNS did not start because Calico did not start, and Calico did not start because CoreDNS did not start.

At the end I had:

  1. an issue with my networking
  2. an issue with etcd not starting properly

I need to verify number two, so I will reset my cluster to see if this was really the case. OK, this is indeed the case: even though there is already an etcd container running, Calico requires its own etcd:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-etcd-hr4jc                          1/1     Running   0          116s
kube-system   calico-kube-controllers-548645f6dd-4npjd   1/1     Running   2          2m28s
kube-system   calico-node-kpvj4                          1/1     Running   2          2m28s
kube-system   calico-node-ss4vg                          1/1     Running   2          2m28s
kube-system   coredns-576cbf47c7-2w52x                   1/1     Running   0          4m10s
kube-system   coredns-576cbf47c7-ls28h                   1/1     Running   0          4m10s
kube-system   etcd-s01                                 1/1     Running   0          3m8s
kube-system   kube-apiserver-s01                       1/1     Running   0          3m22s
kube-system   kube-controller-manager-s01              1/1     Running   0          3m4s
kube-system   kube-proxy-p5k9c                           1/1     Running   0          4m9s
kube-system   kube-proxy-wcnz9                           1/1     Running   0          4m6s
kube-system   kube-scheduler-s01                       1/1     Running   0          3m16s

@jbonnett92

How are we exposing the services to the internet here? If we set up kubeadm to use the private IP, everything is private, right?

@ik9999

ik9999 commented Jan 11, 2019

Had to do

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

to start calico.

@brohan

brohan commented May 4, 2019

Doesn't the pod-networking (Calico) need to be installed before joining the nodes? From the kubernetes site (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network):

"Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pod is Running in the output of kubectl get pods --all-namespaces. And once the CoreDNS pod is up and running, you can continue by joining your nodes."
