@amoeba
Created October 30, 2021 03:51
# name: k8s-setup.txt
# author: Bryce Mecum <mecum@nceas.ucsb.edu>
#
# Here's how I was able to set up a Kubernetes cluster all the way to TLS and
# load balancing services over port 443 using a subdomain.
#
# What I did is loosely based on the following links but, for most commands,
# I had to go find up-to-date instructions, as seemingly 100% of blog posts on
# Kubernetes are wildly out of date. :(
#
# - https://blog.alexellis.io/kubernetes-in-10-minutes/
# - https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-on-digitalocean-kubernetes-using-helm
#
# I used DigitalOcean for this in order to get multiple public IP addresses and
# did not use their managed Kubernetes product or one of their load balancers.
# Therefore, I think this could be reproduced on our own servers and also on any
# other cloud provider.
# 1. Create a fresh DigitalOcean droplet
# I chose an Ubuntu 20.04 image and a Basic plan
# 2 CPU, 4GB RAM, 80GB disk, 4TB transfer ($20/month)
# Set an A record to the Droplet's public IP in my Cloudflare control panel
# Make sure to uncheck Proxied and use DNS only
# 2. Set up a user (Optional for this demo but I did it anyway)
useradd k8s -G sudo -m -s /bin/bash
passwd k8s
# 3. Update things
sudo apt-get update
sudo apt-get upgrade
# 4. Install docker
sudo apt-get install -y docker.io
# 5. Switch cgroups driver for docker to systemd
# kubelet and the container runtime both need to be under the same cgroups
# driver
# Create or edit /etc/docker/daemon.json to contain:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
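# An optional, dry-runnable way to do steps 5-6 in one shot: build the JSON in
# a variable and validate it before installing, since a malformed daemon.json
# will stop dockerd from starting at all. This is a sketch; it assumes python3
# is present, and the install/restart lines are left commented out.

```shell
# Build the daemon.json contents and check that they parse as JSON
DAEMON_JSON='{
  "exec-opts": ["native.cgroupdriver=systemd"]
}'
echo "$DAEMON_JSON" | python3 -m json.tool > /dev/null && echo "daemon.json OK"
# Then install it for real and restart docker:
# echo "$DAEMON_JSON" | sudo tee /etc/docker/daemon.json
# sudo systemctl restart docker
```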
# 6. Restart the docker service
sudo systemctl restart docker
# 7. Install kubeadm and other tools
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# 8. Initialize the cluster with kubeadm
sudo kubeadm init
# 9. Set up kubectl for ourselves
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# 10. Untaint the control node so I can just run with one node
# Skip this if you don't care about testing the cluster on a single host
kubectl taint nodes --all node-role.kubernetes.io/master-
# 11. Install Weave
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
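# The long Weave URL is doing something simple: it base64-encodes the output
# of `kubectl version` into a query parameter so Weave can serve a manifest
# matching your cluster version. A sketch of just the encoding step, using a
# canned version string so no live cluster is needed:

```shell
# Stand-in for the real `kubectl version` output
K8S_VERSION='Client Version: v1.22.3'
# Same transformation as in the command above
ENCODED=$(printf '%s' "$K8S_VERSION" | base64 | tr -d '\n')
echo "https://cloud.weave.works/k8s/net?k8s-version=$ENCODED"
# The encoding round-trips cleanly back to the original string
printf '%s' "$ENCODED" | base64 -d
```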
# 12. Get something running. Here I use DigitalOcean's example, which the rest
# of this is based on:
# https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-on-digitalocean-kubernetes-using-helm
# Save the following as a YAML file and kubectl apply -f it
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!
# 13. Install Helm
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
# 14. Install the NGINX Ingress Controller (NIC) chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true
# 15. Set up an Ingress
# Save this as a YAML file and kubectl apply -f it
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: "cloud.treestats.net"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-kubernetes
            port:
              number: 80
# 16. Switch the NIC Service from LoadBalancer to NodePort
kubectl edit service nginx-ingress-ingress-nginx-controller
# In your editor, change 'type':
type: NodePort
# and add an externalIPs field to 'spec':
spec:
  externalIPs:
  - $MY_DROPLETS_PUBLIC_IP
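# If you'd rather skip the interactive edit, the same change can be made
# non-interactively with a strategic merge patch via `kubectl patch`. A
# sketch: the IP below is a placeholder, python3 is assumed for the sanity
# check, and the kubectl line is commented out so the JSON construction can
# be dry-run anywhere.

```shell
MY_DROPLETS_PUBLIC_IP=203.0.113.10  # placeholder: your droplet's public IP
PATCH='{"spec": {"type": "NodePort", "externalIPs": ["'"$MY_DROPLETS_PUBLIC_IP"'"]}}'
# Sanity-check the patch document before sending it to the API server
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch OK"
# kubectl patch service nginx-ingress-ingress-nginx-controller -p "$PATCH"
```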
# 17. Set up cert-manager so we can issue LE TLS certs
# From: https://cert-manager.io/docs/installation/helm/
helm repo add jetstack https://charts.jetstack.io
helm repo update
# This last one takes a bit and doesn't produce output right away
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.6.0 --set installCRDs=true
# 18. Set up an Issuer
# Save this as a YAML file and kubectl apply -f it
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Email address used for ACME registration
    email: petridish@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Name of a secret used to store the ACME account private key
      name: letsencrypt-prod-private-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
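# While testing, note that Let's Encrypt's production endpoint has strict
# rate limits. It can help to stand up a second ClusterIssuer against the LE
# staging environment first (staging certs are not browser-trusted); this is
# the same manifest with the documented staging server URL swapped in:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: petridish@gmail.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging-private-key
    solvers:
    - http01:
        ingress:
          class: nginx
```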
# 19. Update the Ingress to use the cert-manager shim
# Make these edits and kubectl apply -f this
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - cloud.treestats.net
    secretName: hello-kubernetes-tls
  rules:
  - host: "cloud.treestats.net"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-kubernetes
            port:
              number: 80
# 20. Monitor certificate status and wait until it's done
kubectl describe certificate hello-kubernetes-tls
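# Re-running describe by hand gets old; a small polling helper can block until
# the certificate is ready. A sketch: the helper is plain shell (demonstrated
# against a local file so it runs anywhere), and the commented kubectl line
# assumes cert-manager's Certificate exposes a Ready condition, which is what
# `kubectl wait` blocks on.

```shell
# Re-run a command every 2s until it succeeds, up to 60 tries (~2 minutes)
wait_until () {
  for _ in $(seq 1 60); do
    "$@" && return 0
    sleep 2
  done
  return 1
}
# On the cluster, something like:
# wait_until kubectl wait --for=condition=Ready certificate/hello-kubernetes-tls --timeout=5s
# Local demonstration so the helper can be exercised anywhere:
touch /tmp/ready.flag
wait_until test -f /tmp/ready.flag && echo "ready"
```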
# 21. Visit https://cloud.treestats.net/ and notice the pod identifier jumps
# around, which indicates things are working as expected
### Part two: Join another node to the cluster and deploy pods to it
# 1. Create a new Droplet to join the cluster, same specs as above
# 2 CPU, 4GB RAM, 80GB disk, 4TB transfer ($20/month)
# 2. Install docker
sudo apt-get install -y docker.io
# 3. Switch cgroups driver for docker to systemd
# kubelet and the container runtime both need to be under the same cgroups driver
# Create or edit /etc/docker/daemon.json to contain:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# 4. Restart the docker service
sudo systemctl restart docker
# 5. Install kubeadm and other tools
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# 6. Copy kube config from control plane to worker node (.kube/config)
# 7. Create a join command for the new node from the control plane
kubeadm token create --print-join-command
# 8. Join the cluster from the node
sudo kubeadm join <redacted>
# 9. Scale the hello-kubernetes deployment up and watch it deploy pods to the
# new worker node
kubectl scale deployment hello-kubernetes --replicas=10
# 10. Refresh your web browser and verify that some of the pod identifiers
# shown belong to pods running on the new worker