@michaellihs
Last active July 2, 2019 15:15
Kubernetes the Hard Way

Prerequisites

Glossary

  • Kubernetes Control Plane
  • Worker Nodes
  • Controller Nodes
  • Kubernetes Cluster
  • Virtual Private Cloud (VPC): network created for the Kubernetes cluster; a subnet is created within this VPC

Set up gcloud

# download from https://cloud.google.com/sdk/docs/quickstart-mac-os-x and extract
mv ~/Downloads/google-cloud-sdk /usr/local/bin
/usr/local/bin/google-cloud-sdk/install.sh
ln -s /usr/local/bin/google-cloud-sdk/bin/gcloud /usr/local/bin/gcloud

# now test with
gcloud

The Compute Engine API needs to be activated in the GCP UI (requires billing). Afterwards gcloud init can be run. Selected europe-west3-a as the zone (in region europe-west3).

Set Up Kubernetes

Networking

  1. Create a VPC (Virtual Private Cloud) Network
  2. Create a subnet within this VPC Network
  3. Create firewall rules that allow communication inside this network
  4. Create a static IP to access the K8S API via an external load balancer. Question: is the load balancer created or just configured to use the static IP? (Answered below: only the static IP is created here; the load balancer itself is provisioned later, in "The Kubernetes Frontend Load Balancer", and attached to this IP.)
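The gist only lists the networking steps; the commands below are a sketch of what they look like, using the tutorial's default names and CIDR ranges (kubernetes-the-hard-way, 10.240.0.0/24 for nodes, 10.200.0.0/16 for pods). They require an active GCP project, so they are illustrative rather than directly runnable here.

```shell
# 1. VPC network in custom subnet mode
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom

# 2. subnet within the VPC
gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-hard-way \
  --range 10.240.0.0/24

# 3. firewall rules: allow all internal traffic (node and pod ranges) ...
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
  --allow tcp,udp,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 10.240.0.0/24,10.200.0.0/16

# ... and external SSH, ICMP and HTTPS (K8S API on 6443)
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 0.0.0.0/0

# 4. static IP, later attached to the external load balancer
gcloud compute addresses create kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region)
```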

Compute

  1. Create 3 VMs for the controller nodes
  2. Create 3 VMs for the worker nodes
  3. Generate a pair of ssh keys upon first connection to the VMs
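As a sketch of steps 1 and 2 (again using the tutorial's defaults; instance names, IPs, and the pod-cidr metadata are assumptions taken from the tutorial, and the commands need a GCP project to run):

```shell
# 1. three controller VMs with fixed internal IPs 10.240.0.10-12
for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.1${i} \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,controller
done

# 2. three worker VMs; each carries its pod CIDR as instance metadata
for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --metadata pod-cidr=10.200.${i}.0/24 \
    --private-network-ip 10.240.0.2${i} \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,worker
done
```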

Provisioning a CA and Generating TLS Certificates

  1. Generate a CA
  2. Create admin client certificates
  3. Create Kubelet (worker) client certificates
  4. Create Controller client certificate
  5. Create Kube Proxy client certificate
  6. Create Scheduler client certificate
  7. Distribute client and server certificates

Generating Kubernetes Configuration Files for Authentication

  1. Read Kubernetes public IP address into KUBERNETES_PUBLIC_ADDRESS

  2. Generate config files for workers

    for instance in worker-0 worker-1 worker-2; do
      kubectl config set-cluster kubernetes-the-hard-way \
        --certificate-authority=../certs/ca.pem \
        --embed-certs=true \
        --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
        --kubeconfig=${instance}.kubeconfig
    
      kubectl config set-credentials system:node:${instance} \
        --client-certificate=../certs/${instance}.pem \
        --client-key=../certs/${instance}-key.pem \
        --embed-certs=true \
        --kubeconfig=${instance}.kubeconfig
    
      kubectl config set-context default \
        --cluster=kubernetes-the-hard-way \
        --user=system:node:${instance} \
        --kubeconfig=${instance}.kubeconfig
    
      kubectl config use-context default --kubeconfig=${instance}.kubeconfig
    done
  3. Generate config files for kube-proxy

    kubectl config set-cluster kubernetes-the-hard-way \
        --certificate-authority=../certs/ca.pem \
        --embed-certs=true \
        --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
        --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-credentials system:kube-proxy \
        --client-certificate=../certs/kube-proxy.pem \
        --client-key=../certs/kube-proxy-key.pem \
        --embed-certs=true \
        --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-context default \
        --cluster=kubernetes-the-hard-way \
        --user=system:kube-proxy \
        --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  4. Generate the kube-controller-manager Kubernetes Configuration File

    kubectl config set-cluster kubernetes-the-hard-way \
        --certificate-authority=../certs/ca.pem \
        --embed-certs=true \
        --server=https://127.0.0.1:6443 \
        --kubeconfig=kube-controller-manager.kubeconfig
    
    kubectl config set-credentials system:kube-controller-manager \
        --client-certificate=../certs/kube-controller-manager.pem \
        --client-key=../certs/kube-controller-manager-key.pem \
        --embed-certs=true \
        --kubeconfig=kube-controller-manager.kubeconfig
    
    kubectl config set-context default \
        --cluster=kubernetes-the-hard-way \
        --user=system:kube-controller-manager \
        --kubeconfig=kube-controller-manager.kubeconfig
    
    kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
  5. Generate the kube-scheduler Kubernetes Configuration File

      kubectl config set-cluster kubernetes-the-hard-way \
        --certificate-authority=../certs/ca.pem \
        --embed-certs=true \
        --server=https://127.0.0.1:6443 \
        --kubeconfig=kube-scheduler.kubeconfig
    
      kubectl config set-credentials system:kube-scheduler \
        --client-certificate=../certs/kube-scheduler.pem \
        --client-key=../certs/kube-scheduler-key.pem \
        --embed-certs=true \
        --kubeconfig=kube-scheduler.kubeconfig
    
      kubectl config set-context default \
        --cluster=kubernetes-the-hard-way \
        --user=system:kube-scheduler \
        --kubeconfig=kube-scheduler.kubeconfig
    
      kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
  6. Generate the admin Kubernetes Configuration File

      kubectl config set-cluster kubernetes-the-hard-way \
        --certificate-authority=../certs/ca.pem \
        --embed-certs=true \
        --server=https://127.0.0.1:6443 \
        --kubeconfig=admin.kubeconfig
    
      kubectl config set-credentials admin \
        --client-certificate=../certs/admin.pem \
        --client-key=../certs/admin-key.pem \
        --embed-certs=true \
        --kubeconfig=admin.kubeconfig
    
      kubectl config set-context default \
        --cluster=kubernetes-the-hard-way \
        --user=admin \
        --kubeconfig=admin.kubeconfig
    
      kubectl config use-context default --kubeconfig=admin.kubeconfig
  7. Distribute the Kubernetes Configuration Files

Encryption

  1. Generate an encryption key
  2. Create an encryption configuration
  3. Distribute encryption config to controller nodes
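The steps above can be sketched as follows: a random 32-byte key, base64-encoded, placed into an EncryptionConfig that is later copied to the controller nodes (resource names follow the tutorial):

```shell
# 1. generate a random 32-byte encryption key
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# 2. create the encryption configuration; the API server reads this via
#    its --encryption-provider-config flag to encrypt secrets at rest
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
```

Step 3 then copies encryption-config.yaml to each controller (e.g. with gcloud compute scp).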

etcd cluster

Commands must be run on all controller nodes; tmux with synchronized panes makes this easier:

  • Create new panes with ctrl + b then "
  • Enable synchronize-panes: ctrl + b, then : to open the command prompt, then type set synchronize-panes on
  • To disable synchronization: set synchronize-panes off

Install etcd via

  1. Download and install the official etcd binaries
  2. Retrieve internal IP of VM
  3. Create the etcd systemd unit file at /etc/systemd/system/etcd.service
  4. Start the etcd server
  5. Verify etcd configuration and cluster members
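A sketch of step 3, the etcd unit file, based on the tutorial's flags (written to the current directory here; on a controller it goes to /etc/systemd/system/etcd.service). INTERNAL_IP and ETCD_NAME come from the VM's metadata in step 2; the values below are examples for controller-0:

```shell
INTERNAL_IP=10.240.0.10   # example value, normally read from VM metadata
ETCD_NAME=controller-0    # example value, normally the hostname

cat > etcd.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

Step 4 is then `systemctl daemon-reload && systemctl enable --now etcd`, and step 5 checks membership with `etcdctl member list`.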

Bootstrapping Kubernetes Control Plane

On all controllers:

  1. Create Kubernetes configuration directory
  2. Download the Kubernetes Controller Binaries
  3. Install the Kubernetes Controller Binaries
  4. Configure the Kubernetes API server
  5. Create the kube-apiserver.service systemd unit file
  6. Configure the Kubernetes Controller Manager
  7. Create the kube-controller-manager.service systemd unit file
  8. Configure the Kubernetes Scheduler
  9. Create the kube-scheduler.yaml configuration file
  10. Create the kube-scheduler.service systemd unit file
  11. Start the Controller Services
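As an abridged sketch of steps 4–5 (the tutorial's unit file has considerably more flags; the file is written locally here, on a controller it goes to /etc/systemd/system/kube-apiserver.service):

```shell
INTERNAL_IP=10.240.0.10   # example value for controller-0

cat > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

The controller-manager and scheduler units (steps 6–10) follow the same pattern; step 11 is `systemctl daemon-reload` followed by enabling and starting the three services.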

Enable HTTP Health Checks

A Google Network Load Balancer will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks, which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround, the nginx web server can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port 80 and proxy the connections to the API server on https://127.0.0.1:6443/healthz.

  1. Install nginx on all controllers

  2. Configure nginx to handle basic health checks

  3. Enable and restart nginx

  4. Verify health via kubectl and the nginx healthz endpoint. Before you can use kubectl, run

    kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=admin
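A sketch of the nginx site configuration from steps 2–3 (written locally here; on a controller it goes to /etc/nginx/sites-available/ and is symlinked into sites-enabled before nginx is restarted):

```shell
cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     # proxy the plain-HTTP health check to the API server's HTTPS endpoint
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
```

The health check can then be exercised with `curl -H "Host: kubernetes.default.svc.cluster.local" http://127.0.0.1/healthz`.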

Configure RBAC for Kubelet Authorization

  1. ssh to controller-0
  2. Create the system:kube-apiserver-to-kubelet ClusterRole
  3. Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user
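A sketch of the RBAC objects from steps 2–3, written to a local manifest (on controller-0 it is applied with `kubectl apply -f`). The ClusterRole grants the API server access to the kubelet API; the binding ties it to the "kubernetes" user, which is assumed to be the CN of the API server's client certificate, as in the tutorial:

```shell
cat > kube-apiserver-to-kubelet.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - kind: User
    name: kubernetes
EOF
```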

The Kubernetes Frontend Load Balancer

You will provision an external load balancer to front the Kubernetes API Servers. The kubernetes-the-hard-way static IP address will be attached to the resulting load balancer.

Run the following steps from the machine where you created the compute instances, not from the compute instances themselves.

  1. Provision a Network Load Balancer
  2. Verify connection from your workstation (make sure to cd into the directory with the certificates)
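A sketch of step 1, following the tutorial's resource names (requires an active GCP project, so shown for illustration):

```shell
# the static IP created earlier, to be attached to the forwarding rule
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

# HTTP health check hitting the nginx /healthz proxy set up above
gcloud compute http-health-checks create kubernetes \
  --description "Kubernetes Health Check" \
  --host "kubernetes.default.svc.cluster.local" \
  --request-path "/healthz"

# allow Google's health-check source ranges into the network
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
  --network kubernetes-the-hard-way \
  --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
  --allow tcp

# target pool of the three controllers, fronted by a forwarding rule on 6443
gcloud compute target-pools create kubernetes-target-pool \
  --http-health-check kubernetes

gcloud compute target-pools add-instances kubernetes-target-pool \
  --instances controller-0,controller-1,controller-2

gcloud compute forwarding-rules create kubernetes-forwarding-rule \
  --address ${KUBERNETES_PUBLIC_ADDRESS} \
  --ports 6443 \
  --region $(gcloud config get-value compute/region) \
  --target-pool kubernetes-target-pool
```

Step 2 is then a `curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version` from the workstation.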

Bootstrapping the Kubernetes Worker Nodes

On each worker node, run

  1. apt-get update
  2. apt-get -y install socat conntrack ipset
  3. Download the worker binaries
  4. Create installation directories
  5. Install the worker binaries
  6. Configure CNI networking
  7. Configure containerd
  8. Configure the Kubelet
  9. Configure the Kubernetes Proxy
  10. Start the worker services

Verification

Run the verification command from your workstation

Configure kubectl

From within the directory that contains the certificates:

  1. Generate the admin kubernetes configuration file
  2. Verify the configuration

Provisioning Pod Network Routes

From your workstation, create the routes that enable pods to communicate.
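This can be sketched as one route per worker, sending each worker's pod CIDR to that worker's internal IP (tutorial defaults; needs a GCP project to run):

```shell
for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.2${i} \
    --destination-range 10.200.${i}.0/24
done
```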

Deploying the DNS Cluster Add-on

  1. Deploy coredns cluster add-on

References
