- Glossary
- Set up
gcloud
- Networking
- Compute
- Provisioning a CA and Generating TLS Certificates
- Generating Kubernetes Configuration Files for Authentication
- Encryption
- etcd cluster
- Bootstrapping Kubernetes Control Plane
- Bootstrapping the Kubernetes Worker Nodes
- Configure kubectl
- Provisioning Pod Network Routes
- Deploying the DNS Cluster Add-on
- Kubernetes Control Plane
- Worker Nodes
- Controller Nodes
- Kubernetes Cluster
- Virtual Private Cloud (VPC): a network created for the Kubernetes cluster; within this VPC a subnet is created
# download from https://cloud.google.com/sdk/docs/quickstart-mac-os-x and extract
mv ~/Downloads/google-cloud-sdk /usr/local/bin
/usr/local/bin/google-cloud-sdk/install.sh
ln -s /usr/local/bin/google-cloud-sdk/bin/gcloud /usr/local/bin/gcloud
# now test with
gcloud
The Compute Engine API needs to be activated in the GCP console (requires billing). Afterwards, gcloud init can be run. europe-west3-a was selected as the zone (in region europe-west3).
- Create a VPC (Virtual Private Cloud) Network
- Create a subnet within this VPC Network
- Create firewall rules that allow communication inside this network
- Create a static IP to access the Kubernetes API through an external load balancer (only the address is reserved here; the load balancer itself is provisioned later and configured to use this IP)
- Create 3 VMs for the controller nodes
- Create 3 VMs for the worker nodes
- Generate an SSH key pair upon first connection to the VMs
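The networking and compute steps above can be sketched with gcloud commands along these lines (a sketch following the tutorial's approach; names, CIDR ranges, and machine type are assumptions, and some flags are trimmed):

```shell
# Create the VPC network and a subnet inside it
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-hard-way \
  --range 10.240.0.0/24

# Firewall rules: allow all internal traffic, and SSH/API/ICMP from outside
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
  --allow tcp,udp,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 10.240.0.0/24,10.200.0.0/16
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 0.0.0.0/0

# Reserve the static IP for the API load balancer (the LB itself comes later)
gcloud compute addresses create kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region)

# One controller VM (repeat analogously for controller-1/2 and worker-0/1/2)
gcloud compute instances create controller-0 \
  --machine-type e2-standard-2 \
  --private-network-ip 10.240.0.10 \
  --subnet kubernetes
```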
- Generate a CA
- Create admin client certificates
- Create Kubelet (worker) client certificates
- Create Controller client certificate
- Create Kube Proxy client certificate
- Create Scheduler client certificate
- Distribute client and server certificates
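With cfssl/cfssljson (the tooling used by the tutorial), the CA and one of the client certificates can be generated roughly like this (a sketch; the CSR/config JSON files are assumed to exist as in the tutorial):

```shell
# Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Generate the admin client certificate, signed by the CA
# (the same pattern applies to kubelet, controller-manager,
#  kube-proxy, and scheduler certificates)
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

# Distribute certificates to the instances, e.g.:
gcloud compute scp ca.pem worker-0-key.pem worker-0.pem worker-0:~/
```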
- Read the Kubernetes public IP address into KUBERNETES_PUBLIC_ADDRESS
- Generate config files for the workers:

for instance in worker-0 worker-1 worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=../certs/ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=../certs/${instance}.pem \
    --client-key=../certs/${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
- Generate the config file for kube-proxy:

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=../certs/ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=../certs/kube-proxy.pem \
  --client-key=../certs/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
- Generate the kube-controller-manager Kubernetes configuration file:

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=../certs/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=../certs/kube-controller-manager.pem \
  --client-key=../certs/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
- Generate the kube-scheduler Kubernetes configuration file:

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=../certs/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=../certs/kube-scheduler.pem \
  --client-key=../certs/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
- Generate the admin Kubernetes configuration file:

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=../certs/ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=../certs/admin.pem \
  --client-key=../certs/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
- Distribute the Kubernetes configuration files
- Generate an encryption key
- Create an encryption configuration
- Distribute encryption config to controller nodes
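The key and encryption configuration can be generated roughly like this (a sketch following the tutorial's approach; distribution to the controllers then happens via gcloud compute scp):

```shell
# Generate a random 32-byte key, base64-encoded
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Write the encryption configuration consumed by kube-apiserver
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
```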
Commands must be run on all controller nodes, therefore use tmux with synchronized panes:
- Create new panes with ctrl + b, then "
- Enable synchronize-panes: ctrl + b, then shift + :, and type set synchronize-panes on at the prompt
- To disable synchronization: set synchronize-panes off
Install etcd:
- Download and install the official etcd binaries
- Retrieve the internal IP of the VM
- Create the etcd systemd unit file at /etc/systemd/system/etcd.service
- Start the etcd server
- Verify etcd configuration and cluster members
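The verification can look like this (a sketch; cert paths assume the certificates were copied to /etc/etcd as in the tutorial):

```shell
# List cluster members over the TLS endpoint (run on a controller node)
sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
```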
On all controllers:
- Create Kubernetes configuration directory
- Download the Kubernetes Controller Binaries
- Install the Kubernetes Controller Binaries
- Configure the Kubernetes API server
- Create the kube-apiserver.service systemd unit file
- Configure the Kubernetes Controller Manager
- Create the kube-controller-manager.service systemd unit file
- Configure the Kubernetes Scheduler
- Create the kube-scheduler.yaml configuration file
- Create the kube-scheduler.service systemd unit file
- Start the Controller Services
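As an illustration of the pattern used for all three services, the kube-scheduler.service unit might look roughly like this (a sketch; paths assume the binaries were installed to /usr/local/bin and configs to /etc/kubernetes/config):

```shell
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now kube-scheduler
```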
A Google Network Load Balancer will be used to distribute traffic across the three API servers, allowing each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks, which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround, the nginx webserver is used to proxy the HTTP health checks. In this section nginx is installed and configured to accept HTTP health checks on port 80 and proxy the connections to the API server at https://127.0.0.1:6443/healthz.
- Install nginx on all controllers
- Configure nginx to handle basic health checks
- Enable and restart nginx
- Verify health via kubectl and the nginx healthz endpoint. Before you can use kubectl, you have to run kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=admin
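The nginx health-check proxy can be configured roughly like this (a sketch following the tutorial's approach; the CA path assumes certificates live under /var/lib/kubernetes):

```shell
# Proxy HTTP health checks on port 80 to the API server's HTTPS /healthz
cat <<'EOF' | sudo tee /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
server {
  listen 80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
    proxy_pass                    https://127.0.0.1:6443/healthz;
    proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local \
  /etc/nginx/sites-enabled/
sudo systemctl restart nginx

# Verify: should return HTTP 200 with body "ok"
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```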
- SSH to controller-0
- Create the system:kube-apiserver-to-kubelet ClusterRole
- Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user
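The RBAC objects can be created roughly like this (a sketch following the tutorial's approach; run on controller-0 against the local admin kubeconfig):

```shell
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs: ["*"]
---
# Bind the role to the "kubernetes" user, which the API server
# authenticates as when talking to the kubelets
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
subjects:
  - kind: User
    name: kubernetes
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
  apiGroup: rbac.authorization.k8s.io
EOF
```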
You will provision an external load balancer to front the Kubernetes API Servers. The kubernetes-the-hard-way
static IP address will be attached to the resulting load balancer.
Run the following steps from the machine where you created the compute instances, not from the compute instances themselves.
- Provision a Network Load Balancer
- Verify connection from your workstation (make sure to cd into the directory with the certificates)
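The verification can look like this (a sketch; ca.pem is assumed to be in the current directory):

```shell
# Look up the reserved static IP fronting the API servers
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

# A version banner in the response means the load balancer and
# at least one API server are working
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```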
On each worker node, run
- apt-get update
- apt-get -y install socat conntrack ipset
- Install worker binaries
- Create installation directories
- Install the worker binaries
- Configure CNI networking
- Configure containerd
- Configure the Kubelet
- Configure the Kubernetes Proxy
- Start the worker services
Run the verification command from your workstation
From within the directory that contains the certificates:
- Generate the admin Kubernetes configuration file
- Verify the configuration
From your workstation, create the routes that enable pods to communicate.
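The routes can be created roughly like this (a sketch; pod CIDRs and node IPs assume the addressing scheme used earlier in these notes):

```shell
# One route per worker: send the worker's pod CIDR to its node IP
for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.2${i} \
    --destination-range 10.200.${i}.0/24
done
```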
- Deploy the CoreDNS cluster add-on
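The deployment and a quick DNS check can be sketched like this (the manifest comes from the tutorial repo; its exact URL/version varies by tutorial revision):

```shell
# Apply the CoreDNS manifest (obtain it from the tutorial repository)
kubectl apply -f coredns.yaml

# Verify DNS resolution from inside the cluster
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
kubectl exec busybox -- nslookup kubernetes
```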