Doing Kubernetes The Hard Way, A Walkthrough

Kubernetes The Hard Way Notes

Reference: kelseyhightower/kubernetes-the-hard-way

Step 1: Cloud Infrastructure Provisioning - Google Cloud Platform

Reference: step 1

Verification:

$ gcloud compute addresses list kubernetes-the-hard-way
NAME                     REGION       ADDRESS         STATUS
kubernetes-the-hard-way  us-central1  104.197.28.209  RESERVED

$ gcloud compute instances list
NAME         ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
controller0  us-central1-f  n1-standard-1               10.240.0.10  104.197.9.78     RUNNING
controller1  us-central1-f  n1-standard-1               10.240.0.11  104.154.78.153   RUNNING
controller2  us-central1-f  n1-standard-1               10.240.0.12  146.148.91.157   RUNNING
worker0      us-central1-f  n1-standard-1               10.240.0.20  23.236.53.65     RUNNING
worker1      us-central1-f  n1-standard-1               10.240.0.21  130.211.172.185  RUNNING
worker2      us-central1-f  n1-standard-1               10.240.0.22  104.197.17.252   RUNNING
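
The inventory above comes from provisioning along these lines (a hedged sketch: zone, machine type, and the 10.240.0.1x/2x IP layout match the listing, but the guide also sets a custom network, subnet, and image flags that are omitted here):

```shell
# Static IP fronting the API servers
gcloud compute addresses create kubernetes-the-hard-way --region us-central1

# Three controllers and three workers on fixed internal IPs
for i in 0 1 2; do
  gcloud compute instances create "controller${i}" \
    --zone us-central1-f --machine-type n1-standard-1 \
    --private-network-ip "10.240.0.1${i}"
  gcloud compute instances create "worker${i}" \
    --zone us-central1-f --machine-type n1-standard-1 \
    --private-network-ip "10.240.0.2${i}"
done
```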

Step 2: Setting up a Certificate Authority and Creating TLS Certificates

Reference: step 2

Notes:

$ ls -al
-rw-r--r--   1 mhausenblas  staff   233B 27 Aug 07:38 admin-csr.json
-rw-------   1 mhausenblas  staff   1.6K 27 Aug 07:38 admin-key.pem
-rw-r--r--   1 mhausenblas  staff   1.0K 27 Aug 07:38 admin.csr
-rw-r--r--   1 mhausenblas  staff   1.4K 27 Aug 07:38 admin.pem
-rw-r--r--   1 mhausenblas  staff   232B 27 Aug 07:31 ca-config.json
-rw-r--r--   1 mhausenblas  staff   218B 27 Aug 07:34 ca-csr.json
-rw-------   1 mhausenblas  staff   1.6K 27 Aug 07:36 ca-key.pem
-rw-r--r--   1 mhausenblas  staff   1.0K 27 Aug 07:36 ca.csr
-rw-r--r--   1 mhausenblas  staff   1.4K 27 Aug 07:36 ca.pem
-rw-r--r--   1 mhausenblas  staff   250B 27 Aug 07:40 kube-proxy-csr.json
-rw-------   1 mhausenblas  staff   1.6K 27 Aug 07:40 kube-proxy-key.pem
-rw-r--r--   1 mhausenblas  staff   1.0K 27 Aug 07:40 kube-proxy.csr
-rw-r--r--   1 mhausenblas  staff   1.4K 27 Aug 07:40 kube-proxy.pem
-rw-r--r--   1 mhausenblas  staff   375B 27 Aug 07:42 kubernetes-csr.json
-rw-------   1 mhausenblas  staff   1.6K 27 Aug 07:43 kubernetes-key.pem
-rw-r--r--   1 mhausenblas  staff   1.1K 27 Aug 07:43 kubernetes.csr
-rw-r--r--   1 mhausenblas  staff   1.5K 27 Aug 07:43 kubernetes.pem
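
The listing maps directly to cfssl's outputs: each `*-csr.json` request yields a `.csr`, a `.pem` certificate, and a mode-0600 `-key.pem`. A hedged sketch of the generating commands (per the guide; the `kubernetes` profile name comes from `ca-config.json`):

```shell
# Initialize the CA: produces ca.pem, ca-key.pem, ca.csr
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Sign a cert against the CA; repeated per request
# (admin, kube-proxy, kubernetes)
cfssl gencert \
  -ca=ca.pem -ca-key=ca-key.pem \
  -config=ca-config.json -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
```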

Step 3: Setting up Authentication

Reference: step 3
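
No notes kept for this step; for context, a hedged sketch of its two artifacts — the static token file the API server reads via `--token-auth-file`, and the bootstrap kubeconfig handed to the kubelets. The token value and uid are placeholders, and `KUBERNETES_PUBLIC_ADDRESS` is assumed to be the static IP reserved in step 1:

```shell
# Static token file, one line per identity: token,user,uid
BOOTSTRAP_TOKEN=chAng3m3  # placeholder, generate your own
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001" > token.csv

# Bootstrap kubeconfig the kubelets use to request client certs
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem --embed-certs=true \
  --server="https://${KUBERNETES_PUBLIC_ADDRESS}:6443" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token="${BOOTSTRAP_TOKEN}" --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```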

Step 4: Bootstrapping an H/A etcd Cluster

Reference: step 4

Verification:

$ gcloud compute --project "k8s-cookbook" ssh --zone "us-central1-f" "controller1"
...
mhausenblas@controller1:~$ sudo etcdctl \
>   --ca-file=/etc/etcd/ca.pem \
>   --cert-file=/etc/etcd/kubernetes.pem \
>   --key-file=/etc/etcd/kubernetes-key.pem \
>   cluster-health
2017-08-27 07:05:02.136106 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-08-27 07:05:02.137087 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member 3a57933972cb5131 is healthy: got healthy result from https://10.240.0.12:2379
member f98dc20bce6225a0 is healthy: got healthy result from https://10.240.0.10:2379
member ffed16798470cab5 is healthy: got healthy result from https://10.240.0.11:2379
cluster is healthy
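
Each etcd member needs its own name and internal IP in its unit file; a hedged sketch of deriving those on a controller (the metadata URL is GCE's standard one; the sed-over-a-template step is my shorthand, the guide inlines the values directly):

```shell
# Discover this instance's internal IP from the GCE metadata service
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
# etcd member name = short hostname (controller0, controller1, ...)
ETCD_NAME=$(hostname -s)

# Fill both into a unit template, then install and start etcd
sed -i "s/INTERNAL_IP/${INTERNAL_IP}/g;s/ETCD_NAME/${ETCD_NAME}/g" etcd.service
sudo mv etcd.service /etc/systemd/system/
sudo systemctl daemon-reload && sudo systemctl enable etcd && sudo systemctl start etcd
```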

Step 5: Bootstrapping an H/A Kubernetes Control Plane

Reference: step 5

Notes:

wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kube-apiserver && wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kube-controller-manager && wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kube-scheduler && wget https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl

sudo systemctl daemon-reload && sudo systemctl enable kube-apiserver && sudo systemctl start kube-apiserver && sudo systemctl status kube-apiserver --no-pager
sudo systemctl daemon-reload && sudo systemctl enable kube-controller-manager && sudo systemctl start kube-controller-manager && sudo systemctl status kube-controller-manager --no-pager
sudo systemctl daemon-reload && sudo systemctl enable kube-scheduler && sudo systemctl start kube-scheduler && sudo systemctl status kube-scheduler --no-pager
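
The three per-component chains above are identical in shape; an equivalent loop (a sketch, assuming the unit files are already installed):

```shell
sudo systemctl daemon-reload
for svc in kube-apiserver kube-controller-manager kube-scheduler; do
  sudo systemctl enable "$svc"
  sudo systemctl start "$svc"
  sudo systemctl status "$svc" --no-pager
done
```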

Verification (only shown for one node in the control plane):

mhausenblas@controller0:~$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

Step 6: Bootstrapping Kubernetes Workers

Reference: step 6

Notes:

sudo mkdir -p /var/lib/{kubelet,kube-proxy,kubernetes} && sudo mkdir -p /var/run/kubernetes && sudo mv bootstrap.kubeconfig /var/lib/kubelet && sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy && sudo mv ca.pem /var/lib/kubernetes/

wget https://get.docker.com/builds/Linux/x86_64/docker-1.12.6.tgz && tar -xvf docker-1.12.6.tgz && sudo cp docker/docker* /usr/bin/

sudo mv docker.service /etc/systemd/system/docker.service && sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl start docker && sudo docker version

sudo mkdir -p /opt/cni && wget https://storage.googleapis.com/kubernetes-release/network-plugins/cni-amd64-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz && sudo tar -xvf cni-amd64-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz -C /opt/cni

wget https://storage.googleapis.com/kubernetes-release/release/v1.6.1/bin/linux/amd64/kubectl && wget https://storage.googleapis.com/kubernetes-release/release/v1.6.1/bin/linux/amd64/kube-proxy && wget https://storage.googleapis.com/kubernetes-release/release/v1.6.1/bin/linux/amd64/kubelet && chmod +x kubectl kube-proxy kubelet && sudo mv kubectl kube-proxy kubelet /usr/bin/

sudo mv kubelet.service /etc/systemd/system/kubelet.service && sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet && sudo systemctl status kubelet --no-pager

sudo mv kube-proxy.service /etc/systemd/system/kube-proxy.service && sudo systemctl daemon-reload && sudo systemctl enable kube-proxy && sudo systemctl start kube-proxy && sudo systemctl status kube-proxy --no-pager

Verification:

mhausenblas@controller0:~$ kubectl get csr
NAME        AGE       REQUESTOR           CONDITION
csr-7p1wt   3m        kubelet-bootstrap   Approved,Issued
csr-cgk6h   1m        kubelet-bootstrap   Approved,Issued
csr-n0x7z   7m        kubelet-bootstrap   Approved,Issued
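
The `Approved,Issued` condition above means the bootstrap CSRs were already approved; if any instead show up as `Pending`, they can be approved by hand (a hedged sketch — this approves every listed CSR, so check the requestors first):

```shell
# Approve all outstanding kubelet CSRs (names vary per run)
kubectl get csr -o name | xargs -r -n1 kubectl certificate approve
```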

mhausenblas@controller0:~$ kubectl get nodes
NAME      STATUS    AGE       VERSION
worker0   Ready     32s       v1.6.1
worker1   Ready     56s       v1.6.1
worker2   Ready     40s       v1.6.1

Step 7: Configuring the Kubernetes Client - Remote Access

Reference: step 7
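
Remote access amounts to pointing the local kubeconfig at the reserved public IP with the admin cert pair from step 2 (a hedged sketch per the guide, run from the directory holding `ca.pem` and the admin certs):

```shell
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region us-central1 --format 'value(address)')

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem --embed-certs=true \
  --server="https://${KUBERNETES_PUBLIC_ADDRESS}:6443"
kubectl config set-credentials admin \
  --client-certificate=admin.pem --client-key=admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
  --cluster=kubernetes-the-hard-way --user=admin
kubectl config use-context kubernetes-the-hard-way
```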

Verification:

$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

$ kubectl get no
NAME      STATUS    AGE       VERSION
worker0   Ready     7m        v1.6.1
worker1   Ready     7m        v1.6.1
worker2   Ready     7m        v1.6.1

Step 8: Managing the Container Network Routes

Reference: step 8

Notes:

$ kubectl get nodes --output=jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address} {.spec.podCIDR} {"\n"}{end}'
10.240.0.20 10.200.2.0/24
10.240.0.21 10.200.0.0/24
10.240.0.22 10.200.1.0/24

gcloud compute routes create kubernetes-route-10-200-0-0-24 \
  --network kubernetes-the-hard-way \
  --next-hop-address 10.240.0.21 \
  --destination-range 10.200.0.0/24
gcloud compute routes create kubernetes-route-10-200-1-0-24 \
  --network kubernetes-the-hard-way \
  --next-hop-address 10.240.0.22 \
  --destination-range 10.200.1.0/24
gcloud compute routes create kubernetes-route-10-200-2-0-24 \
  --network kubernetes-the-hard-way \
  --next-hop-address 10.240.0.20 \
  --destination-range 10.200.2.0/24
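
The three commands above only differ in the CIDR and next hop, both of which the jsonpath query already produces; a hedged sketch that derives the route name from the CIDR and loops over the workers (the naming helper is my generalization of the guide's scheme, and the `command -v` guard just makes the sketch a no-op where the CLIs are unavailable):

```shell
# Derive the route name from a pod CIDR:
# 10.200.2.0/24 -> kubernetes-route-10-200-2-0-24
route_name() {
  local cidr="$1"
  local flat="${cidr//./-}"   # dots to dashes: 10-200-2-0/24
  flat="${flat//\//-}"        # slash to dash:  10-200-2-0-24
  echo "kubernetes-route-${flat}"
}

# One route per worker, driven by the same jsonpath output as above
if command -v kubectl >/dev/null && command -v gcloud >/dev/null; then
  kubectl get nodes --output=jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address} {.spec.podCIDR} {"\n"}{end}' |
  while read -r ip cidr; do
    [ -n "$cidr" ] || continue
    gcloud compute routes create "$(route_name "$cidr")" \
      --network kubernetes-the-hard-way \
      --next-hop-address "$ip" \
      --destination-range "$cidr"
  done
fi
```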

Step 9: Deploying the Cluster DNS Add-on

Reference: step 9

Verification:

$ kubectl --namespace=kube-system get all
NAME                           READY     STATUS    RESTARTS   AGE
po/kube-dns-3299164672-ksxvb   3/4       Running   0          10s

NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns   10.32.0.10   <none>        53/UDP,53/TCP   27s

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-dns   1         1         1            0           10s

NAME                     DESIRED   CURRENT   READY     AGE
rs/kube-dns-3299164672   1         1         0         10s
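
Once the pod goes fully Ready, resolution through kube-dns can be exercised from a throwaway pod (a hedged sketch; the `busybox` name is arbitrary):

```shell
# Launch a throwaway pod and resolve a cluster service name via kube-dns
kubectl run busybox --image=busybox --command -- sleep 3600
POD_NAME=$(kubectl get pods -l run=busybox \
  --output=jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD_NAME" -- nslookup kubernetes
```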

Step 10: Smoke Test

Reference: step 10

Verification:

$ echo ${NODE_PUBLIC_IP}:${NODE_PORT}
23.236.53.65:30289
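
NODE_PORT and NODE_PUBLIC_IP above come from the smoke-test service and a worker's external IP; a hedged sketch of how they can be derived (deployment/service names per the guide; the firewall-rule name is an assumption):

```shell
# The smoke-test workload: nginx exposed via a NodePort service
kubectl run nginx --image=nginx --port=80 --replicas=3
kubectl expose deployment nginx --type NodePort

# The randomly assigned NodePort, and one worker's external IP
NODE_PORT=$(kubectl get svc nginx --output=jsonpath='{.spec.ports[0].nodePort}')
NODE_PUBLIC_IP=$(gcloud compute instances describe worker0 \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

# Allow external traffic to that port
gcloud compute firewall-rules create kubernetes-nginx-service \
  --allow="tcp:${NODE_PORT}" --network kubernetes-the-hard-way
```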

$ curl http://${NODE_PUBLIC_IP}:${NODE_PORT}
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Step 11: Cleaning Up

Reference: step 11
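
Teardown is roughly the reverse of creation (a hedged sketch; instance, route, and address names match the earlier steps, but any firewall rules, target pools, and forwarding rules the guide created must be deleted by their own names too):

```shell
# Delete compute instances, the pod routes, and the reserved address
gcloud compute instances delete \
  controller0 controller1 controller2 worker0 worker1 worker2
gcloud compute routes delete \
  kubernetes-route-10-200-0-0-24 \
  kubernetes-route-10-200-1-0-24 \
  kubernetes-route-10-200-2-0-24
gcloud compute addresses delete kubernetes-the-hard-way
```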
