
Lab Exercises for Cluster Architecture, Installation and Configuration

Exercise 1 - RBAC

A third-party application requires access to describe job objects that reside in a namespace called rbac. Perform the following:

  1. Create a namespace called rbac
  2. Create a service account called job-inspector for the rbac namespace
  3. Create a role that has rules to get and list job objects
  4. Create a rolebinding that binds the service account job-inspector to the role created in step 3
  5. Prove the job-inspector service account can "get" job objects but not deployment objects
Answer - Imperative
kubectl create namespace rbac
kubectl create sa job-inspector -n rbac
kubectl create role job-inspector --verb=get --verb=list --resource=jobs -n rbac
kubectl create rolebinding permit-job-inspector --role=job-inspector --serviceaccount=rbac:job-inspector -n rbac
kubectl --as=system:serviceaccount:rbac:job-inspector auth can-i get job -n rbac 
kubectl --as=system:serviceaccount:rbac:job-inspector auth can-i get deployment -n rbac
Answer - Declarative
apiVersion: v1
kind: Namespace
metadata:
  name: rbac
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: job-inspector
  namespace: rbac
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-inspector
  namespace: rbac
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: permit-job-inspector
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-inspector
subjects:
  - kind: ServiceAccount
    name: job-inspector
    namespace: rbac
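
To apply the declarative answer, save the manifest (the file name rbac.yaml is assumed here) and run the same impersonation checks as in the imperative answer:

kubectl apply -f rbac.yaml
kubectl --as=system:serviceaccount:rbac:job-inspector auth can-i get job -n rbac
kubectl --as=system:serviceaccount:rbac:job-inspector auth can-i get deployment -n rbac

The first check should answer yes and the second no, proving the service account is limited to job objects.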

Exercise 2 - Manage a highly-available Kubernetes cluster

  1. Using etcdctl, determine the health of the etcd cluster
  2. Using etcdctl, identify the list of members
  3. On the master node, determine the health of the cluster by probing the API endpoint
Answer - Run as root user
$ export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
$ export ETCDCTL_API=3
$ export ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem
$ export ETCDCTL_KEY=/etc/ssl/etcd/ssl/admin-node1-key.pem
$ export ETCDCTL_CERT=/etc/ssl/etcd/ssl/admin-node1.pem
$ etcdctl endpoint health --cluster


$ etcdctl member list
<id>, started, etcd1, http://<ip>:2380, http://<ip>:2379, false
<id>, started, etcd0, http://<ip>:2380, http://<ip>:2379, false
<id>, started, etcd2, http://<ip>:2380, http://<ip>:2379, false
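
The same list can be rendered as a table for readability:

$ etcdctl member list --write-out=table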

curl -k https://localhost:6443/healthz?verbose
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
...
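
If kubectl is configured on the node, the same health endpoint can also be queried through the API server using the kubeconfig credentials instead of curl -k (an optional alternative, not required by the exercise):

kubectl get --raw='/healthz?verbose'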

Exercise 3 - Perform a version upgrade on a Kubernetes cluster using Kubeadm

  1. Using kubeadm, upgrade a cluster to the latest version
Answer

If the kubeadm package is held, unhold it:

sudo apt-mark unhold kubeadm

Upgrade the kubeadm version:

sudo apt-get install --only-upgrade kubeadm
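
If the cluster should move to a specific release rather than the newest package in the repository, the target version can be pinned instead (the version string below is only an example, assuming the legacy apt package naming):

sudo apt-get update
sudo apt-get install -y kubeadm=1.20.2-00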

Plan the upgrade:

sudo kubeadm upgrade plan

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     1 x v1.19.0   v1.20.2

Upgrade to the latest stable version:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.19.7   v1.20.2
kube-controller-manager   v1.19.7   v1.20.2
kube-scheduler            v1.19.7   v1.20.2
kube-proxy                v1.19.7   v1.20.2
CoreDNS                   1.7.0     1.7.0
etcd                      3.4.9-1   3.4.13-0

Upgrade the cluster:

sudo kubeadm upgrade apply v1.20.2

Upgrade Kubelet:

sudo apt-get install --only-upgrade kubelet
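
After the package upgrade, the kubelet normally has to be restarted, and kubeadm can be re-held so it is not upgraded unintentionally later (a follow-up sketch assuming systemd):

sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo apt-mark hold kubeadm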

Exercise 4 - Implement etcd backup and restore

  1. Take a backup of etcd
  2. Verify the etcd backup has been successful
  3. Restore the backup back to the cluster
Answer

Take a snapshot of etcd:

sudo ETCDCTL_API=3 etcdctl snapshot save snapshot.db --endpoints=https://127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key

Verify the snapshot:

sudo ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshot.db

Perform a restore:

ETCDCTL_API=3 etcdctl snapshot restore snapshot.db
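
The bare restore above writes the data to a local directory (default.etcd by default). To bring the backup back into a kubeadm-managed cluster, a common approach is to restore into a fresh data directory and point the etcd static pod at it (the directory path below is an assumption for illustration):

sudo ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --data-dir /var/lib/etcd-from-backup
# then edit /etc/kubernetes/manifests/etcd.yaml so the etcd data hostPath volume
# points at /var/lib/etcd-from-backup; the kubelet restarts etcd with the restored data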