A third-party application requires access to describe Job objects that reside in a namespace called `rbac`. Perform the following:
- Create a namespace called `rbac`
- Create a service account called `job-inspector` in the `rbac` namespace
- Create a role that has rules to `get` and `list` Job objects
- Create a rolebinding that binds the service account `job-inspector` to the role created in step 3
- Prove the `job-inspector` service account can "get" Job objects but not Deployment objects
Answer - Imperative
kubectl create namespace rbac
kubectl create sa job-inspector -n rbac
kubectl create role job-inspector --verb=get --verb=list --resource=jobs -n rbac
kubectl create rolebinding permit-job-inspector --role=job-inspector --serviceaccount=rbac:job-inspector -n rbac
kubectl --as=system:serviceaccount:rbac:job-inspector auth can-i get job -n rbac
kubectl --as=system:serviceaccount:rbac:job-inspector auth can-i get deployment -n rbac
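If the role and rolebinding are set up correctly, the first check should answer yes and the second no (sample output):
yes
no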
Answer - Declarative
apiVersion: v1
kind: Namespace
metadata:
  name: rbac
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: job-inspector
  namespace: rbac
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-inspector
  namespace: rbac
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: permit-job-inspector
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-inspector
subjects:
- kind: ServiceAccount
  name: job-inspector
  namespace: rbac
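A minimal way to apply the declarative manifests and repeat the verification, assuming the YAML above is saved as rbac.yaml (the filename is an assumption):
kubectl apply -f rbac.yaml
kubectl --as=system:serviceaccount:rbac:job-inspector auth can-i list job -n rbac
kubectl --as=system:serviceaccount:rbac:job-inspector auth can-i get deployment -n rbac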
- Using `etcdctl`, determine the health of the etcd cluster
- Using `etcdctl`, identify the list of members
- On the master node, determine the health of the cluster by probing the API endpoint
Answer
Run as the root user:
$ export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
$ export ETCDCTL_API=3
$ export ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem
$ export ETCDCTL_KEY=/etc/ssl/etcd/ssl/admin-node1-key.pem
$ export ETCDCTL_CERT=/etc/ssl/etcd/ssl/admin-node1.pem
$ etcdctl endpoint health --cluster
$ etcdctl member list
<id>, started, etcd1, https://<ip>:2380, https://<ip>:2379, false
<id>, started, etcd0, https://<ip>:2380, https://<ip>:2379, false
<id>, started, etcd2, https://<ip>:2380, https://<ip>:2379, false
curl -k https://localhost:6443/healthz?verbose
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
...
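As a cross-check, the same health endpoints can be probed through kubectl, which handles authentication and TLS for you (a minimal sketch; output abbreviated):
kubectl get --raw='/healthz?verbose'
kubectl get --raw='/readyz?verbose'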
- Using `kubeadm`, upgrade a cluster to the latest version
Answer
If the kubeadm package is held, unhold it:
sudo apt-mark unhold kubeadm
Upgrade the kubeadm version:
sudo apt-get install --only-upgrade kubeadm
Plan the upgrade:
sudo kubeadm upgrade plan
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     1 x v1.19.0   v1.20.2

Upgrade to the latest stable version:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.19.7   v1.20.2
kube-controller-manager   v1.19.7   v1.20.2
kube-scheduler            v1.19.7   v1.20.2
kube-proxy                v1.19.7   v1.20.2
CoreDNS                   1.7.0     1.7.0
etcd                      3.4.9-1   3.4.13-0
Upgrade the cluster:
sudo kubeadm upgrade apply v1.20.2
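Before upgrading the kubelet it is common to drain the node first (a sketch; <node-name> is a placeholder):
kubectl drain <node-name> --ignore-daemonsets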
Upgrade Kubelet:
sudo apt-get install --only-upgrade kubelet
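After the package upgrade, the kubelet service typically needs to be reloaded and restarted, and the node uncordoned again (based on the standard kubeadm upgrade flow; <node-name> is a placeholder):
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon <node-name>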
- Take a backup of etcd
- Verify the etcd backup has been successful
- Restore the backup back to the cluster
Answer
Take a snapshot of etcd:
ETCDCTL_API=3 etcdctl snapshot save snapshot.db --endpoints=https://127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key
Verify the snapshot:
sudo ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshot.db
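The status output is a small table shaped roughly like this (values are illustrative):
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| fe01cf57 |       10 |          7 | 2.1 MB     |
+----------+----------+------------+------------+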
Perform a restore:
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db
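By default the restore is written to a new data directory under the current working directory. On a kubeadm cluster a common approach (a sketch; the target directory is an assumption) is to restore into a dedicated directory and point the etcd static pod at it:
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --data-dir /var/lib/etcd-from-backup
Then edit /etc/kubernetes/manifests/etcd.yaml so the etcd data hostPath volume uses /var/lib/etcd-from-backup; the kubelet recreates the etcd pod with the restored data.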