CKAD Exam

===========================================================================================

get contexts

alias k=kubectl
k config get-contexts
k config current-context
echo " k config current-context" > /opt/course/1/context_default_kubectl.sh
chmod +x /opt/course/1/context_default_kubectl.sh
bash /opt/course/1/context_default_kubectl.sh
cat .kube/config | grep current
cat .kube/config | grep current | awk '{ print $2 }'
echo 'cat ~/.kube/config | grep current | sed "s/current-context: //g"' > /opt/course/context_default_no_kubectl.sh
chmod +x /opt/course/context_default_no_kubectl.sh
/opt/course/context_default_no_kubectl.sh
bash /opt/course/context_default_no_kubectl.sh

===========================================================================================

Single Pod of image httpd:2.4.41-alpine in ns default. Pod name is pod1 and container is pod1-container. The Pod must be scheduled on the master node, no new labels to be added.

alias ksn='k config set-context --current --namespace'
ksn default #changed to default ns
alias kr='k run --dry-run=client -o yaml --image'
# get the master node name via k get nodes, add that nodeName to the pod spec and rename the container to pod1-container
kr httpd:2.4.41-alpine pod1 | sed 's/    name: pod1/    name: pod1-container/g' | sed -e "s/spec:/spec:\n  nodeName: $(k get nodes | grep master | awk '{ print $1 }')/g" > 2.yaml
alias kaf='k apply -f'
kaf 2.yaml
alias kgp='k get pod'
kgp
kgp -o wide
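For reference, a rough sketch of what 2.yaml should look like after the sed edits (the node name cluster1-master1 is an assumption, use whatever k get nodes shows as the master):

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod1
  name: pod1
spec:
  nodeName: cluster1-master1   # assumption: replace with your master node name
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always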

===========================================================================================

Use kubectl config use-context k8s-c1-H. In namespace project-c13 there are 2 pods named o3db-*; scale them down to 1 replica.

k config use-context k8s-c1-H
ksn project-c13
 
k get sts # sts = statefulset; o3db currently has more than 1 replica
k scale sts o3db --replicas 1
k get sts # now shows 1

=========================================================================================== Screenshot 2023-04-27 at 8 31 27 AM

k config use-context k8s-c1-H
ksn default
kr nginx:1.16.1-alpine ready-if-service-ready > 4.yaml
# add the following for liveness and readiness probe
livenessProbe:
  exec:
    command: 
    - echo
    - "true"
readinessProbe:
  exec:
    command: 
    - /bin/sh
    - -c
    - "curl http://service-am-i-ready:80 "
k get svc # to get all service
k describe svc service-am-i-ready
alias kt='k run -it --rm tmp --image=nginx:alpine --restart=Never '
alias > a
kt -- wget -T2 -O- http://service-am-i-ready:80
kr nginx:1.16.1-alpine cross-server-ready --labels=id=cross-server-ready > 4.1.yaml
kaf 4.1.yaml

If you want to do it in one line:
alias ka='k apply -f'
kr nginx:1.16.1-alpine cross-server-ready --labels=id=cross-server-ready | ka -

=========================================================================================== Screenshot 2023-04-27 at 8 56 56 AM

kubectl config use-context k8s-c1-H
alias kgpsbts='k get pods -A --sort-by=.metadata.creationTimestamp'
echo 'kubectl get pods -A --sort-by=.metadata.creationTimestamp' > /opt/course/5/find_pods.sh
echo 'kubectl get pods -A --sort-by=.metadata.uid' > /opt/course/5/find_pods_uid.sh
sh /opt/course/5/find_pods.sh
sh /opt/course/5/find_pods_uid.sh

===========================================================================================

Screenshot 2023-04-27 at 9 05 57 AM

kubectl config use-context k8s-c1-H
apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
      
apply the above file using k apply -f 6.yaml
ksn project-tiger
k get pv,pvc
k create deployment safari --image=httpd:2.4.41-alpine --dry-run=client -o yaml  > 6.1.yaml
update the following in the file
spec:
  volumes:
    - name: safari-pv-volume
      persistentVolumeClaim:
        claimName: safari-pvc
        
        
Inside container
      volumeMounts:
        - mountPath: "/tmp/safari-data"
          name: safari-pv-volume
          
alias kgd='k get deployment'
k get deployment safari
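Putting the two fragments together, the pod template in 6.1.yaml should roughly end up like this (the container name is whatever kubectl generated, httpd is assumed here):

spec:
  template:
    spec:
      volumes:
      - name: safari-pv-volume
        persistentVolumeClaim:
          claimName: safari-pvc
      containers:
      - image: httpd:2.4.41-alpine
        name: httpd                      # assumption: keep the generated name from 6.1.yaml
        volumeMounts:
        - mountPath: "/tmp/safari-data"
          name: safari-pv-volume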

=========================================================================================== Screenshot 2023-04-27 at 9 23 11 AM

kubectl config use-context k8s-c1-H
k get nodes 
echo "k top get nodes" > /opt/course/7/node.sh
echo "k top get nodes --containers=true" > /opt/course/7/pod.sh
sh /opt/course/7/node.sh
sh /opt/course/7/pod.sh

=========================================================================================== Screenshot 2023-04-27 at 9 55 50 AM

kubectl config use-context k8s-c1-H
ssh cluster1-master1
crictl pods

Screenshot 2023-04-27 at 10 11 53 AM

exit

cat << EOF > /opt/course/8/master-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns

EOF

=========================================================================================== Screenshot 2023-04-27 at 10 46 18 AM

kubectl config use-context k8s-c2-AC
ssh cluster2-master1
pwd
cp /etc/kubernetes/manifests/kube-scheduler.yaml ./
rm /etc/kubernetes/manifests/kube-scheduler.yaml
kr httpd:2.4-alpine manual-schedule > 9.yaml
kaf 9.yaml
kgp 
add the following in 9.yaml
nodeName: cluster2-master1
then do the following
k replace -f 9.yaml --force
kgp
mv ./kube-scheduler.yaml /etc/kubernetes/manifests/kube-scheduler.yaml
kr httpd:2.4-alpine manual-schedule2 > 9.1.yaml
kaf 9.1.yaml
kgp -w
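A sketch of where nodeName goes in 9.yaml (directly under the pod spec, next to containers):

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: manual-schedule
  name: manual-schedule
spec:
  nodeName: cluster2-master1   # binds the pod to this node without the scheduler
  containers:
  - image: httpd:2.4-alpine
    name: manual-schedule
    resources: {}
  restartPolicy: Always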

=========================================================================================== Screenshot 2023-04-27 at 10 58 50 AM

kubectl config use-context k8s-c1-H
ksn project-hamster
k create sa processor
k create role processor -n project-hamster --verb=create --resource=configmaps,secrets
k create rolebinding processor -n project-hamster --role=processor --serviceaccount=project-hamster:processor
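Optionally verify the binding with kubectl auth can-i, impersonating the ServiceAccount:

k auth can-i create configmaps -n project-hamster --as system:serviceaccount:project-hamster:processor   # yes
k auth can-i create secrets -n project-hamster --as system:serviceaccount:project-hamster:processor      # yes
k auth can-i delete secrets -n project-hamster --as system:serviceaccount:project-hamster:processor      # no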

=========================================================================================== Screenshot 2023-04-27 at 11 04 23 AM

kubectl config use-context k8s-c1-H
ksn project-tiger
cat << EOF > 11.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
  labels:
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:
      id: ds-important
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: ds-important
        image: httpd:2.4-alpine
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
EOF
kaf 11.yaml
kgp
kgp -A -o wide | grep ds

=========================================================================================== Screenshot 2023-04-27 at 11 19 12 AM

kubectl config use-context k8s-c1-H
ksn project-tiger
alias kcd='k create deployment -n project-tiger deploy-important --dry-run=client -o yaml --image'
kcd nginx:1.17.6-alpine > 12.yaml
k get nodes 
k describe node ####MASTER
following changes are needed
1. replicas: 3
2. add id: very-important under metadata.labels and under spec.template.metadata.labels
3. Add new image in containers with the following
- image: kubernetes/pause
  name: container2
4. add affinity details for pod to node under spec.template.spec
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: id
            operator: In
            values:
            - very-important
        topologyKey: kubernetes.io/hostname
kaf 12.yaml
k get deployment deploy-important
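Quick check, assuming the cluster has two schedulable worker nodes: the anti-affinity allows only one of these pods per node, so two replicas should be Running on different nodes and one should stay Pending:

k -n project-tiger get pod -l id=very-important -o wide                    # two Running on different nodes, one Pending
k -n project-tiger describe pod -l id=very-important | grep -A4 Events    # the Pending pod reports the affinity conflict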

=========================================================================================== Screenshot 2023-04-27 at 7 34 44 PM

kubectl config use-context k8s-c1-H
ksn default
kr nginx:1.17.6-alpine multi-container-playground -n default > 13.yaml
or we can do the following
cat << EOF > 13.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-playground
  name: multi-container-playground
  namespace: default
spec:
  volumes:
  - name: svol
    emptyDir: {}
  containers:
  - image: nginx:1.17.6-alpine
    name: c1
    volumeMounts:
    - name: svol
      mountPath: /your/vol/path
    env:
      # pass node name to the environment
      - name: MY_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
  - image: busybox:1.31.1
    name: c2
    args:
    - /bin/sh
    - -c
    - while true; do date >> /your/vol/path/date.log; sleep 1;done
    volumeMounts:
    - name: svol
      mountPath: /your/vol/path
  - image: busybox:1.31.1
    name: c3
    args:
    - /bin/sh
    - -c
    - tail -f /your/vol/path/date.log
    volumeMounts:
    - name: svol
      mountPath: /your/vol/path
EOF
kaf 13.yaml

k logs -n default multi-container-playground -c c3 --follow

=========================================================================================== Screenshot 2023-04-27 at 7 57 17 PM

kubectl config use-context k8s-c1-H
k get nodes
ssh cluster1-master1
ls /etc/kubernetes/manifests/kube-apiserver.yaml
k describe node cluster1-master1 | grep -i cidr
cat << EOF > /opt/course/14/cluster-info
1: 1
2: 2
3: 10.244.0.0/12
4: weave.net
5: -cluster1-worker1
EOF

=========================================================================================== Screenshot 2023-04-27 at 8 11 28 PM

kubectl config use-context k8s-c2-AC
echo "kubectl get events -A --sort-by=.metadata.creationTimestamp" > /opt/course/15/cluster_events.sh
sh /opt/course/15/cluster_events.sh
k get po -o wide -A | grep proxy | grep cluster2-worker1
k delete pod kube-proxy-qg9td --force -n kube-system
ssh cluster2-worker1
crictl ps
crictl stop containerid
crictl rm containerid
exit

sh /opt/course/15/cluster_events.sh | grep proxy > /opt/course/15/container_kill.log

=========================================================================================== Screenshot 2023-04-27 at 8 22 08 PM

kubectl config use-context k8s-c1-H
k create ns cka-master
ksn cka-master
kubectl api-resources --verbs=list --namespaced -o name > /opt/course/16/resources.txt
# | xargs -n 1 kubectl get --show-kind --ignore-not-found -n cka-master
for x in $(k get ns | grep project | awk '{ print $1 }')
do
  echo $x $(k get roles -n $x --no-headers | wc -l)
done
echo "project-c14 300" > /opt/course/16/crowded-namespace.txt

=========================================================================================== Screenshot 2023-04-27 at 8 32 49 PM

kubectl config use-context k8s-c1-H
k run --dry-run=client -o yaml --image httpd:2.4.41-alpine  tigers-reunite --labels=pod=container,container=pod > 17.yaml 
kaf 17.yaml
k get pod -o wide
ssh cluster1-worker1
crictl ps
echo 238u402309qu04 > /opt/course/17/pod-container.txt
k logs tigers-reunite > /opt/course/17/pod-container.log

=========================================================================================== Screenshot 2023-04-27 at 8 39 56 PM

kubectl config use-context k8s-c3-CCC
echo "kubelet service path was wrong in node cluster3-worker1 "

=========================================================================================== Screenshot 2023-04-27 at 8 42 30 PM

kubectl config use-context k8s-c3-CCC
 

=========================================================================================== Screenshot 2023-04-27 at 8 45 21 PM


=========================================================================================== Screenshot 2023-04-27 at 8 45 50 PM

kubectl config use-context k8s-c3-CCC
kr nginx:1.16-alpine my-static-pod -n default > 21.yaml
add in resources
resources:
  requests:
    cpu: 10m
    memory: 20Mi
add
nodeName: cluster3-master1
THEN RUN THE FOLLOWING
kaf 21.yaml
k expose pod my-static-pod-cluster3-master1 --port=80 -n default --name=static-pod-service --type=NodePort
k get svc
k describe svc static-pod-service
k get nodes -o wide
use the internal ip to connect
curl 192.168.100.31:31716
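Note: a real static pod is created by the kubelet from a manifest file on the node rather than via kubectl apply. A sketch of that route, assuming root ssh access as in the simulator and the default kubeadm manifest path:

scp 21.yaml cluster3-master1:/root/
ssh cluster3-master1
mv /root/21.yaml /etc/kubernetes/manifests/   # kubelet picks it up as a static pod
exit
kgp   # pod shows up as my-static-pod-cluster3-master1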

=========================================================================================== Q22 Screenshot 2023-04-27 at 10 06 40 PM

kubectl config use-context k8s-c2-AC
ssh cluster2-master1
find /etc/kubernetes/pki

cat /etc/kubernetes/pki/apiserver.crt
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep -i valid
echo "Nov 6 03:25:03 2023 GMT" > /opt/course/22/kubeadm-renew-certs.sh
kubeadm certs check-expiration
echo "kubeadm certs renew apiserver" >> /opt/course/22/kubeadm-renew-certs.sh

=========================================================================================== Screenshot 2023-04-27 at 10 09 48 PM

kubectl config use-context k8s-c2-AC
ssh cluster2-worker1
systemctl status kubelet
ls /var/lib/kubelet/pki
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep -i "extended\|issuer" -C5
echo "Issuer: CN = kubernetes, Extended Key Usage: TLS Web Client Authentication" > /opt/course/23/certificate-info.txt

=========================================================================================== Screenshot 2023-04-27 at 10 17 38 PM

kubectl config use-context k8s-c1-H
ksn project-snake
kgp --show-labels
k get svc -n project-snake -o wide
cat << EOF > 24.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: db1
      ports:
        - protocol: TCP
          port: 1111
    - to:
        - podSelector:
            matchLabels:
              app: db2
      ports:
        - protocol: TCP
          port: 2222
EOF
kaf 24.yaml
k exec backend-0 -- curl -m 4 10.44.0.10:1111  # <--- WORKS
k exec backend-0 -- curl -m 4 10.44.0.10:2222  # <--- WORKS
k exec backend-0 -- curl -m 4 10.44.0.10:3333  # <--- DOES NOT WORK

=========================================================================================== Screenshot 2023-04-27 at 10 37 20 PM


===========================================================================================


===========================================================================================



CKAD Simulator Kubernetes 1.26

Pre Setup

Once you've gained access to your terminal it might be wise to spend ~1 minute to set up your environment. You could set these:
Vim
The following settings will already be configured in your real exam environment in ~/.vimrc. But it can never hurt to be able to type these
down:
More setup suggestions are in the tips section.

Question 1 | Namespaces

Task weight: 1%
The DevOps team would like to get the list of all Namespaces in the cluster. Get the list and save it to /opt/course/1/namespaces.
Answer:
The content should then look like:

Question 2 | Pods

Task weight: 2%
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be
named pod1-container.
alias k=kubectl # will already be pre-configured
export do="--dry-run=client -o yaml" # k create deploy nginx --image=nginx $do
export now="--force --grace-period 0" # k delete pod x $now
set tabstop=2
set expandtab
set shiftwidth=2
k get ns > /opt/course/1/namespaces
# /opt/course/1/namespaces
NAME STATUS AGE
default Active 150m
earth Active 76m
jupiter Active 76m
kube-public Active 150m
kube-system Active 150m
mars Active 76m
mercury Active 76m
moon Active 76m
neptune Active 76m
pluto Active 76m
saturn Active 76m
shell-intern Active 76m
sun Active 76m
venus Active 76m
Your manager would like to run a command manually on occasion to output the status of that exact Pod. Please write a command that does
this into /opt/course/2/pod1-status-command.sh. The command should use kubectl.
Answer:
Change the container name in 2.yaml to pod1-container:
Then run:
Next create the requested command:
The content of the command file could look like:
Another solution would be using jsonpath:
To test the command:

Question 3 | Job

Task weight: 2%
Team Neptune needs a Job template located at /opt/course/3/job.yaml. This Job should run image busybox:1.31.0 and execute sleep 2
&& echo done. It should be in namespace neptune, run a total of 3 times and should execute 2 runs in parallel.
Start the Job and check its history. Each pod created by the Job should have the label id: awesome-job. The job should be named neb-new-
job and the container neb-new-job-container.
k run # help
# check the export on the very top of this document so we can use $do
k run pod1 --image=httpd:2.4.41-alpine $do > 2.yaml
vim 2.yaml
# 2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container  # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
➜ k create -f 2.yaml
pod/pod1 created
➜ k get pod
NAME READY STATUS RESTARTS AGE
pod1 0/1 ContainerCreating 0 6s
➜ k get pod
NAME READY STATUS RESTARTS AGE
pod1 1/1 Running 0 30s
vim /opt/course/2/pod1-status-command.sh
# /opt/course/2/pod1-status-command.sh
kubectl -n default describe pod pod1 | grep -i status:
# /opt/course/2/pod1-status-command.sh
kubectl -n default get pod pod1 -o jsonpath="{.status.phase}"
➜ sh /opt/course/2/pod1-status-command.sh
Running
Answer:
Make the required changes in the yaml:
Then to create it:
Check Job and Pods , you should see two running parallel at most but three in total:
Check history:
k -n neptune create job -h
# check the export on the very top of this document so we can use $do
k -n neptune create job neb-new-job --image=busybox:1.31.0 $do > /opt/course/3/job.yaml -- sh -c "sleep 2 && echo done"
vim /opt/course/3/job.yaml
# /opt/course/3/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: neb-new-job
  namespace: neptune       # add
spec:
  completions: 3           # add
  parallelism: 2           # add
  template:
    metadata:
      creationTimestamp: null
      labels:              # add
        id: awesome-job    # add
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 2 && echo done
        image: busybox:1.31.0
        name: neb-new-job-container  # update
        resources: {}
      restartPolicy: Never
status: {}
k -f /opt/course/3/job.yaml create # namespace already set in yaml
➜ k -n neptune get pod,job | grep neb-new-job
pod/neb-new-job-jhq2g 0/1 ContainerCreating 0 4s
pod/neb-new-job-vf6ts 0/1 ContainerCreating 0 4s
job.batch/neb-new-job 0/3 4s 5s
➜ k -n neptune get pod,job | grep neb-new-job
pod/neb-new-job-gm8sz 0/1 ContainerCreating 0 0s
pod/neb-new-job-jhq2g 0/1 Completed 0 10s
pod/neb-new-job-vf6ts 1/1 Running 0 10s
job.batch/neb-new-job 1/3 10s 11s
➜ k -n neptune get pod,job | grep neb-new-job
pod/neb-new-job-gm8sz 0/1 ContainerCreating 0 5s
pod/neb-new-job-jhq2g 0/1 Completed 0 15s
pod/neb-new-job-vf6ts 0/1 Completed 0 15s
job.batch/neb-new-job 2/3 15s 16s
➜ k -n neptune get pod,job | grep neb-new-job
pod/neb-new-job-gm8sz 0/1 Completed 0 12s
pod/neb-new-job-jhq2g 0/1 Completed 0 22s
pod/neb-new-job-vf6ts 0/1 Completed 0 22s
job.batch/neb-new-job 3/3 21s 23s
➜ k -n neptune describe job neb-new-job
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m52s job-controller Created pod: neb-new-job-jhq2g
Normal SuccessfulCreate 2m52s job-controller Created pod: neb-new-job-vf6ts
Normal SuccessfulCreate 2m42s job-controller Created pod: neb-new-job-gm8sz
At the age column we can see that two pods run parallel and the third one after that. Just as it was required in the task.

Question 4 | Helm Management

Task weight: 5%
Team Mercury asked you to perform some operations using Helm, all in Namespace mercury:
1. Delete release internal-issue-report-apiv1
2. Upgrade release internal-issue-report-apiv2 to any newer version of chart bitnami/nginx available
3. Install a new release internal-issue-report-apache of chart bitnami/apache. The Deployment should have two replicas, set these via
Helm-values during install
4. There seems to be a broken release, stuck in pending-install state. Find it and delete it
Answer:
Helm Chart : Kubernetes YAML template-files combined into a single package, Values allow customisation
Helm Release : Installed instance of a Chart
Helm Values : Allow to customise the YAML template-files in a Chart when creating a Release
1.
First we should delete the required release:
2.
Next we need to upgrade a release, for this we could first list the charts of the repo:
Here we see that a newer chart version 9.5.2 is available. But the task only requires us to upgrade to any newer chart version available, so
we can simply run:
➜ helm -n mercury ls
NAME NAMESPACE STATUS CHART APP VERSION
internal-issue-report-apiv1 mercury deployed nginx-9.5.0 1.21.
internal-issue-report-apiv2 mercury deployed nginx-9.5.0 1.21.
internal-issue-report-app mercury deployed nginx-9.5.0 1.21.
➜ helm -n mercury uninstall internal-issue-report-apiv1
release "internal-issue-report-apiv1" uninstalled
➜ helm -n mercury ls
NAME NAMESPACE STATUS CHART APP VERSION
internal-issue-report-apiv2 mercury deployed nginx-9.5.0 1.21.
internal-issue-report-app mercury deployed nginx-9.5.0 1.21.
➜ helm repo list
NAME URL
bitnami https://charts.bitnami.com/bitnami
➜ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
➜ helm search repo nginx
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/nginx 9.5.2 1.21.1 Chart for the nginx server ...
➜ helm -n mercury upgrade internal-issue-report-apiv2 bitnami/nginx
Release "internal-issue-report-apiv2" has been upgraded. Happy Helming!
NAME: internal-issue-report-apiv2
LAST DEPLOYED: Tue Aug 31 17:40:42 2021
NAMESPACE: mercury
STATUS: deployed
REVISION: 2
TEST SUITE: None
...
➜ helm -n mercury ls
NAME NAMESPACE STATUS CHART APP VERSION
internal-issue-report-apiv2 mercury deployed nginx-9.5.2 1.21.
internal-issue-report-app mercury deployed nginx-9.5.0 1.21.
Looking good!
INFO: Also check out helm rollback for undoing a helm rollout/upgrade
3.
Now we're asked to install a new release, with a customised values setting. For this we first list all possible value settings for the chart, we can
do this via:
Huge list, if we search in it we should find the setting replicaCount: 1 on top level. This means we can run:
If we would also need to set a value on a deeper level, for example image.debug, we could run:
Install done, let's verify what we did:
We see a healthy deployment with two replicas!
4.
By default releases in pending-upgrade state aren't listed, but we can show all to find and delete the broken release:
Thank you Helm for making our lives easier! (Till something breaks)

Question 5 | ServiceAccount, Secret

Task weight: 3%
Team Neptune has its own ServiceAccount named neptune-sa-v2 in Namespace neptune. A coworker needs the token from the Secret that
belongs to that ServiceAccount. Write the base64 decoded token to file /opt/course/5/token.
Answer:
Since K8s 1.24, Secrets won't be created automatically for ServiceAccounts any longer. But it's still possible to create a Secret manually and
attach it to a ServiceAccount by setting the correct annotation on the Secret. This was done for this task.
helm show values bitnami/apache # will show a long list of all possible value-settings
helm show values bitnami/apache | yq e # parse yaml and show with colors
➜ helm -n mercury install internal-issue-report-apache bitnami/apache --set replicaCount=2
NAME: internal-issue-report-apache
LAST DEPLOYED: Tue Aug 31 17:57:23 2021
NAMESPACE: mercury
STATUS: deployed
REVISION: 1
TEST SUITE: None
...
helm -n mercury install internal-issue-report-apache bitnami/apache \
--set replicaCount=2 \
--set image.debug=true
➜ helm -n mercury ls
NAME NAMESPACE STATUS CHART APP VERSION
internal-issue-report-apache mercury deployed apache-8.6.3 2.4.
...
➜ k -n mercury get deploy internal-issue-report-apache
NAME READY UP-TO-DATE AVAILABLE AGE
internal-issue-report-apache 2/2 2 2 96s
➜ helm -n mercury ls -a
NAME NAMESPACE STATUS CHART APP VERSION
internal-issue-report-apache mercury deployed apache-8.6.3 2.4.
internal-issue-report-apiv2 mercury deployed nginx-9.5.2 1.21.
internal-issue-report-app mercury deployed nginx-9.5.0 1.21.
internal-issue-report-daniel mercury pending-install nginx-9.5.0 1.21.
➜ helm -n mercury uninstall internal-issue-report-daniel
release "internal-issue-report-daniel" uninstalled
If a Secret belongs to a ServiceAccount, it'll have the annotation kubernetes.io/service-account.name. Here the Secret we're looking for is
neptune-secret-1.
This shows the base64 encoded token. To get the decoded one we could pipe it manually through base64 -d or we simply do:
Copy the token (part under token:) and paste it using vim.
File /opt/course/5/token should contain the token:
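A one-liner sketch of the manual base64 -d route, using jsonpath (Secret name neptune-secret-1 as found above):

k -n neptune get secret neptune-secret-1 -o jsonpath='{.data.token}' | base64 -d > /opt/course/5/token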

Question 6 | ReadinessProbe

Task weight: 7%
Create a single Pod named pod6 in Namespace default of image busybox:1.31.0. The Pod should have a readiness-probe executing cat
/tmp/ready. It should initially wait 5 and periodically wait 10 seconds. This will set the container ready only if the file /tmp/ready exists.
The Pod should run the command touch /tmp/ready && sleep 1d, which will create the necessary file to be ready and then idles. Create the
Pod and confirm it starts.
k -n neptune get sa # get overview
k -n neptune get secrets # shows all secrets of namespace
k -n neptune get secrets -oyaml | grep annotations -A 1 # shows secrets with first annotation
➜ k -n neptune get secret neptune-secret-1 -o yaml
apiVersion: v1
data:
...
token:
ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNkltNWFaRmRxWkRKMmFHTnZRM0JxV0haT1IxZzFiM3BJY201SlowaEhOV3hUWmt3elFuRmFhVEZhZDJNaWZ
RLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOX
VZVzFsYzNCaFkyVWlPaUp1WlhCMGRXNWxJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbTVsY
0hSMWJtVXRjMkV0ZGpJdGRHOXJaVzR0Wm5FNU1tb2lMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIz
VnVkQzV1WVcxbElqb2libVZ3ZEhWdVpTMXpZUzEyTWlJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk
2ZFc1MExuVnBaQ0k2SWpZMlltUmpOak0yTFRKbFl6TXROREpoWkMwNE9HRTFMV0ZoWXpGbFpqWmxPVFpsTlNJc0luTjFZaUk2SW5ONWMzUmxiVHB6WlhKMm
FXTmxZV05qYjNWdWREcHVaWEIwZFc1bE9tNWxjSFIxYm1VdGMyRXRkaklpZlEuVllnYm9NNENUZDBwZENKNzh3alV3bXRhbGgtMnZzS2pBTnlQc2gtNmd1R
XdPdFdFcTVGYnc1WkhQdHZBZHJMbFB6cE9IRWJBZTRlVU05NUJSR1diWUlkd2p1Tjk1SjBENFJORmtWVXQ0OHR3b2FrUlY3aC1hUHV3c1FYSGhaWnp5NHlp
bUZIRzlVZm1zazVZcjRSVmNHNm4xMzd5LUZIMDhLOHpaaklQQXNLRHFOQlF0eGctbFp2d1ZNaTZ2aUlocnJ6QVFzME1CT1Y4Mk9KWUd5Mm8tV1FWYzBVVWF
uQ2Y5NFkzZ1QwWVRpcVF2Y3pZTXM2bno5dXQtWGd3aXRyQlk2VGo5QmdQcHJBOWtfajVxRXhfTFVVWlVwUEFpRU43T3pka0pzSThjdHRoMTBseXBJMUFlRn
I0M3Q2QUx5clFvQk0zOWFiRGZxM0Zrc1Itb2NfV
kind: Secret
...
➜ k -n neptune describe secret neptune-secret-1
...
Data
====
token:
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5aZFdqZDJ2aGNvQ3BqWHZOR1g1b3pIcm5JZ0hHNWxTZkwzQnFaaTFad2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3Nl
cnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJuZXB0dW5lIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWN
jb3VudC9zZWNyZXQubmFtZSI6Im5lcHR1bmUtc2EtdjItdG9rZW4tZnE5MmoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb
VudC5uYW1lIjoibmVwdHVuZS1zYS12MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY2YmRjNjM2LTJlY
zMtNDJhZC04OGE1LWFhYzFlZjZlOTZlNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpuZXB0dW5lOm5lcHR1bmUtc2EtdjIifQ.VYgboM4CTd0pd
CJ78wjUwmtalh-2vsKjANyPsh-6guEwOtWEq5Fbw5ZHPtvAdrLlPzpOHEbAe4eUM95BRGWbYIdwjuN95J0D4RNFkVUt48twoakRV7h-
aPuwsQXHhZZzy4yimFHG9Ufmsk5Yr4RVcG6n137y-FH08K8zZjIPAsKDqNBQtxg-lZvwVMi6viIhrrzAQs0MBOV82OJYGy2o-
WQVc0UUanCf94Y3gT0YTiqQvczYMs6nz9ut-
XgwitrBY6Tj9BgPprA9k_j5qEx_LUUZUpPAiEN7OzdkJsI8ctth10lypI1AeFr43t6ALyrQoBM39abDfq3FksR-oc_WMw
ca.crt: 1066 bytes
namespace: 7 bytes
vim /opt/course/5/token
# /opt/course/5/token
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5aZFdqZDJ2aGNvQ3BqWHZOR1g1b3pIcm5JZ0hHNWxTZkwzQnFaaTFad2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3Nl
cnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJuZXB0dW5lIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWN
jb3VudC9zZWNyZXQubmFtZSI6Im5lcHR1bmUtc2EtdjItdG9rZW4tZnE5MmoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb
VudC5uYW1lIjoibmVwdHVuZS1zYS12MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY2YmRjNjM2LTJlY
zMtNDJhZC04OGE1LWFhYzFlZjZlOTZlNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpuZXB0dW5lOm5lcHR1bmUtc2EtdjIifQ.VYgboM4CTd0pd
CJ78wjUwmtalh-2vsKjANyPsh-6guEwOtWEq5Fbw5ZHPtvAdrLlPzpOHEbAe4eUM95BRGWbYIdwjuN95J0D4RNFkVUt48twoakRV7h-
aPuwsQXHhZZzy4yimFHG9Ufmsk5Yr4RVcG6n137y-FH08K8zZjIPAsKDqNBQtxg-lZvwVMi6viIhrrzAQs0MBOV82OJYGy2o-
WQVc0UUanCf94Y3gT0YTiqQvczYMs6nz9ut-
XgwitrBY6Tj9BgPprA9k_j5qEx_LUUZUpPAiEN7OzdkJsI8ctth10lypI1AeFr43t6ALyrQoBM39abDfq3FksR-oc_WMw
Answer:
Search for a readiness-probe example on https://kubernetes.io/docs, then copy and alter the relevant section for the task:
Then:
Running k get pod pod6 we should see the Pod being created and becoming ready:
We see that the Pod is finally ready.

Question 7 | Pods, Namespaces

Task weight: 4%
The board of Team Neptune decided to take over control of one e-commerce webserver from Team Saturn. The administrator who once
setup this webserver is not part of the organisation any longer. All information you could get was that the e-commerce system is called my-
happy-shop.
Search for the correct Pod in Namespace saturn and move it to Namespace neptune. It doesn't matter if you shut it down and spin it up
again, it probably hasn't any customers anyways.
Answer:
Let's see all those Pods :
k run pod6 --image=busybox:1.31.0 $do --command -- sh -c "touch /tmp/ready && sleep 1d" > 6.yaml
vim 6.yaml
# 6.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - command:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    resources: {}
    readinessProbe:           # add
      exec:                   # add
        command:              # add
        - sh                  # add
        - -c                  # add
        - cat /tmp/ready      # add
      initialDelaySeconds: 5  # add
      periodSeconds: 10       # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
k -f 6.yaml create
➜ k get pod pod6
NAME READY STATUS RESTARTS AGE
pod6 0/1 ContainerCreating 0 2s
➜ k get pod pod6
NAME READY STATUS RESTARTS AGE
pod6 0/1 Running 0 7s
➜ k get pod pod6
NAME READY STATUS RESTARTS AGE
pod6 1/1 Running 0 15s
The Pod names don't reveal any information. We assume the Pod we are searching has a label or annotation with the name my-happy-shop,
so we search for it:
We see the webserver we're looking for is webserver-sat-003.
Change the Namespace to neptune, also remove the status: section, the token volume, the token volumeMount and the nodeName, else
the new Pod won't start. The final file could look as clean like this:
Then we execute:
It seems the server is running in Namespace neptune, so we can do:
Let's confirm only one is running:
This should list only one pod called webserver-sat-003 in Namespace neptune, status running.

Question 8 | Deployment, Rollouts

Task weight: 4%
There is an existing Deployment named api-new-c32 in Namespace neptune. A developer did make an update to the Deployment but the
updated version never came online. Check the Deployment history and find a revision that works, then rollback to it. Could you tell Team
Neptune what the error was so it doesn't happen again?
Answer:
➜ k -n saturn get pod
NAME READY STATUS RESTARTS AGE
webserver-sat-001 1/1 Running 0 111m
webserver-sat-002 1/1 Running 0 111m
webserver-sat-003 1/1 Running 0 111m
webserver-sat-004 1/1 Running 0 111m
webserver-sat-005 1/1 Running 0 111m
webserver-sat-006 1/1 Running 0 111m
k -n saturn describe pod # describe all pods, then manually look for it
# or do some filtering like this
k -n saturn get pod -o yaml | grep my-happy-shop -A10
k -n saturn get pod webserver-sat-003 -o yaml > 7_webserver-sat-003.yaml # export
vim 7_webserver-sat-003.yaml
# 7_webserver-sat-003.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    description: this is the server for the E-Commerce System my-happy-shop
  labels:
    id: webserver-sat-003
  name: webserver-sat-003
  namespace: neptune # new namespace here
spec:
  containers:
  - image: nginx:1.16.1-alpine
    imagePullPolicy: IfNotPresent
    name: webserver-sat
  restartPolicy: Always
k -n neptune create -f 7_webserver-sat-003.yaml
➜ k -n neptune get pod | grep webserver
webserver-sat-003 1/1 Running 0 22s
k -n saturn delete pod webserver-sat-003 --force --grace-period=0
➜ k get pod -A | grep webserver-sat-
neptune webserver-sat-003 1/1 Running 0 6s
k -n neptune get deploy # overview
k -n neptune rollout -h
k -n neptune rollout history -h
We see 5 revisions, let's check Pod and Deployment status:
Let's check the pod for errors:
Someone seems to have added a new image with a spelling mistake in the name ngnix:1.16.3, that's the reason we can tell Team Neptune!
Now let's revert to the previous version:
Does this one work?
Yes! All up-to-date and available.
Also a fast way to get an overview of the ReplicaSets of a Deployment and their images could be done with:

Question 9 | Pod -> Deployment

Task weight: 5%
In Namespace pluto there is single Pod named holy-api. It has been working okay for a while now but Team Pluto needs it to be more
reliable. Convert the Pod into a Deployment with 3 replicas and name holy-api. The raw Pod template file is available at
/opt/course/9/holy-api-pod.yaml.
In addition, the new Deployment should set allowPrivilegeEscalation: false and privileged: false for the security context on
container level.
Please create the Deployment and save its yaml under /opt/course/9/holy-api-deployment.yaml.
Answer
There are multiple ways to do this, one is to copy an Deployment example from https://kubernetes.io/docs and then merge it with the existing
Pod yaml. That's what we will do now:
Now copy/use a Deployment example yaml and put the Pod's metadata: and spec: into the Deployment's template: section:
➜ k -n neptune rollout history deploy api-new-c32
deployment.extensions/api-new-c32
REVISION CHANGE-CAUSE
1 <none>
2 kubectl edit deployment api-new-c32 --namespace=neptune
3 kubectl edit deployment api-new-c32 --namespace=neptune
4 kubectl edit deployment api-new-c32 --namespace=neptune
5 kubectl edit deployment api-new-c32 --namespace=neptune
➜ k -n neptune get deploy,pod | grep api-new-c
deployment.extensions/api-new-c32 3/3 1 3 141m
pod/api-new-c32-65d998785d-jtmqq 1/1 Running 0 141m
pod/api-new-c32-686d6f6b65-mj2fp 1/1 Running 0 141m
pod/api-new-c32-6dd45bdb68-2p462 1/1 Running 0 141m
pod/api-new-c32-7d64747c87-zh648 0/1 ImagePullBackOff 0 141m
➜ k -n neptune describe pod api-new-c32-7d64747c87-zh648 | grep -i error
... Error: ImagePullBackOff
➜ k -n neptune describe pod api-new-c32-7d64747c87-zh648 | grep -i image
Image: ngnix:1.16.3
Image ID:
Reason: ImagePullBackOff
Warning Failed 4m28s (x616 over 144m) kubelet, gke-s3ef67020-28c5-45f7--default-pool-248abd4f-s010 Error:
ImagePullBackOff
k -n neptune rollout undo deploy api-new-c32
➜ k -n neptune get deploy api-new-c32
NAME READY UP-TO-DATE AVAILABLE AGE
api-new-c32 3/3 3 3 146m
k -n neptune get rs -o wide | grep api-new-c
cp /opt/course/9/holy-api-pod.yaml /opt/course/9/holy-api-deployment.yaml # make a copy!
vim /opt/course/9/holy-api-deployment.yaml
# /opt/course/9/holy-api-deployment.yaml
To indent multiple lines using vim you should set the shiftwidth using :set shiftwidth=2. Then mark multiple lines using Shift v and the
up/down keys.
To then indent the marked lines press > or < and to repeat the action press . (the dot key).
Next create the new Deployment :
and confirm it's running:
Finally delete the single Pod :

Question 10 | Service, Logs

Task weight: 4%
Team Pluto needs a new cluster internal Service. Create a ClusterIP Service named project-plt-6cc-svc in Namespace pluto. This Service
should expose a single Pod named project-plt-6cc-api of image nginx:1.17.3-alpine, create that Pod as well. The Pod should be
identified by label project: plt-6cc-api. The Service should use tcp port redirection of 3333:80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: holy-api       # name stays the same
  namespace: pluto     # important
spec:
  replicas: 3          # 3 replicas
  selector:
    matchLabels:
      id: holy-api     # set the correct selector
  template:
    # => from here down it's the same as the pod's metadata: and spec: sections
    metadata:
      labels:
        id: holy-api
      name: holy-api
    spec:
      containers:
      - env:
        - name: CACHE_KEY_1
          value: b&MTCi0=[T66RXm!jO@
        - name: CACHE_KEY_2
          value: PCAILGej5Ld@Q%{Q1=#
        - name: CACHE_KEY_3
          value: 2qz-]2OJlWDSTn_;RFQ
        image: nginx:1.17.3-alpine
        name: holy-api-container
        securityContext:                   # add
          allowPrivilegeEscalation: false  # add
          privileged: false                # add
        volumeMounts:
        - mountPath: /cache1
          name: cache-volume1
        - mountPath: /cache2
          name: cache-volume2
        - mountPath: /cache3
          name: cache-volume3
      volumes:
      - emptyDir: {}
        name: cache-volume1
      - emptyDir: {}
        name: cache-volume2
      - emptyDir: {}
        name: cache-volume3
k -f /opt/course/9/holy-api-deployment.yaml create
➜ k -n pluto get pod | grep holy
NAME READY STATUS RESTARTS AGE
holy-api 1/1 Running 0 19m
holy-api-5dbfdb4569-8qr5x 1/1 Running 0 30s
holy-api-5dbfdb4569-b5clh 1/1 Running 0 30s
holy-api-5dbfdb4569-rj2gz 1/1 Running 0 30s
k -n pluto delete pod holy-api --force --grace-period=0
➜ k -n pluto get pod,deployment | grep holy
pod/holy-api-5dbfdb4569-8qr5x 1/1 Running 0 2m4s
pod/holy-api-5dbfdb4569-b5clh 1/1 Running 0 2m4s
pod/holy-api-5dbfdb4569-rj2gz 1/1 Running 0 2m4s
deployment.extensions/holy-api 3/3 3 3 2m4s
Finally use for example curl from a temporary nginx:alpine Pod to get the response from the Service. Write the response into
/opt/course/10/service_test.html. Also check if the logs of Pod project-plt-6cc-api show the request and write those into
/opt/course/10/service_test.log.
Answer
This will create the requested Pod. In yaml it would look like this:
Next we create the service:
Expose will create a yaml where everything is already set for our case and no need to change anything:
We could also use create service but then we would need to change the yaml afterwards:
Check the Service is running:
Does the Service has one Endpoint?
k -n pluto run project-plt-6cc-api --image=nginx:1.17.3-alpine --labels project=plt-6cc-api
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    project: plt-6cc-api
  name: project-plt-6cc-api
spec:
  containers:
  - image: nginx:1.17.3-alpine
    name: project-plt-6cc-api
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
k -n pluto expose pod -h # help
k -n pluto expose pod project-plt-6cc-api --name project-plt-6cc-svc --port 3333 --target-port 80
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    project: plt-6cc-api
  name: project-plt-6cc-svc # good
  namespace: pluto          # great
spec:
  ports:
  - port: 3333              # awesome
    protocol: TCP
    targetPort: 80          # nice
  selector:
    project: plt-6cc-api    # beautiful
status:
  loadBalancer: {}
k -n pluto create service -h # help
k -n pluto create service clusterip -h #help
k -n pluto create service clusterip project-plt-6cc-svc --tcp 3333:80 $do
# now we would need to set the correct selector labels
➜ k -n pluto get pod,svc | grep 6cc
pod/project-plt-6cc-api 1/1 Running 0 9m42s
service/project-plt-6cc-svc ClusterIP 10.31.241.234 <none> 3333/TCP 2m24s
➜ k -n pluto describe svc project-plt-6cc-svc
Name: project-plt-6cc-svc
Namespace: pluto
Labels: project=plt-6cc-api
Annotations: <none>
Selector: project=plt-6cc-api
Type: ClusterIP
IP: 10.3.244.
Port: <unset> 3333/TCP
TargetPort: 80/TCP
Endpoints: 10.28.2.32:80
Session Affinity: None
Events: <none>
Or even shorter:
Yes, endpoint there! Finally we check the connection using a temporary Pod :
Great! Notice that we use the Kubernetes Namespace dns resolving (project-plt-6cc-svc.pluto) here. We could only use the Service name if
we would also spin up the temporary Pod in Namespace pluto.
And now really finally copy or pipe the html content into /opt/course/10/service_test.html.
Also the requested logs:
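A sketch of piping both results straight into the requested files (temporary pod tmp and the Service DNS name as used above):

k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -s http://project-plt-6cc-svc.pluto:3333 > /opt/course/10/service_test.html
k -n pluto logs project-plt-6cc-api > /opt/course/10/service_test.log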

Question 11 | Working with Containers

Task weight: 7%
During the last monthly meeting you mentioned your strong expertise in container technology. Now the Build&Release team of department
Sun is in need of your insight knowledge. There are files to build a container image located at /opt/course/11/image. The container will run
a Golang application which outputs information to stdout. You're asked to perform the following tasks:
NOTE: Make sure to run all commands as user k8s, for docker use sudo docker
1. Change the Dockerfile. The value of the environment variable SUN_CIPHER_ID should be set to the hardcoded value 5b9c1065-e39d-
4a43-a04a-e59bcea3e03f
2. Build the image using Docker, named registry.killer.sh:5000/sun-cipher, tagged as latest and v1-docker, push these to the
registry
3. Build the image using Podman, named registry.killer.sh:5000/sun-cipher, tagged as v1-podman, push it to the registry
4. Run a container using Podman, which keeps running in the background, named sun-cipher using image
registry.killer.sh:5000/sun-cipher:v1-podman. Run the container from k8s@terminal and not root@terminal
5. Write the logs your container sun-cipher produced into /opt/course/11/logs. Then write a list of all running Podman containers into
/opt/course/11/containers
➜ k -n pluto get ep
NAME ENDPOINTS AGE
project-plt-6cc-svc 10.28.2.32:80 84m
➜ k run tmp --restart=Never --rm --image=nginx:alpine -i -- curl http://project-plt-6cc-svc.pluto:3333
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 612 100 612 0 0 32210 0 --:--:-- --:--:-- --:--:-- 32210
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
...
# /opt/course/10/service_test.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
...
k -n pluto logs project-plt-6cc-api > /opt/course/10/service_test.log
# /opt/course/10/service_test.log
10.44.0.0 - - [22/Jan/2021:23:19:55 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.69.1" "-"
Answer
Dockerfile : list of commands from which an Image can be built
Image : binary file which includes all data/requirements to be run as a Container
Container : running instance of an Image
Registry : place where we can push/pull Images to/from
1.
First we need to change the Dockerfile to:
2.
Then we build the image using Docker:
There we go, built and pushed.
3.
Next we build the image using Podman. Here it's only required to create one tag. The usage of Podman is very similar (for most cases even
identical) to Docker:
# build container stage 1
FROM docker.io/library/golang:1.15.15-alpine3.
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o bin/app .
# app container stage 2
FROM docker.io/library/alpine:3.12.
COPY --from=0 /src/bin/app app
# CHANGE NEXT LINE
ENV SUN_CIPHER_ID=5b9c1065-e39d-4a43-a04a-e59bcea3e03f
CMD ["./app"]
➜ cd /opt/course/11/image
➜ sudo docker build -t registry.killer.sh:5000/sun-cipher:latest -t registry.killer.sh:5000/sun-cipher:v1-docker .
...
Successfully built 409fde3c5bf9
Successfully tagged registry.killer.sh:5000/sun-cipher:latest
Successfully tagged registry.killer.sh:5000/sun-cipher:v1-docker
➜ sudo docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.killer.sh:5000/sun-cipher latest 409fde3c5bf9 24 seconds ago 7.76MB
registry.killer.sh:5000/sun-cipher v1-docker 409fde3c5bf9 24 seconds ago 7.76MB
...
➜ sudo docker push registry.killer.sh:5000/sun-cipher:latest
The push refers to repository [registry.killer.sh:5000/sun-cipher]
c947fb5eba52: Pushed
33e8713114f8: Pushed
latest: digest: sha256:d216b4136a5b232b738698e826e7d12fccba9921d163b63777be23572250f23d size: 739
➜ sudo docker push registry.killer.sh:5000/sun-cipher:v1-docker
The push refers to repository [registry.killer.sh:5000/sun-cipher]
c947fb5eba52: Layer already exists
33e8713114f8: Layer already exists
v1-docker: digest: sha256:d216b4136a5b232b738698e826e7d12fccba9921d163b63777be23572250f23d size: 739
➜ cd /opt/course/11/image
➜ podman build -t registry.killer.sh:5000/sun-cipher:v1-podman .
...
--> 38adc53bd
Successfully tagged registry.killer.sh:5000/sun-cipher:v1-podman
38adc53bd92881d91981c4b537f4f1b64f8de1de1b32eacc8479883170cee
➜ podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.killer.sh:5000/sun-cipher v1-podman 38adc53bd928 2 minutes ago 8.03 MB
...
➜ podman push registry.killer.sh:5000/sun-cipher:v1-podman
Getting image source signatures
Built and pushed using Podman.
4.
We'll create a container from the previously created image, using Podman, which keeps running in the background:
5.
Finally we need to collect some information into files:
This is looking not too bad at all. Our container skills are back in town!

Question 12 | Storage, PV, PVC, Pod volume

Task weight: 8%
Create a new PersistentVolume named earth-project-earthflower-pv. It should have a capacity of 2Gi , accessMode ReadWriteOnce ,
hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace earth named earth-project-earthflower-pvc. It should request 2Gi storage,
accessMode ReadWriteOnce and should not define a storageClassName. The PVC should bound to the PV correctly.
Finally create a new Deployment project-earthflower in Namespace earth which mounts that volume at /tmp/project-data. The Pods of
that Deployment should be of image httpd:2.4.41-alpine.
Answer
Find an example from https://kubernetes.io/docs and alter it:
Copying blob 4d0d60db9eb6 done
Copying blob 33e8713114f8 done
Copying config bfa1a225f8 done
Writing manifest to image destination
Storing signatures
➜ podman run -d --name sun-cipher registry.killer.sh:5000/sun-cipher:v1-podman
f8199cba792f9fd2d1bd4decc9b7a9c0acfb975d95eda35f5f583c9efbf
➜ podman ps
CONTAINER ID IMAGE COMMAND ...
f8199cba792f registry.killer.sh:5000/sun-cipher:v1-podman ./app ...
➜ podman ps > /opt/course/11/containers
➜ podman logs sun-cipher
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8081
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 7887
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 1847
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 4059
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 2081
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 1318
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 4425
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 2540
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 456
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 3300
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 694
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8511
2077/03/13 06:50:44 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8162
2077/03/13 06:50:54 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 5089
➜ podman logs sun-cipher > /opt/course/11/logs
vim 12_pv.yaml
Then create it:
Next the PersistentVolumeClaim :
Find an example from https://kubernetes.io/docs and alter it:
Then create:
And check that both have the status Bound:
Next we create a Deployment and mount that volume:
Alter the yaml to mount the volume:
# 12_pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: earth-project-earthflower-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"
k -f 12_pv.yaml create
vim 12_pvc.yaml
# 12_pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: earth-project-earthflower-pvc
  namespace: earth
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
k -f 12_pvc.yaml create
➜ k -n earth get pv,pvc
NAME CAPACITY ACCESS MODES ... STATUS CLAIM
persistentvolume/...earthflower-pv 2Gi RWO ... Bound ...er-pvc
NAME STATUS VOLUME CAPACITY
persistentvolumeclaim/...earthflower-pvc Bound earth-project-earthflower-pv 2Gi
k -n earth create deploy project-earthflower --image=httpd:2.4.41-alpine $do > 12_dep.yaml
vim 12_dep.yaml
# 12_dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: project-earthflower
  name: project-earthflower
  namespace: earth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: project-earthflower
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: project-earthflower
    spec:
      volumes:                                       # add
      - name: data                                   # add
        persistentVolumeClaim:                       # add
          claimName: earth-project-earthflower-pvc   # add
      containers:
      - image: httpd:2.4.41-alpine
        name: container
We can confirm it's mounting correctly:

Question 13 | Storage, StorageClass, PVC

Task weight: 6%
Team Moonpie, which has the Namespace moon, needs more storage. Create a new PersistentVolumeClaim named moon-pvc-126 in that
namespace. This claim should use a new StorageClass moon-retain with the provisioner set to moon-retainer and the reclaimPolicy set to
Retain. The claim should request storage of 3Gi , an accessMode of ReadWriteOnce and should use the new StorageClass.
The provisioner moon-retainer will be created by another team, so it's expected that the PVC will not boot yet. Confirm this by writing the log
message from the PVC into file /opt/course/13/pvc-126-reason.
Answer
Head to https://kubernetes.io/docs, search for "storageclass" and alter the example code to this:
Now the same for the PersistentVolumeClaim , head to the docs, copy an example and transform it into:
Next we check the status of the PVC :
        volumeMounts:                  # add
        - name: data                   # add
          mountPath: /tmp/project-data # add
k -f 12_dep.yaml create
➜ k -n earth describe pod project-earthflower-d6887f7c5-pn5wv | grep -A2 Mounts:
Mounts:
/tmp/project-data from data (rw) # there it is
/var/run/secrets/kubernetes.io/serviceaccount from default-token-n2sjj (ro)
vim 13_sc.yaml
# 13_sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: moon-retain
provisioner: moon-retainer
reclaimPolicy: Retain
k create -f 13_sc.yaml
vim 13_pvc.yaml
# 13_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: moon-pvc-126             # name as requested
  namespace: moon                # important
spec:
  accessModes:
  - ReadWriteOnce                # RWO
  resources:
    requests:
      storage: 3Gi               # size
  storageClassName: moon-retain  # uses our new storage class
k -f 13_pvc.yaml create
➜ k -n moon get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
moon-pvc-126 Pending moon-retain 2m57s
This confirms that the PVC waits for the provisioner moon-retainer to be created. Finally we copy or write the event message into the
requested location:

Question 14 | Secret, Secret-Volume, Secret-Env

Task weight: 4%
You need to make changes on an existing Pod in Namespace moon called secret-handler. Create a new Secret secret1 which contains
user=test and pass=pwd. The Secret 's content should be available in Pod secret-handler as environment variables SECRET1_USER and
SECRET1_PASS. The yaml for Pod secret-handler is available at /opt/course/14/secret-handler.yaml.
There is existing yaml for another Secret at /opt/course/14/secret2.yaml, create this Secret and mount it inside the same Pod at
/tmp/secret2. Your changes should be saved under /opt/course/14/secret-handler-new.yaml. Both Secrets should only be available in
Namespace moon.
Answer
The last command would generate this yaml:
Next we create the second Secret from the given location, making sure it'll be created in Namespace moon:
We will now edit the Pod yaml:
Add the following to the yaml:
➜ k -n moon describe pvc moon-pvc-126
Name: moon-pvc-126
...
Status: Pending
...
Events:
...
waiting for a volume to be created, either by external provisioner "moon-retainer" or manually created by system
administrator
# /opt/course/13/pvc-126-reason
waiting for a volume to be created, either by external provisioner "moon-retainer" or manually created by system
administrator
k -n moon get pod # show pods
k -n moon create secret -h # help
k -n moon create secret generic -h # help
k -n moon create secret generic secret1 --from-literal user=test --from-literal pass=pwd
apiVersion: v1
data:
  pass: cHdk
  user: dGVzdA==
kind: Secret
metadata:
  creationTimestamp: null
  name: secret1
  namespace: moon
k -n moon -f /opt/course/14/secret2.yaml create
➜ k -n moon get secret
NAME TYPE DATA AGE
default-token-rvzcf kubernetes.io/service-account-token 3 66m
secret1 Opaque 2 4m3s
secret2 Opaque 1 8s
cp /opt/course/14/secret-handler.yaml /opt/course/14/secret-handler-new.yaml
vim /opt/course/14/secret-handler-new.yaml
# /opt/course/14/secret-handler-new.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    id: secret-handler
    uuid: 1428721e-8d1c-4c09-b5d6-afd79200c56a
    red_ident: 9cf7a7c0-fdb2-4c35-9c13-c2a0bb52b4a
    type: automatic
  name: secret-handler
There is also the possibility to import all keys from a Secret as env variables at once, though the env variable names will then be the same as in
the Secret , which doesn't work for the requirements here:
Then we apply the changes:
Instead of running delete and create we can also use recreate:
It was not requested directly, but you should always confirm it's working:

Question 15 | ConfigMap, Configmap-Volume

Task weight: 5%
  namespace: moon
spec:
  volumes:
  - name: cache-volume1
    emptyDir: {}
  - name: cache-volume2
    emptyDir: {}
  - name: cache-volume3
    emptyDir: {}
  - name: secret2-volume     # add
    secret:                  # add
      secretName: secret2    # add
  containers:
  - name: secret-handler
    image: bash:5.0.
    args: ['bash', '-c', 'sleep 2d']
    volumeMounts:
    - mountPath: /cache1
      name: cache-volume1
    - mountPath: /cache2
      name: cache-volume2
    - mountPath: /cache3
      name: cache-volume3
    - name: secret2-volume     # add
      mountPath: /tmp/secret2  # add
    env:
    - name: SECRET_KEY_1
      value: ">8$kH#kj..i8}HImQd{"
    - name: SECRET_KEY_2
      value: "IO=a4L/XkRdvN8jM=Y+"
    - name: SECRET_KEY_3
      value: "-7PA0_Z]>{pwa43r)__"
    - name: SECRET1_USER       # add
      valueFrom:               # add
        secretKeyRef:          # add
          name: secret1        # add
          key: user            # add
    - name: SECRET1_PASS       # add
      valueFrom:               # add
        secretKeyRef:          # add
          name: secret1        # add
          key: pass            # add

containers:
- name: secret-handler
  ...
  envFrom:
  - secretRef:        # also works for configMapRef
      name: secret1
k -f /opt/course/14/secret-handler.yaml delete --force --grace-period=0
k -f /opt/course/14/secret-handler-new.yaml create
k -f /opt/course/14/secret-handler-new.yaml replace --force --grace-period=0
➜ k -n moon exec secret-handler -- env | grep SECRET
SECRET1_USER=test
SECRET1_PASS=pwd
➜ k -n moon exec secret-handler -- find /tmp/secret2
/tmp/secret2
/tmp/secret2/..data
/tmp/secret2/key
/tmp/secret2/..2019_09_11_09_03_08.147048594
/tmp/secret2/..2019_09_11_09_03_08.147048594/key
➜ k -n moon exec secret-handler -- cat /tmp/secret2/key
12345678
Team Moonpie has a nginx server Deployment called web-moon in Namespace moon. Someone started configuring it but it was never
completed. To complete please create a ConfigMap called configmap-web-moon-html containing the content of file /opt/course/15/web-
moon.html under the data key-name index.html.
The Deployment web-moon is already configured to work with this ConfigMap and serve its content. Test the nginx configuration for example
using curl from a temporary nginx:alpine Pod.
Answer
Let's check the existing Pods :
Good so far, now let's create the missing ConfigMap :
This should create a ConfigMap with yaml like:
After waiting a bit, or after deleting/recreating the Pods (k -n moon rollout restart deploy web-moon), we should see:
Looking much better. Finally we check if the nginx returns the correct content:
Then use one IP to test the configuration:
➜ k -n moon get pod
NAME READY STATUS RESTARTS AGE
secret-handler 1/1 Running 0 55m
web-moon-847496c686-2rzj4 0/1 ContainerCreating 0 33s
web-moon-847496c686-9nwwj 0/1 ContainerCreating 0 33s
web-moon-847496c686-cxdbx 0/1 ContainerCreating 0 33s
web-moon-847496c686-hvqlw 0/1 ContainerCreating 0 33s
web-moon-847496c686-tj7ct 0/1 ContainerCreating 0 33s
➜ k -n moon describe pod web-moon-847496c686-2rzj4
...
Warning FailedMount 31s (x7 over 63s) kubelet, gke-test-default-pool-ce83a51a-p6s4 MountVolume.SetUp failed for
volume "html-volume" : configmaps "configmap-web-moon-html" not found
k -n moon create configmap -h # help
k -n moon create configmap configmap-web-moon-html --from-file=index.html=/opt/course/15/web-moon.html # important to set the index.html key
apiVersion: v1
data:
  index.html: | # notice the key index.html, this will be the filename when mounted
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Web Moon Webpage</title>
    </head>
    <body>
    This is some great content.
    </body>
    </html>
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: configmap-web-moon-html
  namespace: moon
➜ k -n moon get pod
NAME READY STATUS RESTARTS AGE
secret-handler 1/1 Running 0 59m
web-moon-847496c686-2rzj4 1/1 Running 0 4m28s
web-moon-847496c686-9nwwj 1/1 Running 0 4m28s
web-moon-847496c686-cxdbx 1/1 Running 0 4m28s
web-moon-847496c686-hvqlw 1/1 Running 0 4m28s
web-moon-847496c686-tj7ct 1/1 Running 0 4m28s
k -n moon get pod -o wide # get pod cluster IPs
For debugging or further checks we could find out more about the Pod's volume mounts:
And check the mounted folder content:
Here it was important that the file has the name index.html and not the original web-moon.html, which is controlled through the ConfigMap data key.
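To illustrate the difference (a minimal sketch, not part of the original solution; --dry-run=client only prints, nothing is created): without an explicit key kubectl uses the source filename as the data key, which the existing volume mount would then not serve as index.html:
k -n moon create configmap configmap-web-moon-html --from-file=/opt/course/15/web-moon.html --dry-run=client -o yaml
# -> data key would be web-moon.html
k -n moon create configmap configmap-web-moon-html --from-file=index.html=/opt/course/15/web-moon.html --dry-run=client -o yaml
# -> data key is index.html, which is what the web-moon Deployment expects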

Question 16 | Logging sidecar

Task weight: 6%
The Tech Lead of Mercury2D decided it's time for more logging, to finally fight all these missing data incidents. There is an existing container
named cleaner-con in Deployment cleaner in Namespace mercury. This container mounts a volume and writes logs into a file called
cleaner.log.
The yaml for the existing Deployment is available at /opt/course/16/cleaner.yaml. Persist your changes at /opt/course/16/cleaner-new.yaml but also make sure the Deployment is running.
Create a sidecar container named logger-con, image busybox:1.31.0 , which mounts the same volume and writes the content of
cleaner.log to stdout, you can use the tail -f command for this. This way it can be picked up by kubectl logs.
Check if the logs of the new container reveal something about the missing data incidents.
Answer
Add a sidecar container which outputs the log file to stdout:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.44.0.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 80500 0 --:--:-- --:--:-- --:--:-- 157k
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Web Moon Webpage</title>
</head>
<body>
This is some great content.
</body>
➜ k -n moon describe pod web-moon-c77655cc-dc8v4 | grep -A2 Mounts:
Mounts:
/usr/share/nginx/html from html-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rvzcf (ro)
➜ k -n moon exec web-moon-c77655cc-dc8v4 find /usr/share/nginx/html
/usr/share/nginx/html
/usr/share/nginx/html/..2019_09_11_10_05_56.
/usr/share/nginx/html/..2019_09_11_10_05_56.336284411/index.html
/usr/share/nginx/html/..data
/usr/share/nginx/html/index.html
cp /opt/course/16/cleaner.yaml /opt/course/16/cleaner-new.yaml
vim /opt/course/16/cleaner-new.yaml
# /opt/course/16/cleaner-new.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cleaner
  namespace: mercury
spec:
  replicas: 2
  selector:
    matchLabels:
      id: cleaner
  template:
    metadata:
      labels:
        id: cleaner
    spec:
      volumes:
      - name: logs
        emptyDir: {}
      initContainers:
      - name: init
Then apply the changes and check the logs of the sidecar:
This will cause a deployment rollout of which we can get more details:
Check Pod statuses:
Finally check the logs of the logging sidecar container:
Mystery solved, something is removing files at random ;) It's important to understand how containers can communicate with each other using
volumes.

Question 17 | InitContainer

Task weight: 4%
Last lunch you told your coworker from department Mars Inc how amazing InitContainers are. Now he would like to see one in action. There is
a Deployment yaml at /opt/course/17/test-init-container.yaml. This Deployment spins up a single Pod of image nginx:1.17.3-alpine
and serves files from a mounted volume, which is empty right now.
Create an InitContainer named init-con which also mounts that volume and creates a file index.html with content check this out! in the
root of the mounted volume. For this test we ignore that it doesn't contain valid html.
The InitContainer should be using image busybox:1.31.0. Test your implementation for example using curl from a temporary
nginx:alpine Pod.
Answer
Add the InitContainer :
        image: bash:5.0.11
        command: ['bash', '-c', 'echo init > /var/log/cleaner/cleaner.log']
        volumeMounts:
        - name: logs
          mountPath: /var/log/cleaner
      containers:
      - name: cleaner-con
        image: bash:5.0.11
        args: ['bash', '-c', 'while true; do echo "$(date): remove random file" >> /var/log/cleaner/cleaner.log; sleep 1; done']
        volumeMounts:
        - name: logs
          mountPath: /var/log/cleaner
      - name: logger-con # add
        image: busybox:1.31.0 # add
        command: ["sh", "-c", "tail -f /var/log/cleaner/cleaner.log"] # add
        volumeMounts: # add
        - name: logs # add
          mountPath: /var/log/cleaner # add
k -f /opt/course/16/cleaner-new.yaml apply
k -n mercury rollout history deploy cleaner
k -n mercury rollout history deploy cleaner --revision 1
k -n mercury rollout history deploy cleaner --revision 2
➜ k -n mercury get pod
NAME READY STATUS RESTARTS AGE
cleaner-86b7758668-9pw6t 2/2 Running 0 6s
cleaner-86b7758668-qgh4v 0/2 Init:0/1 0 1s
➜ k -n mercury get pod
NAME READY STATUS RESTARTS AGE
cleaner-86b7758668-9pw6t 2/2 Running 0 14s
cleaner-86b7758668-qgh4v 2/2 Running 0 9s
➜ k -n mercury logs cleaner-576967576c-cqtgx -c logger-con
init
Wed Sep 11 10:45:44 UTC 2099: remove random file
Wed Sep 11 10:45:45 UTC 2099: remove random file
...
cp /opt/course/17/test-init-container.yaml ~/17_test-init-container.yaml
vim 17_test-init-container.yaml
# 17_test-init-container.yaml
apiVersion: apps/v1
Then we create the Deployment :
Finally we test the configuration:
Beautiful.

Question 18 | Service misconfiguration

Task weight: 4%
There seems to be an issue in Namespace mars where the ClusterIP service manager-api-svc should make the Pods of Deployment manager-api-deployment available inside the cluster.
You can test this with curl manager-api-svc.mars:4444 from a temporary nginx:alpine Pod. Check for the misconfiguration and apply a
fix.
Answer
First let's get an overview:
Everything seems to be running, but we can't seem to get a connection:
kind: Deployment
metadata:
  name: test-init-container
  namespace: mars
spec:
  replicas: 1
  selector:
    matchLabels:
      id: test-init-container
  template:
    metadata:
      labels:
        id: test-init-container
    spec:
      volumes:
      - name: web-content
        emptyDir: {}
      initContainers: # initContainer start
      - name: init-con
        image: busybox:1.31.0
        command: ['sh', '-c', 'echo "check this out!" > /tmp/web-content/index.html']
        volumeMounts:
        - name: web-content
          mountPath: /tmp/web-content # initContainer end
      containers:
      - image: nginx:1.17.3-alpine
        name: nginx
        volumeMounts:
        - name: web-content
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
k -f 17_test-init-container.yaml create
k -n mars get pod -o wide # to get the cluster IP
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.0.0.67
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
check this out!
➜ k -n mars get all
NAME READY STATUS RESTARTS AGE
pod/manager-api-deployment-dbcc6657d-bg2hh 1/1 Running 0 98m
pod/manager-api-deployment-dbcc6657d-f5fv4 1/1 Running 0 98m
pod/manager-api-deployment-dbcc6657d-httjv 1/1 Running 0 98m
pod/manager-api-deployment-dbcc6657d-k98xn 1/1 Running 0 98m
pod/test-init-container-5db7c99857-htx6b 1/1 Running 0 2m19s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/manager-api-svc ClusterIP 10.15.241.159 <none> 4444/TCP 99m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/manager-api-deployment 4/4 4 4 98m
deployment.apps/test-init-container 1/1 1 1 2m19s
...
Ok, let's try to connect to one pod directly:
The Pods itself seem to work. Let's investigate the Service a bit:
Endpoint inspection is also possible using:
No endpoints - No good. We check the Service yaml:
Though Pods are usually not created without a Deployment or ReplicaSet, Services always select Pods directly by their labels. This gives great flexibility
because Pods could be created in various customized ways. After saving the new selector we check the Service again for endpoints:
Endpoints - Good! Now we try connecting again:
And we fixed it. Good to know is how to be able to use Kubernetes DNS resolution from a different Namespace. Not necessary, but we could
spin up the temporary Pod in default Namespace :
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
If you don't see a command prompt, try pressing enter.
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
curl: (28) Connection timed out after 1000 milliseconds
pod "tmp" deleted
pod mars/tmp terminated (Error)
k -n mars get pod -o wide # get cluster IP
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 10.0.1.14
% Total % Received % Xferd Average Speed Time Time Time Current
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
➜ k -n mars describe service manager-api-svc
Name: manager-api-svc
Namespace: mars
Labels: app=manager-api-svc
...
Endpoints: <none>
...
k -n mars get ep
k -n mars edit service manager-api-svc
# k -n mars edit service manager-api-svc
apiVersion: v1
kind: Service
metadata:
...
labels:
app: manager-api-svc
name: manager-api-svc
namespace: mars
...
spec:
  clusterIP: 10.3.244.121
  ports:
  - name: 4444-80
    port: 4444
    protocol: TCP
    targetPort: 80
  selector:
    #id: manager-api-deployment # wrong selector, needs to point to pod!
    id: manager-api-pod
  sessionAffinity: None
  type: ClusterIP
➜ k -n mars get ep
NAME ENDPOINTS AGE
manager-api-svc 10.0.0.30:80,10.0.1.30:80,10.0.1.31:80 + 1 more... 41m
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 612 100 612 0 0 99k 0 --:--:-- --:--:-- --:--:-- 99k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Short manager-api-svc.mars or long manager-api-svc.mars.svc.cluster.local work.
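A quick way to find the correct selector labels in such cases (a small sketch, not part of the original answer) is to compare the Pod labels with the Service selector directly:
k -n mars get pod --show-labels | grep manager-api
k -n mars get svc manager-api-svc -o jsonpath='{.spec.selector}{"\n"}'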

Question 19 | Service ClusterIP->NodePort

Task weight: 3%
In Namespace jupiter you'll find an apache Deployment (with one replica) named jupiter-crew-deploy and a ClusterIP Service called
jupiter-crew-svc which exposes it. Change this service to a NodePort one to make it available on all nodes on port 30100.
Test the NodePort Service using the internal IP of all available nodes and the port 30100 using curl, you can reach the internal node IPs
directly from your main terminal. On which nodes is the Service reachable? On which node is the Pod running?
Answer
First we get an overview:
(Optional) Next we check if the ClusterIP Service actually works:
The Service is working great. Next we change the Service type to NodePort and set the port:
We check if the Service type was updated:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host:
manager-api-svc
pod "tmp" deleted
pod default/tmp terminated (Error)
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc.mars:4444
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 612 100 612 0 0 68000 0 --:--:-- --:--:-- --:--:-- 68000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
➜ k -n jupiter get all
NAME READY STATUS RESTARTS AGE
pod/jupiter-crew-deploy-8cdf99bc9-klwqt 1/1 Running 0 34m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jupiter-crew-svc ClusterIP 10.100.254.66 <none> 8080/TCP 34m
...
➜ k -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 45 100 45 0 0 5000 0 --:--:-- --:--:-- --:--:-- 5000
<html><body><h1>It works!</h1></body></html>
k -n jupiter edit service jupiter-crew-svc
# k -n jupiter edit service jupiter-crew-svc
apiVersion: v1
kind: Service
metadata:
name: jupiter-crew-svc
namespace: jupiter
...
spec:
  clusterIP: 10.3.245.70
  ports:
  - name: 8080-80
    nodePort: 30100 # add the nodePort
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    id: jupiter-crew
  sessionAffinity: None
  #type: ClusterIP
  type: NodePort # change type
status:
  loadBalancer: {}
(Optional) And we confirm that the service is still reachable internally:
Nice. A NodePort Service kind of lies on top of a ClusterIP one, making the ClusterIP Service reachable on the Node IPs (internal and external).
Next we get the internal IPs of all nodes to check the connectivity:
On which nodes is the Service reachable?
On both, even the controlplane. On which node is the Pod running?
In our case on cluster1-node1, but it could be any other worker node if more were available. Here we hopefully gained some insight into how a NodePort
Service works. Although the Pod is only running on one specific node, the Service makes it available through port 30100 on the internal and
external IP addresses of all nodes. This is at least the common/default behaviour, but it can depend on the cluster configuration.
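To keep the three port fields apart, here is a small illustrative snippet (values taken from this question, comments are ours):
ports:
- port: 8080        # port the Service listens on (ClusterIP:8080)
  targetPort: 80    # container port the traffic is forwarded to
  nodePort: 30100   # port opened on every node, default range 30000-32767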

Question 20 | NetworkPolicy

Task weight: 9%
In Namespace venus you'll find two Deployments named api and frontend. Both Deployments are exposed inside the cluster using Services.
Create a NetworkPolicy named np1 which restricts outgoing tcp connections from Deployment frontend and only allows those going to
Deployment api. Make sure the NetworkPolicy still allows outgoing traffic on UDP/TCP ports 53 for DNS resolution.
Test using: wget http://www.google.com and wget api:2222 from a Pod of Deployment frontend.
Answer
INFO: For learning NetworkPolicies check out https://editor.cilium.io. But you're not allowed to use it during the exam.
First we get an overview:
(Optional) This is not necessary but we could check if the Services are working inside the cluster:
➜ k -n jupiter get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jupiter-crew-svc NodePort 10.3.245.70 <none> 8080:30100/TCP 3m52s
➜ k -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
<html><body><h1>It works!</h1></body></html>
➜ k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP ...
cluster1-controlplane1 Ready control-plane 18h v1.26.0 192.168.100.11 ...
cluster1-node1 Ready <none> 18h v1.26.0 192.168.100.12 ...
➜ curl 192.168.100.11:30100
<html><body><h1>It works!</h1></body></html>
➜ curl 192.168.100.12:30100
<html><body><h1>It works!</h1></body></html>
➜ k -n jupiter get pod jupiter-crew-deploy-8cdf99bc9-klwqt -o yaml | grep nodeName
nodeName: cluster1-node1
➜ k -n jupiter get pod -o wide # or even shorter
➜ k -n venus get all
NAME READY STATUS RESTARTS AGE
pod/api-5979b95578-gktxp 1/1 Running 0 57s
pod/api-5979b95578-lhcl5 1/1 Running 0 57s
pod/frontend-789cbdc677-c9v8h 1/1 Running 0 57s
pod/frontend-789cbdc677-npk2m 1/1 Running 0 57s
pod/frontend-789cbdc677-pl67g 1/1 Running 0 57s
pod/frontend-789cbdc677-rjt5r 1/1 Running 0 57s
pod/frontend-789cbdc677-xgf5n 1/1 Running 0 57s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/api ClusterIP 10.3.255.137 <none> 2222/TCP 37s
service/frontend ClusterIP 10.3.255.135 <none> 80/TCP 57s
...
Then we use any frontend Pod and check if it can reach external names and the api Service :
We see Pods of frontend can reach the api and external names.
Now we head to https://kubernetes.io/docs, search for NetworkPolicy , copy the example code and adjust it to:
Notice that we specify two egress rules in the yaml above. If we specify multiple egress rules then these are connected using a logical OR. So in
the example above we do:
Let's have a look at example code which wouldn't work in our case:
In the yaml above we only specify one egress rule with two selectors. It can be translated into:
Apply the correct policy:
➜ k -n venus run tmp --restart=Never --rm -i --image=busybox -i -- wget -O- frontend:80
Connecting to frontend:80 (10.3.245.9:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
➜ k -n venus run tmp --restart=Never --rm --image=busybox -i -- wget -O- api:2222
Connecting to api:2222 (10.3.250.233:2222)
<html><body><h1>It works!</h1></body></html>
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- http://www.google.com
Connecting to http://www.google.com (216.58.205.227:80)
  • 100% |********************************| 12955 0:00:00 ETA <!doctype html> ...
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
  • 100% |********************************| 45 0:00:00 ETA ...
vim 20_np1.yaml
# 20_np1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np1
  namespace: venus
spec:
  podSelector:
    matchLabels:
      id: frontend # label of the pods this policy should be applied on
  policyTypes:
  - Egress # we only want to control egress
  egress:
  - to: # 1st egress rule
    - podSelector: # allow egress only to pods with api label
        matchLabels:
          id: api
  - ports: # 2nd egress rule
    - port: 53 # allow DNS UDP
      protocol: UDP
    - port: 53 # allow DNS TCP
      protocol: TCP
allow outgoing traffic if
(destination pod has label id:api) OR ((port is 53 UDP) OR (port is 53 TCP))
# this example does not work in our case
...
  egress:
  - to: # 1st AND ONLY egress rule
    - podSelector: # allow egress only to pods with api label
        matchLabels:
          id: api
    ports: # STILL THE SAME RULE but just an additional selector
    - port: 53 # allow DNS UDP
      protocol: UDP
    - port: 53 # allow DNS TCP
      protocol: TCP
allow outgoing traffic if
(destination pod has label id:api) AND ((port is 53 UDP) OR (port is 53 TCP))
And try again, external is not working any longer:
Internal connection to api work as before:

Question 21 | Requests and Limits, ServiceAccount

Task weight: 4%
Team Neptune needs 3 Pods of image httpd:2.4-alpine, create a Deployment named neptune-10ab for this. The containers should be
named neptune-pod-10ab. Each container should have a memory request of 20Mi and a memory limit of 50Mi.
Team Neptune has its own ServiceAccount neptune-sa-v2 under which the Pods should run. The Deployment should be in Namespace
neptune.
Answer:
Now make the required changes using vim:
Then create the yaml:
k -f 20_np1.yaml create
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- http://www.google.de
Connecting to http://www.google.de:2222 (216.58.207.67:80)
^C
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- -T 5 http://www.google.de:80
Connecting to http://www.google.com (172.217.203.104:80)
wget: download timed out
command terminated with exit code 1
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
  • 100% |********************************| 45 0:00:00 ETA
k -n neptune create deployment -h # help
k -n neptune create deploy -h # deploy is short for deployment
# check the export on the very top of this document so we can use $do
k -n neptune create deploy neptune-10ab --image=httpd:2.4-alpine $do > 21.yaml
vim 21.yaml
# 21.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: neptune-10ab
  name: neptune-10ab
  namespace: neptune
spec:
  replicas: 3 # change
  selector:
    matchLabels:
      app: neptune-10ab
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: neptune-10ab
    spec:
      serviceAccountName: neptune-sa-v2 # add
      containers:
      - image: httpd:2.4-alpine
        name: neptune-pod-10ab # change
        resources: # add
          limits: # add
            memory: 50Mi # add
          requests: # add
            memory: 20Mi # add
status: {}
k create -f 21.yaml # namespace already set in yaml
To verify all Pods are running we do:

Question 22 | Labels, Annotations

Task weight: 3%
Team Sunny needs to identify some of their Pods in namespace sun. They ask you to add a new label protected: true to all Pods with an
existing label type: worker or type: runner. Also add an annotation protected: do not delete this pod to all Pods having the new
label protected: true.
Answer
If we only want to get pods with certain labels we can run:
We can use this label filtering also when using other commands, like setting new labels:
Or we could run:
Let's check the result:
Looking good. Finally we set the annotation using the newly assigned label protected: true:
➜ k -n neptune get pod | grep neptune-10ab
neptune-10ab-7d4b8d45b-4nzj5 1/1 Running 0 57s
neptune-10ab-7d4b8d45b-lzwrf 1/1 Running 0 17s
neptune-10ab-7d4b8d45b-z5hcc 1/1 Running 0 17s
➜ k -n sun get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
0509649a 1/1 Running 0 25s type=runner,type_old=messenger
0509649b 1/1 Running 0 24s type=worker
1428721e 1/1 Running 0 23s type=worker
1428721f 1/1 Running 0 22s type=worker
43b9a 1/1 Running 0 22s type=test
4c09 1/1 Running 0 21s type=worker
4c35 1/1 Running 0 20s type=worker
4fe4 1/1 Running 0 19s type=worker
5555a 1/1 Running 0 19s type=messenger
86cda 1/1 Running 0 18s type=runner
8d1c 1/1 Running 0 17s type=messenger
a004a 1/1 Running 0 16s type=runner
a94128196 1/1 Running 0 15s type=runner,type_old=messenger
afd79200c56a 1/1 Running 0 15s type=worker
b667 1/1 Running 0 14s type=worker
fdb2 1/1 Running 0 13s type=worker
k -n sun get pod -l type=runner # only pods with label runner
k label -h # help
k -n sun label pod -l type=runner protected=true # run for label runner
k -n sun label pod -l type=worker protected=true # run for label worker
k -n sun label pod -l "type in (worker,runner)" protected=true
➜ k -n sun get pod --show-labels
NAME ... AGE LABELS
0509649a ... 56s protected=true,type=runner,type_old=messenger
0509649b ... 55s protected=true,type=worker
1428721e ... 54s protected=true,type=worker
1428721f ... 53s protected=true,type=worker
43b9a ... 53s type=test
4c09 ... 52s protected=true,type=worker
4c35 ... 51s protected=true,type=worker
4fe4 ... 50s protected=true,type=worker
5555a ... 50s type=messenger
86cda ... 49s protected=true,type=runner
8d1c ... 48s type=messenger
a004a ... 47s protected=true,type=runner
a94128196 ... 46s protected=true,type=runner,type_old=messenger
afd79200c56a ... 46s protected=true,type=worker
b667 ... 45s protected=true,type=worker
fdb2 ... 44s protected=true,type=worker
k -n sun annotate pod -l protected=true protected="do not delete this pod"
Not requested in the task but for your own control you could run:

CKAD Simulator Preview Kubernetes 1.26

This is a preview of the full CKAD Simulator course content.
The full course contains 22 questions and scenarios which cover all the CKAD areas. The course also provides a browser terminal which is a
very close replica of the original one. This is great for getting used to it and getting comfortable before the real exam. After the test session (120 minutes), or if
you stop it early, you'll get access to all questions and their detailed solutions. You'll have 36 hours cluster access in total which means even
after the session, once you have the solutions, you can still play around.
The following preview will give you an idea of what the full course will provide. These preview questions are not part of the 22 in the full
course but in addition to it. But the preview questions are part of the same CKAD simulation environment which we setup for you, so with
access to the full course you can solve these too.
The answers provided here assume that you did run the initial terminal setup suggestions as provided in the tips section, but especially:
These questions can be solved in the test environment provided through the CKA Simulator

Preview Question 1

In Namespace pluto there is a Deployment named project-23-api. It has been working okay for a while but Team Pluto needs it to be more
reliable. Implement a liveness-probe which checks the container to be reachable on port 80. Initially the probe should wait 10 seconds, then run
periodically every 15 seconds.
The original Deployment yaml is available at /opt/course/p1/project-23-api.yaml. Save your changes at /opt/course/p1/project-23-api-new.yaml and apply the changes.
Answer
First we get an overview:
To note: we see another Pod here called holy-api which is part of another section. This is often the case in the provided scenarios, so be
careful to only manipulate the resources you need to. Just like in the real world and in the exam.
Next we use nginx:alpine and curl to check if one Pod is accessible on port 80:
We could also use busybox and wget for this:
Now that we're sure the Deployment works we can continue with altering the provided yaml:
k -n sun get pod -l protected=true -o yaml | grep -A 8 metadata:
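Another way to double-check labels and annotations side by side (a sketch, not part of the original solution) is custom-columns:
k -n sun get pod -o custom-columns=NAME:.metadata.name,TYPE:.metadata.labels.type,PROTECTED:.metadata.labels.protected,NOTE:.metadata.annotations.protected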
alias k=kubectl
export do="--dry-run=client -o yaml"
➜ k -n pluto get all -o wide
NAME READY STATUS ... IP ...
pod/holy-api 1/1 Running ... 10.12.0.26 ...
pod/project-23-api-784857f54c-dx6h6 1/1 Running ... 10.12.2.15 ...
pod/project-23-api-784857f54c-sj8df 1/1 Running ... 10.12.1.18 ...
pod/project-23-api-784857f54c-t4xmh 1/1 Running ... 10.12.0.23 ...
NAME READY UP-TO-DATE AVAILABLE ...
deployment.apps/project-23-api 3/3 3 3 ...
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 10.12.2.15
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
➜ k run tmp --restart=Never --rm --image=busybox -i -- wget -O- 10.12.2.15
Connecting to 10.12.2.15 (10.12.2.15:80)
writing to stdout
  • 100% |********************************| 612 0:00:00 ETA written to stdout
<title>Welcome to nginx!</title>
Add the liveness-probe to the yaml:
Then let's apply the changes:
Next we wait 10 seconds and confirm the Pods are still running:
We can also check the configured liveness-probe settings on a Pod or the Deployment :

Preview Question 2

Team Sun needs a new Deployment named sunny with 4 replicas of image nginx:1.17.3-alpine in Namespace sun. The Deployment and its
Pods should use the existing ServiceAccount sa-sun-deploy.
Expose the Deployment internally using a ClusterIP Service named sun-srv on port 9999. The nginx containers should run as default on port
80. The management of Team Sun would like to occasionally execute a command to check that all Pods are running. Write that command into
file /opt/course/p2/sunny_status_command.sh. The command should use kubectl.
cp /opt/course/p1/project-23-api.yaml /opt/course/p1/project-23-api-new.yaml
vim /opt/course/p1/project-23-api-new.yaml
# /opt/course/p1/project-23-api-new.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-23-api
  namespace: pluto
spec:
  replicas: 3
  selector:
    matchLabels:
      app: project-23-api
  template:
    metadata:
      labels:
        app: project-23-api
    spec:
      volumes:
      - name: cache-volume1
        emptyDir: {}
      - name: cache-volume2
        emptyDir: {}
      - name: cache-volume3
        emptyDir: {}
      containers:
      - image: httpd:2.4-alpine
        name: httpd
        volumeMounts:
        - mountPath: /cache1
          name: cache-volume1
        - mountPath: /cache2
          name: cache-volume2
        - mountPath: /cache3
          name: cache-volume3
        env:
        - name: APP_ENV
          value: "prod"
        - name: APP_SECRET_N1
          value: "IO=a4L/XkRdvN8jM=Y+"
        - name: APP_SECRET_P1
          value: "-7PA0_Z]>{pwa43r)__"
        livenessProbe: # add
          tcpSocket: # add
            port: 80 # add
          initialDelaySeconds: 10 # add
          periodSeconds: 15 # add
k -f /opt/course/p1/project-23-api-new.yaml apply
➜ k -n pluto get pod
NAME READY STATUS RESTARTS AGE
holy-api 1/1 Running 0 144m
project-23-api-5b4579fd49-8knh8 1/1 Running 0 90s
project-23-api-5b4579fd49-cbgph 1/1 Running 0 88s
project-23-api-5b4579fd49-tcfq5 1/1 Running 0 86s
➜ k -n pluto describe pod project-23-api-5b4579fd49-8knh8 | grep Liveness
Liveness: tcp-socket :80 delay=10s timeout=1s period=15s #success=1 #failure=3
➜ k -n pluto describe deploy project-23-api | grep Liveness
Liveness: tcp-socket :80 delay=10s timeout=1s period=15s #success=1 #failure=3
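Not what this task asked for, but for comparison (an illustrative sketch): an HTTP-based liveness-probe with the same timing would look like this:
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 15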
Answer
Then alter its yaml to include the requirements:
Now create the yaml and confirm it's running:
Confirmed, the AGE column is always important information about whether changes were applied. Next we expose the Pods by creating the Service:
Using expose instead of kubectl create service clusterip is faster because it already sets the correct selector-labels. The previous
command would produce this yaml:
Let's test the Service using wget from a temporary Pod :
k -n sun create deployment -h #help
# check the export on the very top of this document so we can use $do
k -n sun create deployment sunny --image=nginx:1.17.3-alpine $do > p2_sunny.yaml
vim p2_sunny.yaml
# p2_sunny.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: sunny
  name: sunny
  namespace: sun
spec:
  replicas: 4 # change
  selector:
    matchLabels:
      app: sunny
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sunny
    spec:
      serviceAccountName: sa-sun-deploy # add
      containers:
      - image: nginx:1.17.3-alpine
        name: nginx
        resources: {}
status: {}
➜ k create -f p2_sunny.yaml
deployment.apps/sunny created
➜ k -n sun get pod
NAME READY STATUS RESTARTS AGE
0509649a 1/1 Running 0 149m
0509649b 1/1 Running 0 149m
1428721e 1/1 Running 0 149m
...
sunny-64df8dbdbb-9mxbw 1/1 Running 0 10s
sunny-64df8dbdbb-mp5cf 1/1 Running 0 10s
sunny-64df8dbdbb-pggdf 1/1 Running 0 6s
sunny-64df8dbdbb-zvqth 1/1 Running 0 7s
k -n sun expose -h # help
k -n sun expose deployment sunny --name sun-srv --port 9999 --target-port 80
# k -n sun expose deployment sunny --name sun-srv --port 9999 --target-port 80
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: sunny
  name: sun-srv # required by task
spec:
  ports:
  - port: 9999 # service port
    protocol: TCP
    targetPort: 80 # target port
  selector:
    app: sunny # selector is important
status:
  loadBalancer: {}
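For comparison (a sketch, not part of the original answer): kubectl create service clusterip derives its selector from the Service name, so here it would generate selector app: sun-srv, which would not match the Pods labeled app: sunny and would have to be corrected manually:
k -n sun create service clusterip sun-srv --tcp=9999:80 --dry-run=client -o yaml
# -> selector: app: sun-srv   (would need to be changed to app: sunny)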
Because the Service is in a different Namespace than our temporary Pod, it is reachable using the name sun-srv.sun or fully: sun-srv.sun.svc.cluster.local.
Finally we need a command which can be executed to check that all Pods are running; this can be done with:
To run the command:

Preview Question 3

Management of EarthAG recorded that one of their Services stopped working. Dirk, the administrator, left already for the long weekend. All the
information they could give you is that it was located in Namespace earth and that it stopped working after the latest rollout. All Services of
EarthAG should be reachable from inside the cluster.
Find the Service , fix any issues and confirm it's working again. Write the reason of the error into file /opt/course/p3/ticket-654.txt so Dirk
knows what the issue was.
Answer
First we get an overview of the resources in Namespace earth:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 sun-srv.sun:9999
Connecting to sun-srv.sun:9999 (10.23.253.120:9999)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
vim /opt/course/p2/sunny_status_command.sh
# /opt/course/p2/sunny_status_command.sh
kubectl -n sun get deployment sunny
➜ sh /opt/course/p2/sunny_status_command.sh
NAME READY UP-TO-DATE AVAILABLE AGE
sunny 4/4 4 4 13m
➜ k -n earth get all
NAME READY STATUS RESTARTS AGE
pod/earth-2x3-api-584df69757-ngnwp 1/1 Running 0 116m
pod/earth-2x3-api-584df69757-ps8cs 1/1 Running 0 116m
pod/earth-2x3-api-584df69757-ww9q8 1/1 Running 0 116m
pod/earth-2x3-web-85c5b7986c-48vjt 1/1 Running 0 116m
pod/earth-2x3-web-85c5b7986c-6mqmb 1/1 Running 0 116m
pod/earth-2x3-web-85c5b7986c-6vjll 1/1 Running 0 116m
pod/earth-2x3-web-85c5b7986c-fnkbp 1/1 Running 0 116m
pod/earth-2x3-web-85c5b7986c-pjm5m 1/1 Running 0 116m
pod/earth-2x3-web-85c5b7986c-pwfvj 1/1 Running 0 116m
pod/earth-3cc-runner-6cb6cc6974-8wm5x 1/1 Running 0 116m
pod/earth-3cc-runner-6cb6cc6974-9fx8b 1/1 Running 0 116m
pod/earth-3cc-runner-6cb6cc6974-b9nrv 1/1 Running 0 116m
pod/earth-3cc-runner-heavy-6bf876f46d-b47vq 1/1 Running 0 116m
pod/earth-3cc-runner-heavy-6bf876f46d-mrzqd 1/1 Running 0 116m
pod/earth-3cc-runner-heavy-6bf876f46d-qkd74 1/1 Running 0 116m
pod/earth-3cc-web-6bfdf8b848-f74cj 0/1 Running 0 116m
pod/earth-3cc-web-6bfdf8b848-n4z7z 0/1 Running 0 116m
pod/earth-3cc-web-6bfdf8b848-rcmxs 0/1 Running 0 116m
pod/earth-3cc-web-6bfdf8b848-xl467 0/1 Running 0 116m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/earth-2x3-api-svc ClusterIP 10.3.241.242 <none> 4546/TCP 116m
service/earth-2x3-web-svc ClusterIP 10.3.250.247 <none> 4545/TCP 116m
service/earth-3cc-web ClusterIP 10.3.243.24 <none> 6363/TCP 116m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/earth-2x3-api 3/3 3 3 116m
deployment.apps/earth-2x3-web 6/6 6 6 116m
deployment.apps/earth-3cc-runner 3/3 3 3 116m
deployment.apps/earth-3cc-runner-heavy 3/3 3 3 116m
deployment.apps/earth-3cc-web 0/4 4 0 116m
NAME DESIRED CURRENT READY AGE
replicaset.apps/earth-2x3-api-584df69757 3 3 3 116m
replicaset.apps/earth-2x3-web-85c5b7986c 6 6 6 116m
replicaset.apps/earth-3cc-runner-6cb6cc6974 3 3 3 116m
replicaset.apps/earth-3cc-runner-heavy-6bf876f46d 3 3 3 116m
First impression could be that all Pods are in status RUNNING. But looking closely we see that some of the Pods are not ready, which also
matches what we see for one Deployment and one ReplicaSet. This could be our error to investigate further.
Another approach could be to check the Service s for missing endpoints:
Service earth-3cc-web doesn't have endpoints. This could be a selector/label misconfiguration or the endpoints are actually not
available/ready.
Checking all Services for connectivity should show the same (this step is optional and just for demonstration):
Notice that we use here for example earth-2x3-api-svc.earth. We could also spin up a temporary Pod in Namespace earth and connect
directly to earth-2x3-api-svc.
We get no connection to earth-3cc-web.earth:6363. Let's look at the Deployment earth-3cc-web. Here we see that the requested amount
of replicas is not available/ready:
To continue we check the Deployment yaml for some misconfiguration:
We change the readiness-probe port, save and check the Pods :
replicaset.apps/earth-3cc-web-6895587dc7 0 0 0 116m
replicaset.apps/earth-3cc-web-6bfdf8b848 4 4 0 116m
replicaset.apps/earth-3cc-web-d49645966 0 0 0 116m
➜ k -n earth get ep
NAME ENDPOINTS AGE
earth-2x3-api-svc 10.0.0.10:80,10.0.1.5:80,10.0.2.4:80 116m
earth-2x3-web-svc 10.0.0.11:80,10.0.0.12:80,10.0.1.6:80 + 3 more... 116m
earth-3cc-web
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-2x3-api-svc.earth:4546
...
<html><body><h1>It works!</h1></body></html>
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-2x3-web-svc.earth:4545
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 45 100 45 0 0 5000 0 --:--:-- --:--:-- --:--:-- 5000
<html><body><h1>It works!</h1></body></html>
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-3cc-web.earth:6363
If you don't see a command prompt, try pressing enter.
0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0
curl: (28) Connection timed out after 5000 milliseconds
pod "tmp" deleted
pod default/tmp terminated (Error)
➜ k -n earth get deploy earth-3cc-web
NAME READY UP-TO-DATE AVAILABLE AGE
earth-3cc-web 0/4 4 0 7m18s
k -n earth edit deploy earth-3cc-web
# k -n earth edit deploy earth-3cc-web
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
...
generation: 3 # there have been rollouts
name: earth-3cc-web
namespace: earth
...
spec:
...
template:
metadata:
creationTimestamp: null
labels:
id: earth-3cc-web
spec:
      containers:
      - image: nginx:1.16.1-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 20
          successThreshold: 1
          tcpSocket:
            port: 82 # this port doesn't seem to be right, should be 80
          timeoutSeconds: 1
        ...
Running, but still not in ready state. Wait 10 seconds (initialDelaySeconds of readinessProbe) and check again:
Let's check the service again:
We did it! Finally we write the reason into the requested location:

CKAD Tips Kubernetes 1.26

In this section we'll provide some tips on how to handle the CKAD exam and browser terminal.

Knowledge

Study all topics as proposed in the curriculum till you feel comfortable with all
Learn and Study the in-browser scenarios on https://killercoda.com/killer-shell-ckad
Understand Rolling Update Deployments including maxSurge and maxUnavailable (see the sketch after this list)
Do 1 or 2 test sessions with this CKAD Simulator. Understand the solutions and maybe try out other ways to achieve the same
Setup your aliases, be fast and breathe kubectl
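A minimal sketch of such a strategy block (illustrative values, not tied to any exam question):
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 Pod above the desired replica count during an update
      maxUnavailable: 1    # at most 1 Pod below the desired replica count during an update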

CKAD Preparation

Read the Curriculum
Read the Handbook
Read the important tips
Read the FAQ
➜ k -n earth get pod -l id=earth-3cc-web
NAME READY STATUS RESTARTS AGE
earth-3cc-web-d49645966-52vb9 0/1 Running 0 6s
earth-3cc-web-d49645966-5tts6 0/1 Running 0 6s
earth-3cc-web-d49645966-db5gp 0/1 Running 0 6s
earth-3cc-web-d49645966-mk7gr 0/1 Running 0 6s
➜ k -n earth get pod -l id=earth-3cc-web
NAME READY STATUS RESTARTS AGE
earth-3cc-web-d49645966-52vb9 1/1 Running 0 32s
earth-3cc-web-d49645966-5tts6 1/1 Running 0 32s
earth-3cc-web-d49645966-db5gp 1/1 Running 0 32s
earth-3cc-web-d49645966-mk7gr 1/1 Running 0 32s
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-3cc-web.earth:6363
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 612 100 612 0 0 55636 0 --:--:-- --:--:-- --:--:-- 55636
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
...
vim /opt/course/p3/ticket-654.txt
# /opt/course/p3/ticket-654.txt
yo Dirk, wrong port for readinessProbe defined!

Kubernetes documentation

Get familiar with the Kubernetes documentation and be able to use the search. Allowed links are:
NOTE: Verify the list here

The Test Environment / Browser Terminal

You'll be provided with a browser terminal which uses Ubuntu 20. The standard shells included with a minimal install of Ubuntu 20 will be
available, including bash.
Lagging
There could be some lagging, definitely make sure you are using a good internet connection because your webcam and screen are uploading
all the time.
Kubectl autocompletion and commands
Autocompletion is configured by default, as well as the k alias source and others:
kubectl with k alias and Bash autocompletion
yq and jq for YAML/JSON processing
tmux for terminal multiplexing
curl and wget for testing web services
man and man pages for further documentation
Copy & Paste
There could be issues copying text (like pod names) from the left task information into the terminal. Some suggested to "hard" hit or long hold
Cmd/Ctrl+C a few times to take action. Apart from that copy and paste should just work like in normal terminals.
Percentages and Score
There are 15-20 questions in the exam which add up to 100% in total. Each question shows the % it gives if you solve it. Your results
will be automatically checked according to the handbook. If you don't agree with the results you can request a review by contacting the Linux
Foundation support.
Notepad & Skipping Questions
You have access to a simple notepad in the browser which can be used for storing any kind of plain text. It makes sense to use this for saving
skipped question numbers and their percentages. This way it's possible to move some questions to the end. It might make sense to skip 2% or
3% questions and go directly to higher ones.
Contexts
You'll receive access to various different clusters and resources in each. They provide you the exact command you need to run to connect to
another cluster/context. But you should be comfortable working in different namespaces with kubectl.

PSI Bridge

Starting with PSI Bridge:
The exam will now be taken using the PSI Secure Browser, which can be downloaded using the newest versions of Microsoft Edge, Safari,
Chrome, or Firefox
Multiple monitors will no longer be permitted
Use of personal bookmarks will no longer be permitted
The new ExamUI includes improved features such as:
A remote desktop configured with the tools and software needed to complete the tasks
A timer that displays the actual time remaining (in minutes) and provides an alert with 30, 15, or 5 minutes remaining
The content panel remains the same (presented on the Left Hand Side of the ExamUI)
Read more here.

Browser Terminal Setup

Consider spending ~1 minute in the beginning to set up your terminal. In the real exam the vast majority of questions will be
done from the main terminal. For a few you might need to ssh into another machine. Just be aware that configurations to your shell will not be
transferred in this case.

Minimal Setup

Alias
The alias k for kubectl will already be configured together with autocompletion. In case not you can configure it using this link.
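If it should ever be missing, a typical setup (as described in the kubectl documentation) looks like this:
source <(kubectl completion bash)          # enable kubectl autocompletion in bash
alias k=kubectl
complete -o default -F __start_kubectl k   # make completion work for the k alias too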
Vim
The following settings will already be configured in your real exam environment in ~/.vimrc. But it can never hurt to be able to type these
down:
The expandtab setting makes sure spaces are used instead of tabs. Memorize these and just type them down. You can't have any written notes with commands
on your desktop etc.

Optional Setup

Fast dry-run output
This way you can just run k run pod1 --image=nginx $do. Short for "dry output", but use whatever name you like.
Fast pod delete
This way you can run k delete pod1 $now and don't have to wait for ~30 seconds termination time.
Persist bash settings
You can store aliases and other setup in ~/.bashrc if you're planning on using different shells or tmux.
Alias Namespace
In addition you could define an alias like:
Which allows you to define the default namespace of the current context. Then once you switch a context or namespace you can just run:
But only do this if you used it before and are comfortable doing so. Else you need to specify the namespace for every call, which is also fine:

Be fast

Use the history command to reuse already entered commands or use even faster history search through Ctrl r.
If a command takes some time to execute, like sometimes kubectl delete pod x, you can put it in the background using Ctrl z and pull
it back into the foreground by running fg.
You can delete pods fast with:

Vim

Be great with vim.
toggle vim line numbers
When in vim you can press Esc and type :set number or :set nonumber followed by Enter to toggle line numbers. This can be useful when
finding syntax errors based on line - but can be bad when wanting to mark&copy by mouse. You can also just jump to a line number with Esc
:22 + Enter.
copy&paste
Get used to copy/paste/cut with vim:
set tabstop=2
set expandtab
set shiftwidth=2
export do="--dry-run=client -o yaml"
export now="--force --grace-period 0"
alias kn='kubectl config set-context --current --namespace '
kn default  # set default to default
kn my-namespace # set default to my-namespace
k -n my-namespace get all
k -n my-namespace get pod
...
k delete pod x --grace-period 0 --force
k delete pod x $now # if export from above is configured
Mark lines: Esc+V (then arrow keys)
Copy marked lines: y
Cut marked lines: d
Paste lines: p or P
Indent multiple lines
To indent multiple lines press Esc and type :set shiftwidth=2. First mark multiple lines using Shift v and the up/down keys. Then to
indent the marked lines press > or <. You can then press . to repeat the action.

Split terminal screen

By default tmux is installed and can be used to split your one terminal into multiple. But just do this if you know your shit, because scrolling is
different and copy&pasting might be weird.


SCREENSHOTS

Screenshots of the Killer Shell CKS/CKA/CKAD Simulator (pages 0001-0037) are attached as images.
