CKA exam preparation

Preparation for CKA with Kubernetes version 1.17.1 on REDHAT 8

What do I need to know for the exam? 🏁

% Domain
08% Application Lifecycle Management
12% Installation, Configuration & Validation
19% Core Concepts
11% Networking
05% Scheduling
12% Security
11% Cluster Maintenance
05% Logging / Monitoring
07% Storage
10% Troubleshooting

What is Kubernetes?

  • Kubernetes is all about orchestration: deploying resources and cleaning them up when they are no longer needed, in an automated fashion.
  • K8S is all about decoupled(1), transient(2) services.
  • K8S gives us a very flexible and scalable environment
(1) everything has been designed to not require anything else in particular
(2) the whole system expects various components to be terminated and replaced

INSTALLATION AND CONFIGURATION

DOC: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
SEARCHSTRING: Installing kubeadm

Install Docker

dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
dnf install docker-ce --nobest -y
systemctl start docker
systemctl enable docker
docker --version

Configuration

  • Swap should be disabled (see /etc/fstab)
  • Run swapoff -a
  • Letting iptables see bridged traffic
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
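On RHEL 8 the br_netfilter module may not be loaded by default; if the net.bridge keys above are rejected, loading the module first (a common companion step from the kubeadm install docs, not part of the original notes) should help:

modprobe br_netfilter
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
lsmod | grep br_netfilter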

Install Kubernetes

  • Create REPO
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
  • Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  • Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.17.1-0 kubeadm-1.17.1-0 kubectl-1.17.1-0 --disableexcludes=kubernetes
  • Create a DNS alias with the IP address of the primary interface of the master server in /etc/hosts
...
172.31.42.196 master
...

Initializing your control-plane node

  • Initialize with Calico Pod network CIDR
kubeadm init --kubernetes-version=1.17.1 --control-plane-endpoint="master:6443" --pod-network-cidr=192.168.0.0/16 --upload-certs | tee kubeadm-init.out

The --upload-certs flag means that you don't have to manually copy the certificates from the primary control plane node to joining control plane nodes
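For reference, joining an additional control-plane node later would reuse the join command printed by kubeadm init, roughly of this form (the token, hash, and certificate key shown are placeholders, not real values):

kubeadm join master:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>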

  • Allow a non-root user admin level access to the cluster
 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config 
  • Installing a Pod network add-on (Calico)
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
  • View other values we could have included when creating the cluster
kubeadm config print init-defaults
  • List nodes
kubectl get node -o=wide
  • View the master node details
kubectl describe node master 

Grow the Cluster

  • In the second node, install Docker, configure and install Kubernetes
  • On the master node, find the token for the kubeadm join command (the token lasts 24 hours by default)
kubeadm token list
  • If adding a node after the token has expired, create a new token on the master
kubeadm token create
  • Create and use a Discovery Token CA Cert Hash to ensure the node joins the cluster in a secure manner

DOC: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/
SEARCHSTRING: kubeadm join

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex
  • Use the token and hash, in this case sha256:long-hash, to join the cluster from the second/worker node
kubeadm join --token e0zfuy.uy4sx38yw9n0c9xe master:6443 --discovery-token-ca-cert-hash sha256:3ec8d70ff081691a2fbd817f1ebbaa5a296863cc3ee4793d847d6ae278c768a7

Finish Cluster Setup

  • View the available nodes
kubectl get node
  • Look at the details of the node
kubectl describe node master
  • Allow the master server to run non-infrastructure pods

DOC: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
SEARCHSTRING: Installing kubeadm Control plane node isolation

kubectl describe node | grep -i taint
kubectl taint nodes --all node-role.kubernetes.io/master-
  • Determine if the DNS and Calico pods are ready for use
kubectl get pods --all-namespaces
  • Only if you notice the coredns pods are stuck in ContainerCreating status may you have to delete them, causing new ones to be generated
kubectl -n kube-system delete pod coredns-576cbf47c7-vq5dz coredns-576cbf47c7-rn6v4
  • When it finishes you should see a new tunnel interface, tunl0. As you create objects more interfaces will be created, such as cali interfaces when you deploy pods
ip a

Deploy a Simple Application

  • We will test to see if we can deploy a simple application by creating a new deployment

DOC: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
SEARCHSTRING: kubectl Cheat

kubectl create deployment nginx --image=nginx
kubectl get deployments
  • View the details of the deployment
kubectl describe deployment nginx
  • View the basic steps the cluster took in order to pull and deploy the new application
kubectl get events
  • You can also view the output in yaml format, which could be used to create this deployment again or new deployments
kubectl get deployment nginx -o yaml
  • Run the command again and redirect the output to a file. Then edit the file. Remove the creationTimestamp, resourceVersion, selfLink, and uid lines. Also remove all the lines including and after status:
kubectl get deployment nginx -o yaml > first.yaml
vim first.yaml
  • Delete the existing deployment.
kubectl delete deployment nginx
  • Create the deployment again this time using the file.
kubectl create -f first.yaml
  • Look at the yaml output of this iteration and compare it against the first
kubectl get deployment nginx -o yaml > second.yaml
diff first.yaml second.yaml
  • Now that we have worked with the raw output we will explore two other ways of generating useful YAML or JSON. Use the --dry-run option and verify no object was created
kubectl create deployment two --image=nginx --dry-run -o yaml
kubectl get deployment
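The same --dry-run -o yaml technique works for most object types and is a quick way to scaffold manifests during the exam; for example (nothing is actually created):

kubectl create namespace test --dry-run -o yaml
kubectl create service clusterip demo --tcp=80:80 --dry-run -o yaml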
  • Existing objects can be viewed in ready-to-use YAML output. Take a look at the existing nginx deployment. Note there is more detail with the --export option
kubectl get deployments nginx --export -o yaml
  • The output can also be viewed in JSON output
kubectl get deployment nginx --export -o json
  • Let's expose the nginx app, but first we need to define the container port

DOC: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
SEARCHSTRING: Deployments

vim first.yaml
...
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:                      #Add these
    - containerPort: 80         #three
      protocol: TCP             #lines
    resources: {}
...
kubectl replace -f first.yaml
kubectl get deploy,pod
kubectl expose deployment/nginx
  • Verify the service configuration. First look at the service, then the endpoint information. Note the ClusterIP is not the same as the endpoint IP: the ClusterIP is a virtual IP from the service range handled by kube-proxy, while the endpoint is the pod IP provided by the Calico network.
kubectl get svc nginx
kubectl get ep nginx
  • Determine which node the container is running on
kubectl describe pod nginx-85ff79dd56-cjd9s | grep Node:
  • Run tcpdump on the node where the container is running, to view traffic on the tunl0 interface
sudo tcpdump -i tunl0
  • Test access to the ClusterIP and then the endpoint (pod) IP on port 80. You should see the generic nginx installed-and-working page
curl 10.103.3.7:80
curl 192.168.171.68:80
  • Now scale up the deployment from one to three web servers
kubectl scale deployment nginx --replicas=3
kubectl get deployment nginx
  • View the current endpoints. There now should be three
kubectl get ep nginx
  • Find the oldest pod of the nginx deployment and delete it. Use the AGE field to determine which has been running the longest
kubectl get pod -o wide
kubectl delete pod nginx-85ff79dd56-6bmkb
  • Wait a minute then view the pods again. One should be newer than the others.
kubectl get po

Access from Outside the Cluster

  • Begin by getting a list of the pods
kubectl get po
  • Choose one of the pods and use the exec command to run printenv inside the pod
kubectl exec nginx-85ff79dd56-cjd9s -- printenv |grep KUBERNETES
  • Find and then delete the existing service for nginx
kubectl get svc
kubectl delete svc nginx
  • Create the service again, but this time pass the LoadBalancer type. Check the status and note the external ports mentioned
kubectl expose deployment nginx --type=LoadBalancer
kubectl get svc
  • Open a browser on your local system, and use the public IP of your node and the NodePort shown in the service output (30218 in this example)
  • Scale the deployment to zero replicas. Then test the web page again. Once all pods have finished terminating, accessing the web page should fail
kubectl scale deployment nginx --replicas=0
kubectl get po
  • Scale the deployment up to two replicas. The web page should work again
kubectl scale deployment nginx --replicas=2
kubectl get po
  • Delete the deployment to recover system resources. Note that deleting a deployment does not delete the endpoints or services
kubectl delete deployments nginx
kubectl delete ep nginx
kubectl delete svc nginx

KUBERNETES ARCHITECTURE

Working with CPU and Memory Constraints

DOC: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
SEARCHSTRING: Managing Resources for Containers

  • Use a container called stress, in a deployment which we will name hog, to generate load
kubectl create deployment hog --image vish/stress
kubectl get deployments
  • View the details. There are no settings limiting resource usage. Instead, there are empty curly brackets
kubectl describe deployment hog
kubectl get deployment hog -o yaml
  • We will use the YAML output to create our own configuration file
kubectl get deployment hog --export -o yaml > hog.yaml
  • Add memory limits
vim hog.yaml
...
spec:
  containers:
  - image: vish/stress
    imagePullPolicy: Always
    name: stress
    resources:             # Remove {}
      limits:              # Add
        memory: "4Gi"      # these
      requests:            # 4 
        memory: "2500Mi"   # lines
...
  • Replace the deployment using the newly edited file
kubectl replace -f hog.yaml
  • Verify the change has been made
kubectl get deployment hog -o yaml
  • View the stdout of the hog container
kubectl get po
kubectl logs hog-86f585df6-jzjxz
  • Edit the hog configuration file and add arguments for stress to consume CPU and memory. The args: entry should be indented the same number of spaces as resources:
vim hog.yaml
...
spec:
  containers:
  - image: vish/stress
    imagePullPolicy: Always
    name: stress
    resources:
      limits:
        cpu: "1"
        memory: "4Gi"
      requests:
        cpu: "0.5"
        memory: "2500Mi"
    args:
    - -cpus
    - "2"
    - -mem-total
    - "1950Mi"
    - -mem-alloc-size
    - "100Mi"
    - -mem-alloc-sleep
    - "1s"
...
  • Delete and recreate the deployment. Using the running top command you should see increased CPU usage almost immediately and memory being allocated to the stress program in 100M chunks
kubectl delete deployment hog
kubectl create -f hog.yaml
top
kubectl get po
kubectl logs hog-67d6566856-62mnh
  • Delete the deployment
kubectl delete deployment hog

Resource Limits for a Namespace

DOC: https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/
SEARCHSTRING: Configure Minimum and Maximum CPU Constraints for a Namespace

  • Create a new namespace
kubectl create namespace low-usage-limit
kubectl get namespace
  • Create a YAML file which limits CPU and memory usage. The kind to use is LimitRange
vim low-resource-range.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: low-resource-range
spec:
  limits:
  - default:
      cpu: 1
      memory: 500Mi
    defaultRequest:
      cpu: 0.5
      memory: 100Mi
    type: Container
  • Create the LimitRange object and assign it to the newly created namespace low-usage-limit
kubectl -n=low-usage-limit create -f low-resource-range.yaml
  • Verify
kubectl get LimitRange --all-namespaces
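To see the defaults in a readable form, the LimitRange can also be described inside its namespace (an optional check):

kubectl -n low-usage-limit describe limitrange low-resource-range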
  • Create a new deployment in the namespace
kubectl -n low-usage-limit create deployment limited-hog --image vish/stress
  • List the current deployments
kubectl get deployments --all-namespaces
  • View all pods within the namespace
kubectl -n low-usage-limit get pods
  • Look at the details of the pod. Note the settings inherited from the namespace
kubectl -n low-usage-limit get pod limited-hog-5c8d494fc5-2sqwx -o yaml
  • Generate a config file for the original hog deployment, copy it, and edit the copy. Add the namespace: line so that the new deployment will be in the low-usage-limit namespace. Delete the selfLink line if present
kubectl create deployment hog --image vish/stress --dry-run -o yaml > hog.yaml
cp hog.yaml hog2.yaml
vim hog2.yaml
...
  labels:
    run: hog
  name: hog
  namespace: low-usage-limit     #Add this line
spec:
...
  • Create the deployment, view the deployments, and run top. You should find that both hog deployments are using about the same amount of resources, once the memory is fully allocated.
kubectl create -f hog2.yaml
kubectl get deployments --all-namespaces
top
  • Delete the hog deployments to recover system resources
kubectl -n low-usage-limit delete deployment hog limited-hog

Basic Node Maintenance

DOC: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
SEARCHSTRING: kubectl drain

  • Create a deployment, then scale to create plenty of pods
kubectl create deployment maint --image=nginx
kubectl scale deployment maint --replicas=10
  • Use the terminal on all nodes to get a count of the current docker containers
sudo docker ps | wc -l
  • With another command we can also see where the current maint containers are running
kubectl get po --all-namespaces -o wide | grep maint
  • In order to complete maintenance we may need to move containers from a node and prevent new ones from deploying. One way to do this is to drain, or cordon, the node.
kubectl get nodes
  • Modifying your second, worker node, update the node to drain the pods. Some resources may not drain, expect an error
kubectl drain worker --ignore-daemonsets
kubectl describe node |grep -i taint

We can use the --ignore-daemonsets option to ignore containers which are not intended to move
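To see exactly which Pods are scheduled on the node before and after the drain, a field selector can be used (a handy optional check):

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=worker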

  • Run the command again. This time the output should both indicate the node has already been cordoned, then show the node has been drained
kubectl drain worker --ignore-daemonsets --delete-local-data
  • Look at your worker node; there should be fewer pods and containers than before
sudo docker ps | wc -l
kubectl get po --all-namespaces -o wide | grep maint
  • Update the node taint such that the scheduler will use the node again
kubectl uncordon worker
kubectl describe node |grep -i taint
  • Clean up by deleting the maint deployment
kubectl delete deployment maint

API OBJECTS

RESTful API Access

DOC: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/
SEARCHSTRING: Access Clusters Using the Kubernetes API

  • Use kubectl config view to get overall cluster configuration, and find the server entry. This will give us both the IP and the port
kubectl config view
  • Next we need to find the bearer token. This is part of a default token. Look at a list of tokens, first all on the cluster, then just those in the default namespace. There will be a secret for each of the controllers of the cluster
kubectl get secrets --all-namespaces
kubectl get secrets
  • Look at the details of the secret. We will need the token: information from the output
kubectl describe secret default-token-pj7fl
  • Create an environment variable with the token
export TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode)
  • Test to see if you can get basic API information from your cluster.
curl https://master:6443/apis --header "Authorization: Bearer $TOKEN" -k
  • Let's change the path to /api/v1
curl https://master:6443/api/v1 --header "Authorization: Bearer $TOKEN" -k
  • Listing namespaces should return an error. It shows our request is being seen as system:serviceaccount, which does not have the RBAC authorization to list all namespaces in the cluster
curl https://master:6443/api/v1/namespaces --header "Authorization: Bearer $TOKEN" -k
  • Pods can also make use of included certificates to use the API. The certificates are automatically made available to a pod under /var/run/secrets/kubernetes.io/serviceaccount/. The token file holds the same value we put into the $TOKEN variable. Once you exit, the container will not restart and the pod will show as Completed
kubectl run -it busybox --image=busybox --restart=Never
ls /var/run/secrets/kubernetes.io/serviceaccount         #run inside the pod
  • Clean up by deleting the busybox pod
kubectl delete pod busybox

Using the Proxy

Another way to interact with the API is via a proxy. The proxy can be run from a node or from within a Pod through the use of a sidecar. In the following steps we will deploy a proxy listening on the loopback address and use curl to access the API server. If the curl request works through the proxy but not from outside the cluster, we have narrowed the issue down to authentication and authorization rather than something further along the API ingestion process

  • Begin by starting the proxy. It will start in the foreground by default
kubectl proxy -h
kubectl proxy --api-prefix=/ &
  • Now use the same curl command, but point toward the IP and port shown by the proxy
curl http://127.0.0.1:8001/api/
  • Make an API call to retrieve the namespaces. Should work now as the proxy is making the request on your behalf
curl http://127.0.0.1:8001/api/v1/namespaces
  • Stop the proxy service as we won’t need it any more
kill <PID>

JOBS

Working with Jobs

DOC: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
SEARCHSTRING: jobs

While most API objects are deployed such that they remain available, there are some we may want to run a particular number of times (a Job), and others on a regular basis (a CronJob)

  • Create a job
vim job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sleepy
spec:
  template:
    spec:
      containers:
      - name: resting
        image: busybox
        command: ["/bin/sleep"]
        args: ["3"]
      restartPolicy: Never
  • Create the job, then verify and view the details
kubectl create -f job.yaml
kubectl get job
kubectl describe jobs.batch sleepy
kubectl get job
  • View the configuration information of the job. There are three parameters we can use to affect how the job runs: backoffLimit, completions, and parallelism
kubectl get jobs.batch sleepy -o yaml
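A quick way to pull just those three values is a jsonpath query (an optional shortcut):

kubectl get jobs.batch sleepy -o jsonpath='{.spec.backoffLimit} {.spec.completions} {.spec.parallelism}{"\n"}'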
  • As the job continues to AGE in a completion state, delete the job
kubectl delete jobs.batch sleepy
  • Edit the YAML and add the completions: parameter and set it to 7
vim job.yaml
...
spec:
  completions: 7   #Add this line 
  template:
    spec:
...    
  • Create the job again. As you view the job note that COMPLETIONS begins as 0 of 7
kubectl create -f job.yaml
kubectl get jobs.batch
  • View the pods as they run
kubectl get pods
  • Eventually all the jobs will have completed. Verify then delete the job
kubectl get jobs
kubectl delete jobs.batch sleepy
  • This time add in the parallelism: parameter. Set it to 2 such that two pods at a time will be deployed
vim job.yaml
...
spec:
  completions: 7 
  parallelism: 2   #Add this line 
  template:
    spec:
...    
  • Create the job again. You should see the pods deployed two at a time
kubectl create -f job.yaml
kubectl get pods
kubectl get jobs
  • Add a parameter which will stop the job after a certain number of seconds. Set activeDeadlineSeconds: to 12. The job and all pods will end once it has run for 12 seconds. We will also increase the sleep argument to 6, just to be sure it does not finish by itself
vim job.yaml
...
spec:
  completions: 7
  parallelism: 2
  activeDeadlineSeconds: 12      #Add this line
  template:
    spec:
      containers:
      - name: resting
        image: busybox
        command: ["/bin/sleep"]
        args: ["6"]              #Edit this line
      restartPolicy: Never       
  • Delete and recreate the job again
kubectl delete jobs.batch sleepy
kubectl create -f job.yaml
kubectl get jobs
  • View the message: entry in the Status section of the object YAML output
kubectl get job sleepy -o yaml
  • Delete the job
kubectl delete jobs.batch sleepy

Create a CronJob

DOC: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
SEARCHSTRING: cronjob

A CronJob creates a watch loop which will create a batch Job on your behalf each time the scheduled time arrives
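The schedule: field uses standard cron syntax with five fields: minute, hour, day of month, month, and day of week. A few examples for reference:

*/2 * * * *     every two minutes (used below)
0 3 * * 0       at 03:00 every Sunday
30 */6 * * *    at minute 30 of every sixth hour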

  • Create the new CronJob. View the jobs. It will take two minutes for the CronJob to run and generate a new batch Job
vim cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: sleepy
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name:  resting
            image: busybox
            command: ["/bin/sleep"]
            args: ["5"]
          restartPolicy: Never
kubectl create -f cronjob.yaml
kubectl get cronjobs.batch
kubectl get jobs.batch
  • Ensure that if the job continues for more than 10 seconds it is terminated. We will first edit the sleep command to run for 40 seconds, then add the activeDeadlineSeconds: entry to the pod spec in the template
vim cronjob.yaml
...
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          activeDeadlineSeconds: 10      #Add this line
          containers:
          - name:  resting
            image: busybox
            command: ["/bin/sleep"]
            args: ["40"]
          restartPolicy: Never
  • Delete and recreate the CronJob. It may take a couple of minutes for the batch Job to be created and terminate due to the timer
kubectl delete cronjobs.batch sleepy
kubectl create -f cronjob.yaml
kubectl get jobs
kubectl get cronjobs.batch
kubectl get jobs
kubectl get cronjobs.batch
kubectl delete cronjobs.batch sleepy

MANAGING STATE WITH DEPLOYMENTS

Working with ReplicaSets

DOC: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
SEARCHSTRING: replicasets

  • View any current ReplicaSets
kubectl get rs
  • Create a YAML file for a simple ReplicaSet
vim rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-one
spec:
  replicas: 2
  selector:
    matchLabels:
      system: ReplicaOne
  template:
    metadata:
      labels:
        system: ReplicaOne
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.1
        ports:
        - containerPort: 80
kubectl create -f rs.yaml
kubectl describe rs rs-one
  • View the Pods created with the ReplicaSet
kubectl get pods
  • Now we will delete the ReplicaSet, but not the Pods it controls
kubectl delete rs rs-one --cascade=false
  • View the ReplicaSet and Pods again
kubectl describe rs rs-one
kubectl get pods
  • Create the ReplicaSet again. As long as we do not change the selector field, the new ReplicaSet should take ownership. Pod software versions cannot be updated this way
kubectl create -f rs.yaml
kubectl get rs
kubectl get pods
  • We will now isolate a Pod from its ReplicaSet. Begin by editing the label of a Pod. We will change the system: parameter to be IsolatedPod
kubectl edit pod rs-one-kjp5w
...
labels:
  system: IsolatedPod    #Change from ReplicaOne
...
  • View the number of pods within the ReplicaSet. You should see two running
kubectl get rs
  • Now view the pods with the label key of system. You should note there are three, with one being newer than the others. The ReplicaSet made sure to keep two replicas, replacing the Pod which was isolated
kubectl get po -L system
  • Delete the ReplicaSet, then view any remaining Pods
kubectl delete rs rs-one
  • There should be no ReplicaSets, but one Pod
kubectl get rs
kubectl get pod
  • Delete the remaining Pod using the label
kubectl delete pod -l system=IsolatedPod

Working with DaemonSets

A DaemonSet is a watch loop object like a Deployment. The DaemonSet ensures that when a node is added to the cluster a Pod is created on that node. A Deployment only ensures a particular number of Pods exist overall; several could end up on a single node. A DaemonSet is helpful to ensure applications run on each node, useful for things like metrics and logging, especially in large clusters where hardware may be swapped out often. Should a node be removed from the cluster, the DaemonSet ensures its Pods are garbage collected before removal. Starting with Kubernetes v1.12 the default scheduler handles DaemonSet placement, which means we can configure certain nodes to not run a particular DaemonSet's Pods

  • Create a DaemonSet and verify the newly formed DaemonSet. There should be one Pod per node in the cluster
cp rs.yaml ds.yaml
vim ds.yaml
apiVersion: apps/v1
kind: DaemonSet               #Edit this line
metadata:
  name: ds-one                #Edit this line
spec:
  replicas: 2                 #Remove this line
  selector:
    matchLabels:
      system: DaemonSetOne    #Edit this line
...
kubectl create -f ds.yaml
kubectl get ds
kubectl describe pod ds-one-b1dcv | grep Image:

Rolling Updates and Rollbacks

One of the advantages of micro-services is the ability to replace and upgrade a container while continuing to respond to client requests. We will use the OnDelete setting that upgrades a container when the predecessor is deleted, then the use the RollingUpdate feature as well, which begins a rolling update immediately

  • Begin by viewing the current updateStrategy setting for the DaemonSet
kubectl get ds ds-one -o yaml | grep -A 3 Strategy
  • Edit the object to use the OnDelete update strategy. This would allow the manual termination of some of the pods, resulting in an updated image when they are recreated
kubectl edit ds ds-one
...
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: OnDelete              #Update this line
...
  • Update the DaemonSet to use a newer version of the nginx server. This time use the set command
kubectl set image ds ds-one nginx=nginx:1.18.0-alpine
  • Verify that the Image: parameter for the Pod checked in the previous section is unchanged
kubectl describe po ds-one-4sbn9 |grep Image:
  • Delete the Pod. Wait until the replacement Pod is running and check the version
kubectl delete po ds-one-4sbn9
kubectl get pod
kubectl describe po ds-one-jbbtv |grep Image:
  • View the image running on the older Pod. It should still show the original version (1.15.1)
kubectl describe pod ds-one-z31r4 |grep Image:
  • View the history of changes for the DaemonSet. You should see two revisions listed. As we did not use the --record option we don't see why the object was updated
kubectl rollout history ds ds-one
  • View the settings for the various versions of the DaemonSet. The Image: line should be the only difference between the two outputs
kubectl rollout history ds ds-one --revision=1
kubectl rollout history ds ds-one --revision=2 
  • Use kubectl rollout undo to change the DaemonSet back to an earlier version. As we are still using the OnDelete strategy there should be no change to the Pods
kubectl rollout undo ds ds-one --to-revision=1
kubectl describe pod ds-one-jbbtv | grep Image:
  • Delete the Pod, wait for the replacement to spawn then check the image version again
kubectl delete pod ds-one-jbbtv
kubectl get pod
kubectl describe pod ds-one-x7q4z | grep Image:
kubectl describe ds |grep Image:
  • Create a new DaemonSet, this time setting the update policy to RollingUpdate
kubectl get ds ds-one -o yaml --export > ds2.yaml
vim ds2.yaml
...
  name: ds-two             #Update this line
... 
    type: RollingUpdate     #Update this line
  • Create the new DaemonSet
kubectl create -f ds2.yaml
kubectl get pod
kubectl describe pods |grep Image:
  • Edit the configuration file and set the image to a newer version. Include the --record option
kubectl edit ds ds-two --record
  • Now view the age of the Pods. Two should be much younger
kubectl get pod
  • Verify the Pods are using the new version of the software
kubectl describe pods |grep 'Image:\|DaemonSet/ds'
  • View the rollout status and the history of the DaemonSets
kubectl rollout status ds ds-two
kubectl rollout history ds ds-two
  • View the changes in the update; they should look the same as the previous history, but this time the update did not require the Pods to be deleted manually
kubectl rollout history ds ds-two --revision=2
  • Clean up the system by removing the DaemonSets
kubectl delete ds ds-one ds-two

SERVICES

Deploy A New Service

  • Deploy two nginx servers using kubectl and a .yaml file; also create the namespace for the deployment
vim nginx-one.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-one
  labels:
    system: secondary
  namespace: accounting
spec:
  selector:
    matchLabels:
      system: secondary
  replicas: 2
  template:
    metadata:
      labels:
        system: secondary
    spec:
      containers:
      - image: nginx:1.11.1
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 8080
          protocol: TCP
      nodeSelector:
        system: secondOne
kubectl create ns accounting
kubectl create -f nginx-one.yaml
kubectl -n accounting get pods
  • View the node each has been assigned to (or not) and the reason, which shows under events at the end of the output
kubectl -n accounting describe pod nginx-one-6f9597b9f4-dzpnl
  • View the existing labels on the nodes in the cluster
kubectl get nodes --show-labels
  • Label the secondary node
kubectl label node worker system=secondOne
kubectl get nodes --show-labels
  • View the pods in the accounting namespace
kubectl -n accounting get pods
  • View Pods by the label we set in the YAML file. If you look back, the Pods were given a label of system=secondary
kubectl get pods -l system=secondary --all-namespaces
  • Expose the new deployment
kubectl -n accounting expose deployment nginx-one
  • View the newly exposed endpoints
kubectl -n accounting get ep nginx-one
  • Attempt to access the Pod on port 80
curl <endpointIp>:80
  • View the service details
kubectl -n accounting describe service 

Configure a NodePort

Earlier we deployed a LoadBalancer, which also allocated a ClusterIP. Now we will deploy a NodePort. While you can access a container from within the cluster, a NodePort NATs traffic from outside the cluster to a port on each node. One reason to deploy a NodePort instead is that a LoadBalancer also requires a load balancer resource from cloud providers like GKE and AWS.
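For reference, the expose command used below creates a Service roughly equivalent to this manifest (a sketch only; the nodePort value is auto-assigned from the 30000-32767 range unless set explicitly):

apiVersion: v1
kind: Service
metadata:
  name: servicenp
  namespace: accounting
spec:
  type: NodePort
  selector:
    system: secondary
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP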

  • Expose the deployment using the --type=NodePort
kubectl -n accounting expose deployment nginx-one --type=NodePort --name=servicenp
  • View the details of the service, for the autogenerated port
kubectl -n accounting describe service servicenp | grep NodePort:
  • Test access to the nginx web server using the combination of master URL and NodePort
curl http://master:<nodePort>

Use Labels to Manage Resources

  • Try to delete all Pods with the system=secondary label, in all namespaces
kubectl delete pods -l system=secondary --all-namespaces
  • View the Pods again. New versions of the Pods should be running, as the controller responsible for them is still present
kubectl -n accounting get pods
  • We also gave a label to the deployment. View the deployment in the accounting namespace
kubectl -n accounting get deploy --show-labels
  • Delete the deployment using its label
kubectl -n accounting delete deploy -l system=secondary
  • Remove the label from the secondary node
kubectl label node worker system-

VOLUMES AND DATA

Create a ConfigMap

  • We will create a ConfigMap containing primary colors. We will create a series of files to ingest into the ConfigMap. First, we create a directory primary and populate it with four files. Then we create a file in our home directory with our favorite color
mkdir primary
echo c > primary/cyan
echo m > primary/magenta
echo y > primary/yellow
echo k > primary/black
echo "known as key" >> primary/black
echo blue > favorite
  • Now we will create the ConfigMap and populate it with the files we created as well as a literal value from the command line
kubectl create configmap colors --from-literal=text=black  --from-file=./favorite  --from-file=./primary/
  • View how the data is organized inside the cluster
kubectl get configmap colors
kubectl get configmap colors -o yaml
  • Now we can create a Pod to use the ConfigMap. A parameter from the ConfigMap is defined as an environment variable
vim simpleshell.yaml
apiVersion: v1
kind: Pod
metadata:
  name: shell-demo
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: ilike
      valueFrom:
        configMapKeyRef:
          name: colors
          key: favorite
  • Create the Pod and view the environmental variable, then delete the pod
kubectl create -f simpleshell.yaml
kubectl exec shell-demo -- /bin/bash -c 'echo $ilike'
kubectl exec shell-demo -- env
kubectl delete pod shell-demo
  • All key-value pairs from the ConfigMap can be included as environment variables as well. Let's use envFrom
vim simpleshell2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: shell-demo
spec:
  containers:
  - name: nginx
    image: nginx
    envFrom:
    - configMapRef:
        name: colors
kubectl create -f simpleshell2.yaml
kubectl exec shell-demo -- env
kubectl exec shell-demo -- /bin/bash -c 'env'
kubectl delete pod shell-demo
  • A ConfigMap can also be created from a YAML file
vim car-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fast-car
  namespace: default
data:
  car.make: Ford
  car.model: Mustang
  car.trim: Shelby
  • Create the ConfigMap and verify the settings
kubectl create -f car-map.yaml
kubectl get configmap fast-car -o yaml
  • We will now make the ConfigMap available to a Pod as a mounted volume
vim simpleshell3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: shell-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: car-vol
      mountPath: /etc/cars
  volumes:
    - name: car-vol
      configMap:
        name: fast-car
  • Create the Pod again. Verify the volume exists and the contents of a file within
kubectl create -f simpleshell3.yaml
kubectl exec shell-demo -- /bin/bash -c 'df -h |grep car'
kubectl exec shell-demo -- /bin/bash -c 'ls /etc/cars'
kubectl exec shell-demo -- /bin/bash -c 'cat /etc/cars/car.trim'
  • Delete the Pod and ConfigMaps
kubectl delete pods shell-demo
kubectl delete configmap fast-car colors

Creating a Persistent NFS Volume (PV)

  • Install NFS server on master node
dnf install nfs-utils
systemctl start nfs-server.service
systemctl enable nfs-server.service
systemctl status nfs-server.service
  • Make and populate a directory to be shared. Also give it permissions similar to /tmp/
mkdir /opt/share
chmod 1777 /opt/share/
echo software > /opt/share/helloworld.txt
  • Share the newly created directory
vim /etc/exports
/opt/share/ *(rw,sync,no_root_squash,subtree_check)
  • To export the directory, run the exportfs command
exportfs -arv

Test by mounting the resource from the second node

showmount -e master
mount master:/opt/share /mnt
ls -l /mnt
  • Create a YAML file for the object with kind: PersistentVolume. Use the hostname of the master server and the directory you created. Only the syntax is checked; an incorrect name or directory will not generate an error, but a Pod using the resource will not start. Note that the accessModes do not currently affect actual access and are typically used as labels instead
vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvvol-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /opt/share
    server: master
    readOnly: false
  • Create the persistent volume, then verify its creation
kubectl create -f pv.yaml
kubectl get pv

Creating a Persistent Volume Claim (PVC)

Before Pods can take advantage of the new PV we need to create a PersistentVolumeClaim (PVC)

  • Create a YAML file for the new pvc
vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-one
spec:
  accessModes:
  - ReadWriteMany
  resources:
     requests:
       storage: 200Mi
  • Create and verify the new pvc is bound. Note that the size shows 1Gi, even though 200Mi was requested. Only a volume of at least that size could be used
kubectl create -f pvc.yaml
kubectl get pvc
  • Look at the status of the pv again, to determine if it is in use. It should show a status of Bound
kubectl get pv
  • Create a new deployment to use the pvc
vim nfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  generation: 1
  labels:
    run: nginx
  name: nginx-nfs
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        volumeMounts:
        - name: nfs-vol
          mountPath: /opt
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      volumes:                          
      - name: nfs-vol
        persistentVolumeClaim:
          claimName: pvc-one
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
kubectl create -f nfs.yaml
kubectl get pods
kubectl describe pod nginx-nfs-745ddb6d6c-k76nb
  • View the status of the PVC. It should show as bound
kubectl get pvc
  • Let's clean up
kubectl delete deploy nginx-nfs
kubectl delete pvc pvc-one
kubectl delete pv pvvol-1

Using a ResourceQuota to Limit PVC Count and Usage

We will use the ResourceQuota object to limit both the total storage consumption and the number of persistent volume claims

  • Create a yaml file for the ResourceQuota object
vim storage-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storagequota
spec:
  hard:
    persistentvolumeclaims: "10"
    requests.storage: "500Mi"
  • Create a new namespace called small
kubectl create namespace small
kubectl describe ns small
  • Create a new pv and pvc in the small namespace
kubectl -n small create -f pv.yaml
kubectl -n small create -f pvc.yaml
  • Create the new resource quota, placing this object into the small namespace
kubectl -n small create -f storage-quota.yaml
kubectl describe ns small
  • Remove the namespace line from nfs.yaml, then create the deployment
vim nfs.yaml
kubectl -n small create -f nfs.yaml
kubectl -n small get deploy
kubectl -n small describe deploy nginx-nfs
kubectl -n small get pod
  • Ensure the Pod is running and is using the NFS mounted volume
kubectl -n small describe pod nginx-nfs-745ddb6d6c-7s4dh
  • View the quota usage of the namespace
kubectl describe ns small
  • Create a 350M file inside of the /opt/share directory on the host and view the quota usage again. Note that with NFS the size of the share is not counted against the deployment
dd if=/dev/zero of=/opt/share/bigfile bs=1M count=350
kubectl describe ns small
du -h /opt/
  • Now let us illustrate what happens when a deployment requests more than the quota. Begin by shutting down the existing deployment
kubectl -n small get deploy
kubectl -n small delete deploy nginx-nfs
  • Once the Pod has shut down view the resource usage of the namespace again. Note the storage did not get cleaned up when the pod was shut down
kubectl describe ns small
  • Remove the pvc then view the pv it was using. Note the RECLAIM POLICY and STATUS
kubectl -n small get pvc
kubectl -n small delete pvc pvc-one
  • Dynamically provisioned storage uses the ReclaimPolicy of the StorageClass, which could be Delete, Retain, or, for some types, Recycle. Manually created persistent volumes default to Retain unless set otherwise at creation. The default policy is to retain the storage to allow recovery of any data. To change this, begin by viewing the YAML output
kubectl get pv/pvvol-1 -o yaml
  • Currently we will need to delete and re-create the object
kubectl delete pv/pvvol-1
grep Retain pv.yaml
kubectl create -f pv.yaml
  • We will use kubectl patch to change the retention policy to Delete
kubectl patch pv pvvol-1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
kubectl get pv/pvvol-1
kubectl describe ns small
  • Create the pvc again. Even with no pods running, note the resource usage
kubectl -n small create -f pvc.yaml
kubectl describe ns small
  • Remove the existing quota from the namespace
kubectl -n small get resourcequota
kubectl -n small delete resourcequota storagequota
  • Edit the storage-quota.yaml file and lower the capacity to 100Mi
vim storage-quota.yaml
  • Create and verify the new storage quota. Note the hard limit has already been exceeded
kubectl -n small create -f storage-quota.yaml
kubectl describe ns small
  • Create the deployment again. View the deployment
kubectl -n small create -f nfs.yaml
kubectl -n small describe deploy/nginx-nfs
kubectl -n small get po
  • As we were able to deploy more pods even with an apparent hard quota set, let us test whether the reclaim of storage takes place. Remove the deployment and the persistent volume claim
kubectl -n small delete deploy nginx-nfs
kubectl -n small delete pvc/pvc-one
  • Check whether the persistent volume still exists. You will see a removal was attempted, but failed. If you look closer you will find the error has to do with the lack of a deleter volume plugin for NFS. Other storage protocols have a plugin
kubectl -n small get pv
kubectl delete pv/pvvol-1
  • Edit the persistent volume YAML file and change the persistentVolumeReclaimPolicy: to Recycle
vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvvol-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /opt/share
    server: master
    readOnly: false
  • Add a LimitRange to the namespace and attempt to create the persistent volume and persistent volume claim again
vim low-resource-range.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: low-resource-range
spec:
  limits:
  - default:
      cpu: 1
      memory: 500Mi
    defaultRequest:
      cpu: 0.5
      memory: 100Mi
    type: Container
kubectl -n small create -f low-resource-range.yaml
kubectl describe ns small
  • Create the persistent volume again. View the resource. Note the Reclaim Policy is Recycle
kubectl -n small create -f pv.yaml
  • Attempt to create the persistent volume claim again. The quota only takes effect if there is also a resource limit in effect
kubectl -n small create -f pvc.yaml
  • Edit the resourcequota to increase requests.storage to 450Mi
kubectl -n small edit resourcequota
...
spec:
  hard:
    persistentvolumeclaims: "10"
    requests.storage: 450Mi          
...
  • Create the pvc again. It should work this time. Then create the deployment again
kubectl -n small create -f pvc.yaml
kubectl -n small create -f nfs.yaml
kubectl describe ns small
  • Delete the deployment. View the status of the pv and pvc
kubectl -n small delete deploy nginx-nfs
kubectl -n small get pvc
kubectl -n small get pv
  • Delete the pvc and check the status of the pv. It should show as Available
kubectl -n small delete pvc pvc-one
kubectl -n small get pv
  • Remove the pv and any other resources created
kubectl delete pv pvvol-1

INGRESS

Advanced Service Exposure

  • Let's deploy an app to add to the ingress
kubectl create deployment secondapp --image=nginx
  • Find the labels currently in use by the deployment. We will use them to tie traffic from the ingress controller to the proper service
kubectl get deployments secondapp -o yaml |grep -A2 'labels'
  • Expose the new server as a NodePort using port 80
kubectl expose deployment secondapp --type=NodePort --port=80
  • As we have RBAC configured we need to make sure the controller will run and be able to work with all necessary ports, endpoints and resources. Create a YAML file to declare a clusterrole and a clusterrolebinding
vim ingress.rbac.yaml 
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
  • Create the new role and binding
kubectl create -f ingress.rbac.yaml
  • Create the Traefik ingress controller
vim traefik-ds.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      name: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      hostNetwork: True
      containers:
      - image: traefik:v1.7.24-alpine
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
kubectl create -f traefik-ds.yaml
  • Create the ingress rules so the controller knows how to handle requests. We will pass a false Host header when testing. Note the backend serviceName needs to match the name of the secondapp service
vim ingress.rule.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: www.testone.com
    http:
      paths:
      - backend:
          serviceName: secondapp
          servicePort: 80
        path: /
kubectl create -f ingress.rule.yaml
  • Test the internal and external IP addresses
curl -H "Host: www.testone.com" http://master/
  • Let's create another app
kubectl create deployment thirdpage --image=nginx
kubectl get deployment thirdpage -o yaml |grep -A2 'labels'
  • Expose the new server as a NodePort
kubectl expose deployment thirdpage --type=NodePort --port=80
  • Now we will customize the installation
kubectl exec -it thirdpage-579947d95-qxkss -- /bin/bash
insidepod# apt-get update
insidepod# apt-get install vim -y
insidepod# vim /usr/share/nginx/html/index.html
<!DOCTYPE html>
<html>
<head>
<title>Third Page</title>           #update this line
<style>
...
  • Edit the ingress rules to forward traffic to the thirdpage service
kubectl edit ingress ingress-test
...
spec:
  rules:
  - host: www.testone.com
    http:
      paths:
      - backend:
          serviceName: secondapp
          servicePort: 80
        path: /
  - host: www.testtwo.com                # add
    http:                                # this
      paths:                             # lines
      - backend:                         # add
          serviceName: thirdpage         # this
          servicePort: 80                # lines 
        path: /                          # 
...
  • Test the second hostname
curl -H "Host: www.testtwo.com" http://master/
  • Open the Traefik.io ingress controller dashboard from a web browser
http://<PUBLICIP>:8080/dashboard/
  • Let's clean up
kubectl delete deployment secondapp thirdpage

SCHEDULING

Assign Pods Using Labels

  • List all the nodes
kubectl get nodes
  • View the current labels and taints for the nodes
kubectl describe nodes |grep -A5 -i label
kubectl describe nodes |grep -i taint
  • Get a count of how many containers are running on both the master and worker nodes
kubectl get deployments --all-namespaces
master# docker ps |wc -l
worker# docker ps |wc -l
  • Assign labels to master and worker and check the labels
kubectl label nodes master status=vip
kubectl label nodes worker status=other
kubectl get nodes --show-labels

Let's create a pod

vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vip
spec:
  containers:
  - name: vip1
    image: busybox
    args:
    - sleep
    - "1000000"
  - name: vip2
    image: busybox
    args:
    - sleep
    - "1000000"
  - name: vip3
    image: busybox
    args:
    - sleep
    - "1000000"
  - name: vip4
    image: busybox
    args:
    - sleep
    - "1000000"
  nodeSelector:
    status: vip
  • Deploy the new pod. Verify the containers have been created on the master node
kubectl create -f pod1.yaml
master# docker ps |wc -l
worker# docker ps |wc -l
  • Delete the pod then edit the file, commenting out the nodeSelector lines
kubectl delete pod vip
vim pod1.yaml
...
- sleep
    - "1000000"
#  nodeSelector:
#    status: vip
  • Create the pod again, determine where the new containers have been deployed
kubectl create -f pod1.yaml
master# docker ps |wc -l
worker# docker ps |wc -l
  • Create another pod. Change the names from vip to other, and uncomment the nodeSelector lines
cp pod1.yaml pod2.yaml
sed -i s/vip/other/g pod2.yaml
vim pod2.yaml
...
- sleep
    - "1000000"
  nodeSelector:
    status: other
  • Create pod2. Determine where its containers deploy
kubectl create -f pod2.yaml
master# docker ps |wc -l
worker# docker ps |wc -l
  • Let's clean up
kubectl delete pods vip other
kubectl get pods

Using Taints to Control Pod Deployment

  • Create 8 nginx containers
vim 8nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taint-deployment
spec:
  replicas: 8
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name:  nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
kubectl apply -f 8nginx.yaml
  • Determine where the containers are running
kubectl get po --all-namespaces -o=wide
master# docker ps |wc -l
worker# docker ps |wc -l
  • Delete the deployment
kubectl delete deployment taint-deployment
  • Now we will use a taint to affect the deployment of new containers. There are three taint effects: NoSchedule, PreferNoSchedule, and NoExecute. The scheduling-related taints affect where newly deployed containers go, but do not affect running containers. The use of NoExecute will cause running containers to move. Taint the secondary node, verify it has the taint, then create the deployment again. We will use the key of bubba to illustrate that the key name is just a string an admin can use to track Pods
kubectl taint nodes worker bubba=value:PreferNoSchedule
kubectl describe node |grep Taint
kubectl apply -f 8nginx.yaml
  • Locate where the containers are running. More containers are on the master. Delete the deployment
kubectl get po --all-namespaces -o=wide
master# docker ps |wc -l
worker# docker ps |wc -l
kubectl delete deployment taint-deployment
  • Remove the taint
kubectl taint nodes worker bubba-
kubectl describe node |grep Taint
  • This time use the NoSchedule taint, then create the deployment again
kubectl taint nodes worker bubba=value:NoSchedule
kubectl apply -f 8nginx.yaml
kubectl get po --all-namespaces -o=wide
  • Remove the taint and delete the deployment. When you have determined that all the containers are terminated create the deployment again. Without any taint the containers should be spread across both nodes
kubectl delete deployment taint-deployment
kubectl taint nodes worker bubba-
kubectl apply -f 8nginx.yaml
kubectl get po --all-namespaces -o=wide

  • Use NoExecute to taint the secondary (worker) node. Some containers will remain on the worker node to continue communication with the cluster. Locate where the containers are running
kubectl taint nodes worker bubba=value:NoExecute
kubectl get po --all-namespaces -o=wide
  • Remove the taint
kubectl taint nodes worker bubba-
kubectl get po --all-namespaces -o=wide
  • Let's clean up
kubectl delete deployment taint-deployment

LOGGING AND TROUBLESHOOTING

Viewing Logs Output

Container standard output can be seen via the kubectl logs command. If there is no standard output, you will not see any log output. In addition, the logs are destroyed if the container is destroyed

  • View the current Pods in the cluster. Be sure to view Pods in all namespaces
kubectl get po --all-namespaces
  • View the logs associated with various infrastructure pods
kubectl -n=kube-system logs kube-apiserver-master
kubectl -n=kube-system logs etcd-master
...
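A few kubectl logs options that help with troubleshooting: -f streams the log, -c selects a container in a multi-container Pod, and --previous shows the log of the last terminated instance (pod and container names below are placeholders):

kubectl -n kube-system logs kube-apiserver-master -f
kubectl logs <pod-name> -c <container-name>
kubectl logs <pod-name> --previous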

Adding tools for monitoring and metrics

  • Install and check
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
kubectl -n kube-system get pods
  • Edit the metrics-server deployment to allow insecure TLS. The default certificate is x509 self-signed and not trusted by default. In production you may want to configure and replace the certificate
kubectl -n kube-system edit deployment metrics-server
...
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: metrics-server
      name: metrics-server
    spec:
      containers:
      - args:
        - --kubelet-insecure-tls                                              # add this line
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname    # add this line
        - --cert-dir=/tmp
        - --secure-port=4443
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6   
...
  • Test that the metrics server pod is running and does not show errors
kubectl -n kube-system logs metrics-server<TAB>
  • Test that metrics are working by viewing pod and node metrics
kubectl top pod --all-namespaces
kubectl top nodes

Configure the Dashboard

  • Install the dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
  • Edit the kubernetes-dashboard and change the type to a NodePort
kubectl get svc --all-namespaces
kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard
...
  sessionAffinity: None
  type: NodePort            # Edit this line
status:
  loadBalancer: {}
  • Check the kubernetes-dashboard service again. The Type should show as NodePort
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
  • There have been some issues with RBAC and the dashboard permissions to see objects. In order to ensure access to view various resources, give the dashboard admin access
kubectl create clusterrolebinding dashaccess --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
  • Open the Dashboard from a web browser
http://<PUBLICIP>:<NODEPORT>
  • We will use the Token method to access the dashboard. With RBAC we need to use the proper token, the kubernetes-dashboard-token in this case
kubectl -n kubernetes-dashboard describe secrets kubernetes-dashboard-token-<TAB>

SECURITY

Working with TLS

While one can have multiple cluster root Certificate Authorities (CA), by default each cluster uses its own, intended for intra-cluster communication. The CA certificate bundle is distributed to each node and as a secret to default service accounts. The kubelet is a local agent which ensures local containers are running and healthy

  • View the kubelet on both the master and secondary nodes. The kube-apiserver also shows security information such as certificates and authorization mode. As kubelet is a systemd service, we will start by looking at its status output. Follow the CGroup and kubelet information, which is a long line where configuration settings are drawn from, to find where the configuration file is located
systemctl status kubelet.service
  • Take a look at the settings in the /var/lib/kubelet/config.yaml file. Among other information we can see the /etc/kubernetes/pki/ directory is used for accessing the kube-apiserver. Near the end of the output it also sets the directory to find other pod spec files
less /var/lib/kubelet/config.yaml
  • Other agents on the master node interact with the kube-apiserver. View the configuration files where these settings are made. This was set in the previous YAML file. Look at one of the files for cert information
ls /etc/kubernetes/manifests/
less /etc/kubernetes/manifests/kube-controller-manager.yaml
  • The use of tokens has become central to authorizing component communication. The tokens are kept as secrets. Take a look at the current secrets in the kube-system namespace
kubectl -n kube-system get secrets
  • Take a closer look at one of the secrets and the token within. The certificate-controller-token could be one to look at. The use of the Tab key can help with long names
kubectl -n kube-system get secrets certificate<Tab> -o yaml
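  • The token inside the secret is base64 encoded; it can be extracted and decoded in one line, a sketch using the same tab-completed secret name:
kubectl -n kube-system get secrets certificate<Tab> -o jsonpath='{.data.token}' | base64 --decode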
  • The kubectl config command can also be used to view and update parameters. When making updates this could avoid a typo removing access to the cluster. View the current configuration settings. The keys and certs are redacted from the output automatically
kubectl config view
  • View the options, such as setting a password for the admin instead of a key. Read through the examples and options
kubectl config set-credentials -h
  • Make a copy of your access configuration file. Later steps will update this file and we can view the differences
cp ~/.kube/config ~/.kube/config-backup
  • Explore working with cluster and security configurations both using kubectl and kubeadm. Among other values, find the name of your cluster
kubectl config <Tab><Tab>
kubeadm token -h
kubeadm config -h
  • Review the cluster default configuration settings. There may be some interesting tidbits to the security and infrastructure of the cluster
kubeadm config print init-defaults

Authentication and Authorization

Kubernetes clusters have two types of users: service accounts and normal users. Normal users are assumed to be managed by an outside service; there are no API objects to represent them and they cannot be added via an API call, but service accounts can be. We will use RBAC to configure access to actions within a namespace for a new contractor, Developer Peter, who will be working on a new project

  • Create two namespaces, one for production and the other for development
kubectl create ns development
kubectl create ns production
  • View the current clusters and contexts available. A context allows you to configure the cluster, namespace and user that kubectl commands use in an easy and consistent manner
kubectl config get-contexts
  • Create a new user peter and assign the password k8s
useradd -s /bin/bash peter
passwd peter
  • Generate a private key then Certificate Signing Request (CSR) for Peter
openssl genrsa -out peter.key 2048
openssl req -new -key peter.key -out peter.csr -subj "/CN=peter/O=development"
  • Using the newly created request, generate an x509 certificate signed with the Kubernetes cluster CA keys and set a 45 day expiration
openssl x509 -req -in peter.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out peter.crt -days 45
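  • If you prefer not to touch the CA key files directly, the same certificate can be obtained through the Kubernetes CSR API instead, a sketch for the certificates.k8s.io/v1beta1 API available in 1.17 with an arbitrary object name peter-csr:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: peter-csr
spec:
  request: $(base64 -w0 peter.csr)
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
kubectl certificate approve peter-csr
kubectl get csr peter-csr -o jsonpath='{.status.certificate}' | base64 --decode > peter.crt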
  • Update the access config file to reference the new key and certificate. Normally we would move them to a safe directory instead of a non-root user's home
kubectl config set-credentials peter --client-certificate=/home/peter.crt --client-key=/home/peter.key
  • View the update to your credentials file. Use diff to compare against the copy we made earlier
diff ~/.kube/config-backup ~/.kube/config
  • We will now create a context. For this we will need the name of the cluster, namespace and CN of the user we set or saw in previous steps
kubectl config set-context peter-context --cluster=kubernetes --namespace=development --user=peter
  • Attempt to view the Pods inside the peter-context. Be aware you will get an error
kubectl --context=peter-context get pods
  • Verify the context has been properly set
kubectl config get-contexts
  • Again check the recent changes to the cluster access config file
diff ~/.kube/config-backup ~/.kube/config
  • We will now create a YAML file to associate RBAC rights to a particular namespace and Role
vim role-dev.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: development
  name: developer
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["list", "get", "watch", "create", "update", "patch", "delete"]
# You can use ["*"] for all verbs
  • Create the RBAC
kubectl create -f role-dev.yaml
  • Create a RoleBinding to associate the Role
vim rolebind.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: developer-role-binding
  namespace: development
subjects:
- kind: User
  name: peter
  apiGroup: ""
roleRef:
  kind: Role
  name: developer
  apiGroup: ""
kubectl create -f rolebind.yaml
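  • As an alternative to the YAML files above, the same Role and RoleBinding can be created imperatively, and the resulting access checked with kubectl auth can-i, a sketch:
kubectl -n development create role developer --verb=list,get,watch,create,update,patch,delete --resource=deployments,replicasets,pods
kubectl -n development create rolebinding developer-role-binding --role=developer --user=peter
kubectl auth can-i list pods --namespace=development --as=peter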
  • Test the context again. This time it should work
kubectl --context=peter-context get pods
  • Create a new pod, verify it exists, then delete it
kubectl --context=peter-context create deployment  nginx --image=nginx
kubectl --context=peter-context get pods
kubectl --context=peter-context delete deploy nginx
  • Create a different context for production
vim role-prod.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: dev-prod
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch"] # You can also use ["*"]
vim rolebindprod.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: production-role-binding  
  namespace: production          
subjects:
- kind: User
  name: peter
  apiGroup: ""
roleRef:
  kind: Role
  name: dev-prod                 
  apiGroup: ""
kubectl create -f role-prod.yaml
kubectl create -f rolebindprod.yaml
  • Create the new context for production use
kubectl config set-context peter-context-prod --cluster=kubernetes --namespace=production --user=peter
  • Verify that user peter can view pods using the new context
kubectl --context=peter-context-prod get pods
  • Try to create a Pod in production
kubectl --context=peter-context-prod create deployment nginx --image=nginx
  • View the details of a role
kubectl -n production describe role dev-prod
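  • kubectl auth can-i confirms the read-only access without switching contexts, a small sketch:
kubectl auth can-i list pods --namespace=production --as=peter            # yes
kubectl auth can-i create deployments --namespace=production --as=peter   # no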

Admission Controllers

After authentication and authorization, the last stop before a request is persisted by the API server is an admission controller plug-in. These plug-ins interact with features such as setting a default storage class, checking resource quotas, or enforcing security settings. A newer feature (v1.7.x) is dynamic admission controllers, which allow new controllers to be ingested or configured at runtime

  • View the current admission controller settings. We can enable and disable specific plugins
grep admission /etc/kubernetes/manifests/kube-apiserver.yaml
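  • Plugins are enabled or disabled through flags in the kube-apiserver static Pod manifest; the kubelet restarts the API server automatically when the file changes. A sketch, with plugin names chosen only for illustration:
vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
    - --enable-admission-plugins=NodeRestriction,ResourceQuota
    - --disable-admission-plugins=DefaultStorageClass
...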