Certified Kubernetes Administrator (CKA) notes

kubernetes notes

set up the master node

user@miguelmota1:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[sudo] password for user: 
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0901 00:44:36.182370   16584 kernel_validator.go:81] Validating kernel version
I0901 00:44:36.182463   16584 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [miguelmota1.mylabserver.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.113.178]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [miguelmota1.mylabserver.com localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [miguelmota1.mylabserver.com localhost] and IPs [172.31.113.178 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 39.001439 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node miguelmota1.mylabserver.com as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node miguelmota1.mylabserver.com as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "miguelmota1.mylabserver.com" as an annotation
[bootstraptoken] using token: vvp5zn.6eu7gkbno8yngxzf
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.31.113.178:6443 --token vvp5zn.6eu7gkbno8yngxzf --discovery-token-ca-cert-hash sha256:236a2cad7496e888be3268124b071d8496b01b7927373a1f6ad8c28a96e4087d

user@miguelmota1:~$ mkdir -p $HOME/.kube
user@miguelmota1:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
user@miguelmota1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

^ copies the admin kubeconfig to a local directory so kubectl can be used as a regular user

pods are the smallest unit of compute

user@miguelmota1:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                  READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-6gvml                              0/1       Pending   0          1m
kube-system   coredns-78fcdf6894-mgbn8                              0/1       Pending   0          1m
kube-system   etcd-miguelmota1.mylabserver.com                      1/1       Running   0          57s
kube-system   kube-apiserver-miguelmota1.mylabserver.com            1/1       Running   0          1m
kube-system   kube-controller-manager-miguelmota1.mylabserver.com   1/1       Running   0          1m
kube-system   kube-proxy-qv4kd                                      1/1       Running   0          1m
kube-system   kube-scheduler-miguelmota1.mylabserver.com            1/1       Running   0          1m

scheduler determines which nodes will host which containers as they come in

join a node to the cluster (on server 2)

sudo kubeadm join 172.31.113.178:6443 --token vvp5zn.6eu7gkbno8yngxzf --discovery-token-ca-cert-hash sha256:236a2cad7496e888be3268124b071d8496b01b7927373a1f6ad8c28a96e4087d

show nodes in cluster (on server 1)

user@miguelmota1:~$ kubectl get nodes
NAME                          STATUS     ROLES     AGE       VERSION
miguelmota1.mylabserver.com   NotReady   master    6m        v1.11.2
miguelmota2.mylabserver.com   NotReady   <none>    40s       v1.11.2
user@miguelmota1:~$ 

kube-proxy runs on each node to provide network services

the kube master:

  • has the kube-apiserver
  • the kube-apiserver uses the etcd key/value store for settings
  • the kube-scheduler is responsible for placing pods
  • the cloud-controller-manager is responsible for persistent storage and routing

each node:

  • has a kubelet (takes orders from the master)
  • has a kube-proxy
  • has pods; a pod runs one or more containers

kubernetes objects are persistent entities in the kubernetes system. kubernetes objects are "records of intent" that describe:

  • what applications are running
  • which nodes those applications are running on
  • policies around those applications

object spec

  • provided to kubernetes
  • describes the desired state of the object

object status

  • provided by kubernetes
  • describes the actual state of the object

kubectl turns the yaml file into an api request

common kubernetes objects

  • nodes
  • pods
  • deployments
  • services
  • configmaps

namespaces are virtual clusters. namespaces allow for

  • resource quotas
  • multiple teams of users

node

  • any worker machine (previously called minions)
  • can run pods
  • managed by the master
  • the kubelet orchestrates the containers

cloud controller managers

  • route controller (gce clusters only)
  • service controller
  • PersistentVolumeLabels controller

node controller

  • assigns a CIDR block to a newly registered node
  • keeps track of nodes
  • monitors node health
  • evicts pods from unhealthy nodes (graceful termination)

pod

  • simplest kubernetes object - represents one or more containers running on a single node
  • stateless and disposable

services refer to deployments (port or ip). kubectl can be used imperatively (do this thing) - ex: kubectl run nginx --image=nginx - or declaratively, using a yaml file. if no namespace is given, the "default" namespace is used. pods are managed by deployments; services expose deployments. third parties handle load balancing and port forwarding to those services, through ingress objects.
describe a job and view its logs

kubectl describe job pi
kubectl logs pi-fmctx
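
for context, a minimal sketch of what the "pi" job above might look like - this is the standard pi example from the kubernetes docs, so the image and command are assumptions rather than something recorded in these notes:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl                    # assumed image for the example
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never             # run the task once, don't restart on exit
  backoffLimit: 4                      # retry up to 4 times on failure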

create a pod

kubectl create -f alpine.yaml

delete pod

kubectl delete -f alpine.yaml

delete pod (other way)

kubectl delete pod alpine

delete pod (other way)

kubectl delete pod/alpine

connections between the apiserver and nodes, pods, and services are unencrypted by default, therefore unsafe to run over public networks

nodes are not inherently created by kubernetes; nodes are added to a cluster and a kubernetes object is created to reflect them

the master controls the kubernetes cluster

kubernetes primitives are pod, service, persistentVolume, and deployment

communication between the apiserver and the kubelet on the cluster is not used for keep-alive packets

data formats for kubernetes api calls are JSON and YAML

containers run on nodes; in a typical deployment the kubernetes master listens on port 443

a pod represents a running process

the difference between a docker volume and a kubernetes volume is that in docker a volume is loosely defined, while in kubernetes the volume has the same lifetime as its surrounding pod

MemoryPressure is a node condition that will be true if memory is running low

unique ip addresses are assigned to pods

a kubelet mounts volumes to containers

Minikube is the recommended method for creating a single-node kubernetes deployment on your local workstation. Minikube doesn't require a cloud provider

kubeadm can be used to deploy a multi-node cluster locally, but it's a little more challenging; you need to select a Container Network Interface (CNI) plugin if going this route

get status of node and list DiskPressure and MemoryPressure statuses

kubectl describe node <node-name>

list all pods and which nodes they are currently running on

kubectl get pods --all-namespaces -o wide

list pods running in the kube-system namespace

kubectl get pods -n kube-system

Flannel is a pod networking application that allows pods to communicate; VXLAN is the technology flannel uses

Cluster communications

  • covers communication within the cluster
  • everything in kubernetes goes through the api server
  • TLS is the default encryption
  • most installations handle the certificate creation
  • kubeadm creates certs
  • anything that connects to the API, including nodes, proxies, the scheduler, and volume plugins, should be authenticated

kubernetes has role-base access control (RBAC)

  • certain roles perform specific actions in the cluster

kubelets expose https endpoints which give access to both data and actions on the nodes. By default they are open

  • to secure the endpoints, enable kubelet authentication by starting the kubelet with "--anonymous-auth=false" and assigning an x509 client cert (sketch below)
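
a hedged sketch of the flags involved - --anonymous-auth and --client-ca-file are real kubelet flags, but the ca path here is illustrative:

# disable anonymous access and require x509 client certs (path is an assumption)
kubelet --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/pki/ca.crt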

kubernetes uses etcd for configuration and secrets; it acts as the k/v store for the entire cluster. gaining write access to etcd is the equivalent of gaining root on the whole cluster. isolate etcd behind a firewall and only allow requests from the API servers. rotate credentials frequently. don't allow third parties into the kube-system namespace.

setting up HA

  • create reliable nodes that will form the cluster
  • set up a redundant and reliable storage service with a multinode deployment of etcd
  • start replicated and load balanced kubernetes api servers
  • set up master-elected kubernetes scheduler and controller manager daemons

everything that talks to the api must go through the load balancer

step one

  • make the master node reliable
  • ensure services automatically restart if they fail
  • the kubelet already does this
  • if the kubelet goes down, need something to restart it
  • "monit" on debian systems or systemctl on systemd-based systems

step two

  • etcd already replicates storage to all master nodes
  • to lose data, all three nodes would need disk failures
  • increase 3 to 5 nodes for more reliability
  • can use a clustered file system like Gluster or Ceph, or a RAID array on each physical machine

step three

  • create the initial log file: touch /var/log/kube-apiserver.log
  • create a /srv/kubernetes/ directory on each node which should include:
    • basic_auth.csv - basic auth user/pwd
    • ca.crt
    • known_tokens.csv - tokens that entities (ie the kubelet) can use to talk to the apiserver
    • kubecfg.crt - client cert, pub key
    • kubecfg.key - client cert, priv key
    • server.crt - server cert, pub key
    • server.key - server cert, priv key
  • copy kube-apiserver.yaml into /etc/kubernetes/manifests on each of the master nodes
  • the kubelet monitors that directory and automatically creates an instance of the kube-apiserver using the pod definition specified in the file

step four

  • allow state to change
  • controller managers and scheduler
  • these processes must not modify the cluster's state simultaneously; use a lease-lock
  • each scheduler and controller manager can be launched with a "--leader-elect" flag (example below)
  • scheduler and controller-manager can be configured to talk to the api server on the same node or to a load balanced IP of the api servers
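
for example, abbreviated sketches of the two daemons with leader election on - the kubeconfig paths assume the files kubeadm wrote earlier, and other required flags are omitted:

kube-scheduler --leader-elect=true --kubeconfig=/etc/kubernetes/scheduler.conf
kube-controller-manager --leader-elect=true --kubeconfig=/etc/kubernetes/controller-manager.conf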

create empty log files on each node so that docker will mount the files and not make new directories

  • touch /var/log/kube-scheduler.log
  • touch /var/log/kube-controller-manager.log

set up descriptions of the scheduler and controller manager pods on each node by copying kube-scheduler.yaml and kube-controller-manager.yaml into /etc/kubernetes/manifests/

if a worker goes down, kubernetes will detect it and spin up replacement pods

kubetest is a testing suite for kubernetes e2e testing. Ceph is an object store. Canal and WeaveNet are CNI providers. the CNI must enforce the network policies. the master runs the apiserver. you must choose a CNI when deploying kubernetes with kubeadm.

describe a deployment

kubectl describe deployment nginx-deployment

output deployment yaml

kubectl get deployment nginx-deployment -o yaml

update the deployment image

kubectl set image deployment/nginx-deployment nginx=nginx:1.8

see update status

kubectl rollout status deployment/nginx-deployment

update the deployment image using a yml file

kubectl apply -f nginx-deployment.yaml
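
a minimal sketch of what nginx-deployment.yaml might contain - the labels and replica count are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                  # assumed replica count
  selector:
    matchLabels:
      app: nginx               # must match the pod template labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.8       # the image set in the example above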

view current deployments

kubectl get deployments

view deployment history revisions

kubectl rollout history deployment/nginx-deployment --revision=2

revert to a revision

kubectl rollout undo deployment/nginx-deployment --to-revision=2

get pod by label

kubectl get pods -l app=nginx -o wide

get status of pods

kubectl get pods name-of-pod -o wide

create a k/v map

kubectl create configmap my-map --from-literal=school=LinuxAcademy

get list of maps

kubectl get configmaps

list k/v of map

kubectl describe configmaps my-map

output map as yml

kubectl get configmap my-map -o yaml

use the logs subcommand to display environment variables

kubectl create -f pod-config.yaml
kubectl get pods --show-all
kubectl logs config-test-pod
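
a sketch of what pod-config.yaml might look like, assuming it dumps an environment variable populated from the my-map configmap created above:

apiVersion: v1
kind: Pod
metadata:
  name: config-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                        # assumed image
    command: ["/bin/sh", "-c", "env"]     # print env vars so they show up in the logs
    env:
    - name: SCHOOL
      valueFrom:
        configMapKeyRef:
          name: my-map                    # configmap created earlier
          key: school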

use configmap to decouple configuration from yaml file

scale up number of replicas (pods)

kubectl scale deployment/nginx-deployment --replicas=3

environment variables are used to configure an application in a container

kubectl delete pod podname - the deployment will spin up another pod to match the number of replicas

Always, OnFailure, Never are valid restart policies

cpu:"250m" is how you limit cpu utilization to one quarter (m is for millicpus)

a DaemonSet is used for a CNI container that needs to run on every node

node labels (matched by a pod's nodeSelector) are used to assign a pod to a particular node

pods make up deployments. services point to deployments.

labels are used to select and identify objects. annotations allow for a wider variety of characters that labels do not allow. both use k/v pairs, like configmaps

set a label on a pod

kubectl label pod mysql-foobar test=sure --overwrite

get information by label

kubectl describe pod -l test=sure

taints label nodes that are going to repel work

untaint

kubectl taint nodes my-node node-role.kubernetes.io/master-

taint

kubectl taint nodes my-node node-role.kubernetes.io/master=:NoSchedule

deploy pods to a particular node (ie one with specific hardware requirements)

first label the node (ie with a net label)

kubectl label node node1 net=gigabit

"nodeSelector" is a pod property which you can set the label to which deploy to

show info for the only running pod

kubectl describe pod

the "schedulerName" tag in the spec can be used to specify which scheduler a pod should use. defaults to "default-scheduler"

a scheduler is a pod on the master node

if a pod requests more resources than are available, it will not be scheduled until a node with sufficient resources becomes available

taints are used to repel certain pods from nodes and are applied to nodes

anti-affinity can be used to make two pods run on different nodes, to avoid sharing failure domains

podAffinity is used for placing two or more pods on the same node

the scheduler determines which node will be used to instantiate the new pod

annotations are important when using multiple schedulers because they remind operators which scheduler was used to place (or failed to place) a pod. annotations are used to provide additional non-identifying info about a pod, like the app version or the scheduler that placed it

if a toleration and a taint match during scheduling, the taint is ignored and the pod might be scheduled to the node. tolerations are applied to pods and allow the pod to schedule onto nodes with matching taints. one or more taints applied to a node mark that the node should not accept any pods that do not tolerate the taints.
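
a sketch of a matching taint and toleration - the key/value pair "dedicated=gpu" is illustrative:

kubectl taint nodes my-node dedicated=gpu:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod                # illustrative name
spec:
  containers:
  - name: app
    image: nginx               # assumed image
  tolerations:
  - key: dedicated             # matches the taint above, so the pod may schedule there
    operator: Equal
    value: gpu
    effect: NoSchedule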

"heapster" provides cluster-wide aggregator of monitoring and event data. runs as a pod

cAdvisor is an open source container resource usage and performance analysis agent. runs on port 4194

get logs of container in pod

kubectl logs pod-name

log location in systemd based os

/var/log/containers

tail logs

kubectl logs podname -f
kubectl logs podname --tail=10

/var/log/pods is where pod logs live on the node, including those of the kubernetes k/v store (etcd), which runs as a static pod

read a file inside a container

kubectl exec mypod -- cat /var/log/applog

get shell prompt to container

kubectl exec -it mypod --container sidecar1 -- /bin/bash

view metrics

kubectl top [nodes | pods]

get logs back from a dead pod

kubectl logs podname --previous

updating

sudo apt update && sudo apt install -y kubelet

upgrade the cluster with kubeadm

kubeadm upgrade apply v1.9.1

drain pods

kubectl drain nodename --ignore-daemonsets
systemctl status kubelet
kubectl uncordon mynode

a pod must be evicted (to run on a different node) when trying to update the node it's on

list tokens

sudo kubeadm token list
sudo kubeadm token generate
sudo kubeadm token create <token from prev cmd> --ttl 3h --print-join-command

"uncordon" allows the scheduler to once again allow pods to be scheduled on the node

any drains that cause the number of ready replicas to fall below the specified budget are blocked

"Node Self Registration Mode" is the mode cluster should be to add more nodes. When the kubelet flag "--register-node=true" (default) is set the kubelet will attempt to register itself with the API server.

append "--runtime-config=api/all=false,api/v1=true" to use only v1 api

if nodes fail while upgrading, run kubeadm again; it is idempotent

execute command inside of container

kubectl exec mypod -- /usr/bin/id

"Termination Messages" are logs about fatal events

Authorization Methods are ABAC, RBAC, Webhook

Admission control modules can access the content of objects

default authorization mode is "AlwaysAllow"

a network policy defines how pods are allowed to communicate with each other

use labels to select pods and define rules

by default pods accept connections from everyone

network policies are implemented by the network plugin

"podSelector" property in yaml file "metadata" name: allow-all "metadata" name: default-deny

container security context takes precedence over pod security context

a Pod Security Policy is a cluster-level resource that controls security sensitive aspects of the pod specification

PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted by the system

pod and container are the levels a security context can be applied to (sketch below)
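
a sketch showing both levels - the uids are illustrative; the container-level runAsUser overrides the pod-level one:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000            # pod-level default for all containers
  containers:
  - name: app
    image: busybox             # assumed image
    command: ["sh", "-c", "id && sleep 3600"]
    securityContext:
      runAsUser: 2000          # takes precedence for this container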

admins can limit a user in a namespace by creating a new role in the user's namespace with appropriate rules

expose port

kubectl expose deployment webhead --type="NodePort" --port 80

"NodePort" means all node ports in the entire cluster

show exposed ports

kubectl get services

kube-proxy redirects to the appropriate node in the cluster

ingress is an api object that manages external access to the services in a cluster. ingress could be a gateway managed by a cloud provider

a service is a kubernetes object that identifies a set of pods using label selectors

ingress routes to services and pods that have IPs only routable by the cluster network; an ingress is a collection of rules that allow inbound connections

users request ingress by POSTing to the apiserver

most cloud providers deploy an ingress controller on the master. each ingress controller pod must be annotated with the appropriate class so that kubernetes knows it's an ingress controller.

metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /

show ingress rules

kubectl get ing
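
a sketch of an ingress with one rule, using the extensions/v1beta1 api that was current for kubernetes 1.11 - the host is illustrative and the backend assumes the webhead service exposed earlier:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress              # illustrative name
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com                # illustrative host
    http:
      paths:
      - path: /
        backend:
          serviceName: webhead       # service exposed earlier in these notes
          servicePort: 80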

secure an ingress with a secret which contains the tls private key (tls.key) and cert (tls.crt)

apiVersion: v1
kind: Secret
metadata:
  name: tls-secret           # illustrative name
type: kubernetes.io/tls
data:
  tls.crt: <base64>
  tls.key: <base64>

the ingress controller is bootstrapped with a load balancing policy that applies to all ingress objects

health checks are not exposed directly through the ingress

edit ingress with default editor

kubectl edit ing <name>

deploy a load balancer

type: LoadBalancer
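
in context, a sketch of a full LoadBalancer service - the name and selector are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: webhead-lb             # illustrative name
spec:
  type: LoadBalancer           # cloud provider provisions an external load balancer
  selector:
    app: webhead               # illustrative selector for the backing pods
  ports:
  - port: 80
    targetPort: 80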

get pods run by kubernetes (system pods)

kubectl get pods -n kube-system

get pods run by the user

kubectl get pods

dns entries are created automatically when the service is created

show deployments

kubectl get deployments

make dns resolvable

kubectl expose deployment dns-target

set a label on a node

kubectl label node mynode foo=bar

shorthand for services

kubectl get svc

deploy a service

kubectl expose deployment deployment-name

start a deployment

kubectl create -f deployment.yml

cm is shorthand for configmap

schedule a pod only on a node with the label "net=gigabit"

apiVersion: v1
kind: Pod
metadata:
  name: gigabit-pod        # illustrative name
spec:
  containers:
  - name: app              # illustrative container name
    image: nginx           # assumed image
  nodeSelector:
    net: gigabit           # matches the node label applied earlier

recycling policies of PVs

  • Retain (keep the contents)
  • Recycle (scrub the contents)

the kubelet is the control plane agent that runs on each node

get all recent events sorted by their timestamp

kubectl get events --sort-by=".metadata.creationTimestamp"

delete all objects created by a file

kubectl delete -f mistake.yml

po is the abbreviation for pod

return a node to service

kubectl uncordon mynode

create an nginx deployment with replicas without using yml

kubectl run nginx --image=nginx --replicas=3

ds is abbreviation for DaemonSet

all kubernetes yaml files begin with these keys

apiVersion:
kind:
metadata:

a kubernetes object has a "spec" which acts as the "record of intent". the other main part is the "status"

shutdown a deployment

kubectl delete deployments/my-deployment

edit a live pod

kubectl edit pod mypod

set default editor with

KUBE_EDITOR="nano"

deploy is the abbreviation for deployment

get back the yaml describing a deployment

kubectl get deployment mydeployment -o yaml

delete everything under a namespace, including the namespace itself

kubectl delete namespace mynamespace

service is a set of running pods that work together

the scheduler determines where to deploy pod

a replication controller is a loop that drives the current state towards the desired state

list secrets

kubectl get secrets

"restartPolicy: Never" will only run task once. Used for db migrations

see running jobs

kubectl get jobs

show list of ips of all the pods (routed through load balancer)

kubectl describe service myservice

containers within a pod can reach each other's ports (they share the same network namespace)

flanneld allocates subnet leases to each host; flanneld runs on each host via a DaemonSet

a cloud provider that supports kubernetes-provisioned load balancers is required to specify a service type of "LoadBalancer"

an ingress controller compatible with available and appropriate service providers like load balancers is required to request an ingress resource

ingress is an api object

network policies determine how sets of pods are allowed to communicate with each other

the CNI handles inter-pod communication

ingress was introduced in kubernetes 1.1+

all traffic is sent to a single host if an ingress request is made with no associated rules

the result of a service type of ClusterIP is a single IP address within the cluster that redirects traffic to a pod serving the application, possibly on a different node

ClusterIP is most commonly used with 3rd party load balancers

order is Preamble, podSelector, ingress, egress

.local (the cluster DNS domain, cluster.local by default) is how pods can resolve hostnames

native pod storage is ephemeral

spec.volumes indicates which volumes to provide for the pod

spec.containers.volumeMounts indicates where to mount these volumes in the containers

volumes cannot mount onto other volumes

CSI = Container Storage Interface

"downwardAPI" mounts a directory and writes data in plain text files

"emptyDir" - created when a pod is assigned to a node. exists only when pod runs on a particular node

a container crashing does not delete storage from a pod

persistentVolume - api for users that abstracts implementation details of storage

persistentVolumeClaim - method for users to claim durable storage regardless of implementation

Persistent Volume (PV)

  • provisioned storage in the cluster
  • do not share lifecycle of pod

PersistentVolumeClaim (PVC)

  • a request for storage by a user
  • just as pods consume node resources, PVCs consume PV resources
  • just as pods can request specific cpu and memory, claims can request specific sizes and access modes

PVs and PVCs have a set lifecycle

  • provision
  • bind
  • reclaim

storage AccessModes

  • ReadWriteOnce - can be mounted as r/w by one node only (RWO)
  • ReadOnlyMany - can be mounted read-only by many-nodes (ROX)
  • ReadWriteMany - can be mounted r/w by many nodes (RWX)
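
a sketch of an NFS-backed PV and a PVC that claims it - the server IP and size are illustrative, and the path matches the NFS export set up below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv                 # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany              # RWX - many nodes can mount r/w
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10          # illustrative NFS server IP
    path: /var/nfs/general
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc                # illustrative name
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi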

make directory to share with pods

sudo mkdir -p /var/nfs/general
sudo chown nobody:nogroup /var/nfs/general
sudo vim /etc/exports
# add:
# /var/nfs/general <local ip of node>(rw,sync,no_subtree_check)
# then restart
sudo systemctl restart nfs-kernel-server

install nfs on each node (including master)

sudo apt install nfs-common