Kubernetes from Basics to Guru

K8s resources:

  1. Container
  2. Pod
  3. Deployment = the app (a minimal Deployment and Service manifest is sketched after this list)
  4. Service
  5. Ingress = exposes your Service outside the cluster
  6. Volumes = storage inside the container; PersistentVolume for durable storage
  7. ConfigMap = configuration made available inside the container
  8. Secret = sensitive configuration made available inside the container
  9. CronJob = fires a Job on a schedule, which starts the app
  10. Job = runs a Pod that starts the app and lets it run to completion
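As a sketch only (assuming an app called `myapp` and the stock nginx image, neither of which is in the original notes), this is roughly what a minimal Deployment plus ClusterIP Service looks like:

```yaml
# Minimal sketch: a Deployment (the app) and a Service in front of it.
# Names, labels, and the image are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
```

Apply it with `kubectl apply -f myapp.yaml`; the sketches in the later sections reuse these assumed names.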

Networking in K8s

Nodes are connected to an internal network (the cluster network) and to the external network, both software-defined networks (SDN). Another software-defined network is the pod network: each pod (p1, p2, p3, ...) has its own IP address.

To access the pods you need a Service, whose default type is ClusterIP. A Service is connected to the cluster network by default and load-balances across the pods behind it. It is an API-defined load balancer with its own IP address, but that IP is by default visible inside the cluster only; it is not visible externally, so out of the box a Service cannot be reached from outside. There are two solutions (both sketched below):

  1. Ingress, which sits logically outside the cluster. The Ingress is connected to the DNS name you provide, so users resolve it with a DNS query. It connects directly to the Service and knows on which port the Service is available. Ingress handles HTTP/HTTPS traffic only.
  2. NodePort, which exposes a port on the cluster nodes (in the 30000-32767 range by default); traffic to that port is forwarded to the pod port. The user resolves a DNS name to an external load balancer, which forwards to the exposed NodePort, and from the NodePort the traffic reaches the pod and the actual application.
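A hedged sketch of the two options, reusing the assumed `myapp-service` from the earlier Deployment example (the host name, ingress name, and node port are also assumptions):

```yaml
# Option 1 sketch: an Ingress (HTTP/HTTPS only) resolved via DNS,
# forwarding to the Service on its known port.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
---
# Option 2 sketch: a NodePort Service exposing a port on every node
# (default range 30000-32767); an external load balancer points at it.
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32080
```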

Decoupling Storage from the Applications

A container has a volume that can point to any storage backend: NFS, S3, GCS, etc. You need decoupling here, because the pod is defined in a YAML file and a hard-coded volume makes that file non-portable: in one cluster you are using local storage, in another you are using Google storage. So you use a PersistentVolume, which can be implemented by any storage type; then you don't need to know at the pod level which storage you are using. A PersistentVolumeClaim (PVC) is connected to the pod: when the pod needs storage, the claim is supposed to bind a PersistentVolume of the type it requests, and if none is found it falls back to a StorageClass that auto-provisions a PersistentVolume.
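A minimal sketch of that decoupling, assuming a StorageClass named `standard` and a claim called `myapp-data` (both illustrative): the pod only references the claim, and the claim decides where the storage actually comes from.

```yaml
# Sketch: a PVC satisfied by a StorageClass, then mounted into a pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # auto-provisions a PersistentVolume if no existing PV matches
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-storage-pod
spec:
  containers:
    - name: myapp
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-data   # the pod never names the storage backend itself
```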

Decoupling Configuration Files and Variables from Applications

The container image is built from a Dockerfile (Containerfile), and you encapsulate that container image in a pod. To keep configuration out of the image, Kubernetes provides resources such as: 1/ ConfigMap, which can be implemented in the cloud or locally and is consumed as environment variables or as a configuration file; 2/ Secret, for configuration that must be kept secure, which can likewise be used anywhere; 3/ Volume, which gives you persistent storage mounted into the pod through a PVC.
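A minimal sketch of consuming both, with assumed names, keys, and values: the ConfigMap lands in the container as environment variables and the Secret as a mounted file.

```yaml
# Sketch: ConfigMap as env vars, Secret as a file mount (illustrative values).
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  APP_MODE: production
---
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret
type: Opaque
stringData:
  db-password: change-me
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-config-pod
spec:
  containers:
    - name: myapp
      image: nginx:1.25
      envFrom:
        - configMapRef:
            name: myapp-config       # every key becomes an environment variable
      volumeMounts:
        - name: secrets
          mountPath: /etc/myapp/secrets
          readOnly: true
  volumes:
    - name: secrets
      secret:
        secretName: myapp-secret     # each key becomes a file under the mount path
```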

Understanding API Access

You have the kube-apiserver, which is required to access etcd, and Roles and ClusterRoles that define the permissions enforced through the kube-apiserver. kubectl is the client utility; it works with ~/.kube/config, and users in Kubernetes are signed PKI certificates. You can also have pods that need information from the kube-apiserver; for them there is a default resource, the ServiceAccount, which is used by pods only. A ServiceAccount is like a user for a pod: when the pod wants to do something with the cluster state stored in etcd it needs some role, and the way to get it is a RoleBinding or ClusterRoleBinding, which binds a Role or ClusterRole. On one side you put the ServiceAccount, on the other side the role you want to bind.
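A minimal sketch of that chain, with assumed names and a deliberately narrow permission (read-only access to Pod objects):

```yaml
# Sketch: ServiceAccount -> RoleBinding -> Role, then a pod that uses the account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-sa-pod-reader
  namespace: default
subjects:
  - kind: ServiceAccount        # one side: who is asking
    name: myapp-sa
    namespace: default
roleRef:
  kind: Role                    # other side: what they are allowed to do
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: api-client-pod
spec:
  serviceAccountName: myapp-sa  # the pod talks to the kube-apiserver as this account
  containers:
    - name: client
      image: bitnami/kubectl:latest
      command: ["kubectl", "get", "pods"]
```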

Istio

The Gateway accepts traffic into the service mesh and points to a VirtualService. The VirtualService's routing rules let you select between v1 and v2 of the app (subsets defined by labels), which is what enables canary deployments. v1 and v2 are the pods. Into each pod a sidecar container is injected; this sidecar runs the Envoy proxy, which is how Istio manages the traffic.
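A hedged sketch of that chain (host, names, and the 90/10 weight split are assumptions): the Gateway admits the traffic, the VirtualService splits it between the v1 and v2 subsets, and a DestinationRule maps those subsets to pod labels.

```yaml
# Sketch: Istio Gateway -> VirtualService (weighted split) -> DestinationRule subsets.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway        # selects Istio's ingress gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "myapp.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - "myapp.example.com"
  gateways:
    - myapp-gateway
  http:
    - route:
        - destination:
            host: myapp-service
            subset: v1
          weight: 90             # most traffic stays on v1
        - destination:
            host: myapp-service
            subset: v2
          weight: 10             # canary share goes to v2
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```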

Using GitOps to Provide Zero-downtime Application Updates

Rolling update
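A minimal sketch of a rolling update, reusing the assumed `myapp` Deployment; the maxSurge/maxUnavailable values are illustrative choices that keep the full replica count serving traffic throughout the update.

```yaml
# Sketch: rolling update strategy on the Deployment (values are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod is created during the update
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: nginx:1.26   # changing the image is what triggers the rollout
```

Committing a new image tag (or running `kubectl set image deployment/myapp myapp=nginx:1.26`) replaces pods one at a time; `kubectl rollout status deployment/myapp` watches the progress.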

Blue/Green deployment
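A hedged sketch of the idea, assuming two parallel Deployments (`myapp-blue`, `myapp-green`) that differ only in a `version` label: the Service selector decides which one receives traffic, and the cut-over is a single label change.

```yaml
# Sketch: the Service selects the "blue" Deployment's pods; switching the
# version label to "green" moves all traffic over at once (names assumed).
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
    version: blue      # change to "green" to cut over; revert to roll back
  ports:
    - port: 80
      targetPort: 80
```

With GitOps the switch is just a commit that edits this selector, and rollback is reverting that commit.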

Canary deployment
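The Istio VirtualService weights shown earlier are one way to run a canary; a plain-Kubernetes sketch (names and replica counts assumed) instead lets a small canary Deployment share the Service selector, so roughly one request in ten hits the new version.

```yaml
# Sketch: stable and canary Deployments share the "app: myapp" label that the
# Service selects on, so traffic splits roughly by replica ratio (9:1 here).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
        - name: myapp
          image: nginx:1.25
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
        - name: myapp
          image: nginx:1.26
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp        # matches both stable and canary pods
  ports:
    - port: 80
      targetPort: 80
```

Promoting the canary means scaling `myapp-canary` up (or updating `myapp-stable` to the new image) and then removing the old track.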
