# Kubernetes Fundamentals
## Why do we need Orchestration?
- To start a container cluster with simple commands.
- To auto-scale workloads up and down with demand.
- To maintain the desired cluster state (rescheduling, self-healing).
## Orchestration Tools for Docker:
- Docker Compose
- Kubernetes
- Mesos
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.
- Automatic bin packing
Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
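Bin packing is driven by the resource requests and limits declared on each container. A minimal sketch (pod and image names are illustrative, and the API version is from current Kubernetes rather than the v1.2.2 used later in these notes):

```yaml
# Sketch: per-container resource declarations the scheduler bin-packs on.
apiVersion: v1
kind: Pod
metadata:
  name: web            # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests:        # what the scheduler reserves when placing the pod
        cpu: 100m
        memory: 128Mi
      limits:          # hard ceiling enforced at runtime
        cpu: 500m
        memory: 256Mi
```

The scheduler places the pod only on a node with at least the requested CPU and memory still unreserved.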
- Self-healing
Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
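The "user-defined health check" is a liveness probe. A sketch (names and endpoint are hypothetical):

```yaml
# Sketch: the kubelet restarts this container whenever the probe fails.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:               # user-defined health check: HTTP GET /healthz
        path: /healthz
        port: 80
      initialDelaySeconds: 5 # grace period before the first check
      periodSeconds: 10      # check every 10 seconds
```

A separate readiness probe (same syntax under `readinessProbe`) controls whether the pod is advertised to clients.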
- Horizontal scaling
Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.
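Manual scaling is `kubectl scale deployment/web --replicas=5`; the CPU-based automatic variant is a HorizontalPodAutoscaler. A sketch (the `web` Deployment is assumed to exist):

```yaml
# Sketch: keep CPU usage near 80% by scaling the Deployment between 2 and 10 pods.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:            # which object to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```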
- Service discovery and load balancing
No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers, and can load-balance across them.
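The stable IP and DNS name come from a Service that selects pods by label. A sketch (names and ports are hypothetical):

```yaml
# Sketch: a virtual IP and DNS name in front of all pods labeled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web         # load-balance across pods carrying this label
  ports:
  - port: 80         # port the Service exposes
    targetPort: 8080 # port the container listens on
```

Inside the cluster, clients reach it simply as `web` (fully qualified: `web.default.svc.cluster.local`).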
- Automated rollouts and rollbacks
Kubernetes progressively rolls out changes to your application or its configuration while monitoring application health, so it never kills all your instances at the same time. If something goes wrong, Kubernetes rolls the change back for you. Take advantage of a growing ecosystem of deployment solutions.
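The rollout pace is controlled by a Deployment's update strategy. A sketch (names and image are hypothetical):

```yaml
# Sketch: replace pods one at a time, never dropping below 2 of 3 replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod created above the desired count
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

If the new version misbehaves, `kubectl rollout undo deployment/web` reverts to the previous revision.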
- Secret and configuration management
Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.
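A sketch of a Secret (the name and value are placeholders, not real credentials):

```yaml
# Sketch: sensitive data stored outside the image and the pod spec.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:            # plain-text input; stored base64-encoded by the API
  password: s3cr3t     # placeholder value
```

A pod consumes it via an environment variable (`valueFrom.secretKeyRef`) or a mounted volume, so rotating the password means updating the Secret, not rebuilding the image. ConfigMaps work the same way for non-sensitive configuration.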
- Storage orchestration
Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
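Pods usually request storage indirectly through a PersistentVolumeClaim, which Kubernetes matches to (or provisions as) a volume from one of those backends. A sketch (name and size are illustrative):

```yaml
# Sketch: claim 1Gi of single-writer storage; the backend is chosen by the cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
  - ReadWriteOnce      # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
```

A pod then mounts it with a `persistentVolumeClaim` volume referencing `data`, without caring which storage system sits behind it.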
- Batch execution
In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
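Batch workloads are modeled as Jobs, which run pods to completion and retry failures. A sketch along the lines of the classic pi example from the Kubernetes docs:

```yaml
# Sketch: run a one-off computation to completion, retrying failed pods.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 1       # the Job is done after one successful run
  backoffLimit: 4      # give up after 4 failed attempts
  template:
    spec:
      restartPolicy: Never   # let the Job controller handle retries
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(1000)"]
```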
# Deploying Kubernetes via Hyperkube

```shell
mkdir kubernetes
cd kubernetes/
curl -O      # supply the URL of the kubectl binary here
chmod +x kubectl
cp -p kubectl /usr/local/bin/kubectl
export K8S_VERSION=v1.2.2
# Run the kubelet inside a container. The hyperkube image name is assumed
# from the "Running Kubernetes locally via Docker" guide of this era; the
# empty flag values below should be filled in for your environment.
docker run \
  --volume=/:/rootfs:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:rw \
  --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
  --volume=/var/run:/var/run:rw \
  --net=host \
  --pid=host \
  --privileged=true \
  --name=kubelet \
  -d gcr.io/google_containers/hyperkube-amd64:${K8S_VERSION} \
  /hyperkube kubelet \
    --containerized \
    --hostname-override="" \
    --address="" \
    --api-servers=http://localhost:8080 \
    --config=/etc/kubernetes/manifests \
    --cluster-dns= \
    --cluster-domain=cluster.local \
    --allow-privileged=true --v=2
```
## K8s Terms:
- kubectl: The command-line client for k8s.
- etcd: The distributed key-value store k8s uses to hold cluster state.
- nodes: The host machines that form the k8s cluster.
- pods: The smallest deployable unit; one or more tightly coupled containers that share a network namespace and storage.
- services: Stable network endpoints (a virtual IP and DNS name) that expose the applications running in pods.
- replicas: The desired number of identical copies of a pod.
- deployments: Controllers that manage a replicated set of pods declaratively, driving rollouts and rollbacks.
## K8s commands:
```shell
kubectl get nodes         # list the nodes in the cluster
kubectl get pods          # list pods
kubectl get services      # list services
kubectl get deployments   # list deployments
kubectl create -f guestbook_example_k8s.yaml   # create the resources defined in a manifest
kubectl delete -f guestbook_example_k8s.yaml   # delete those same resources
```
## Deploying App (GuestBook Example)
Sample YAML download:

```shell
kubectl create -f guestbook_example_k8s.yaml
```
# Remove K8s containers
```shell
# Warning: this force-removes ALL containers on the host, not only the k8s ones.
docker rm -f $(docker ps -a -q)
```