Making cloud native computing universal and sustainable.

Docker - Open source containerization

Operating-system-level virtualization, also known as containerization, refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances. Such instances, called containers, partitions, virtualization engines (VEs) or jails (FreeBSD jail or chroot jail), may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside a container can only see the container's contents and devices assigned to the container.

Overview

  • Bundle all your application dependencies into a Docker image
  • Portable - run it as a container on macOS, Windows, and Linux servers
  • Consistent - across dev, staging, and production clusters
  • Reusable - Docker Hub provides official images

Writing a Dockerfile

Core Dockerfile instructions: FROM <base_image>, ARG, ENV, RUN, CMD

Multi-stage Docker builds help keep build tooling and sensitive information (credentials, sources) out of the final image.
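
A minimal sketch of a multi-stage build, assuming a Go service (image names, versions, and paths are illustrative):

# Build stage: full toolchain, sources, and any build-time secrets live here
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

# Final stage: only the compiled binary is copied over; the build stage is discarded
FROM alpine:3.10
COPY --from=build /bin/app /usr/local/bin/app
CMD ["app"]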

Tagging images

Before deploying any image, create an explicit tag; avoid deploying latest (or :master).

Major release
	<image-name>:<version>

Minor release

	<image-name>:<version>-<commit-id-7chars>
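
For example, a sketch of tagging and pushing both release styles (the image and registry names are illustrative):

docker tag voice-worker:latest registry.example.com/voice-worker:1.4
docker tag voice-worker:latest registry.example.com/voice-worker:1.4-9fceb02
docker push registry.example.com/voice-worker:1.4
docker push registry.example.com/voice-worker:1.4-9fceb02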

Docker registry

Containers

Docker commands
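
A few frequently used commands (a quick sketch, not exhaustive):

docker build -t <image-name>:<tag> .            # build an image from a Dockerfile
docker run -d -p 8080:8080 <image-name>:<tag>   # run a container in the background
docker ps                                       # list running containers
docker logs -f <container-id>                   # follow a container's logs
docker exec -it <container-id> sh               # open a shell inside a running container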

References

12 Factor App - Docker

EFK / ELK Stack

  • Collector - Fluentd / Beats (Filebeat, Metricbeat)
  • Backend store - Elasticsearch
  • Visualization - Kibana

Visualizing logs in Kubernetes with FluentD/ES/Kibana

EFK

  • Collect stdout/stderr logs with Fluentd running as a DaemonSet in the Kubernetes cluster.
  • Enrich the logs with Kubernetes metadata.
  • Rotate and back up all raw logs (with their Kubernetes metadata) to S3, in case a backend store other than Elasticsearch is needed.
  • Store all logs in the Elasticsearch backend in parsed form.
  • Back up the Elasticsearch indices periodically.
  • Connect a Kibana dashboard to the Elasticsearch backend and query the logs.

fluent-plugin-elasticsearch

fluent-plugin-kubernetes_metadata_filter
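
Both plugins can be installed into the Fluentd image (a sketch, assuming a stock Fluentd install with fluent-gem on the PATH):

fluent-gem install fluent-plugin-elasticsearch
fluent-gem install fluent-plugin-kubernetes_metadata_filter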

EFK stack - kubernetes

Setup Kops CLI and kubectl CLI

curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
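
kubectl can be installed the same way (a sketch using the macOS binary to match the darwin kops download above; swap darwin for linux as needed):

curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/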

AWS - Kops

Setup AWS CLI and kops IAM user/group

aws iam create-group --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

aws iam create-user --user-name kops

aws iam add-user-to-group --user-name kops --group-name kops

aws iam create-access-key --user-name kops

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

Cluster state storage

aws s3api create-bucket \
    --bucket product-example-com-state-store \
    --region us-west-2	\
    --create-bucket-configuration LocationConstraint=us-west-2
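
The kops docs also recommend enabling versioning on the state bucket so that cluster state can be recovered (optional):

aws s3api put-bucket-versioning \
    --bucket product-example-com-state-store \
    --versioning-configuration Status=Enabled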

Create cluster

export NAME=product.k8s.local
export KOPS_STATE_STORE=s3://product-example-com-state-store
aws ec2 describe-availability-zones --region us-west-2
kops create cluster \
    --zones us-west-2a \
    ${NAME}
kops edit cluster ${NAME}
kops update cluster ${NAME} --yes
kubectl get nodes
kops validate cluster

kops delete cluster --name ${NAME}          # preview what would be deleted
kops delete cluster --name ${NAME} --yes    # actually delete the cluster

Run k8s dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kops get secrets kube --type secret -oplaintext
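
To reach the dashboard locally, a common approach is to proxy the API server (the exact dashboard URL depends on the dashboard version deployed):

kubectl proxy
# then open, e.g., http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/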

Cluster spec & network topology

AWS - Kops - Terraform
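
kops can emit Terraform configuration instead of applying changes itself (a sketch, reusing ${NAME} and the state store from above):

kops update cluster ${NAME} --target=terraform --out=./out/terraform
cd out/terraform
terraform init
terraform plan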

OnPremise - Kops

References

https://kubernetes.io/docs/getting-started-guides/scratch/

https://github.com/kubernetes/kops

https://github.com/kubernetes/kops/blob/master/docs/aws.md

https://kubernetes.io/docs/getting-started-guides/kops/

https://kubernetes.io/docs/getting-started-guides/aws/

https://kubernetes.io/docs/getting-started-guides/kubespray/

Kubeflow - Codelabs

Kubeflow - User guide

# download ksonnet for Linux (including Cloud Shell)
KS_VER=ks_0.9.2_linux_amd64

# download ksonnet for macOS
KS_VER=ks_0.9.2_darwin_amd64

# download the ksonnet tarball
wget https://github.com/ksonnet/ksonnet/releases/download/v0.9.2/$KS_VER.tar.gz

# unpack it
tar -xvf $KS_VER.tar.gz

# add the ks command to PATH
PATH=$PATH:$(pwd)/$KS_VER

# grant the current gcloud account cluster-admin
kubectl create clusterrolebinding default-admin \
      --clusterrole=cluster-admin --user=$(gcloud config get-value account)

Follow steps in User guide

Kubernetes

  • Automates deployments, scaling
  • Deploys containers based on OS-level virtualization instead of hardware-level virtualization
  • Decoupled from the underlying infrastructure and OS distributions
  • Fast, lightweight and portable.
  • Service discovery
  • Load balancing
  • Secrets
  • Health checks
  • Auto scaling/restart/healing of nodes
  • Zero downtime deploys

Why Kubernetes?

Concerns it addresses:

  • Lock-in to a particular cloud provider (GCP, AWS, or Azure)
  • On-premise deployment
  • Downtime on production changes
  • Infrastructure cost
  • Storage
  • Modular infrastructure as code

Kubernetes Core Concepts

Kubernetes Master

Maintains the health of the cluster and interacts with the underlying cloud provider.

Components: kube-apiserver, kube-scheduler, kube-controller-manager, etcd, cloud-controller-manager. Addons: DNS, Web UI.

Kubernetes Node (minion)

Runs all the workloads. Two node processes: kubelet and kube-proxy.

Kubernetes objects

Pod

The smallest and simplest unit in the Kubernetes object model. A Pod represents a single instance of a running process in the cluster and can contain one or more containers.

Service

A logical set of Pods, with load balancing across them. Each Pod has its own IP, but a Service is needed to give them a stable address and expose them publicly.

Volume

Namespace

Default namespaces: default, kube-system, kube-public.

Labels and selectors

Annotations

Deployment

  • Declarative updates for Pods and ReplicaSets

DaemonSet

ReplicaSet

Access Kubernetes Cluster

Refer

openssl genrsa -out mithun.key 2048
openssl req -new -key mithun.key -out mithun.csr -subj "/CN=mithun/O=admin"
openssl x509 -req -in mithun.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out mithun.crt -days 500
kubectl config set-cluster <cluster_name> --server=https://<master-node-ip>:<master-node-port> --insecure-skip-tls-verify=true
kubectl config get-clusters

kubectl config set-credentials <cluster_name> --client-certificate=mithun.crt --client-key=mithun.key --cluster=<cluster_name>
kubectl config set-credentials <cluster_name> --username=<username> --password=<password> --cluster=<cluster_name>

kubectl config set-context <cluster_name> --user=<cluster_name> --cluster=<cluster_name>
kubectl config use-context <cluster_name>
kubectl config view
kubectl get pods

Kubernetes Cluster configurations

Group all the Kubernetes and Docker configurations in one place: k8s-configs for manifests, dockerfiles for base Docker images.

Services

  • Vault (for storing secrets)
  • Vault-ui
  • Kube-ops-view
  • All other microservices

K8S configs

Deployment.yaml

Create a label ‘app’ for grouping pods
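
A minimal sketch of such a Deployment (the name, image tag, and port are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: voice-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: voice-worker            # the 'app' label used for grouping
  template:
    metadata:
      labels:
        app: voice-worker
    spec:
      containers:
      - name: voice-worker
        image: voice-worker:1.4    # illustrative image tag
        ports:
        - containerPort: 8080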

Service.yaml
Use ClusterIP to expose services internally; create an Ingress when they need to be exposed publicly.

ClusterIP - Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType
LoadBalancer - Exposes the service externally using a cloud provider’s load balancer
NodePort - Exposes the service on each Node’s IP at a static port (the NodePort)
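
A matching ClusterIP Service for the Deployment sketched above (names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: voice-worker
spec:
  type: ClusterIP
  selector:
    app: voice-worker
  ports:
  - port: 80
    targetPort: 8080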
Ingress.yaml
Pvc.yaml - Persistent Volume Claim

k8s commands

kubectl apply -f <k8s-spec-directory>/, e.g. kubectl apply -f juno/

telepresence --swap-deployment voice-worker --docker-run -it -v $PWD:/home/voice-worker gcr.io/vernacular-tools/voice-services/voice-worker:1

Setting up Vault in local

docker pull vault
docker pull consul
docker pull djenriquez/vault-ui

Vault binary download

docker run --cap-add=IPC_LOCK -p 8200:8200 -e 'VAULT_DEV_ROOT_TOKEN_ID=roottoken' -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200' -d --name=vault vault

docker run -d -p 8201:8201 -e PORT=8201 -e VAULT_URL_DEFAULT=http://192.168.12.155:8200 -e VAULT_AUTH_DEFAULT=GITHUB --name vault-ui djenriquez/vault-ui
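
To verify the dev server is up (a sketch, assuming the vault CLI is also installed on the host):

export VAULT_ADDR=http://127.0.0.1:8200
vault status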

Next Steps

  • Telepresence

  • Minikube

  • Dockers for development

  • Helm

References

Kubernetes - Design principles

Kubernetes configuration examples

GKE - letsencrypt

Kubernetes - Vault integration

Kubernetes - NFS on GCP

KubeSpray

Meet the underlay requirements

  • Install Python 3 and pip3:

sudo apt update
sudo apt install python3-pip

  • Clone the Kubespray repository (see the command after this list).

  • Install the Python requirements on the automation server:

sudo python3 -m pip install -r requirements.txt
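
A minimal clone step (assuming the current upstream location of the repository):

git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray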

This pulls in Ansible (v2.5+), python-netaddr, and Jinja (2.9+).

Target servers must have access to a Docker image registry.

Configure target servers to allow IPv4 forwarding.

Copy the SSH key to all the target servers in the inventory.

Disable the firewall in the network of the target servers.

If Kubespray is run from a non-root user account, a correct privilege escalation method must be configured on the target servers, and the ansible_become flag or the command parameters --become or -b must be specified.

Compose an inventory file

cp -rfp inventory/sample inventory/voice-cluster
declare -a IPS=(10.160.0.2 10.160.0.3 10.160.0.4)
CONFIG_FILE=inventory/voice-cluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
cat inventory/voice-cluster/hosts.ini 

Plan your cluster deployment

IMPORTANT: Edit inventory/voice-cluster/group_vars/*.yml to override default variables.

Deploy a Cluster

# Generic form:
ansible-playbook -i inventory/voice-cluster/hosts.yml cluster.yml -b -v \
  --private-key=~/.ssh/private_key

# For example, with the hosts.ini inventory built above, as root on GCE:
ansible-playbook -u root -b -v -i inventory/voice-cluster/hosts.ini cluster.yml --private-key=~/.ssh/google_compute_engine
cat inventory/voice-cluster/credentials/kube_user.creds 
ssh -i ~/.ssh/google_compute_engine mithun@35.233.186.42
ssh -i ~/.ssh/google_compute_engine mithun@35.230.1.247
ssh -i ~/.ssh/google_compute_engine mithun@35.225.234.120

Verify the deployment
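
A quick check once the playbook finishes (assuming kubectl is configured against the new cluster):

kubectl get nodes
kubectl get pods --all-namespaces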

Reference

Installing Kubernetes On-premises with Kubespray
Kubespray - GitHub
