Material for an introductory Docker & Kubernetes workshop.

Docker-K8s Workshop

Docker

Docker vs VM

  • Fast
  • Isolated (compared to running multiple services in a single VM)
  • Lightweight
  • Define environment with code

Hello World

docker run hello-world

Run a Linux container

docker run -it --rm centos
cat /etc/redhat-release
exit

Run a Python container

docker run -it --rm python:alpine
print('hello')
exit()
General form:

docker run <IMAGE:TAG> <COMMAND>

Example:

docker run -it --rm centos ls

What’s an image? What’s a tag?

Pull an image

docker pull alpine
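
An image is a read-only package of a filesystem plus metadata; a tag names a specific version of that image (latest when omitted). Listing local images shows the REPOSITORY and TAG columns side by side:

docker images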

Run a web service inside a container

Develop a simple API

mkdir backend
cd backend
npm init
npm install --save express

# Program
cat << EOF > index.js
const os = require('os')
const crypto = require('crypto');
const express = require('express')
const app = express()

app.get('/', function (req, res) {
    // CPU intensive operation
    var hash = 'start ' + Math.random();
    for (var i = 0; i < 25600; i++) {
        hash = crypto.createHmac('sha256', '000').update(hash).digest('hex');
    }
    res.send(os.hostname());
})

app.listen(3000, function () {
    console.log('Listening on port 3000!');
})

process.on('SIGTERM', function() {
    process.exit(0);
});

process.on('SIGINT', function() {
    process.exit(0);
});
EOF

Run server

node index.js

Access from browser
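
If a browser is not handy, the same check works from a terminal (assuming the server above is still running on port 3000):

curl http://localhost:3000/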

Build your own image

Create a Dockerfile

cat << EOF > Dockerfile
FROM node:8-alpine
COPY package.json /app/package.json
COPY index.js /app/index.js
RUN cd /app && npm install
CMD ["node", "/app/index.js"]
EOF

Build an image

docker build -t <NAME>-backend .

Run

docker run -it --rm -p 80:3000 <NAME>-backend
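
-p 80:3000 publishes container port 3000 on host port 80, so the service is now reachable through the host's port 80:

curl http://localhost/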

More Docker usage

Run in background

docker run -d -p 80:3000 <NAME>-backend
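
A detached container prints nothing to the terminal; its output can be read on demand (add -f to follow):

docker logs <CONTAINER_ID_OR_NAME>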

View running containers

docker ps

View all containers

docker ps -a

Attach to running container

docker attach <CONTAINER_ID_OR_NAME>

Stop container

docker stop <CONTAINER_ID_OR_NAME>

Delete container

docker rm <CONTAINER_ID_OR_NAME>

Processes inside container

A container can have multiple processes, but when PID 1 exits, the container exits.

docker run -d --name alpine alpine top
docker exec -it alpine sh
ps
kill 1
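
Killing PID 1 stops the container; this can be confirmed from the host:

docker ps -a --filter name=alpine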

Layers
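
Each Dockerfile instruction creates an image layer, and unchanged layers are cached and reused between builds. One way to inspect the layers of the image built above:

docker history <NAME>-backend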

If time permits

Push image

Push image to GCR

# Tag image
docker tag <NAME>-backend us.gcr.io/docker-k8s-workshop/<NAME>-backend:1

# Push
gcloud docker -- push us.gcr.io/docker-k8s-workshop/<NAME>-backend:1
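
The pushed tags can be listed from the registry (assuming the push succeeded and you have access to the project):

gcloud container images list-tags us.gcr.io/docker-k8s-workshop/<NAME>-backend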

Kubernetes

Problems

  • Distributed workload
  • Resilient service, problem detection / recovery
  • Rolling update / rollback
  • Load balancing (Google Cloud Load Balancer)
  • Service discovery (ClusterIP + DNS)
  • Secret management
  • Artifacts storage (GCR in this workshop)
  • Build + deploy pipeline
  • HTTPS
  • Monitoring

Setup

gcloud container clusters get-credentials workshop-cluster --zone us-east1-d --project docker-k8s-workshop

Show nodes

kubectl get nodes

Your first K8s manifest file

---
apiVersion: apps/v1beta1
kind: Deployment

metadata:
  name: <NAME>-backend

spec:
  replicas: 1

  template:
    metadata:
      labels:
        app: <NAME>-backend

    spec:
      containers:
        - name: <NAME>-backend
          image: us.gcr.io/docker-k8s-workshop/<NAME>-backend:1
          ports:
            - containerPort: 3000

Deploy!

kubectl apply -f kube.yaml

Check the Deployment just created

kubectl get deployments

Check Pods

kubectl get pods

Each pod has its own IP

kubectl get pods -o wide
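
One way to confirm a Pod answers on its IP (a sketch: <POD_NAME> and <POD_IP> come from the command above, and it relies on the node:8-alpine image shipping BusyBox wget):

kubectl exec -it <POD_NAME> -- wget -qO- http://<POD_IP>:3000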

Resize deployment (distributed workload)

spec:
  replicas: 3
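
Apply the change and watch the extra Pods appear:

kubectl apply -f kube.yaml
kubectl get pods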

Delete one pod (automatic recovery)

kubectl delete pod <POD_NAME>
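
The Deployment notices the missing replica and creates a replacement; -w keeps watching the Pod list while this happens:

kubectl get pods -w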

Create a service

---
apiVersion: v1
kind: Service

metadata:
  name: <NAME>-backend

spec:
  type: NodePort
  selector:
    app: <NAME>-backend

  ports:
    - name: <NAME>-backend-port
      port: 3000
      protocol: TCP

Check services

kubectl get services

Expose service to the Internet

---
apiVersion: extensions/v1beta1
kind: Ingress

metadata:
  name: <NAME>-backend

spec:
  backend:
    serviceName: <NAME>-backend
    servicePort: <NAME>-backend-port

Check Ingress

kubectl get ingress

Access from browser.
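
The external IP shows up in the ADDRESS column once the load balancer is provisioned, which can take a few minutes; it can also be checked from a terminal (with <INGRESS_IP> taken from the first command):

kubectl get ingress <NAME>-backend
curl http://<INGRESS_IP>/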

Update service

cat << EOF > index.js
const os = require('os')
const crypto = require('crypto');
const express = require('express')
const app = express()

app.get('/', function (req, res) {
    // CPU intensive operation
    var hash = 'start ' + Math.random();
    for (var i = 0; i < 25600; i++) {
        hash = crypto.createHmac('sha256', '000').update(hash).digest('hex');
    }
    res.send('[v2] ' + os.hostname());
})

app.listen(3000, function () {
    console.log('Listening on port 3000!');
})

process.on('SIGTERM', function() {
    process.exit(0);
});

process.on('SIGINT', function() {
    process.exit(0);
});
EOF

# Build
docker build -t <NAME>-backend .

# Tag
docker tag <NAME>-backend us.gcr.io/docker-k8s-workshop/<NAME>-backend:2

# Push
gcloud docker -- push us.gcr.io/docker-k8s-workshop/<NAME>-backend:2

View current Pods

kubectl get pods

Update service

Edit kube.yaml so the Deployment's image tag points to :2, then apply:

kubectl apply -f kube.yaml
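
The rolling update can be followed until it finishes:

kubectl rollout status deployment <NAME>-backend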

View Pods again

kubectl get pods

Rollback

kubectl rollout undo deployment <NAME>-backend
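
Revision history can be inspected before or after rolling back:

kubectl rollout history deployment <NAME>-backend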

View result in browser

Service discovery

Reference: "DNS Pods and Services" (Kubernetes docs)

my-svc.my-namespace.svc.cluster.local
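
Any Pod in the cluster can reach the backend through its Service name via cluster DNS (a sketch, assuming the Service was created in the default namespace):

kubectl exec -it <POD_NAME> -- wget -qO- http://<NAME>-backend.default.svc.cluster.local:3000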

Autoscaling

Pod-level autoscaling: HorizontalPodAutoscaler scales the number of Pods based on observed CPU utilization.

---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: <NAME>-backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: <NAME>-backend
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
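
Alternatively, the same autoscaler can be created imperatively, and its current state checked at any time:

kubectl autoscale deployment <NAME>-backend --min=1 --max=10 --cpu-percent=50
kubectl get hpa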

Node-level autoscaling: GKE's cluster autoscaler adds nodes when the current node pool cannot satisfy all Pod resource requests.
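
Cluster autoscaling is enabled per node pool (a sketch; the pool name default-pool and the node counts are illustrative):

gcloud container clusters update workshop-cluster --zone us-east1-d \
  --enable-autoscaling --min-nodes 1 --max-nodes 5 --node-pool default-pool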

Manage resources

    spec:
      containers:
        - name: <NAME>-backend
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
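
Limits can be set next to requests to cap what the container may consume; the container is throttled at the CPU limit and killed if it exceeds the memory limit (the values below are illustrative):

            limits:
              cpu: 500m
              memory: 400Mi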

Health check

    spec:
      containers:
        - name: <NAME>-backend
          livenessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3 # 3 x 10s = 30 seconds

          readinessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 2
            successThreshold: 1
            failureThreshold: 2 # 2 x 5s = 10 seconds

Anti affinity

  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - <NAME>-backend
                topologyKey: kubernetes.io/hostname
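
After applying, the replicas should be spread across different nodes where possible (a quick check, reusing an earlier command):

kubectl get pods -o wide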