6 Mar 2018 - Kubernetes (k8s) workshop notes

Workshop GitHub guide: https://github.com/aws-samples/aws-workshop-for-kubernetes

If you want to follow the guide, make sure you:

  1. set up Cloud9 by following the instructions under the heading: Create AWS Cloud9 Environment
  2. set up a Kubernetes multi-master cluster by following the instructions under the heading: Create a Kubernetes Cluster with kops and Kubernetes Cluster Context. A multi-master cluster is suggested, as it makes the examples easier to follow.

Concepts

  1. Pod

    • A Pod is a group of containers (not necessarily Docker; rkt containers are also supported)
    • All containers in a pod always run on the same instance
    • All containers share the same storage and network
      • Every pod has its own unique internal IP address

      • the pod IP address must not be relied on, as it keeps changing across redeployment/recovery

        A Service should be relied on instead, as a service discovery mechanism

    • All containers share the same resources (memory/CPU)
      • there are two types of resource settings for memory and CPU
        1. Limit (upper bound, optional)
          • If exceeded, the pod will be killed and automatically restarted [Burstable]
          • If not defined, the pod can use as much resource as the instance has [BestEffort] (see the sketch at the end of this item)
            • If another pod is added, the amount this pod can use shrinks automatically
        2. Request (how much we need, at minimum) [Guaranteed]
          • this resource is allocated to this pod only and is guaranteed to always be available
          • The pod will only be deployed to an instance that has this amount of memory available
    • Pods support failover by default: k8s restarts a pod when it fails. If it fails again, it waits x seconds before retrying, and the wait grows longer on each subsequent failure
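    • A minimal sketch of a BestEffort pod, i.e. one with no resources block at all (the names below are illustrative, not from the workshop); compare it with the Guaranteed example in the Deployment section:

      apiVersion: v1
      kind: Pod
      metadata:
        name: nginx-pod-besteffort # hypothetical name, for illustration only
        labels:
          name: nginx-pod
      spec:
        containers:
          -
            name: nginx
            image: nginx:latest
            # no resources block: k8s assigns the BestEffort QoS class,
            # so the container may use whatever is free on the node
            ports:
              - containerPort: 80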
  2. Deployment

    • a YAML file is suggested to store all parameters for pod deployment

      example:

      apiVersion: v1
      kind: Pod
      metadata:
        name: nginx-pod-guaranteed2
        labels:
          name: nginx-pod
      spec:
        containers:
          -
            name: nginx
            image: nginx:latest
            resources:
              limits:
                memory: "200Mi"
                cpu: 1
              requests:
                memory: "200Mi"
                cpu: 1
            ports:
              - containerPort: 80
    • k8s supports canary, rolling, and blue/green deployments

    • How redeployment works (see the rolling-update sketch after this list):

      • k8s creates a new ReplicaSet with the updated code, sized identically to the current ReplicaSet
      • each time a new pod starts and passes its health check, one old pod is removed
      • if a new pod fails its health check, all new pods are rolled back and the terminated old pods are started again
      • so be reminded that there is a period during which both the new and old versions run together
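    • A minimal sketch of a Deployment using a rolling update (the names, replica count and maxSurge/maxUnavailable values are assumptions, not taken from the workshop):

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment # hypothetical name
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
        strategy:
          type: RollingUpdate
          rollingUpdate:
            maxSurge: 1       # at most 1 extra pod during the rollout
            maxUnavailable: 0 # never drop below the desired replica count
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
              -
                name: nginx
                image: nginx:latest # changing this image triggers a new ReplicaSet
                ports:
                  - containerPort: 80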
  3. Service

    • A pod has its own IP address, but it changes on redeployment/recovery, so we should rely on the service's IP instead.

    • There are 3 types of IP address for service:

      • Load balancer / ELB (for public traffic)
      • Cluster IP (serves traffic internally)
      • Pod IP
    • Should make use of the Label feature to identify pods within a single cluster

      Example

      apiVersion: v1
      kind: Service
      metadata:
        name: echo-service
      spec:
        selector:
          app: echo-pod # Label
        ports:
          - name: http
            protocol: TCP
            port: 80
            targetPort: 8080
        type: LoadBalancer # <-- an ELB-backed type
  4. Namespace

    • as it is fairly common to run pods developed by different teams / for different projects on the same cluster, Namespaces are needed to avoid pod name collisions.
    • Namespaces can also be used with NetworkPolicy, which can control whether Namespace A accepts traffic from Namespace B but not Namespace C (see the sketch at the end of this item)
    • one common NetworkPolicy implementation is Calico
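    • A minimal NetworkPolicy sketch that only accepts traffic from namespaces labelled team: b (all names and labels below are hypothetical, for illustration only):

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-team-b # hypothetical name
        namespace: namespace-a  # the namespace being protected
      spec:
        podSelector: {} # applies to every pod in namespace-a
        ingress:
          - from:
              - namespaceSelector:
                  matchLabels:
                    team: b # only namespaces carrying this label may send traffic in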
  5. Daemon Set / Monitoring

    • A DaemonSet ensures that a copy of a pod runs on a selected set of nodes (see the sketch below)
    • As new nodes are added to the cluster, pods are started on them. As nodes are removed, the pods are removed through garbage collection.
    • One common use case is a DaemonSet pod that monitors node health (since we only need one such pod per node). Prometheus is the most popular monitoring tool for k8s.
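    • A minimal DaemonSet sketch that runs one monitoring agent per node (the node-exporter image is just one possible choice, not prescribed by the workshop):

      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: node-monitor # hypothetical name
      spec:
        selector:
          matchLabels:
            app: node-monitor
        template:
          metadata:
            labels:
              app: node-monitor
          spec:
            containers:
              -
                name: node-exporter
                image: prom/node-exporter:latest # exposes node metrics for Prometheus
                ports:
                  - containerPort: 9100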
  6. Logging

    • there are multiple ways to achieve logging; the most common patterns are the following:
      1. Sidecar (see the sketch after this list)
        • a helper container in EVERY pod that collects logs from the other containers in the same pod
      2. ELK
        • E stands for Elasticsearch
        • L stands for Logstash
        • K stands for Kibana (UI for log)
        • for more info, see here
      3. EFK
        • E and K same as ELK
        • F stands for Fluentd
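    • A minimal sidecar sketch: the app container writes to a shared volume and a helper container tails the log file (the images, commands and paths are illustrative assumptions):

      apiVersion: v1
      kind: Pod
      metadata:
        name: app-with-log-sidecar # hypothetical name
      spec:
        volumes:
          - name: logs
            emptyDir: {} # shared by both containers, lives as long as the pod
        containers:
          -
            name: app
            image: busybox
            command: ["sh", "-c", "while true; do date >> /var/log/app.log; sleep 5; done"]
            volumeMounts:
              - name: logs
                mountPath: /var/log
          -
            name: log-sidecar
            image: busybox
            command: ["sh", "-c", "touch /var/log/app.log && tail -f /var/log/app.log"]
            volumeMounts:
              - name: logs
                mountPath: /var/log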
  7. ResourceQuota

    • a YAML file that defines the aggregate resource usage (both Limit and Request) allowed across all pods, typically per namespace (a sketch follows below)
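    • A minimal ResourceQuota sketch (the namespace and the numbers are placeholders, not workshop values):

      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: compute-quota # hypothetical name
        namespace: my-namespace # hypothetical namespace
      spec:
        hard:
          requests.cpu: "4"    # total CPU requests allowed across all pods
          requests.memory: 4Gi # total memory requests allowed across all pods
          limits.cpu: "8"      # total CPU limits allowed across all pods
          limits.memory: 8Gi   # total memory limits allowed across all pods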