
@mikebway
Last active December 20, 2019 21:03

Kubernetes Reminders

A list of things I will forget if I don't use Kubernetes for a week.

Command Line Completion

Command line completion can be established for both Bash and Zsh shells using the kubectl completion command. Executing the following dumps the completion configuration script to a file that can then be referenced by your .bashrc setup script or equivalent.

kubectl completion bash > .bash_kubectl
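
The Zsh equivalent, and the line that wires the generated file into a startup script, might look like this (a sketch; the file names are just the conventions used above):

```shell
# Generate the completion script for Zsh instead of Bash
kubectl completion zsh > ~/.zsh_kubectl

# Then reference the generated file from .bashrc / .zshrc, e.g.:
#   source ~/.bash_kubectl
```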

Add a Secret for Github or GitLab Docker Repository Access

IMPORTANT: Clear your shell history after doing this or put it in a script file that you delete after running. If you don't, you will be leaving secret values where they can be found.

kubectl create secret docker-registry gitlabcred \
            --docker-server=registry.gitlab.com \
            --docker-username=GIT_USERNAME_GOES_HERE \
            --docker-email=GIT_EMAIL_ADDRESS_GOES_HERE \
            --docker-password=GIT_SECRET_GOES_HERE
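
One way to keep the secret value out of shell history entirely is to read it into an environment variable without echoing it (bash's read -s), then pass the variable; the secret name and server below match the example above:

```shell
# Read the password without echoing it or recording it in history
read -s DOCKER_PW
kubectl create secret docker-registry gitlabcred \
            --docker-server=registry.gitlab.com \
            --docker-username=GIT_USERNAME_GOES_HERE \
            --docker-email=GIT_EMAIL_ADDRESS_GOES_HERE \
            --docker-password="$DOCKER_PW"
unset DOCKER_PW

# Confirm the secret exists (values are shown base64-encoded, not in clear)
kubectl get secret gitlabcred -o yaml
```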

See All the Containers That Are Running

The following lists the containers under each pod, with their CPU and memory usage, for all namespaces:

kubectl top pod --all-namespaces --containers
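
Note that kubectl top depends on the metrics server being available in the cluster. If all you need is the container names per pod, a jsonpath query works without it:

```shell
# Namespace, pod name, and container names, one pod per line
kubectl get pods --all-namespaces \
    -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'
```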

Create a Single Container Pod with YAML from a Github Package Image

Minimal YAML for a single service pod, serving on a single port, looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: partner-directory
  namespace: default
spec:
  restartPolicy: Never
  imagePullSecrets:
    - name: githubcred
  containers:
    - name: partner-grpc
      image: registry.gitlab.com/mikebway/container-playpen/partner-grpc:1.0.0
      ports:
        - containerPort: 50051
          protocol: TCP

Have Kubernetes create the pod from the above YAML as follows:

kubectl create -f partner.yaml
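
After creation, the pod's state can be checked with the usual commands; the pod and container names here are the ones from the YAML above:

```shell
kubectl get pod partner-directory              # current status of the pod
kubectl describe pod partner-directory         # events; useful when image pulls fail
kubectl logs partner-directory -c partner-grpc # logs for a named container in the pod
```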

Deleting a Pod With kubectl

If you have the pod YAML file (e.g. the one above), you can delete the pod as follows:

kubectl delete -f ./pod.yaml
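
Without the YAML file to hand, a pod can also be deleted by name (using the name from the example above):

```shell
kubectl delete pod partner-directory
```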

Deployments Rather Than Simple Pods

Deployments allow additional considerations to be specified, over and above those available to pod definitions. Deployments create pods, adding controls over things like how many replicas of the pod should run.

Here is a deployment specification for a two container pod, with just a single instance / replica:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner-dir
spec:
  replicas: 1
  selector:
    matchLabels:
      app: partner-dir
  template:
    metadata:
      labels:
        app: partner-dir
    spec:
      imagePullSecrets:
        - name: gitlabcred
      containers:
        - name: partner-grpc
          image: registry.gitlab.com/mikebway/container-playpen/partner-grpc:1.0.0
          ports:
            - containerPort: 50051
              protocol: TCP
              name: grpc
        - name: partner-gql
          image: registry.gitlab.com/mikebway/container-playpen/partner-gql:1.0.1
          ports:
            - containerPort: 4000
              protocol: TCP
              name: graphql

Deployments can be managed with the same kubectl apply -f and kubectl delete -f commands as pods. (There is no kubectl update command; use kubectl apply -f, or kubectl replace -f, to push changes.)
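
A few deployment-specific commands worth remembering, using the deployment name from the YAML above:

```shell
kubectl rollout status deployment/partner-dir     # wait for a rollout to complete
kubectl scale deployment/partner-dir --replicas=3 # change the replica count on the fly
kubectl rollout undo deployment/partner-dir       # roll back to the previous revision
```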

Ingress etc

One of the better / simpler descriptions of Ingress vs NodePort vs LoadBalancer: https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-ingress-guide-nginx-example.html.

A step wise description of how to enable ingress for MicroK8s can be found at https://kndrck.co/posts/microk8s_ingress_example/

In summary: to set up ingress on MicroK8s, the Nginx ingress controller has to be deployed and the ingress add-on enabled:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

microk8s.enable ingress

NOTE: The ingress-nginx YAML file referenced here corrects the URL found in the https://kndrck.co/posts/microk8s_ingress_example/ guide, inserting the /static segment in the path.
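
To confirm the controller came up, look for its pod; the label below is the one the ingress-nginx manifests apply, though the namespace varies by install method:

```shell
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
```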

Add a Service Definition to the Deployment YAML

The deployment is unchanged; we just add a service definition to the end, separated by a --- document divider. ClusterIP is the default type but we spell it out to be clear:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner-dir
spec:
  replicas: 1
  selector:
    matchLabels:
      app: partner-dir
  template:
    metadata:
      labels: 
        app: partner-dir
    spec:
      imagePullSecrets:
        - name: gitlabcred    
      containers:
        - name: partner-grpc
          image: registry.gitlab.com/mikebway/container-playpen/partner-grpc:1.0.0
          ports:
            - containerPort: 50051
              protocol: TCP
              name: grpc
        - name: partner-gql
          image: registry.gitlab.com/mikebway/container-playpen/partner-gql:1.0.1
          ports:
            - containerPort: 4000
              protocol: TCP
              name: graphql
  
---

apiVersion: v1
kind: Service
metadata:
  name: partner-dir-service
spec:
  type: ClusterIP
  selector:
    app: partner-dir
  ports:
    - targetPort: 50051
      port: 50051
      name: grpc
    - targetPort: 4000
      port: 4000
      name: graphql
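
A ClusterIP service is only reachable from inside the cluster; for a quick local test, port-forwarding directly to the service works (the URL path depends on how the GraphQL server is configured):

```shell
kubectl port-forward service/partner-dir-service 4000:4000
# then, from another shell:
#   curl http://localhost:4000/graphql
```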

Configure the Ingress Controller to Reference the Deployed App

Google provides one of the better introductions to how to build up the content of a Kubernetes ingress YAML file here: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer.

The Nginx proxy web service will be running on port 80. We map a URL path on that server to our GraphQL service as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /partner-dir/graphql
        backend:
          serviceName: partner-dir-service
          servicePort: 4000
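
Applying and checking the ingress looks like this (the file name is illustrative; the curl assumes the Nginx controller is listening on port 80 on localhost, as in the MicroK8s setup above):

```shell
kubectl apply -f ingress.yaml
kubectl get ingress fanout-ingress          # shows the address once one is assigned
curl http://localhost/partner-dir/graphql   # should reach the partner-gql container
```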

Install and Configure Helm

Install Helm using Homebrew as follows:

brew install kubernetes-helm

For Linux installations, do this:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

NOTE: Helm 3.x does not require the installation of the Tiller service into Kubernetes, making the world a more secure place!

Helm 3.x does not come preconfigured with any chart repositories; you have to add all the ones that you might want to use. The list of repositories known to Helm Hub is maintained in GitHub and is a good starting point for finding the big-name repos: https://github.com/helm/hub/blob/master/config/repo-values.yaml.

To figure out which chart repositories you might care about, look at the list of repo names down the left side of the display at https://hub.helm.sh/charts.

A few obvious repos to add to your Helm configuration are as follows:

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
helm repo add linkerd2 https://helm.linkerd.io/stable
helm repo add gitlab https://charts.gitlab.io/
helm repo add bitnami https://charts.bitnami.com
helm repo add flagger https://flagger.app
helm repo add elastic https://helm.elastic.co
helm repo add openfaas https://openfaas.github.io/faas-netes
helm repo add nginx https://helm.nginx.com/stable
helm repo add zooz https://zooz.github.io/helm/
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo add loki https://grafana.github.io/loki/charts
helm repo add codecentric https://codecentric.github.io/helm-charts
helm repo add aws https://aws.github.io/eks-charts
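
With repos configured, typical day-to-day Helm 3 usage looks like the following (the chart and release names are illustrative):

```shell
helm repo update              # refresh the local cache of chart indexes
helm search repo nginx        # search the configured repos for matching charts
helm install my-release nginx/nginx-ingress   # Helm 3 syntax: release name comes first
helm uninstall my-release     # remove the release again
```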