How to create a local K8s development environment via Minikube

Add an Ingress

Step 1: Add Ingress

k8s/ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helper-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
spec:
  ingressClassName: nginx
  rules:
    - host: helper.mck-p.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: client
                port:
                  number: 5000
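
Before DNS is set up, you can verify the rule by forcing curl to resolve the host to the ingress controller's IP. A minimal sketch; 203.0.113.10 is a placeholder for your controller's external IP:

# Substitute your ingress controller's external IP for the placeholder
curl --resolve helper.mck-p.com:80:203.0.113.10 http://helper.mck-p.com/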

Step 2: Install Ingress Controller in Cluster

This is only needed if you are deploying to DigitalOcean or some other cloud provider; the Minikube ingress addon handles this locally.

➜ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories

➜ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
...Successfully got an update from the "neo4j" chart repository
...Successfully got an update from the "loki" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "istio" chart repository
...Successfully got an update from the "grafana" chart repository
Update Complete. ⎈ Happy Helming!⎈ 

➜ helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true
NAME: nginx-ingress
LAST DEPLOYED: Tue Aug 30 11:37:40 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-ingress-nginx-controller'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
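
Rather than base64-encoding the certificate and key by hand, you can have kubectl build that Secret for you. A sketch, assuming the cert and key already exist as files on disk:

# kubectl handles the base64 encoding of both files
kubectl create secret tls example-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  --namespace foo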
  
➜ kubectl apply -k ./envs/prod
configmap/postgres-config created
secret/api-secrets-t8mk29g5gk created
secret/client-secrets-t59f4k9hkh created
service/api created
service/client created
service/postgres created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
deployment.apps/api created
deployment.apps/client created
deployment.apps/postgres created
ingress.networking.k8s.io/helper-ingress created

Step 3: Wait Until the Ingress Has an Address in K9s

Wait until Address

Once the Ingress has an IP address, point your DNS A record at that IP address.
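
If you prefer the CLI to K9s for finding that IP, kubectl can watch the Ingress. A minimal sketch, assuming the mck-p namespace used throughout:

# Watch the Ingress until the ADDRESS column is populated
kubectl get ingress helper-ingress -n mck-p --watch

# Or grab just the load-balancer IP once it exists
kubectl get ingress helper-ingress -n mck-p -o jsonpath='{.status.loadBalancer.ingress[0].ip}'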

Via Digital Ocean

Local K8s Development Via Minikube

Overview

In this document, we will go over the step-by-step instructions you need to get a local Kubernetes cluster running and deploying our local Node.js applications. We will learn how to use Kustomize and patches to run the same configuration across environments while still being able to patch in environment-specific values like Secrets, Docker host, etc.

You can see the final code at the repo

Prerequisites

You will need the following software installed and working. Below are the versions currently on my machine, but future versions should be okay. Maybe?

➜ kubectl version  
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:41:42Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.4-rc.0", GitCommit:"39f5a506c812c89e8129ec3c6baaf9f495b370ea", GitTreeState:"clean", BuildDate:"2021-10-27T18:59:59Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
➜ docker -v
Docker version 20.10.17, build 100c701
➜ minikube version 
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
➜ k9s version  
 ____  __.________       
|    |/ _/   __   \______
|      < \____    /  ___/
|    |  \   /    /\___ \ 
|____|__ \ /____//____  >
        \/            \/ 

Version:    v0.25.7
Commit:     6b6a490c73af8719a56e1c4a8dec92a6c2466dce
Date:       2021-11-26T21:04:14Z

Step 0: Stop and Delete

If you are trying this tutorial after getting your local cluster into a weird state or you want to start over from scratch, you can stop and delete the minikube cluster:

➜ minikube stop && minikube delete                                                     
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
🛑  1 node stopped.
🔥  Deleting "minikube" in docker ...
🔥  Deleting container "minikube" ...
🔥  Removing /home/patty/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.

Step 1: Create Local Cluster

Before we can do anything, we need a cluster to do it in! We ask Minikube to start one for us, with a few choice add-ons and settings. You can modify these to fit your needs; the important thing is the base command:

➜ minikube start --addons=ingress,metrics-server --cpus=max --kubernetes-version=latest
😄  minikube v1.24.0 on Debian buster/sid
✨  Automatically selected the docker driver. Other choices: virtualbox, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=16, Memory=8000MB) ...
🐳  Preparing Kubernetes v1.22.4-rc.0 on Docker 20.10.8 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    ▪ Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.0.4
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
🔎  Verifying ingress addon...
🌟  Enabled addons: storage-provisioner, default-storageclass, metrics-server, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
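
Once it finishes, it is worth a quick sanity check that kubectl is really pointed at the new cluster:

# The node should report Ready, with the ingress and metrics-server addons enabled
kubectl get nodes
minikube addons list | grep -E 'ingress|metrics-server'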

Step 2: Connect via K9s

Now that we have a cluster set up and kubectl configured, we can start looking around the cluster via the K9s CLI:

k9s

Expected Output

K9s dashboard with zero pods

Step 3: Create Namespace

We have zero Pods but, more importantly, we have zero anything. Let's make sure that everything is set up correctly and that we can run kubectl commands against the cluster by creating a Namespace in K8s.

Create a new folder, k8s, and add the following files:

k8s/kustomization.yaml

namespace: mck-p

resources:
  - ./common

which says that all resources in this file, unless otherwise noted, will be set in the mck-p namespace. It then lists all of the resources that make up this project. Since ./common isn't a file, we know to expect a ./common/kustomization.yaml file:

k8s/common/kustomization.yaml

resources:
  - ./namespace.yaml

which is a list of resources that make up this folder. We can keep nesting like this, with <folder>/kustomization.yaml, as deep as we want. At some point, we're going to need to reference an actual file: some K8s resource definition. For now, we are going to just reference a namespace.yaml file that defines a Namespace in K8s configuration:

k8s/common/namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: mck-p
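
If you want to see exactly what Kustomize will render before touching the cluster, kubectl can print the composed manifests. Purely a preview; nothing is applied:

# Render the kustomization tree to stdout
kubectl kustomize ./k8s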

Now we can run the following command and see that it takes effect:

➜ kubectl apply -k ./k8s           
namespace/mck-p created

K9s

You can now check if there is a new namespace in the cluster via K9s. Type shift+; to bring up the command prompt and type in namespace

Namespace search

and hit enter. You should now see the namespace you created in the list

Namespace list

You can now do anything that you want with the K8s cluster and the kustomization list of resources. A common pattern is to add some application that needs Pods, or instances of Docker containers, accessible to either internal or external clients.

As an example, suppose you have a Node.js TypeScript server that you want to be able to ask for the current time via HTTP requests, and you want to have 3 of those up at any one time with a load balancer in front of them:

Deploy Typescript Application to K8s

Step 1: Init and Install Deps

We would start with creating a basic Typescript Docker Application:

➜ mkdir apps/time && cd apps/time

➜ yarn init -y
yarn init v1.22.19
warning The yes flag has been set. This will automatically answer yes to all questions, which may have security implications.
success Saved package.json
Done in 0.02s.

➜ yarn add -D typescript @types/node
yarn add v1.22.19
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
success Saved 2 new dependencies.
info Direct dependencies
├─ @types/node@18.7.14
└─ typescript@4.8.2
info All dependencies
├─ @types/node@18.7.14
└─ typescript@4.8.2
Done in 1.59s.

➜ npx tsc -init         
yarn run v1.22.19
$ /home/patty/McK-P/apps/time/node_modules/.bin/tsc -init

Created a new tsconfig.json with:
  target: es2016
  module: commonjs
  strict: true
  esModuleInterop: true
  skipLibCheck: true
  forceConsistentCasingInFileNames: true


You can learn more at https://aka.ms/tsconfig
Done in 0.13s.
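
One note on the generated config: the Dockerfile below copies compiled output from /app/dist, so make sure tsconfig.json sets "outDir": "./dist" (the generated file ships with it commented out).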

Step 2: Create Application File

apps/time/index.ts

import http from "http";

const server = http.createServer((req, res) => {
  res.end(new Date().toISOString());
});

const port = process.env.PORT || 5000;

server.listen(port, () => {
  console.log("System Listening at :%s", port);
});

Step 3: Build Dockerfile

apps/time/Dockerfile

FROM node:16 as BUILD

WORKDIR /app

COPY . .

RUN yarn install --frozen-lockfile && npx tsc

FROM node:16 as FINAL

WORKDIR /app

COPY package.json tsconfig.json yarn.lock ./
RUN yarn install --production=true --frozen-lockfile
COPY --from=BUILD /app/dist/ ./dist/

ENV PORT=5000

CMD ["node", "dist/index.js"]

Step 4: Build Docker Image

You will need to build the Docker image in a specific terminal for this to work, because you need to point the Docker environment at Minikube; any Docker commands you run in that terminal will be run in the context of Minikube:

eval $(minikube docker-env) 
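
You can confirm the shell is now talking to Minikube's Docker daemon rather than your host's. A quick sanity check:

# Minikube's daemon carries the cluster's system images, so they show up here
docker images | grep k8s.gcr.io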

We can now build the Docker image from within apps/time:

➜ docker build  -t mck-p/time-app .
Sending build context to Docker daemon  70.75MB
Step 1/11 : FROM node:16 as BUILD
16: Pulling from library/node
76dff75df4d9: Pull complete 
3e8c90a1c4bb: Pull complete 
b3662c105080: Pull complete 
ad5dcb7dd592: Pull complete 
fa57cc7ce341: Pull complete 
2d623c8b550d: Pull complete 
56ed828b953c: Pull complete 
09ff1abee05c: Pull complete 
b596abc1ac96: Pull complete 
Digest: sha256:0c672d547405fe64808ea28b49c5772b1026f81b3b716ff44c10c96abf177d6a
Status: Downloaded newer image for node:16
 ---> c8af85aa3027
Step 2/11 : WORKDIR /app
 ---> Running in 389295fb0bbf
Removing intermediate container 389295fb0bbf
 ---> b9b48ecb61a1
Step 3/11 : COPY . .
 ---> 56960fa89951
Step 4/11 : RUN yarn install --frozen-lockfile && npx tsc
 ---> Running in 29c32f8b9a04
yarn install v1.22.19
[1/4] Resolving packages...
success Already up-to-date.
Done in 0.05s.
npm notice 
npm notice New minor version of npm available! 8.15.0 -> 8.18.0
npm notice Changelog: <https://github.com/npm/cli/releases/tag/v8.18.0>
npm notice Run `npm install -g npm@8.18.0` to update!
npm notice 
Removing intermediate container 29c32f8b9a04
 ---> 7fa5eff4f707
Step 5/11 : FROM node:16 as FINAL
 ---> c8af85aa3027
Step 6/11 : WORKDIR /app
 ---> Using cache
 ---> b9b48ecb61a1
Step 7/11 : COPY package.json tsconfig.json yarn.lock ./
 ---> e83050473d06
Step 8/11 : RUN yarn install --production=true --frozen-lockfile
 ---> Running in 4a3dfe3d5d4b
yarn install v1.22.19
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
Done in 0.83s.
Removing intermediate container 4a3dfe3d5d4b
 ---> 0a673f1d8d51
Step 9/11 : COPY --from=BUILD /app/dist/ ./dist/
 ---> 7d88e830fcaf
Step 10/11 : ENV PORT=5000
 ---> Running in 69fe2ec65936
Removing intermediate container 69fe2ec65936
 ---> b9f64e587102
Step 11/11 : CMD ["node", "dist/index.js"]
 ---> Running in 740266ebce48
Removing intermediate container 740266ebce48
 ---> 8934e7809710
Successfully built 8934e7809710
Successfully tagged mck-p/time-app:latest

Step 5: Add Application to K8s

We can now add this to the K8s resources files:

k8s/apps/time/kustomization.yaml

resources:
  - ./deployment.yaml
  - ./service.yaml

Pretty much every "app" is going to have a Deployment (or StatefulSet) and a Service. A Deployment tells K8s how to create the application and the Service tells K8s how to access it and open its ports.

k8s/apps/time/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: time
  labels:
    app: time
spec:
  replicas: 3
  selector:
    matchLabels:
      app: time
  template:
    metadata:
      labels:
        app: time
    spec:
      containers:
      - name: time
        image: mck-p/time-app
        ports:
        - containerPort: 5000

where we say to create 3 instances of the mck-p/time-app Docker image, with port 5000 exposed, along with some metadata like labels, and call all of that the Deployment named time.

k8s/apps/time/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: time
spec:
  selector:
    app: time
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000

Here we tell K8s to create a Service that exposes port 5000 of the time application (defined in the Deployment above) and gives it an addressable port of 5000 within the cluster.
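
Once the resources are applied and the Pods are Ready, you can confirm the Service actually found them by listing its endpoints. A sketch, assuming the mck-p namespace from earlier:

# Expect three <pod-ip>:5000 entries, one per replica
kubectl get endpoints time -n mck-p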

One last file to add:

k8s/apps/kustomization.yaml

resources:
  - ./time

We can now update the root kustomization file to include - ./apps:

namespace: mck-p

resources:
  - ./common
  - ./apps

and run the apply command in the root of the project:

➜ kubectl apply -k ./k8s
namespace/mck-p unchanged
service/time created
deployment.apps/time created

Step 6: Fix Image Pull Error

Inside of K9s, go to the mck-p namespace via selecting it in the namespace list (see above for instructions). Once there, you should see some errors:

ImagePullPolicy Error
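
If you are not in K9s, the same failure is visible via kubectl, assuming the mck-p namespace:

# STATUS will show ErrImagePull or ImagePullBackOff for each replica
kubectl get pods -n mck-p -l app=time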

This is expected at this point: we never told K8s that the image is already on the machine, so it is trying to pull the latest image from Docker Hub. To fix that, we need to tell K8s to use an imagePullPolicy of Never for our deployment:

k8s/apps/time/deployment.yaml

[...]
    spec:
      containers:
      - name: time
        image: mck-p/time-app
        imagePullPolicy: Never
        ports:
        - containerPort: 5000

and re-running apply:

➜ kubectl apply -k ./k8s
namespace/mck-p unchanged
service/time unchanged
deployment.apps/time configured

which fixes the issue:

Fixed Issue

but now it will never pull from Docker Hub. What we want is to use Never in development but perhaps a different policy in other environments, via a patch.

Step 7: Add Patches

Create a new folder envs/dev and add a kustomization.yaml file:

envs/dev/kustomization.yaml

resources:
- ../../k8s

patches:
- path: image-pull-policy.yaml
  target:
    kind: Deployment
    name: time

which references a patch file that targets the Deployment named time:

envs/dev/image-pull-policy.yaml

- op: add
  path: /spec/template/spec/containers/0/imagePullPolicy
  value: "Never"

We can now remove imagePullPolicy from the Deployment and instead apply the envs/dev folder, and it will add the correct imagePullPolicy for us:

➜ kubectl apply -k ./envs/dev 
namespace/mck-p unchanged
service/time unchanged
deployment.apps/time unchanged

Step 8: Update Docker Image

Let's say that we wanted to add a message around the date:

const server = http.createServer((req, res) => {
  res.end(`Hello at ${new Date().toISOString()}`);
});

Inside the same terminal where you ran the eval command (if you no longer have it, run the eval command again in a new terminal; otherwise Minikube will not see the new image), run the build command again:

$/McK-P/apps/time
➜ docker build  -t mck-p/time-app .
Sending build context to Docker daemon  70.75MB
Step 1/11 : FROM node:16 as BUILD
 ---> c8af85aa3027
Step 2/11 : WORKDIR /app
 ---> Using cache
 ---> b9b48ecb61a1
Step 3/11 : COPY . .
 ---> 56674a2983eb
Step 4/11 : RUN yarn install --frozen-lockfile && npx tsc
 ---> Running in 6ba0da2831b1
yarn install v1.22.19
[1/4] Resolving packages...
success Already up-to-date.
Done in 0.05s.
npm notice 
npm notice New minor version of npm available! 8.15.0 -> 8.18.0
npm notice Changelog: <https://github.com/npm/cli/releases/tag/v8.18.0>
npm notice Run `npm install -g npm@8.18.0` to update!
npm notice 
Removing intermediate container 6ba0da2831b1
 ---> 35e380bb1080
Step 5/11 : FROM node:16 as FINAL
 ---> c8af85aa3027
Step 6/11 : WORKDIR /app
 ---> Using cache
 ---> b9b48ecb61a1
Step 7/11 : COPY package.json tsconfig.json yarn.lock ./
 ---> Using cache
 ---> e83050473d06
Step 8/11 : RUN yarn install --production=true --frozen-lockfile
 ---> Using cache
 ---> 0a673f1d8d51
Step 9/11 : COPY --from=BUILD /app/dist/ ./dist/
 ---> 375a77b37cb9
Step 10/11 : ENV PORT=5000
 ---> Running in 592de061f720
Removing intermediate container 592de061f720
 ---> 88468ee2bd46
Step 11/11 : CMD ["node", "dist/index.js"]
 ---> Running in 15ae2e2f8562
Removing intermediate container 15ae2e2f8562
 ---> de9fb622b6b3
Successfully built de9fb622b6b3
Successfully tagged mck-p/time-app:latest

and now we can restart the Pods with the new image by hitting ctrl+k on them in K9s:

K9s commands
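
If you would rather do this from the CLI, a rollout restart recreates the Pods the same way, assuming the mck-p namespace:

# Recreate the Pods so they pick up the rebuilt :latest image
kubectl rollout restart deployment/time -n mck-p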

Step 9: Port Forward

We have some Pods up that are running some code, and we need to access them. So let's search service inside K9s to get to the Service list:

Service List

and we can hit shift+f to select the ports to forward to and from:

Port Forwarding

and hit enter a few times. We should now be able to hit localhost:5000 and have the request routed to the time deployment:

➜ curl localhost:5000
Hello at 2022-08-30T12:54:12.748Z
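
If you are not in K9s, kubectl can set up the same forward. A sketch, assuming the mck-p namespace:

# Forward local port 5000 to the time Service inside the cluster
kubectl port-forward service/time 5000:5000 -n mck-p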

We can test this end-to-end by updating the code, rebuilding the Docker image, and seeing if the message changes:

const server = http.createServer((req, res) => {
  res.end(`Hello from the server. The time is ${new Date().toISOString()}`);
});
$ McK-P/apps/time
➜ docker build  -t mck-p/time-app .
Sending build context to Docker daemon  70.75MB
....
Successfully tagged mck-p/time-app:latest

and ctrl+k all the Pods via K9s, re-add the port forward to the Service, and re-run the request:

➜ curl localhost:5000              
Hello from the server. The time 2022-08-30T12:56:17.068Z