@trondhindenes
Last active April 17, 2024 10:07
Run KinD (Kubernetes in Docker) as part of Gitlab CI job
# Spin up a Kubernetes control plane as part of before_script, and destroy it using after_script
# Some custom logic to get the right IP address
# Requires the GitLab Docker runner, with "pass-through" to the host Docker socket.
stages:
  - test

image: python:3.6.6  # the Docker image you run in needs the Docker CLI installed, and access to the host Docker socket

test_integration_k8s:
  tags:
    - linux-docker
  stage: test
  before_script:
    - curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
    - chmod +x kubectl
    - mv kubectl /usr/local/bin/
    - curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/0.1.0/kind-linux-amd64
    - chmod +x kind
    - mv kind /usr/local/bin/
    - kind create cluster --name "$CI_PIPELINE_ID" --wait 180s
    - export KUBECONFIG="$(kind get kubeconfig-path --name "$CI_PIPELINE_ID")"
    - REAL_IP=$(ip route | awk '/default/ { print $3 }')
    - sed -i -e "s/localhost/$REAL_IP/g" "$KUBECONFIG"
  script:
    - kubectl get nodes --insecure-skip-tls-verify=true
  after_script:
    - kind delete cluster --name "$CI_PIPELINE_ID"
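The IP-rewrite step is the non-obvious part of the job above: inside the job container, the kubeconfig's `localhost` points at the wrong place, because the kind control plane actually listens on the host. The default-route gateway seen from the container is the host's address, so the script substitutes that in. A standalone sketch of the same logic (the sample route line and kubeconfig content here are illustrative, not taken from a real runner):

```shell
# Simulated output of `ip route` inside the job container;
# field 3 of the "default" line is the gateway (= docker host) address.
route_line='default via 172.17.0.1 dev eth0'
REAL_IP=$(echo "$route_line" | awk '/default/ { print $3 }')
echo "$REAL_IP"                                # 172.17.0.1

# Rewrite a sample kubeconfig the same way the before_script does:
kubeconfig=$(mktemp)
echo 'server: https://localhost:6443' > "$kubeconfig"
sed -i -e "s/localhost/$REAL_IP/g" "$kubeconfig"
cat "$kubeconfig"                              # server: https://172.17.0.1:6443
```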
@nflaig
Copy link

nflaig commented Jun 1, 2021

@trondhindenes is there a specific reason why you manually delete the cluster in the after_script? Since this runs inside a container, I would expect it to be cleaned up anyway after the job finishes.

@trondhindenes
Copy link
Author

You're probably right. It's been a long time since I looked at this, so I'm afraid I don't remember.

@jwillker
Copy link

jwillker commented Aug 3, 2021

> @trondhindenes is there a specific reason why you manually delete the cluster in the after_script? Since this runs inside a container, I would expect it to be cleaned up anyway after the job finishes.

If you don't do that, a container running kind will keep running forever on the GitLab runner host.
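The reason is the socket pass-through: because the job talks to the host's Docker daemon, the kind node container is a sibling of the job container, not a child, so GitLab's job cleanup never touches it. A hedged sketch of a manual cleanup, assuming kind's default container naming of `<cluster-name>-control-plane`:

```shell
# Remove any leftover kind control-plane container for this pipeline.
# The job container sees the HOST daemon, so this container outlives the job
# unless something deletes it (kind delete cluster, or this):
docker ps --filter "name=${CI_PIPELINE_ID}-control-plane" -q \
  | xargs -r docker rm -f
# `xargs -r` (GNU) skips running `docker rm` when the filter matches nothing.
```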

@kochcj1
Copy link

kochcj1 commented Oct 3, 2023

OMG, thank you, thank you, thank you!!!

kubectl kept getting a connection refused error because it wasn't able to connect to the control plane, but with your help, I was able to resolve the issue.

Note that kind get kubeconfig-path is no longer supported in newer versions of kind, so I had to tweak things a bit:

- REAL_IP=$(ip route|awk '/default/ { print $3 }')
- sed -i -e "s/0.0.0.0/$REAL_IP/g" $HOME/.kube/config

I also had to tell kind to bind to 0.0.0.0 instead of to localhost/127.0.0.1 by passing it a config like this:
kind create cluster --config kind-config.yml --wait 60s

and this...

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
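Putting those pieces together, the `before_script` for newer kind releases (where `kind get kubeconfig-path` was removed in favor of `kind get kubeconfig`) might look like this. This is an untested sketch based on the adjustments above, not a verified job:

```yaml
before_script:
  - kind create cluster --name "$CI_PIPELINE_ID" --config kind-config.yml --wait 60s
  - kind get kubeconfig --name "$CI_PIPELINE_ID" > kubeconfig
  - export KUBECONFIG="$PWD/kubeconfig"
  - REAL_IP=$(ip route | awk '/default/ { print $3 }')
  - sed -i -e "s/0.0.0.0/$REAL_IP/g" "$KUBECONFIG"
```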

@pirolas001
Copy link

> OMG, thank you, thank you, thank you!!!
>
> kubectl kept getting a connection refused error because it wasn't able to connect to the control plane, but with your help, I was able to resolve the issue.
>
> Note that kind get kubeconfig-path is no longer supported in newer versions of kind, so I had to tweak things a bit:
>
> - REAL_IP=$(ip route|awk '/default/ { print $3 }')
> - sed -i -e "s/0.0.0.0/$REAL_IP/g" $HOME/.kube/config
>
> I also had to tell kind to bind to 0.0.0.0 instead of to localhost/127.0.0.1 by passing it a config like this: kind create cluster --config kind-config.yml --wait 60s
>
> and this...
>
> kind: Cluster
> apiVersion: kind.x-k8s.io/v1alpha4
> networking:
>   apiServerAddress: "0.0.0.0"

Were you able to do a health check in a deployment you made? I'm trying to deploy an nginx and then do a basic curl to check the status, but I'm not able to. I can't get it right with the network logic between the GitLab runner and the kind container. I'm using the docker-git image. Any ideas?

@trondhindenes
Copy link
Author

> Were you able to do a health check in a deployment you made? I'm trying to do a deployment of a nginx and then do a basic curl to check the status, but I'm not being able to do so. Can't get it right with the network logic behind the gitlab runner and the kind container. I'm using the image docker-git. Any idea?

Hi, glad it helped you! I'm afraid I haven't worked on this for many years, so I don't have any helpful tips to share, sorry!

@kochcj1
Copy link

kochcj1 commented Oct 4, 2023

@pirolas001, I don't know if this would help, but when doing local testing, I was able to make my web service that's running in the cluster accessible like this: kubectl port-forward service/<service name> <local port>:<service's target port>. Perhaps kubectl proxy might also help?

@dealer426
Copy link

I'm having issues using a local GH runner and building the cluster locally: it seems I can't reconnect to the cluster in a separate stage. Has anyone successfully done this with kind and podman on WSL2?
