KIND uses containerd as its container runtime by default; however, it is possible to switch it to CRI-O with some modifications.
- Create the new node image. It is based on the current KIND images, so the same build process applies; you just need to tweak the CRI-O config accordingly (the Dockerfile here may need to be modified for other Kubernetes versions):
docker build -t kindnode/crio:1.19 .
The image is bigger than the KIND one, of course :-)
REPOSITORY      TAG       IMAGE ID       CREATED          SIZE
kindnode/crio   1.19      f71390c5d83f   43 minutes ago   1.59GB
kindest/node    v1.19.1   dcaefb48dc5a   40 hours ago     1.36GB
- With the new image, we just need to create our new cluster with it and patch kubeadm to use the CRI-O socket:
kind create cluster --name crio --image kindnode/crio:1.18 --config kind-config-crio.yaml
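The kind-config-crio.yaml file referenced above is not shown here; a minimal sketch, assuming a three-node cluster and kubeadm config patches that point node registration at the CRI-O socket, might look like this:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
# Point kubeadm at the CRI-O socket instead of the containerd default
kubeadmConfigPatches:
- |
  kind: InitConfiguration
  nodeRegistration:
    criSocket: unix:///var/run/crio/crio.sock
- |
  kind: JoinConfiguration
  nodeRegistration:
    criSocket: unix:///var/run/crio/crio.sock
```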
and voilà, you have a Kubernetes cluster using CRI-O as its runtime:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
crio-control-plane Ready master 3m12s v1.18.8 172.19.0.4 <none> Ubuntu Groovy Gorilla (development branch) 4.18.0-193.6.3.el8_2.x86_64 cri-o://1.18.3
crio-worker Ready <none> 2m23s v1.18.8 172.19.0.2 <none> Ubuntu Groovy Gorilla (development branch) 4.18.0-193.6.3.el8_2.x86_64 cri-o://1.18.3
crio-worker2 Ready <none> 2m23s v1.18.8 172.19.0.3 <none> Ubuntu Groovy Gorilla (development branch) 4.18.0-193.6.3.el8_2.x86_64 cri-o://1.18.3
- Install a new CRI-O version. Since CRI-O is a standalone binary, you just need to copy it to each node and restart the service:
for n in $(kind get nodes --name crio); do
  docker cp crio "$n":/usr/bin/crio
  docker exec "$n" systemctl restart crio
done
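After the restart, you can confirm that each node picked up the new binary. A sketch, using the same per-node loop as above (crio version prints the version of the installed binary):

```shell
# Ask each node of the "crio" cluster which CRI-O version it is now running
for n in $(kind get nodes --name crio); do
  docker exec "$n" crio version
done
```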
OK, I've automated the process, and there are images published with CRI-O and the latest stable KIND version (the Kubernetes version used is the latest stable one published by KIND):
https://github.com/aojea/kind-images/actions/workflows/crio.yaml
You can find the images here:
https://quay.io/repository/aojea/kindnode?tab=tags
The tag format is quay.io/aojea/kindnode:crio$(timestamp).
Usage
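For example, a sketch of creating a cluster from one of the published images (crio$(timestamp) is a placeholder; substitute an actual tag from the quay.io page above):

```shell
# Create a cluster from a published CRI-O node image;
# replace crio$(timestamp) with a real tag, e.g. the newest one listed on quay.io
kind create cluster --name crio \
  --image quay.io/aojea/kindnode:crio$(timestamp) \
  --config kind-config-crio.yaml
```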