
@aojea
Last active December 27, 2023 16:11
Using crio runtime in KIND

How to use CRI-O runtime with KIND

KIND uses containerd as its container runtime by default; however, it is possible to switch it to CRI-O with some modifications.

  1. Create the new node image. It is based on the current KIND images, so the same build process applies; you just need to tweak the CRI-O config accordingly (the Dockerfile below may need to be modified for other Kubernetes versions):
docker build -t kindnode/crio:1.19 .

The image is bigger than the KIND one, of course :-)

REPOSITORY                                             TAG                                 IMAGE ID            CREATED             SIZE
kindnode/crio                                          1.19                                f71390c5d83f        43 minutes ago      1.59GB
kindest/node                                           v1.19.1                             dcaefb48dc5a        40 hours ago        1.36GB
  2. With the new image, we just need to create our new cluster with it and patch kubeadm to use the CRI-O socket:
kind create cluster --name crio --image kindnode/crio:1.18 --config kind-config-crio.yaml

And voilà, you have a Kubernetes cluster using CRI-O as the runtime:

kubectl get nodes -o wide
NAME                 STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                     KERNEL-VERSION                CONTAINER-RUNTIME
crio-control-plane   Ready    master   3m12s   v1.18.8   172.19.0.4    <none>        Ubuntu Groovy Gorilla (development branch)   4.18.0-193.6.3.el8_2.x86_64   cri-o://1.18.3
crio-worker          Ready    <none>   2m23s   v1.18.8   172.19.0.2    <none>        Ubuntu Groovy Gorilla (development branch)   4.18.0-193.6.3.el8_2.x86_64   cri-o://1.18.3
crio-worker2         Ready    <none>   2m23s   v1.18.8   172.19.0.3    <none>        Ubuntu Groovy Gorilla (development branch)   4.18.0-193.6.3.el8_2.x86_64   cri-o://1.18.3
  3. Install a new CRI-O version. Since CRI-O is a standalone binary, you just need to copy it onto each node and restart the service:
for n in $(kind get nodes --name crio); do
  docker cp crio $n:/usr/bin/crio
  docker exec $n systemctl restart crio
done
ARG IMAGE=kindest/node
ARG VERSION=1.19
ARG MINOR=1
ARG OS=xUbuntu_20.04
FROM ${IMAGE}:v${VERSION}.${MINOR}
ARG VERSION
ARG OS
RUN echo "Installing Packages ..." \
    && DEBIAN_FRONTEND=noninteractive clean-install \
    tcpdump \
    vim \
    gnupg \
    tzdata \
    && echo "Installing cri-o" \
    && export CONTAINERS_URL="https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/${OS}/" \
    && echo "deb ${CONTAINERS_URL} /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list \
    && export CRIO_URL="http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/${VERSION}/${OS}/" \
    && echo "deb ${CRIO_URL} /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.list \
    && curl -L ${CONTAINERS_URL}Release.key | apt-key add - || true \
    && curl -L ${CRIO_URL}Release.key | apt-key add - || true \
    && apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get --option=Dpkg::Options::=--force-confdef install -y cri-o cri-o-runc \
    && ln -s /usr/libexec/podman/conmon /usr/local/bin/conmon \
    && sed -i 's/^pause_image =.*/pause_image = "k8s.gcr.io\/pause:3.2"/' /etc/crio/crio.conf \
    && sed -i 's/.*storage_driver.*/storage_driver = "vfs"/' /etc/crio/crio.conf \
    && sed -i 's/^cgroup_manager =.*/cgroup_manager = "cgroupfs"/' /etc/crio/crio.conf \
    && sed -i '/^cgroup_manager =/a conmon_cgroup = "pod"' /etc/crio/crio.conf \
    && sed -i 's/containerd/crio/g' /etc/crictl.yaml \
    && systemctl disable containerd \
    && systemctl enable crio
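The sed edits to /etc/crio/crio.conf can be exercised outside the image build. A minimal sketch against a throwaway file (note that appending a line after a match needs sed's `/pattern/a` address form, not an `s///` substitution; the sample input lines are made up for illustration):

```shell
# Apply the same crio.conf edits to a scratch copy and show the result
conf=$(mktemp)
printf 'cgroup_manager = "systemd"\n# storage_driver = ""\n' > "$conf"
# Force the cgroupfs cgroup manager
sed -i 's/^cgroup_manager =.*/cgroup_manager = "cgroupfs"/' "$conf"
# Append conmon_cgroup right after the cgroup_manager line
sed -i '/^cgroup_manager =/a conmon_cgroup = "pod"' "$conf"
# Switch the storage driver to vfs (overlay-on-overlay does not work in kind nodes)
sed -i 's/.*storage_driver.*/storage_driver = "vfs"/' "$conf"
cat "$conf"
# cgroup_manager = "cgroupfs"
# conmon_cgroup = "pod"
# storage_driver = "vfs"
rm -f "$conf"
```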
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      criSocket: unix:///run/crio/crio.sock
      kubeletExtraArgs:
        cgroup-driver: cgroupfs
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      criSocket: unix:///run/crio/crio.sock
      kubeletExtraArgs:
        cgroup-driver: cgroupfs
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      criSocket: unix:///run/crio/crio.sock
      kubeletExtraArgs:
        cgroup-driver: cgroupfs
@saschagrunert

@aojea do you think we could patch this into Kind to have it as optional runtime?

@aojea

aojea commented Sep 1, 2020

It can be much simpler than this snippet; it's just a matter of adding the crio and conmon binaries plus crio.conf to the images. But I know @BenTheElder has some plan about the runtime: kubernetes-sigs/kind#1042

@BenTheElder is it possible to have a better UX to use crio in kind?

@BenTheElder

TLDR: not at this time.

In Depth: ...

kind is not currently planning to support different runtimes within the nodes.

  • we have a single unified base image, which is more maintainable
  • we need to integrate with things that are not in CRI for production usage (e.g. preloading images / sideloading images, network config)
  • dockershim is legacy
  • we have experience managing containerd, and it cleanly supplies the additional functionality
  • we can more easily compare this to the GCE CI
  • we wind up getting lots of user issues related to low-level details, even though we ostensibly just provide working Kubernetes; we don't have the bandwidth to field a more expansive set of these

We generally tell users not to depend on the node distro contents. For example, people have also wanted a different "OS" for the nodes, but kind is about running Kubernetes, for which any functioning option should be sufficient. For extremely low-level integration work (e.g. kernel drivers for Ceph), kind is not particularly suitable.

This patch works in some sense but it seriously degrades various functionality including offline support.

@BenTheElder

I wouldn't say we never would, given evidence that CRI actually affects well-behaved applications and testing Kubernetes, and sufficient demand, but I think if anything that would be a failing of CRI.
Kubernetes and applications shouldn't need to care about the underlying choice, but a distro "installer" does have to.

@ajwock

ajwock commented Aug 24, 2021

Because k8s and crio change rapidly and have already broken these files, here are some updated files:

Dockerfile:

ARG IMAGE=kindest/node
ARG VERSION=1.21
ARG MINOR=1
ARG OS=xUbuntu_21.04

FROM ${IMAGE}:v${VERSION}.${MINOR}

ARG VERSION
ARG OS

RUN echo "Installing Packages ..." \
    && DEBIAN_FRONTEND=noninteractive clean-install \
    tcpdump \
    vim \
    gnupg \
    tzdata \
 && echo "Installing cri-o" \
    && export CONTAINERS_URL="https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/${OS}/" \
    && echo "deb ${CONTAINERS_URL} /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list \
    && export CRIO_URL="http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/${VERSION}/${OS}/" \
    && echo "deb ${CRIO_URL} /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.list \
    && curl -L ${CONTAINERS_URL}Release.key | apt-key add - || true \
    && curl -L ${CRIO_URL}Release.key | apt-key add - || true \
    && apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get --option=Dpkg::Options::=--force-confdef install -y cri-o cri-o-runc \
    && ln -s /usr/libexec/podman/conmon /usr/local/bin/conmon \
    && printf "[crio.runtime]\ncgroup_manager=\"cgroupfs\"\nconmon_cgroup=\"pod\"\n" > /etc/crio/crio.conf \
    && sed -i 's/containerd/crio/g' /etc/crictl.yaml \
 && systemctl disable containerd \
 && systemctl enable crio
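Unlike the earlier sed-based edits, the `printf` line in this Dockerfile replaces the stock /etc/crio/crio.conf entirely; the resulting file is just a three-line fragment. A sketch writing it to a temp path to inspect it:

```shell
# Reproduce the crio.conf that the Dockerfile writes, and print it
conf=$(mktemp)
printf "[crio.runtime]\ncgroup_manager=\"cgroupfs\"\nconmon_cgroup=\"pod\"\n" > "$conf"
cat "$conf"
# [crio.runtime]
# cgroup_manager="cgroupfs"
# conmon_cgroup="pod"
rm -f "$conf"
```

This works because CRI-O falls back to its built-in defaults for every option that is not set in the file, so only the two overridden options need to be present.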

kind-config-crio.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
  - |
    kind: JoinConfiguration
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock

Tech versions (just what this was tested with; different versions may work, try at your own risk):
crio 1.21.2
go 1.17.1
kind v0.11.1
kubelet version 1.21.1

The OS is Fedora 32 with cgroups v2.

@aojea

aojea commented Aug 24, 2021

I'm publishing unofficial images here https://github.com/aojea/kind-images , maybe we should start to automate it for crio too 🤔

Thanks for the work @ajwock

@saschagrunert

saschagrunert commented Aug 24, 2021

I'm publishing unofficial images here https://github.com/aojea/kind-images , maybe we should start to automate it for crio too

Happy to help with anything if there is anything we can do.

@aojea

aojea commented Aug 30, 2021

Ok, I've automated the process and there are images published with CRI-O and the latest stable KIND version (the Kubernetes version used is the latest stable published by KIND):
https://github.com/aojea/kind-images/actions/workflows/crio.yaml

You can find the images here
https://quay.io/repository/aojea/kindnode?tab=tags

format is quay.io/aojea/kindnode:crio$(timestamp)
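The tag suffix is just a Unix epoch timestamp (inferred from sample tags such as crio1630331170, which corresponds to the build date); hypothetically, a full image reference for a given build time could be assembled like this:

```shell
# Build an image reference of the form quay.io/aojea/kindnode:crio<epoch>
timestamp=$(date +%s)
image="quay.io/aojea/kindnode:crio${timestamp}"
echo "$image"
```

In practice you would pick an existing tag from the quay.io repository rather than generate one, since only timestamps of actual CI runs exist.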

Usage

wget https://raw.githubusercontent.com/aojea/kind-images/master/kind-crio.yaml
kind create cluster --name crio --image quay.io/aojea/kindnode:crio1630331170 --config kind-crio.yaml 

@andrejusc

Hi @aojea - I'm trying to use one of the latest automatically built images, quay.io/aojea/kindnode:crio1639534040, and getting this error:

[FAILED] Failed to start Container Runtime Interface for OCI (CRI-O).
See 'systemctl status crio.service' for details.

and while navigating into that running container and checking the status, there are these lines:

Dec 15 23:51:58 crio-control-plane crio[147]: time="2021-12-15 23:51:58.298625001Z" level=fatal msg="Validating root config: failed to get store to set defaults: kernel does not support overlay fs: 'overlay' is not supported over xfs at \"/var/lib/containers/storage/overlay\": backing file system is unsupported for this graph driver"
Dec 15 23:51:58 crio-control-plane systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
Dec 15 23:51:58 crio-control-plane systemd[1]: crio.service: Failed with result 'exit-code'.
Dec 15 23:51:58 crio-control-plane systemd[1]: Failed to start Container Runtime Interface for OCI (CRI-O).

But once I uncomment this line inside /etc/containers/storage.conf file:

#mount_program = "/usr/bin/fuse-overlayfs"

then at least I could start crio via systemctl start crio.service.
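The edit itself is easy to script; a minimal sketch of it against a scratch copy of the file (on a real kind node you would point sed at /etc/containers/storage.conf inside each node container and then restart crio; that part is left out here, and the sample input is made up):

```shell
# Uncomment mount_program so the storage layer uses fuse-overlayfs
conf=$(mktemp)
printf '[storage.options.overlay]\n#mount_program = "/usr/bin/fuse-overlayfs"\n' > "$conf"
sed -i 's|^#mount_program|mount_program|' "$conf"
grep '^mount_program' "$conf"
# mount_program = "/usr/bin/fuse-overlayfs"
rm -f "$conf"
```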

Not sure if this is the right place to ask such a question about your prepared image, but I'm curious what else I should try to check/change. I'm exploring a kind + rootless podman setup.

@aojea

aojea commented Dec 16, 2021

we are going to merge a fix in kind for rootless that seems related

kubernetes-sigs/kind#2559

for rootless, wait until that PR is merged and use the latest kind version from @main

@andrejusc

@aojea - is there some snapshot folder to download the @main kind binary from? Or is the only way to clone that repo and build it locally?

@aojea

aojea commented Dec 16, 2021

go install sigs.k8s.io/kind@main

@andrejusc

@aojea - I installed this kind version from main: go: downloading sigs.k8s.io/kind v0.11.2-0.20211216085318-c88b8ec95949
But with it, using all the same things/image as yesterday, I'm getting this earlier failure during kind cluster creation:

 ✗ Writing configuration 📜 
ERROR: failed to create cluster: failed to generate kubeadm config content: version "1.21.1" is not compatible with rootless provider (hint: kind v0.11.x may work with this version)

@aojea

aojea commented Dec 16, 2021

You have to use the crio image

@andrejusc

Yes, I'm currently using the quay.io/aojea/kindnode:crio1639534040 image and seeing this new issue with kind from @main. Unless by "crio image" you mean something else.

@aojea

aojea commented Dec 16, 2021

I see the problem: the crio images are hardcoded to Kubernetes 1.21, which doesn't support rootless

https://github.com/aojea/kind-images/blob/f1c63b97dca4dd82476a751f2508d6f043b8f80f/.github/workflows/crio.yaml#L11

@andrejusc

In case it's important: I could spin up a kind cluster with another image, kindest/node:v1.21.2, with no issues under the same setup.

@tetianakravchenko

I am trying to run kind with k8s version 1.23 and with crio. Cluster doesn't start properly:

kind-with-crio % kubectl get pods -n kube-system
NAME                                         READY   STATUS                 RESTARTS   AGE
coredns-64897985d-5qks5                      0/1     Running                0          6m28s
coredns-64897985d-df99k                      0/1     Running                0          6m28s
etcd-crio-control-plane                      1/1     Running                0          6m41s
kindnet-gj5vj                                1/1     Running                0          6m28s
kube-apiserver-crio-control-plane            1/1     Running                0          6m41s
kube-controller-manager-crio-control-plane   1/1     Running                0          6m41s
kube-proxy-whlqw                             0/1     CreateContainerError   0          6m28s
kube-scheduler-crio-control-plane            1/1     Running                0          6m41s

coredns is stuck in the not-ready state, and the logs are full of [INFO] plugin/ready: Still waiting on: "kubernetes"
kube-proxy fails with level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Has anyone had the same issue and found a solution? Any ideas?

@aojea

aojea commented May 16, 2022


At some point it broke; there is some compatibility issue with newer versions, and I don't have much time to work on this.

@electrocucaracha

I have recently worked on a PoC for testing WasmEdge performance. Initially, I started with CRI-O as the runtime manager, using this Dockerfile for the integration. This approach was working with the current 1.24.1 version, but unfortunately I had to abandon it because I have to load some local images with the kind load command, and that seems to be tightly tied to containerd. Anyway, I'm sharing my discoveries in case someone finds them useful.

@andrejusc

I've got it working now, using the following as a base in the Dockerfile:

ARG IMAGE=kindest/node
ARG VERSION=1.24
ARG MINOR=3
ARG OS=xUbuntu_22.04

and with podman 3.4.2 (running it rootful so far; I still need to learn more about cgroups v2 for rootless cases), using systemd as the cgroupManager and the same passed to cri-o in the Dockerfile:

printf "[crio.runtime]\ncgroup_manager=\"systemd\"\nconmon_cgroup=\"pod\"\n" > /etc/crio/crio.conf

and was then able to create a cluster with one control-plane node, which looks like this:

$ kubectl --context kind-test123 get nodes -o wide
NAME                    STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                      CONTAINER-RUNTIME
test123-control-plane   Ready    control-plane   74m   v1.24.3   10.89.0.13    <none>        Ubuntu 22.04.1 LTS   5.4.17-2136.306.1.3.el8uek.x86_64   cri-o://1.24.2

And to test, I created the httpbin deployment from https://github.com/istio/istio/blob/master/samples/httpbin/httpbin.yaml
and did port forwarding via:

kubectl --context kind-test123 port-forward svc/httpbin 8000:8000 -n default

and could hit it with curl calls like:

curl -X GET localhost:8000/get

And if of interest - here is my crictl images list:

# crictl images
IMAGE                                      TAG                  IMAGE ID            SIZE
docker.io/kennethreitz/httpbin             latest               b138b9264903f       545MB
docker.io/kindest/kindnetd                 v20220726-ed811e41   d921cee849482       63.3MB
docker.io/kindest/local-path-provisioner   v0.0.22-kind.0       4c1e997385b8f       48.9MB
k8s.gcr.io/coredns/coredns                 v1.8.6               a4ca41631cc7a       47MB
k8s.gcr.io/etcd                            3.5.3-0              aebe758cef4cd       301MB
k8s.gcr.io/kube-apiserver                  v1.24.3              d521dd763e2e3       131MB
k8s.gcr.io/kube-controller-manager         v1.24.3              586c112956dfc       121MB
k8s.gcr.io/kube-proxy                      v1.24.3              2ae1ba6417cbc       112MB
k8s.gcr.io/kube-scheduler                  v1.24.3              3a5aa3a515f5d       52.3MB
registry.k8s.io/pause                      3.6                  6270bb605e12e       690kB

So far so good; I can play with this as well, in addition to containerd-based configurations.

@kitt1987

I built a new base image for cri-o. Feel free to give it a try. https://github.com/warm-metal/kindest-base-crio
