This gist is a quick explainer on how to install and configure gVisor on K3s. There is a somewhat hidden gotcha if you aren't reading the documentation thoroughly. If you already have a K3s cluster set up, skip to the appropriate section below.
Install K3s as described in the documentation.
Install gVisor as described in the documentation. I am using Ubuntu so I will describe the process to install the apt package.
```shell
sudo apt-get update && \
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg

curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main" | sudo tee /etc/apt/sources.list.d/gvisor.list > /dev/null
sudo apt-get update && sudo apt-get install -y runsc
```
K3s uses containerd by default, so you will need to configure containerd to use gVisor (see the gVisor containerd documentation). The documentation asks you to modify the `/etc/containerd/config.toml` file. K3s keeps this file in a different location: `/var/lib/rancher/k3s/agent/etc/containerd/config.toml`. However, it is important to note that you cannot modify this file directly; any changes made to it are overwritten. The actual solution is to create a Go template file with the additions you'd like to include. This file must be placed in the same directory and named `config.toml.tmpl`.
```toml
# /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
{{ template "base" . }}

[plugins."io.containerd.runtime.v1.linux"]
  shim_debug = true

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
```
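If you are provisioning nodes with a script, the template above can be staged and inspected before copying it into place. This is a sketch only: it writes the file to a temporary directory so it is safe to run anywhere, and the K3s path in the comment is the default location assumed throughout this gist.

```shell
# Stage the containerd template in a scratch directory (safe to run anywhere),
# then copy it onto the K3s node. Default K3s containerd config directory:
#   /var/lib/rancher/k3s/agent/etc/containerd
STAGE_DIR=$(mktemp -d)

cat > "$STAGE_DIR/config.toml.tmpl" <<'TMPL'
{{ template "base" . }}

[plugins."io.containerd.runtime.v1.linux"]
  shim_debug = true

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
TMPL

# Inspect the result, then (on the node) move it into place:
#   sudo mv "$STAGE_DIR/config.toml.tmpl" /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
grep runsc "$STAGE_DIR/config.toml.tmpl"
```

The single-quoted heredoc delimiter (`'TMPL'`) matters: it stops the shell from trying to expand `{{ template "base" . }}`, which must reach containerd verbatim.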
Restart K3s (which restarts its embedded containerd) for this to take effect.

```shell
sudo systemctl restart k3s
```
Next, create a RuntimeClass that points at the runsc handler:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
EOF
```
Then launch a test pod that uses the gvisor runtime class:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx
EOF
```
Check that the container is running with the following:

```shell
kubectl get pod nginx-gvisor -o wide
```

If the output looks like the following, things should be good.

```
NAME           READY   STATUS    RESTARTS   AGE   IP          NODE   NOMINATED NODE   READINESS GATES
nginx-gvisor   1/1     Running   0          22s   10.42.0.9   k3s2   <none>           <none>
```
To ensure that gVisor is actually being used, exec into the container:

```shell
kubectl exec --stdin --tty nginx-gvisor -- bash
```

Run the `dmesg` command; the output should look like the following:
```
root@nginx-gvisor:/# dmesg
[    0.000000] Starting gVisor...
[    0.521643] Moving files to filing cabinet...
[    0.578430] Feeding the init monster...
[    1.061361] Daemonizing children...
[    1.492269] Granting licence to kill(2)...
[    1.506292] Constructing home...
[    1.752169] Recruiting cron-ies...
[    2.118226] Consulting tar man page...
[    2.155947] Synthesizing system calls...
[    2.509727] Creating cloned children...
[    2.611705] Conjuring /dev/null black hole...
[    2.743786] Setting up VFS...
[    2.767337] Setting up FUSE...
[    2.818077] Ready!
```
The most important part is the `Starting gVisor...` line. If you see that, you should be good.
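For a scripted check you can grep for that banner instead of eyeballing it. The pipeline below is simulated with a captured line so it runs anywhere; on a real cluster, replace the `printf` with `kubectl exec nginx-gvisor -- dmesg` (pod name from the example above).

```shell
# Simulated input; on a real cluster use:
#   kubectl exec nginx-gvisor -- dmesg | grep -q 'Starting gVisor' && echo 'gVisor OK'
printf '[    0.000000] Starting gVisor...\n' \
  | grep -q 'Starting gVisor' \
  && echo 'gVisor OK'
```

`grep -q` exits 0 only when the banner is present, so the check composes cleanly into provisioning scripts or CI.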