Prereqs for Kata Containers on Kubernetes workshop

To fully participate in the workshop, you will require access to a Kubernetes cluster with the following properties:

  • Kubernetes version >= v1.12
  • That is scratch/disposable (i.e. not a live cluster you care about)
  • That supports virtualisation (Kata runs containers in VMs, so the kubelet must be running on a node that is able to run a VM; see "Checking for nesting support" below)
  • Preferably with CRI-O installed as the CRI runtime on the cluster, along with the matching runc as the default container runtime.
    • Kata Containers also works with containerd, but the workshop demo will be conducted on a stack configured with CRI-O. You are welcome to follow along with containerd and adapt as necessary. See this page
  • Preferably your stack will have the Kubernetes RuntimeClass FeatureGate enabled on both the API server and the kubelet. The workshop will discuss how to enable this FeatureGate, but it may be easier to enable it at cluster setup time (see the sketch after this list).
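
For illustration, here is a hedged sketch of enabling the FeatureGate at cluster setup time via a kubeadm config file. The field names follow kubeadm's v1beta1 config as shipped with Kubernetes 1.13; check them against your kubeadm version:

# Hypothetical kubeadm config enabling RuntimeClass on both
# the API server and the kubelet; adapt to your environment.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "RuntimeClass=true"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  RuntimeClass: true
EOF
sudo kubeadm init --config kubeadm-config.yaml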

Local k8s stack options

There are a few methods that can be used to get a local (on your own machine) k8s stack set up and ready to use for the workshop. A remote cloud instance may also be usable, as long as it supports virtualisation (which implies it is either a bare metal machine or supports nested virtualisation).

Minikube

To enable Kata under minikube on a Linux host, we need to add a few configuration options to the default minikube setup. This is nice and easy, as minikube supports them on the setup command line. Note: as far as I know, minikube on a Mac does not support nested virtualisation, so it will not satisfy the requirements. Having said that, you should still be able to follow along with the configuration and installation of Kata, but you won't be able to launch any Kata containers.

Here are the features, and why we need them:

Option                               Why
--vm-driver kvm2                     The host VM driver I tested with
--cpus 4                             Bump up from the default of 2 if you have the capacity, just for performance
--memory 6144                        Allocate more memory, as Kata containers default to 1 or 2 GB each
--feature-gates=RuntimeClass=true    Kata needs to use the RuntimeClass k8s feature
--network-plugin=cni                 As recommended for minikube with CRI-O
--enable-default-cni                 As recommended for minikube with CRI-O
--container-runtime=cri-o            Using CRI-O for Kata
--bootstrapper=kubeadm               As recommended for minikube with CRI-O

For minikube-specific installation instructions, see the docs, which will also help you locate the information needed to get the kvm2 driver installed, etc.
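
As a hedged pointer (the package names here are for a recent Ubuntu, and the driver download URL follows the minikube docs of this era, so verify both against the current docs), getting the kvm2 driver in place looks something like:

# Install KVM/libvirt and the minikube kvm2 machine driver
sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
chmod +x docker-machine-driver-kvm2
sudo mv docker-machine-driver-kvm2 /usr/local/bin/
sudo usermod -aG libvirt $(whoami)   # log out and back in to pick up the group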

Here, then, is the command I ran to get my basic minikube set up, ready to add Kata:

minikube start \
 --vm-driver kvm2 \
 --cpus 4 \
 --memory 6144 \
 --feature-gates=RuntimeClass=true \
 --network-plugin=cni \
 --enable-default-cni \
 --container-runtime=cri-o \
 --bootstrapper=kubeadm

That command will take a little while to pull down and install items, but ultimately should complete successfully.
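
To sanity-check the result (a hedged example; the exact output varies by kubectl version), confirm the node is Ready and that CRI-O is the reported runtime:

kubectl get nodes -o wide             # the CONTAINER-RUNTIME column should show cri-o://...
kubectl get pods --all-namespaces     # the core pods should settle into Running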

Clear Linux Cloud Native Setup

NOTE: I had some challenges on my machine getting all the relevant vagrant pieces set up to use this stack. I would therefore recommend, if you can, starting with the minikube stack above to see if it satisfies your requirements.

Clear Linux provides a convenient vagrant-based 3-node stack as an example of a Kata Kubernetes deployment. We can utilise this stack for the workshop.

Download the setup scripts from this github repo.

Read the documentation, and stop after the "Join workers to the cluster" stage. It is recommended you join at least one worker node to the master's cluster.

Set up the initial stack like so:

$ cd cloud-native-setup/clr-k8s-examples
$ ./create_stack.sh minimal
# And follow the onscreen instructions and documentation
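
Once the script completes, a hedged sanity check (the VM name below is an assumption; list the actual names with 'vagrant status'):

vagrant status            # all VMs should report 'running'
vagrant ssh clr-01        # 'clr-01' is a hypothetical master node name; check 'vagrant status'
kubectl get nodes         # run on the master: it, and any joined workers, should be Ready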

Checking for nesting support

Your node needs to have virtualisation enabled in order for the kubelet to launch Kata VMs. To check if virtualisation is enabled on the node, try:

$ egrep --color 'vmx|svm' /proc/cpuinfo

and look for vmx or svm coloured red in the output.
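
If your node is itself a VM, the host must also have nested virtualisation enabled. On an Intel host you can check as follows (the AMD equivalent lives under kvm_amd):

cat /sys/module/kvm_intel/parameters/nested   # 'Y' or '1' means nesting is enabled
# To enable it until the next reboot:
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1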
