Run multiple minikube Kubernetes clusters on Ubuntu Linux with KVM

Ramp up your Kubernetes development, CI-tooling or testing workflow by running multiple Kubernetes clusters on Ubuntu Linux with KVM and minikube.

In this tutorial we will combine the popular minikube tool with Linux's Kernel-based Virtual Machine (KVM) support. It is a great way to re-purpose an old machine that you found on eBay or have gathering dust under your desk. An Intel NUC would also make a great host for this tutorial if you want to buy some new hardware. Another popular angle is to use a bare-metal host in the cloud, and I've provided some details on that below.

We'll set up all the tooling so that you can build one or many single-node Kubernetes clusters and then deploy applications to them such as OpenFaaS using familiar tooling like helm. I'll then show you how to access the Kubernetes clusters from a remote machine such as your laptop.


Pre-requisites

  • This tutorial uses Ubuntu 16.04 as a base installation, but other distributions are also supported by KVM; you'll need to find out how to install KVM with your package manager. If you're using Fedora, follow the Fedora documentation to install KVM.
  • You'll need nested virtualization available on a cloud host, a spare machine under your desk, or a bare-metal machine in the cloud. You can find affordable bare metal at Scaleway, or higher-spec/performance bare metal from other providers.

Install Tooling

Run all of these commands on your Linux host unless otherwise specified.

Install KVM

KVM enables virtualization on Linux. minikube also supports other drivers such as virtualbox, vmwarefusion, xhyve and hyperv, but we'll use kvm2 here.

Install the required packages from apt and check that virtualization is available:

sudo apt-get install -qy \
  qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker
sudo kvm-ok
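
kvm-ok should report that KVM acceleration can be used. Optionally, you can also add your user to the libvirt group so that the kvm2 driver can manage VMs without sudo; the group name below assumes Ubuntu 16.04's default of libvirtd:

# Optional: manage VMs without sudo (assumes the group is "libvirtd", the Ubuntu 16.04 default)
sudo usermod -a -G libvirtd $(whoami)
newgrp libvirtd

# Confirm libvirt is responding
virsh list --all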

Install kubectl

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

Install minikube and the KVM2 driver

curl -SLO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
curl -SLO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2

chmod +x docker-machine-driver-kvm2
chmod +x minikube-linux-amd64

sudo mv docker-machine-driver-kvm2 /usr/local/bin
sudo mv minikube-linux-amd64 /usr/local/bin/minikube
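
If you want a quick sanity check that the tools are on your PATH:

minikube version
kubectl version --client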

Create a cluster with kubeadm

Using the kubeadm bootstrapper will enable RBAC.

Create your first cluster VM:

minikube start --bootstrapper=kubeadm --vm-driver=kvm2 --memory 4096 --cpus 4 --profile cluster1

You can set up additional separate VMs using the --profile flag.

Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 160.27 MB / 160.27 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
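
To add a second, completely separate cluster, run the same command again with a different profile name, for example a second profile called cluster2:

minikube start --bootstrapper=kubeadm --vm-driver=kvm2 --memory 4096 --cpus 4 --profile cluster2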

You can manage each cluster using kubectl by switching between contexts saved in ~/.kube/config. The kubectx tool is also popular in the community for switching between these quickly.

kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         cluster1   cluster1   cluster1   
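
To point kubectl at a particular cluster, switch context by name (cluster2 below assumes you created a second profile as shown earlier):

kubectl config use-context cluster2

# Or target a cluster for a single command without switching the default context
kubectl --context cluster1 get nodes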

Working with the various IPs

Pass the --profile flag to minikube commands so that you get the IP of the cluster you want and can access its ports.

minikube ip --profile cluster1

You may want to use SSH port forwarding with ssh -L port:port or kubectl port-forward to give access to services and NodePorts available on the minikube VM. If you're comfortable with iptables you could also set up some NAT rules, but I would recommend against it since the IP addresses of your minikube environments may change when they are restarted.
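
For example, kubectl port-forward can publish a port from a workload inside the cluster onto your local machine; the deployment name my-app below is only a placeholder:

# Forward local port 8080 to port 8080 of a hypothetical deployment called my-app
kubectl --context cluster1 port-forward deploy/my-app 8080:8080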

Deploy something to test it out (OpenFaaS)

You can now install something to test the cluster.

  • Configure helm and tiller:
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

kubectl -n kube-system create sa tiller \
  && kubectl create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller

helm init --skip-refresh --upgrade --service-account tiller
  • Set up OpenFaaS via helm:
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml

helm repo add openfaas https://openfaas.github.io/faas-netes/

helm repo update  && helm upgrade openfaas \
  --install openfaas/openfaas  \
  --namespace openfaas  \
  --set functionNamespace=openfaas-fn
  • Now access the OpenFaaS gateway via the NodePort:
curl http://$(minikube ip --profile cluster1):31112/system/info

{"provider":{"provider":"faas-netes","version":{"sha":"5539cf43c15a28e9af998cdc25b5da06252b62e1","release":"0.6.0"},"orchestration":"kubernetes"},"version":{"commit_message":"Attach X-Call-Id to asynchronous calls","sha":"c86de503c7a20a46645239b9b081e029b15bf69b","release":"0.8.11"}}

Access the cluster from your laptop

You can also gain access into the cluster(s) from a remote machine.

Port forward from your laptop to the Linux machine:

Find the IP of the minikube VM with echo $(minikube ip --profile cluster1), then substitute it for <minikube-ip> below:

ssh -N -L 31112:<minikube-ip>:31112 user@linux-host

You can now access the OpenFaaS installation on the remote Linux host via http://127.0.0.1:31112, or even with faas-cli --gateway http://127.0.0.1:31112.
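
For instance, the same system info endpoint used earlier should now respond through the tunnel:

curl http://127.0.0.1:31112/system/info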

You can also open up your OpenFaaS installation to your friends or for testing public webhooks via ngrok. Run the tool on your Linux host:

./ngrok http $(minikube ip --profile cluster1):31112

Wrapping up

We've now built one or many Kubernetes clusters using minikube and KVM, using the --profile flag to separate them and assign each its own name. Port-forwarding, ngrok or SSH provided us with temporary access into the clusters for testing purposes. Where could you take this next?

After you have gained some muscle-memory with creating and accessing these development clusters, you could go on to bake them into projects. A single-use cluster would be great to use in a CI/CD pipeline for end-to-end testing or any other tasks that require multiple configurations such as ensuring backwards compatibility with different Kubernetes versions or with RBAC enabled or disabled.
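
As a sketch of what that could look like, a CI job might create a throw-away cluster, run its tests and then delete it. The profile name ci-test, the manifests path and the deployment name my-app below are all hypothetical:

# Create a short-lived cluster for this CI run
minikube start --bootstrapper=kubeadm --vm-driver=kvm2 --memory 4096 --cpus 2 --profile ci-test

# Deploy and test against the ci-test context (replace with your own manifests and tests)
kubectl --context ci-test apply -f ./manifests/
kubectl --context ci-test rollout status deploy/my-app

# Tear the cluster down again
minikube delete --profile ci-test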

If you have comments, questions or suggestions, feel free to reach out over Twitter @alexellisuk.
