How to create a virtual ARM cluster

1. Introduction

The following document guides the user through the creation of one or more ARM virtual machines. It also covers the steps needed to set up a Kubernetes or Docker Swarm cluster using those VMs.

ARM-based devices are already widely popular: smartphones, IoT devices, Raspberry Pis, and almost every single-board computer. Virtualizing such a device lets the user experiment with the many projects available in the ARM community. Most importantly, all of this comes for free: there's no need to buy ARM boards, SD cards, cables, or other network equipment before being sure you actually need them.

Pros:

  • No additional costs for hardware (Raspberry Pi boards, Ethernet dongles, SD cards, cables, etc.)
  • Scale easily (if a node is needed, just spin up another VM)
  • Leverage the benefits of virtualization (e.g. create a VM checkpoint of a clean deployment, experiment on top of it, and if something fails just reload the VM from that checkpoint)
  • Create a heterogeneous setup (you can have x86_64, ARMv7 and ARMv8 nodes in a single environment)

Cons:

  • Lower performance if the nodes (guest VMs) cannot leverage the acceleration capabilities of the host CPU.

At the end of this tutorial the user should have a complete ARMv7/ARMv8 Docker Swarm or Kubernetes cluster.

2. Prerequisites

  • Linux host machine (preferably an Ubuntu ARM host with KVM support for virtualization if we want better performance)
  • 8GB of RAM or more
  • 20GB of storage
  • No more than 30 minutes to set up! (assuming everything goes as expected)

It's not mandatory, but it will be helpful if the user has experience with networking and virtualization on Linux.

Each command in the following guide is accompanied by:

  • a hint specifying where it should be executed (e.g. "host", "host, chroot" or "guest_vm")
  • an optional note providing further details

The guide covers the creation of both ARMv7 and ARMv8 VMs, so keep that in mind and follow the steps matching your desired setup.

3. Guide

In this section we'll cover the following:

  • 3.1 How to bootstrap the ARM VMs
  • 3.2 How to create the network configuration
  • 3.3 How to configure the VMs
  • 3.4 How to create a Docker Swarm cluster
  • 3.5 How to create a Kubernetes cluster using Kubeadm

So let's start!

3.1 Bootstrap the VMs

The following steps will guide you through creating the base VM image: bootstrapping a filesystem, extracting the kernel and ramfs images, and performing the required manual configuration. The VMs will be running Ubuntu 16.04 (Xenial).

Completing this sub-section should result in having the following files:

  • base filesystem image
  • kernel
  • ramfs
  • set of "branched" snapshot images matching the number of VMs you want to have

3.1.1 Create the VM image

Install the following packages (host)

sudo apt-get install qemu qemu-user-static qemu-utils debootstrap uml-utilities bridge-utils

Create a 10G QCOW2 image (host)

qemu-img create -f qcow2 vm_image.qcow2 10G

Mount the image (host, note: you'll need to have the nbd kernel module available)

sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 vm_image.qcow2

Create the Linux partition (host, note: press ‘n’ to create a new partition and use the default settings for the rest. When finished, use ‘w’ to write the changes.)

sudo fdisk /dev/nbd0

Create an ext4 filesystem on the newly created partition (host)

sudo mkfs.ext4 /dev/nbd0p1

Mount the partition (host)

sudo mount -t ext4 /dev/nbd0p1 /mnt

3.1.2 Bootstrap Ubuntu Xenial 16.04

A) I want an ARMv7 filesystem (host)

sudo debootstrap --verbose --arch=armhf --foreign xenial /mnt/

or

B) I want an ARMv8 filesystem (host)

sudo debootstrap --verbose --arch=arm64 --foreign xenial /mnt/

Note: Do the following only if the host is an x86_64 machine:

A) I chose ARMv7 (host)

sudo cp /usr/bin/qemu-arm-static /mnt/usr/bin/

or

B) I chose ARMv8 (host)

sudo cp /usr/bin/qemu-aarch64-static /mnt/usr/bin/

Change root (host, note: don't mind the "I have no name!" warning)

sudo chroot /mnt

Run the second stage of debootstrap (host, chroot)

/debootstrap/debootstrap --second-stage

Exit and chroot again (host, chroot)

exit
sudo chroot /mnt

3.1.3 Customize the image

Update the /etc/apt/sources.list (host, chroot)

deb http://ports.ubuntu.com/ubuntu-ports/ xenial main universe multiverse restricted
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-updates main universe multiverse restricted
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-security main universe multiverse restricted
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-proposed main universe multiverse restricted

deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial main universe multiverse restricted
deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-updates main universe multiverse restricted
deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-security main universe multiverse restricted
deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-proposed main universe multiverse restricted

Apply the list (host, chroot)

apt-get update

Set a root password (host, chroot)

passwd

Create a user and set a password for that user (host, chroot)

useradd -m sniffles
passwd sniffles

Set up the following general settings (host, chroot)

locale-gen en_US.UTF-8
echo "sniffles" > /etc/hostname
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
echo "127.0.1.1 sniffles" >> /etc/hosts
apt-get update

Install additional packages (host, chroot)

apt-get install sudo git nano vim net-tools pciutils iproute2 isc-dhcp-client iputils-ping ssh openssh-server perl netcat bind9utils dnsutils libio-socket-ssl-perl libnet-ssleay-perl ldap-utils libtime-modules-perl lsb sysv-rc-conf dkms make bzip2 curl build-essential

Install the Kernel (host, chroot, note: press Enter to continue on the prompt)

apt-get install linux-generic linux-headers-generic

Remove the following package, as it only applies to physical ARM boards (host, chroot)

apt-get remove flash-kernel

Add the new user to the /etc/sudoers file (host, chroot)

root    ALL=(ALL)       ALL
sniffles ALL=(ALL)       ALL

Edit the /etc/ssh/sshd_config file to allow root login (host, chroot)

#PermitRootLogin prohibit-password
PermitRootLogin yes

3.1.4 Extract the Kernel/initrd files

Now that we have the filesystem image ready, we want to extract the kernel and ramfs images from it.

Exit from the chroot (host, chroot)

exit 
sync

Copy out the Kernel and the ramfs images (host)

sudo cp /mnt/boot/vmlinuz* ./
sudo cp /mnt/boot/initrd.img* ./

Unmount the filesystem (host)

sudo umount /mnt

Disconnect the QCOW2 image from the nbd device (host)

sudo qemu-nbd --disconnect /dev/nbd0

From now on, we can either copy this image as many times as the number of VMs we want to create, or leverage the QCOW2 snapshot feature. The latter not only reduces the storage footprint, but also lets you go back to the initial clean state in case you mess up a VM.

Do the following for each VM (host, note: change the ubuntu_image_vm1.qcow2 name for each VM)

qemu-img create -f qcow2 -o backing_file=vm_image.qcow2 ubuntu_image_vm1.qcow2

Important: After that, do not use, change, or edit vm_image.qcow2; it's the backing image for every snapshot, and modifying it will corrupt them.
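You can verify that a snapshot is correctly linked to its backing image (host, note: the output should include a "backing file: vm_image.qcow2" line)

qemu-img info ubuntu_image_vm1.qcow2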

Now we should have the kernel, ramfs and the filesystem images ready. Good job!

3.2 Create the network configuration on the host

It's time to setup the network configuration on our host machine.

Completing it should result in three TAP interfaces (tap_vm1, tap_vm2, ...) bridged together in the br_sniffles bridge. Assigning an IP address to the bridge makes it possible for the host to access the VM network.

Note: Feel free to tweak the following if you want to change the number of VMs or the network configuration. For instance make sure there's a TAP interface created for each VM that you want to spawn.

First, prevent bridged packets from being passed to the host iptables (host, note: if needed)

echo 0 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables

Create the network configuration (host)

# 1. Create the bridge
sudo brctl addbr br_sniffles

# 2. Create the TAP interfaces
sudo tunctl -u $(whoami) -t tap_vm1
sudo tunctl -u $(whoami) -t tap_vm2
sudo tunctl -u $(whoami) -t tap_vm3

# 3. Bring up the interfaces
sudo ip link set dev tap_vm1 up
sudo ip link set dev tap_vm2 up
sudo ip link set dev tap_vm3 up

# 4. Add the interfaces to the bridge and assign an IP address
sudo brctl addif br_sniffles tap_vm1
sudo brctl addif br_sniffles tap_vm2
sudo brctl addif br_sniffles tap_vm3
sudo ifconfig br_sniffles 172.17.0.1/24 up

You can verify the configuration with ifconfig and brctl show.
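For example (host, note: brctl show should list all three TAP interfaces attached to br_sniffles)

brctl show br_sniffles
ifconfig br_sniffles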

3.3 Configure the VMs (do this per VM node)

Now that we have everything, it's finally time to spin up the virtual machines!

Start each VM using the following QEMU command line:

Note: Update the VMID variable for each VM. Changing it updates the name of the snapshot image, the name of the TAP interface, its MAC address, and the dedicated telnet ports.

Note: If your host machine is also an ARM machine with support for virtualization, it's better to start the QEMU VMs with KVM. To do that, update the QEMU command line with the following arguments: -M virt -cpu host -enable-kvm. This will significantly improve the performance of your virtual machines. If you don't have the KVM module installed, you'll need to set it up first.
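For example, the beginning of the ARMv8 command below would become (host, note: a sketch; the remaining arguments stay the same)

qemu-system-aarch64 -smp 2 -m 2048 -M virt -cpu host -enable-kvm \
        ...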

A) I chose ARMv7 (host, note: adjust the vmlinuz/initrd file names to match the kernel version installed in your image)

VMID=1
IMAGE=ubuntu_image_vm${VMID}.qcow2

qemu-system-arm -smp 2 -m 2048 -M virt \
        -kernel vmlinuz-4.4.0-132-generic \
        -initrd initrd.img-4.4.0-132-generic \
        -append 'root=/dev/vda1 rw rootwait mem=2048M console=ttyAMA0,38400n8' \
        -nographic \
        -device virtio-blk-pci,drive=disk \
        -drive if=none,id=disk,file=${IMAGE} \
        -netdev user,id=local_network \
        -device virtio-net-pci,netdev=local_network \
        -netdev tap,id=net,ifname=tap_vm${VMID},script=no,downscript=no \
        -device virtio-net-device,netdev=net,mac=52:55:00:11:11:1${VMID} \
        -monitor telnet:0.0.0.0:201${VMID},server,nowait \
        -serial telnet:0.0.0.0:200${VMID},server,nowait   

or

B) I chose ARMv8 (host, note: adjust the vmlinuz/initrd file names to match the kernel version installed in your image)

VMID=1
IMAGE=ubuntu_image_vm${VMID}.qcow2

qemu-system-aarch64 -smp 2 -m 2048 -M virt -cpu cortex-a57 \
        -kernel vmlinuz-4.4.0-132-generic \
        -initrd initrd.img-4.4.0-132-generic \
        -append 'root=/dev/vda1 rw rootwait mem=2048M console=ttyAMA0,38400n8' \
        -nographic \
        -device virtio-blk-pci,drive=disk \
        -drive if=none,id=disk,file=${IMAGE} \
        -netdev user,id=local_network \
        -device virtio-net-pci,netdev=local_network \
        -netdev tap,id=net,ifname=tap_vm${VMID},script=no,downscript=no \
        -device virtio-net-device,netdev=net,mac=52:55:00:11:11:1${VMID} \
        -monitor telnet:0.0.0.0:201${VMID},server,nowait \
        -serial telnet:0.0.0.0:200${VMID},server,nowait

Now you can connect to the VM using telnet (host, note: use the port from the QEMU command line specified in the -serial option)

telnet localhost 2001

Enable and verify the user networking (guest_vm, note: the name of the interface might be different)

sudo dhclient enp0s2
ping www.yahoo.com

Configure and verify the VM network (guest_vm, note: the name of the interface might be different)

sudo ifconfig eth0 172.17.0.2/24 up
ping 172.17.0.1  # that's the bridge on your host machine

Optional: To make the network persistent, update the /etc/network/interfaces file as follows: (guest_vm)

auto enp0s2 
iface enp0s2 inet dhcp 
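The static address for the bridged interface can be persisted in the same file (guest_vm, note: a sketch assuming the eth0 interface name and the address used above; give each VM a unique address)

auto eth0
iface eth0 inet static
    address 172.17.0.2
    netmask 255.255.255.0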

3.4 How to create a Docker Swarm cluster (do this per VM node)

At this point we should have the VM environment ready.

Proceed with installing Docker on each VM: (guest_vm)

curl -sSL https://get.docker.com | sh
sudo usermod -aG docker sniffles

Initialize the Docker Swarm Manager node (guest_vm, note: execute this only on the node you want to be a manager and keep in mind its IP address)

sudo docker swarm init --advertise-addr 172.17.0.2
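If you lose the join command, you can print it again on the manager node (guest_vm)

sudo docker swarm join-token worker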

Log in to the other VMs that you plan to use as workers and add them to the swarm: (guest_vm, note: use the token provided by your docker swarm init output)

sudo docker swarm join \
    --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx \
    172.17.0.2:2377
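To verify that the workers have joined, list the nodes from the manager (guest_vm, note: all nodes should report a Ready status)

sudo docker node ls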

3.5 How to create a Kubernetes cluster using Kubeadm (do this per VM node)

Proceed with installing Kubeadm on each VM: (guest_vm)

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update -q && \
  sudo apt-get install -qy kubeadm

Pre-pull the cluster images (guest_vm)

sudo kubeadm config images pull -v3

Initialize the Kubernetes master node (guest_vm, note: execute this only on the node you want to be the master and keep in mind its IP address)

sudo kubeadm init --token-ttl=0
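Note: If you later misplace the join command, you can regenerate it on the master (guest_vm)

sudo kubeadm token create --print-join-command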

Once finished, log in to the other VMs you plan to use as workers and add them to the Kubernetes cluster: (guest_vm, note: use the token and discovery hash provided by your kubeadm init output)

sudo kubeadm join 172.17.0.2:6443 --token 5o0d44.9uioh8dr81w4pjc5 \
    --discovery-token-ca-cert-hash sha256:8c350b05d0e092dec4083e3d4b9fd91c2f8edc7179cfee8eaf58211719000451

Run the following post-install configuration on the master node (guest_vm)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, we want to install a CNI plugin, e.g. Weave, on the master node (guest_vm)

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Remove the default master taint so the master node can also be used as a worker (guest_vm)

kubectl taint nodes --all node-role.kubernetes.io/master-
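Finally, verify the cluster state from the master node (guest_vm, note: all nodes should eventually report Ready once the CNI plugin is up)

kubectl get nodes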

Optional: Generate a new machine-id in case some of the nodes have conflicting ones (guest_vm)

cat /etc/machine-id \
  && sudo rm -rf /var/lib/dbus/machine-id \
  && sudo rm -rf /etc/machine-id \
  && sudo dbus-uuidgen --ensure \
  && sudo systemd-machine-id-setup \
  && cat /etc/machine-id