
@rdimitrov
Last active May 6, 2020 00:10
Serverless virtual ARM cluster with OpenFaaS

1. Introduction

The idea is inspired by the following blog post (https://blog.alexellis.io/your-serverless-raspberry-pi-cluster/) where the OpenFaaS framework is deployed on a set of Raspberry Pi boards configured in a cluster.

The main concept is the same, but instead of using physical boards, we'll have a set of VM nodes. The goal is to have OpenFaaS running in a virtual cluster with QEMU as a hypervisor.

Pros:

  • No additional costs for hardware (Raspberry Pi boards, Ethernet dongles, SD cards, cables, etc)
  • Scale easily (if a node is needed, just spin up another VM)
  • Leverage the benefits of virtualization (for ex. create a VM checkpoint of a clean deployment, experiment on top of it and if something fails just reload the VM from that specific checkpoint)
  • Create a heterogeneous setup (you can have x86_64, ARMv7 and ARMv8 nodes in a single environment)

Cons:

  • Lower performance if the nodes (guest VMs) cannot leverage the acceleration capabilities of the host CPU.

At the end of this tutorial you should have a complete Serverless ARMv7/ARMv8 Docker Swarm cluster with OpenFaaS.

2. Prerequisites

  • Linux host machine (preferably an Ubuntu ARM host with KVM support for virtualization)
  • 8GB of RAM or more
  • 20GB of storage
  • No more than 30 minutes to set up! (assuming everything goes as expected)

It's not mandatory, but it will help if you have experience with networking and virtualization on Linux.

Each command in the following guide is accompanied by:

  • a hint that specifies where it should be executed (for ex. "host", "host, chroot" or "guest_vm")
  • an optional note providing further details

The guide covers the creation of both ARMv7 and ARMv8 VMs, so keep that in mind and follow the steps matching your desired setup.

3. Guide

In this section we'll cover the following:

  • 3.1 How to bootstrap the ARM VMs
  • 3.2 How to create the network configuration
  • 3.3 How to configure the VMs
  • 3.4 How to deploy the OpenFaaS framework

So let's start!

3.1 Bootstrap the VMs

The following steps will guide you through creating the base VM image: bootstrapping a filesystem, extracting the kernel and ramfs images, and applying the necessary manual configuration. The VMs will run Ubuntu 16.04 (Xenial).

Completing this sub-section should result in having the following files:

  • base filesystem image
  • kernel
  • ramfs
  • set of "branched" snapshot images matching the number of VMs you want to have

3.1.1 Create the VM image

Install the following packages (host)

sudo apt-get install qemu qemu-user-static qemu-utils debootstrap uml-utilities bridge-utils

Create a 10G QCOW2 image (host)

qemu-img create -f qcow2 vm_image.qcow2 10G

Mount the image (host, note: you'll need to have the nbd kernel module available )

sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 vm_image.qcow2

Create the Linux partition (host, note: press ‘n’ to create a new partition and use the default settings for the rest. When finished, use ‘w’ to write the changes.)

sudo fdisk /dev/nbd0
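If you prefer a non-interactive alternative to fdisk, the same single Linux partition can be created with sfdisk (host, note: a sketch assuming the sfdisk utility from util-linux is installed)

```shell
# One MBR partition of type 83 (Linux) spanning the whole device
echo 'type=83' | sudo sfdisk /dev/nbd0
```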

Create an ext4 filesystem on the newly created partition (host)

sudo mkfs.ext4 /dev/nbd0p1

Mount the partition (host)

sudo mount -t ext4 /dev/nbd0p1 /mnt

3.1.2 Bootstrap Ubuntu Xenial 16.04

A) I want an ARMv7 filesystem (host)

sudo debootstrap --verbose --arch=armhf --foreign xenial /mnt/

or

B) I want an ARMv8 filesystem (host)

sudo debootstrap --verbose --arch=arm64 --foreign xenial /mnt/

Note: Do the following only if the host is an x86_64 machine:

A) I chose ARMv7 (host)

sudo cp /usr/bin/qemu-arm-static /mnt/usr/bin/

or

B) I chose ARMv8 (host)

sudo cp /usr/bin/qemu-aarch64-static /mnt/usr/bin/

Change root into the new filesystem (host, note: don't mind the "I have no name!" warning)

sudo chroot /mnt

Run the second stage of debootstrap (host, chroot)

/debootstrap/debootstrap --second-stage

Exit and chroot again (host, chroot)

exit
sudo chroot /mnt
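Note: If commands inside the chroot later complain about /proc, /sys or /dev being unavailable, you can bind-mount the host pseudo-filesystems before entering the chroot (host, note: a sketch; unmount them in reverse order before unmounting /mnt in section 3.1.4)

```shell
# Make the host pseudo-filesystems visible inside the chroot
sudo mount -t proc proc /mnt/proc
sudo mount -t sysfs sys /mnt/sys
sudo mount -o bind /dev /mnt/dev
```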

3.1.3 Customize the image

Update the /etc/apt/sources.list (host, chroot)

deb http://ports.ubuntu.com/ubuntu-ports/ xenial main universe multiverse restricted
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-updates main universe multiverse restricted
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-security main universe multiverse restricted
deb http://ports.ubuntu.com/ubuntu-ports/ xenial-proposed main universe multiverse restricted

deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial main universe multiverse restricted
deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-updates main universe multiverse restricted
deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-security main universe multiverse restricted
deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-proposed main universe multiverse restricted
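Alternatively, instead of editing the file by hand, all eight entries can be generated in one go (host, chroot)

```shell
# Write one deb and one deb-src line per pocket to /etc/apt/sources.list
for pocket in xenial xenial-updates xenial-security xenial-proposed; do
  echo "deb http://ports.ubuntu.com/ubuntu-ports/ $pocket main universe multiverse restricted"
  echo "deb-src http://ports.ubuntu.com/ubuntu-ports/ $pocket main universe multiverse restricted"
done > /etc/apt/sources.list
```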

Apply the list (host, chroot)

apt-get update

Set a root password (host, chroot)

passwd

Create a user and set a password for that user (host, chroot)

useradd -m openfaas
passwd openfaas

Set up the locale, hostname and DNS (host, chroot)

locale-gen en_US.UTF-8
hostname openfaas
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
echo "127.0.1.1 openfaas" >> /etc/hosts
apt-get update

Install additional packages (host, chroot)

apt-get install sudo git nano vim net-tools pciutils iproute2 isc-dhcp-client iputils-ping ssh openssh-server perl netcat bind9utils dnsutils libio-socket-ssl-perl libnet-ssleay-perl ldap-utils libtime-modules-perl lsb sysv-rc-conf dkms make bzip2 curl build-essential

Install the Kernel (host, chroot, note: press Enter to continue on the prompt)

apt-get install linux-generic linux-headers-generic

Remove the following package as it's not applicable (host, chroot)

apt-get remove flash-kernel

Add the new user to the /etc/sudoers file (host, chroot)

root    ALL=(ALL)       ALL
openfaas ALL=(ALL)       ALL
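Alternatively, a drop-in file under /etc/sudoers.d keeps the main sudoers file untouched (host, chroot, note: the root entry already exists by default)

```shell
# Grant sudo rights via a drop-in file instead of editing /etc/sudoers
echo 'openfaas ALL=(ALL)       ALL' > /etc/sudoers.d/openfaas
chmod 0440 /etc/sudoers.d/openfaas
```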

Edit the /etc/ssh/sshd_config file to allow root login (host, chroot)

#PermitRootLogin prohibit-password
PermitRootLogin yes
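The same change can be applied non-interactively with sed (host, chroot)

```shell
# Replace the (possibly commented-out) PermitRootLogin line with "yes"
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
```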

3.1.4 Extract the Kernel/initrd files

Now that we have the filesystem image ready, we want to get the kernel and the ramfs images out of it.

Exit from the chroot (host, chroot)

exit 
sync

Copy out the Kernel and the ramfs images (host)

sudo cp /mnt/boot/vmlinuz* ./
sudo cp /mnt/boot/initrd.img* ./

Unmount the filesystem (host)

sudo umount /mnt

Disconnect the QCOW2 image from the nbd device (host)

sudo qemu-nbd --disconnect /dev/nbd0

From now on, we can either copy this image as many times as the number of VMs we want to create, or leverage the QCOW2 snapshot feature. The latter not only reduces the storage footprint, but also lets you go back to the initial clean state in case you mess up a VM.

Do the following for each VM (host, note: change the ubuntu_image_vm1.qcow2 name for each VM)

qemu-img create -f qcow2 -o backing_file=vm_image.qcow2 ubuntu_image_vm1.qcow2
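With three VMs planned, the snapshot images can also be created in a short loop (host, note: adjust the count to your setup)

```shell
# One copy-on-write snapshot per VM, all backed by vm_image.qcow2
for i in 1 2 3; do
  qemu-img create -f qcow2 -o backing_file=vm_image.qcow2 "ubuntu_image_vm${i}.qcow2"
done
```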

Important: After this point, do not use, change or edit vm_image.qcow2: it's the backing image for every snapshot, and modifying it will corrupt them.

Now we should have the kernel, ramfs and the filesystem images ready. Good job!

3.2 Create the network configuration on the host

It's time to set up the network configuration on our host machine.

Completing it should result in three TAP interfaces (tap_vm1, tap_vm2, ...) bridged together in the "br_openfaas" bridge. Assigning an IP address to the bridge makes it possible for the host (us) to access the VM network.

Note: Feel free to tweak the following if you want to change the number of VMs or the network configuration. For instance, make sure there's a TAP interface created for each VM that you want to spawn.

First, prevent bridged packets from being passed to the host iptables (host, note: if needed)

echo 0 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables

Create the network configuration (host)

echo 1. Create the bridge
sudo brctl addbr br_openfaas

echo 2. Create the TAP interfaces
sudo tunctl -u $(whoami) -t tap_vm1
sudo tunctl -u $(whoami) -t tap_vm2
sudo tunctl -u $(whoami) -t tap_vm3

echo 3. Bring up the interfaces
sudo ip link set dev tap_vm1 up
sudo ip link set dev tap_vm2 up
sudo ip link set dev tap_vm3 up

echo 4. Add the interfaces to the bridge
sudo brctl addif br_openfaas tap_vm1
sudo brctl addif br_openfaas tap_vm2
sudo brctl addif br_openfaas tap_vm3
sudo ifconfig br_openfaas 172.17.0.1/24 up

You can verify the configuration with ifconfig and brctl show.
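Note: On hosts where brctl and tunctl are not available, the same topology can be built with the ip tool alone (host, note: a sketch of an equivalent configuration)

```shell
# Bridge plus three TAP interfaces, mirroring the brctl/tunctl setup above
sudo ip link add br_openfaas type bridge
for i in 1 2 3; do
  sudo ip tuntap add dev "tap_vm${i}" mode tap user "$(whoami)"
  sudo ip link set "tap_vm${i}" master br_openfaas
  sudo ip link set "tap_vm${i}" up
done
sudo ip addr add 172.17.0.1/24 dev br_openfaas
sudo ip link set br_openfaas up
```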

3.3 Configure the VMs (do this per VM node)

Now that we have everything, it's finally time to spin up the virtual machines!

Start each VM using the following QEMU command line:

Note: Update the VMID variable for each VM. Changing it results in updating the name of the snapshot image, the name of the TAP interface and its MAC address and the dedicated telnet ports.

Note: If your host machine is also an ARM machine with support for virtualization, it's better to start the QEMU VMs with KVM. To do that, just update the QEMU command line with the following arguments: -M virt -cpu host -enable-kvm. This will significantly improve the performance of your virtual machines. In case you don't have the KVM module installed, you can use the following guide to install it.
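To make the VMID note concrete, here is how the per-VM parameters are derived (a plain-shell sketch you can grow into a launch script; the port arithmetic matches the string concatenation in the command lines below for single-digit VMIDs)

```shell
VMID=2
IMAGE="ubuntu_image_vm${VMID}.qcow2"   # snapshot image for this VM
TAP="tap_vm${VMID}"                    # TAP interface created in section 3.2
MAC="52:55:00:11:11:1${VMID}"          # unique MAC address per VM
SERIAL_PORT=$((2000 + VMID))           # telnet port for the serial console
MONITOR_PORT=$((2010 + VMID))          # telnet port for the QEMU monitor
echo "$IMAGE $TAP $MAC serial=$SERIAL_PORT monitor=$MONITOR_PORT"
# → ubuntu_image_vm2.qcow2 tap_vm2 52:55:00:11:11:12 serial=2002 monitor=2012
```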

A) I chose ARMv7 (host)

VMID=1
IMAGE=ubuntu_image_vm${VMID}.qcow2

qemu-system-arm -smp 2 -m 2048 -M virt \
        -kernel vmlinuz-4.4.0-132-generic \
        -initrd initrd.img-4.4.0-132-generic \
        -append 'root=/dev/vda1 rw rootwait mem=2048M console=ttyAMA0,38400n8' \
        -nographic \
        -device virtio-blk-pci,drive=disk \
        -drive if=none,id=disk,file=${IMAGE} \
        -netdev user,id=local_network \
        -device virtio-net-pci,netdev=local_network \
        -netdev tap,id=net,ifname=tap_vm${VMID},script=no,downscript=no \
        -device virtio-net-device,netdev=net,mac=52:55:00:11:11:1${VMID} \
        -monitor telnet:0.0.0.0:201${VMID},server,nowait \
        -serial telnet:0.0.0.0:200${VMID},server,nowait   

or

B) I chose ARMv8 (host)

VMID=1
IMAGE=ubuntu_image_vm${VMID}.qcow2

qemu-system-aarch64 -smp 2 -m 2048 -M virt -cpu cortex-a57 \
        -kernel vmlinuz-4.4.0-132-generic \
        -initrd initrd.img-4.4.0-132-generic \
        -append 'root=/dev/vda1 rw rootwait mem=2048M console=ttyAMA0,38400n8' \
        -nographic \
        -device virtio-blk-pci,drive=disk \
        -drive if=none,id=disk,file=${IMAGE} \
        -netdev user,id=local_network \
        -device virtio-net-pci,netdev=local_network \
        -netdev tap,id=net,ifname=tap_vm${VMID},script=no,downscript=no \
        -device virtio-net-device,netdev=net,mac=52:55:00:11:11:1${VMID} \
        -monitor telnet:0.0.0.0:201${VMID},server,nowait \
        -serial telnet:0.0.0.0:200${VMID},server,nowait

Now you can connect to the VM using telnet (host, note: use the port from the QEMU command line specified in the -serial option)

telnet localhost 2001

Enable and verify the user networking (guest_vm, note: the name of the interface might be different)

sudo dhclient enp0s2
ping www.openfaas.com

Configure and verify the VM network (guest_vm, note: the name of the interface might be different)

sudo ifconfig eth0 172.17.0.2/24 up
ping 172.17.0.1  # that's the bridge on your host machine

Optional: To make the network persistent, update the /etc/network/interfaces file like: (guest_vm)

auto enp0s2 
iface enp0s2 inet dhcp 
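The TAP-side address can be made persistent the same way with a static stanza (guest_vm, note: adjust the interface name and address per VM)

```
auto eth0
iface eth0 inet static
    address 172.17.0.2
    netmask 255.255.255.0
```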

3.4 Deploy the OpenFaaS framework (do this per VM node)

At this point we should have the VM environment ready for installing OpenFaaS.

Proceed with installing Docker on each VM: (guest_vm)

curl -sSL https://get.docker.com | sh
sudo usermod -aG docker openfaas

Initialize the Docker Swarm Manager node (guest_vm, note: execute this only on the node you want to be a manager and keep in mind its IP address)

sudo docker swarm init --advertise-addr 172.17.0.2

Log in to the other VMs that you plan to use as workers and add them to the swarm: (guest_vm, note: use the token provided by your docker swarm init output)

docker swarm join \
    --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx \
    172.17.0.2:2377
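If you no longer have the join command from the init output, it can be re-printed on the manager node at any time: (guest_vm)

```shell
# Print the "docker swarm join" command (with token) for worker nodes
sudo docker swarm join-token worker
```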

And now it's time to install OpenFaaS!

Start by cloning the repository: (guest_vm, note: execute this on your manager node)

git clone https://github.com/openfaas/faas.git
cd faas

and continue by deploying the stack:

A) I chose ARMv7 (guest_vm)

./deploy_stack.armhf.sh

or

B) I chose ARMv8 (guest_vm, note: not up-to-date yet)

docker stack deploy func --compose-file docker-compose.arm64.yml

Wait a couple of minutes until at least 1 replica is available for each service (guest_vm)

watch 'docker service ls'

The OpenFaaS UI portal should be available at http://172.17.0.2:8080

Note: The address might be different depending on the manager node you chose earlier.
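You can also verify the deployment from the command line by querying the gateway's REST API (guest_vm or host, note: a sketch; older OpenFaaS gateways expose this endpoint without authentication, adjust the address for your manager node)

```shell
# List the deployed functions via the gateway API
curl -s http://172.17.0.2:8080/system/functions
```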

From this point you should have everything ready, so you can start exploring and developing your Serverless functions with OpenFaaS on ARM.

Make sure to check out the official documentation for further details and examples about the OpenFaaS features - https://docs.openfaas.com/.

Last but not least, congratulations! You successfully created your Serverless virtual ARM cluster with OpenFaaS!

4. References

https://blog.alexellis.io/your-serverless-raspberry-pi-cluster/

https://blog.alexellis.io/first-faas-python-function/

https://docs.openfaas.com/

https://docs.openfaas.com/tutorials/workshop/

https://docs.docker.com/
