Running KubeVirt.io on a Rock5 Model B

Introduction

I was playing around with kubevirt.io (v1.2.0) on a Radxa ROCK 5 Model B. When I tried to boot a VM, the qemu-kvm process just ate 100% CPU and produced no output on the console.

I built an alternative setup based on Ubuntu 22.04, and there QEMU worked with KVM without any problems. After some investigation I suspected that the issue was related to the (U)EFI firmware being used. I copied the /usr/share/AAVMF firmware files from the 22.04 setup into the KubeVirt compute container, started an additional qemu-kvm with -bios AAVMF/AAVMF_CODE.fd, and voilà - the VM booted correctly.
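For reference, the manual check looked roughly like this; the paths and the disk image name are illustrative, not the exact command line from my setup:

# Quick manual test of the Ubuntu 22.04 firmware (illustrative; disk.img is a placeholder)
qemu-system-aarch64 \
  -M virt -cpu host -enable-kvm \
  -m 512 -nographic \
  -bios /var/run/qemu-efi/AAVMF/AAVMF_CODE.fd \
  -drive file=disk.img,format=qcow2,if=virtio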

Findings

My findings so far:

Plumbing things together

My solution is to use a KubeVirt Hook Sidecar to patch the domain configuration so that it uses the EFI firmware from Ubuntu 22.04. The key snippet is this annotation:

        hooks.kubevirt.io/hookSidecars: >
          [
              {
                  "args": ["--version", "v1alpha3"],
                  "image": "quay.io/kubevirt/sidecar-shim:v1.2.0",
                  "pvc": {"name": "kubevirt-qemu-uefi","volumePath": "/qemu-efi", "sharedComputePath": "/var/run/qemu-efi"},
                  "configMap": {"name": "efi-patcher-config-map", "key": "my_script.sh", "hookPath": "/usr/bin/onDefineDomain"}
              }
          ]

It mounts a PVC named kubevirt-qemu-uefi into /var/run/qemu-efi of the compute container; that is where I placed the AAVMF folder from Ubuntu 22.04. I'm using k3s with the local storage driver and a fixed hostPath folder, so this worked fine in my scenario. If you have a more complex storage situation, the "PVC population container" idea from QEMU strace might be a better approach.
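For completeness, populating the PVC's backing folder on the node was roughly this in my hostPath setup (a minimal sketch, assuming the firmware comes straight from an Ubuntu 22.04 installation and the path matches the PV manifest further down):

# On the k3s node, copy the AAVMF folder from an Ubuntu 22.04 system
# into the hostPath directory backing the PV
mkdir -p /data/kube-virt/kubevirt-qemu-uefi
cp -r /usr/share/AAVMF /data/kube-virt/kubevirt-qemu-uefi/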

my_script.sh simply rewrites the EFI firmware path in the domain XML:

apiVersion: v1
kind: ConfigMap
metadata:
  name: efi-patcher-config-map
  namespace: virtual-machines
data:
  my_script.sh: |
    #!/bin/sh
    # The sidecar shim invokes this hook as: onDefineDomain --vmi <vmi-json> --domain <domain-xml>,
    # so "$4" contains the libvirt domain XML; the patched XML has to be written to stdout.
    tempFile=`mktemp --dry-run`
    echo "$4" > "$tempFile"
    # Point the firmware paths at the Ubuntu 22.04 AAVMF files mounted from the PVC
    sed -i "s|/usr/share/AAVMF/AAVMF|/var/run/qemu-efi/AAVMF/AAVMF|" "$tempFile"
    cat "$tempFile"
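To make the effect visible, here is the rewrite applied to a representative <loader> line (the XML snippet is illustrative; the real domain XML carries the same path in its loader and, typically, nvram elements):

# Illustration of the path rewrite on a representative loader line
echo "<loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>" \
  | sed "s|/usr/share/AAVMF/AAVMF|/var/run/qemu-efi/AAVMF/AAVMF|"
# -> <loader readonly='yes' type='pflash'>/var/run/qemu-efi/AAVMF/AAVMF_CODE.fd</loader>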

PS: don't forget to enable the Sidecar feature gate, otherwise the hook sidecar won't be started.
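One way to do that, assuming the KubeVirt CR is named kubevirt in the kubevirt namespace and already has a featureGates list, is a JSON patch:

# Append the Sidecar feature gate to the KubeVirt CR
# (assumes spec.configuration.developerConfiguration.featureGates already exists;
#  otherwise add it via 'kubectl -n kubevirt edit kubevirt kubevirt')
kubectl -n kubevirt patch kubevirt kubevirt --type=json \
  -p '[{"op":"add","path":"/spec/configuration/developerConfiguration/featureGates/-","value":"Sidecar"}]'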

Here is my complete example:

# PV for the EFI files
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube-virt-kubevirt-qemu-uefi
spec:
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  capacity:
    storage: 500Mi
  hostPath:
    path: /data/kube-virt/kubevirt-qemu-uefi
    type: DirectoryOrCreate
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  volumeMode: Filesystem

# PVC for the EFI files
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubevirt-qemu-uefi
  namespace: virtual-machines
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 500Mi
  storageClassName: local-storage
  volumeMode: Filesystem
  volumeName: kube-virt-kubevirt-qemu-uefi

# ConfigMap & Script for the sidecar
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efi-patcher-config-map
  namespace: virtual-machines
data:
  my_script.sh: |
    #!/bin/sh
    # The sidecar shim invokes this hook as: onDefineDomain --vmi <vmi-json> --domain <domain-xml>,
    # so "$4" contains the libvirt domain XML; the patched XML has to be written to stdout.
    tempFile=`mktemp --dry-run`
    echo "$4" > "$tempFile"
    # Point the firmware paths at the Ubuntu 22.04 AAVMF files mounted from the PVC
    sed -i "s|/usr/share/AAVMF/AAVMF|/var/run/qemu-efi/AAVMF/AAVMF|" "$tempFile"
    cat "$tempFile"

# The virtual machine.
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
  namespace: virtual-machines
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
      annotations:
        hooks.kubevirt.io/hookSidecars: >
          [
              {
                  "args": ["--version", "v1alpha3"],
                  "image": "quay.io/kubevirt/sidecar-shim:v1.2.0",
                  "pvc": {"name": "kubevirt-qemu-uefi","volumePath": "/qemu-efi", "sharedComputePath": "/var/run/qemu-efi"},
                  "configMap": {"name": "efi-patcher-config-map", "key": "my_script.sh", "hookPath": "/usr/bin/onDefineDomain"}
              }
          ]
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            # 256M is the minimum for aarch64
            memory: 256M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo:20240323_9bd334045-arm64
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

And you should be able to see the VM booting up:

# start the vm
virtctl start -n virtual-machines testvm
# attach to the console
virtctl console -n virtual-machines testvm
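If the console stays silent, checking the VMI status and the virt-launcher logs is usually the quickest way to see what is going on (the label selector below relies on the kubevirt.io/domain label set in the template above):

# Check that the VMI reaches the Running phase
kubectl -n virtual-machines get vmi testvm

# Inspect the virt-launcher logs if it does not
# (selects the launcher pod via the kubevirt.io/domain label from the VM template)
kubectl -n virtual-machines logs -l kubevirt.io/domain=testvm -c compute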