@usrbinkat
Last active April 15, 2024 06:51
Microk8s + Kubevirt + Multus (Fedora 36)

Kargo 3.0 Bare Metal GitOps Hypervisor

WARNING: Microk8s is currently impacted by BUG #3085; please see the bug workaround instructions below to remediate until a patch is released to the stable channels!

01 Install OS

02 Configure br0
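
  • No commands are included for this step; a minimal sketch using NetworkManager is below. The physical NIC name (eno1) is an assumption; check yours with ip link and substitute it.
# Sketch only: create br0, enslave the physical NIC (eno1 assumed), and use DHCP on the bridge
sudo nmcli connection add type bridge ifname br0 con-name br0
sudo nmcli connection add type bridge-slave ifname eno1 master br0 con-name br0-port-eno1
sudo nmcli connection modify br0 ipv4.method auto ipv6.method auto
sudo nmcli connection up br0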

03 Enable Nested Virtualization & Disable SELinux (not for production)

Warning: Disable SELinux at your own risk!

sudo grubby --update-kernel=ALL --args 'selinux=0 intel_iommu=on iommu=pt rd.driver.pre=vfio-pci pci=realloc'
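  • After the reboot in step 08, you can confirm the kernel picked up these arguments and that SELinux is off:
# Sanity check after reboot
cat /proc/cmdline
getenforce   # expected: Disabled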

04 Enable br_netfilter for Calico

echo "br_netfilter" | sudo tee -a /etc/modules
sudo modprobe br_netfilter
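  • Optionally confirm the module is loaded:
lsmod | grep br_netfilter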

05 Update & Install Packages

sudo dnf update -y
sudo dnf install -y firewalld kernel-modules dnf-automatic kubernetes-client helm snapd dracut-squash squashfs-tools squashfuse fuse jq
sudo ln -s /var/lib/snapd/snap /snap

06 Install Binaries

curl --output /tmp/virtctl -L https://github.com/kubevirt/kubevirt/releases/download/$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | awk -F '[",]' '/tag_name/{print $4}')/virtctl-$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | awk -F '[",]' '/tag_name/{print $4}')-linux-amd64
sudo install -o root -g root -m 0755 /tmp/virtctl /usr/local/bin/virtctl
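  • Quick check that the binary is on the path (the server version will only populate once Kubevirt is installed in step 12):
virtctl version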

07 Disable Firewall

Warning: Disable Firewalld at your own risk!

sudo systemctl disable firewalld
sudo systemctl stop firewalld
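  • If you prefer to keep firewalld running, one hedged alternative is to place the cluster-facing interfaces in the trusted zone instead; the interface names below (br0, vxlan.calico) are assumptions, so verify with ip link:
# Sketch only: trust the bridge and Calico VXLAN interfaces instead of disabling firewalld
sudo firewall-cmd --permanent --zone=trusted --add-interface=br0
sudo firewall-cmd --permanent --zone=trusted --add-interface=vxlan.calico
sudo firewall-cmd --reload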

08 Reboot

sudo shutdown -r now

09 Install Microk8s & Deploy Plugins

# Install Microk8s
# ! Currently installing latest/edge until bug #3085 is resolved
sudo snap install core
sudo snap install microk8s --channel=latest/edge --classic && sleep 15
sudo microk8s enable && sudo microk8s status -w && sleep 3
sudo microk8s start && sudo microk8s status -w && sleep 3
sudo usermod -aG microk8s $USER

# Enable Plugins
sudo microk8s enable dns && sudo microk8s status -w && sleep 3
sudo microk8s enable storage && sudo microk8s status -w && sleep 3
sudo microk8s enable community && sudo microk8s status -w
sudo microk8s enable multus && sudo microk8s status -w

# Setup KUBECONFIG
mkdir -p ~/.kube && sudo microk8s config > ~/.kube/config
sudo chown -f -R $USER ~/.kube && chmod 600 ~/.kube/config
kubectl get po -A

10 Install Containerized Data Importer

  • Not required for ephemeral VMs like VyOS
curl -sL https://github.com/kubevirt/containerized-data-importer/releases/download/$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases/latest | awk -F '[",]' '/tag_name/{print $4}')/cdi-operator.yaml | kubectl apply -f -
curl -sL https://github.com/kubevirt/containerized-data-importer/releases/download/$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases/latest | awk -F '[",]' '/tag_name/{print $4}')/cdi-cr.yaml | kubectl apply -f -
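  • Optionally wait for CDI to report ready before continuing (assumes the default CDI resource name cdi):
kubectl wait cdi cdi --for condition=Available --timeout=300s
kubectl -n cdi get pods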

11 Cert Manager

helm repo add jetstack https://charts.jetstack.io; helm repo update
helm upgrade --install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
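  • Optionally wait for cert-manager to come up before installing Kubevirt:
kubectl -n cert-manager wait deploy --all --for condition=Available --timeout=180s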

12 Install Kubevirt

  • Install Kubevirt
curl -sL https://github.com/kubevirt/kubevirt/releases/download/$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | awk -F '[",]' '/tag_name/{print $4}')/kubevirt-operator.yaml | kubectl apply -f -
curl -sL https://github.com/kubevirt/kubevirt/releases/download/$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | awk -F '[",]' '/tag_name/{print $4}')/kubevirt-cr.yaml | kubectl apply -f -
kubectl -n kubevirt wait kv kubevirt --for condition=Available
  • (Optional) Notable FeatureGates
cat <<EOF | kubectl apply -f -
---
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration: 
      featureGates:
        - LiveMigration
        - DataVolumes
        - ExpandDisks
        - ExperimentalIgnitionSupport
        - Sidecar
        - HostDevices
        - Snapshot
        - HotplugVolumes
        - ExperimentalVirtiofsSupport
        - GPU
EOF
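  • Confirm the feature gates landed on the KubeVirt CR:
kubectl -n kubevirt get kubevirt kubevirt -o jsonpath='{.spec.configuration.developerConfiguration.featureGates}'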

13 Create Kubevirt Resources

  • Create SSH Key Secret
# Create SSH Key Secret
ls ~/.ssh/id_rsa.pub >/dev/null || ssh-keygen
kubectl create secret generic kubevirt-sshpubkey-kc2user \
    --from-file=key1=$HOME/.ssh/id_rsa.pub \
    --dry-run=client -oyaml \
  | kubectl apply -f -

kubectl get secret -oyaml kubevirt-sshpubkey-kc2user | awk '/key1:/{print $2}' | base64 -d
  • Create VM Network Attachment Definition
cat <<EOF | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: nadbr0
spec:
  config: '{"cniVersion":"0.3.1","name":"br0","plugins":[{"type":"bridge","bridge":"br0","ipam":{}},{"type":"tuning"}]}'
EOF
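  • Confirm Multus registered the attachment definition:
kubectl get network-attachment-definitions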

14 Create VMs

  • Ubuntu 22.04 Jammy Minimal

kubectl apply -f https://gist.githubusercontent.com/usrbinkat/c8b56fb703328147c796bc4356b029b5/raw/86747680e7f8b3cb641c5464d9d4cd083bb29596/ubuntu-jammy-minimal.yaml

  • Ubuntu 22.04 Jammy with xRDP Ubuntu Desktop

kubectl apply -f https://gist.githubusercontent.com/usrbinkat/c8b56fb703328147c796bc4356b029b5/raw/2cbb9883867c6ad02dd72fdbb1b10008cec1a21f/ubuntu-jammy.yaml

15 Wait for image download and CDI import

16 Run commands to find the VM IP and access the serial console (ttyS0)

kubectl get vmi
virtctl console ubuntu-rdp

17 SSH to the VM at its IP address with the SSH key used to create the secret
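
  • For example (192.168.1.50 is a placeholder; use the address reported by kubectl get vmi):
ssh -i ~/.ssh/id_rsa kc2user@192.168.1.50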

18 Connect to the VM's RDP session at its IP address with the credentials:

kc2user:kc2user
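
  • Any RDP client should work; for example with xfreerdp (192.168.1.50 is a placeholder; use the VM's address from kubectl get vmi):
xfreerdp /u:kc2user /p:kc2user /v:192.168.1.50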

Bug #3085 Workaround

  • Perform on each node in the cluster, setting the appropriate $NODE_NAME value for that node
# Enable br_netfilter module
echo "br_netfilter" | sudo tee -a /etc/modules
echo "br_netfilter" | sudo tee -a /etc/modules-load.d/snap.microk8s.conf
sudo modprobe br_netfilter

# Update Microk8s
sudo microk8s stop
sudo snap refresh microk8s --channel=latest/edge
sudo microk8s start

# Reboot Node
export NODE_NAME=node1.optiplex.home.arpa
sudo microk8s kubectl cordon $NODE_NAME
sudo microk8s kubectl drain $NODE_NAME --ignore-daemonsets
sudo shutdown -r now

# Uncordon node
export NODE_NAME=node1.optiplex.home.arpa
sudo microk8s kubectl uncordon $NODE_NAME

ubuntu-jammy-minimal.yaml

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: jammy
  labels:
    os/flavor: ubuntu
    os/release: jammy
spec:
  running: true
  template:
    spec:
      hostname: jammy
      domain:
        clock:
          utc: {}
          timer: {}
        cpu:
          threads: 2
          model: host-passthrough
        devices:
          rng: {}
          autoattachSerialConsole: true
          autoattachGraphicsDevice: false
          autoattachPodInterface: false
          disks:
            - name: jammy-disk-vda-root
              bootOrder: 1
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: enp1s0
              model: virtio
              bridge: {}
        features:
          acpi:
            enabled: true
          smm:
            enabled: true
        firmware:
          bootloader:
            efi:
              secureBoot: true
        machine:
          type: q35
        resources:
          requests:
            memory: 2G
            devices.kubevirt.io/kvm: "1"
      terminationGracePeriodSeconds: 0
      networks:
        - name: enp1s0
          multus:
            networkName: nadbr0
      accessCredentials:
        - sshPublicKey:
            source:
              secret:
                secretName: kubevirt-sshpubkey-kc2user
            propagationMethod:
              qemuGuestAgent:
                users:
                  - "kc2user"
      volumes:
        - name: jammy-disk-vda-root
          containerDisk:
            image: docker.io/containercraft/ubuntu:22.04
            imagePullPolicy: IfNotPresent
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                enp1s0:
                  dhcp4: true
                  dhcp6: true
                  dhcp-identifier: mac
            userData: |
              #cloud-config
              ssh_pwauth: true
              chpasswd:
                list: |
                  kc2user:kc2user
                expire: False
              users:
                - name: kc2user
                  shell: /bin/bash
                  sudo: ['ALL=(ALL) NOPASSWD:ALL']
                  groups: sudo,wheel,lxd,microk8s,xrdp,docker,ssl-cert
              package_upgrade: true
              packages:
                - docker.io
              runcmd:
                - "snap remove lxd"
                - "ip a s"

ubuntu-jammy.yaml

---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: ubuntu-rdp
  labels:
    app: rdp
    flavor: ubuntu
    kubernetes.io/flavor: c2m2
spec:
  running: true
  dataVolumeTemplates:
    - metadata:
        name: ubuntu-rdp-volume-vda-root
      spec:
        source:
          registry:
            url: docker://docker.io/containercraft/ubuntu:22.04
            imagePullPolicy: Always
        pvc:
          resources:
            requests:
              storage: 42G
          accessModes:
            - ReadWriteOnce
          storageClassName: microk8s-hostpath
          persistentVolumeReclaimPolicy: Delete
          volumeMode: Block
  template:
    spec:
      hostname: ubuntu-rdp
      domain:
        clock:
          utc: {}
          timer: {}
        devices:
          disks:
            - name: ubuntu-rdp-disk-vda-root
              bootOrder: 1
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: enp1s0
              model: virtio
              bridge: {}
        features:
          acpi:
            enabled: true
          smm:
            enabled: true
        firmware:
          bootloader:
            efi:
              secureBoot: true
      terminationGracePeriodSeconds: 0
      networks:
        - name: enp1s0
          multus:
            networkName: nadbr0
      accessCredentials:
        - sshPublicKey:
            source:
              secret:
                secretName: kubevirt-sshpubkey-kc2user
            propagationMethod:
              qemuGuestAgent:
                users:
                  - "kc2user"
      volumes:
        - name: ubuntu-rdp-disk-vda-root
          dataVolume:
            name: ubuntu-rdp-volume-vda-root
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                enp1s0:
                  dhcp4: true
                  dhcp6: true
                  dhcp-identifier: mac
            userData: |
              #cloud-config
              ssh_pwauth: true
              chpasswd:
                list: |
                  kc2user:kc2user
                expire: False
              users:
                - name: kc2user
                  shell: /bin/bash
                  sudo: ['ALL=(ALL) NOPASSWD:ALL']
                  groups: sudo,wheel,lxd,microk8s,xrdp,docker,ssl-cert
              write_files:
                - encoding: b64
                  content: W1JlbW90ZSBBZG1pbiBTU0ggYWNjZXNzXSAKSWRlbnRpdHk9dW5peC1ncm91cDp3aGVlbApBY3Rpb249KgpSZXN1bHRBbnk9eWVzClJlc3VsdEluYWN0aXZlPXllcwpSZXN1bHRBY3RpdmU9eWVzCg==
                  owner: root:root
                  path: /etc/polkit-1/localauthority/50-local.d/46-user-admin.pkla
                  permissions: '0644'
                - encoding: b64
                  content: cG9sa2l0LmFkZFJ1bGUoZnVuY3Rpb24oYWN0aW9uLCBzdWJqZWN0KSB7CiBpZiAoKGFjdGlvbi5pZCA9PSAib3JnLmZyZWVkZXNrdG9wLmNvbG9yLW1hbmFnZXIuY3JlYXRlLWRldmljZSIgfHwKIGFjdGlvbi5pZCA9PSAib3JnLmZyZWVkZXNrdG9wLmNvbG9yLW1hbmFnZXIuY3JlYXRlLXByb2ZpbGUiIHx8CiBhY3Rpb24uaWQgPT0gIm9yZy5mcmVlZGVza3RvcC5jb2xvci1tYW5hZ2VyLmRlbGV0ZS1kZXZpY2UiIHx8CiBhY3Rpb24uaWQgPT0gIm9yZy5mcmVlZGVza3RvcC5jb2xvci1tYW5hZ2VyLmRlbGV0ZS1wcm9maWxlIiB8fAogYWN0aW9uLmlkID09ICJvcmcuZnJlZWRlc2t0b3AuY29sb3ItbWFuYWdlci5tb2RpZnktZGV2aWNlIiB8fAogYWN0aW9uLmlkID09ICJvcmcuZnJlZWRlc2t0b3AuY29sb3ItbWFuYWdlci5tb2RpZnktcHJvZmlsZSIpICYmCiBzdWJqZWN0LmlzSW5Hcm91cCgie3VzZXJzfSIpKSB7CiByZXR1cm4gcG9sa2l0LlJlc3VsdC5ZRVM7CiB9Cn0pOwo=
                  owner: root:root
                  path: /etc/polkit-1/localauthority.conf.d/02-allow-colord.conf
                  permissions: '0644'
              package_upgrade: true
              packages:
                - docker.io
                - policykit-1-gnome
                - ubuntu-desktop
                - firefox
                - xrdp
              runcmd:
                - "snap remove lxd"
                - "apt-get remove -y --allow-remove-essential apport apport-gtk python3-apport python3-problem-report shim-signed apport-symptoms python3-systemd ansible"
                - "su -l kc2user -c 'gsettings set org.gnome.desktop.interface gtk-theme Yaru-dark'"
                - "ip a s"

Optional: MicroK8s Registry (host the VyOS image locally)

sudo microk8s enable registry

sudo mkdir -p /var/snap/microk8s/current/args/certs.d/192.168.1.2\:32000
cat <<EOF | sudo tee /var/snap/microk8s/current/args/certs.d/192.168.1.2\:32000/hosts.toml
server = "http://192.168.1.2:32000"
[host."http://192.168.1.2:32000"]
  capabilities = ["pull", "resolve"]
EOF

skopeo copy --dest-no-creds --dest-tls-verify=false docker://quay.io/containercraft/vyos:1.4-rolling docker://192.168.1.2:32000/containercraft/vyos:1.4-rolling
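  • The MicroK8s registry serves the standard Docker Registry v2 API, so you can confirm the image landed (the 192.168.1.2 address matches the example above):
curl -s http://192.168.1.2:32000/v2/_catalog
curl -s http://192.168.1.2:32000/v2/containercraft/vyos/tags/list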
@echowings

I followed this article to install microk8s and kubevirt on Fedora 37, Ubuntu 22.04, and Debian 11. The error is still there. microk8s and kubevirt aren't compatible with each other, and the kubevirt developers haven't fixed the bug. Very bad experience implementing microk8s with kubevirt.

{"component":"virt-launcher","level":"info","msg":"Collected all requested hook sidecar sockets","pos":"manager.go:86","timestamp":"2022-11-21T07:38:12.277950Z"}
{"component":"virt-launcher","level":"info","msg":"Sorted all collected sidecar sockets per hook point based on their priority and name: map[]","pos":"manager.go:89","timestamp":"2022-11-21T07:38:12.278053Z"}
{"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon: qemu:///system","pos":"libvirt.go:497","timestamp":"2022-11-21T07:38:12.279854Z"}
{"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon failed: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory')","pos":"libvirt.go:505","timestamp":"2022-11-21T07:38:12.280670Z"}
{"component":"virt-launcher","level":"info","msg":"libvirt version: 8.0.0, package: 2.module_el8.6.0+1087+b42c8331 (CentOS Buildsys \u003cbugs@centos.org\u003e, 2022-02-08-22:20:52, )","subcomponent":"libvirt","thread":"39","timestamp":"2022-11-21T07:38:12.311000Z"}
{"component":"virt-launcher","level":"info","msg":"hostname: testvm","subcomponent":"libvirt","thread":"39","timestamp":"2022-11-21T07:38:12.311000Z"}
{"component":"virt-launcher","level":"error","msg":"internal error: Child process (dmidecode -q -t 0,1,2,3,4,11,17) unexpected exit status 1: /dev/mem: No such file or directory","pos":"virCommandWait:2752","subcomponent":"libvirt","thread":"39","timestamp":"2022-11-21T07:38:12.311000Z"}
{"component":"virt-launcher","level":"info","msg":"Connected to libvirt daemon","pos":"libvirt.go:513","timestamp":"2022-11-21T07:38:12.782986Z"}
{"component":"virt-launcher","level":"info","msg":"Registered libvirt event notify callback","pos":"client.go:510","timestamp":"2022-11-21T07:38:12.787399Z"}
{"component":"virt-launcher","level":"info","msg":"Marked as ready","pos":"virt-launcher.go:74","timestamp":"2022-11-21T07:38:12.787903Z"}

@usrbinkat (Author)

@echowings IDK if this still matters to you, but I just ran through the procedure on a fresh Fedora 38 box using the latest versions of everything, and following a plain copy/paste run-through of the instructions I got a working kubevirt running multiple VMs.

@echowings

> @echowings IDK if this still matters to you, but I just ran through the procedure on a fresh Fedora 38 box using the latest versions of everything, and following a plain copy/paste run-through of the instructions I got a working kubevirt running multiple VMs.

Got it, I'll try it with Fedora 38 again.

@baycarbone

fyi, this guide works fine with microk8s 1.28/stable and kubevirt 1.2.0 after applying this: kubevirt/kubevirt#8387 (comment)

@usrbinkat (Author)

@baycarbone and others, this POC is now being actively developed as an automated Kubevirt PaaS.

You can find more here: https://github.com/containercraft/kargo

@usrbinkat (Author)

Adding the kubevirt/kubevirt#8387 commenter's solution here for posterity

apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration: {}
  customizeComponents:
    patches:
      - resourceType: DaemonSet
        resourceName: virt-handler
        patch: '{"spec": {"template": {"spec": {
            "volumes": [
              {"name": "kubelet-pods", "hostPath": {"path": "/var/snap/microk8s/common/var/lib/kubelet/pods"}},
              {"name": "kubelet-pods-shortened", "hostPath": {"path": "/var/snap/microk8s/common/var/lib/kubelet/pods"}},
              {"name": "device-plugin", "hostPath": {"path": "/var/snap/microk8s/common/var/lib/kubelet/device-plugins"}}
            ],
            "containers": [{
              "name": "virt-handler",
              "volumeMounts": [
                {"name": "kubelet-pods", "mountPath": "/var/snap/microk8s/common/var/lib/kubelet/pods", "mountPropagation": "Bidirectional"},
                {"name": "device-plugin", "mountPath": "/var/snap/microk8s/common/var/lib/kubelet/device-plugins"}
              ]
            }]
          }}}}'
        type: strategic
    flags:
      handler:
        kubelet-pods-dir: /var/snap/microk8s/common/var/lib/kubelet/pods
        kubelet-root: /var/snap/microk8s/common/var/lib/kubelet
