Configure script: configure.sh
git clone https://github.com/intel/nemu
cd nemu
git checkout -b topic/virt-x86 origin/topic/virt-x86
mkdir build-x86-64
cd build-x86-64
../configure.sh
make -j `nproc`
SRCDIR=/your/path/to/nemu/
make docker-image-debian-arm64-cross
docker run --rm -it -v $SRCDIR:$SRCDIR -w $SRCDIR -u $(id -u) qemu:debian-arm64-cross
We can test the NEMU virt machine type either through direct kernel boot (a.k.a. nofw) or through a regular cloud or server image, as long as it supports EFI boot.
Add the following to the QEMU command line:
-device sysbus-debugcon,iobase=0x402,chardev=debugcon -chardev file,path=/tmp/debug-log,id=debugcon
I use a 4.18.0 kernel with a custom .config.
See the busybox initramfs build instructions, then configure the above kernel build (e.g. via CONFIG_INITRAMFS_SOURCE) to point to your local rootfs.
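A minimal initramfs build can be sketched as follows. This is an illustrative sketch, not the project's exact instructions: the busybox binary location and the init script contents are assumptions, and you would point CONFIG_INITRAMFS_SOURCE at the resulting rootfs directory (or pass the archive via -initrd).

```shell
#!/bin/bash
# Sketch: pack a minimal busybox rootfs into an initramfs image.
# Assumes a statically linked busybox binary at ./busybox (illustrative path).
set -e
ROOTFS=rootfs
mkdir -p "$ROOTFS"/bin "$ROOTFS"/proc "$ROOTFS"/sys "$ROOTFS"/dev
# cp busybox "$ROOTFS/bin/busybox"    # your static busybox build goes here

# /init is the first program the kernel executes when booting an initramfs
cat > "$ROOTFS/init" <<'EOF'
#!/bin/busybox sh
/bin/busybox mount -t proc none /proc
/bin/busybox mount -t sysfs none /sys
exec /bin/busybox sh
EOF
chmod +x "$ROOTFS/init"

# Pack into a gzipped newc cpio archive, the format the kernel expects
(cd "$ROOTFS" && find . | cpio -o -H newc) | gzip > initramfs.cpio.gz
```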
#!/bin/bash
# -*- mode: shell-script; indent-tabs-mode: nil; sh-basic-offset: 4; -*-
# ex: ts=8 sw=4 sts=4 et filetype=sh
sudo /home/samuel/devlp/hypervisor/nemu/build-x86-64/x86_64-softmmu/qemu-system-x86_64 \
-nographic \
-nodefaults \
-L . \
-net none \
-machine virt,accel=kvm,kernel_irqchip,nofw \
-smp sockets=1,cpus=4,cores=2,maxcpus=8 -cpu host \
-m 1G,slots=3,maxmem=4G \
-kernel /home/samuel/devlp/kernels/nfc/arch/x86/boot/compressed/vmlinux.bin -append 'console=hvc0 single iommu=false root=/dev/ram0' \
-device virtio-serial-pci,id=virtio-serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev stdio,id=charconsole0 \
-monitor telnet:127.0.0.1:55555,server,nowait
Warning: Most cloud or server image kernels do not have HW-reduced ACPI enabled, so they won't support CPU and memory hotplug with our HW-reduced ACPI virt machine type.
We have tested the latest Clear Linux cloud image with our custom OVMF binary.
#!/bin/bash
# -*- mode: shell-script; indent-tabs-mode: nil; sh-basic-offset: 4; -*-
# ex: ts=8 sw=4 sts=4 et filetype=sh
sudo /home/samuel/devlp/hypervisor/nemu/build-x86_64/x86_64_virt-softmmu/qemu-system-x86_64_virt \
-bios $HOME/devlp/hypervisor/edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF.fd \
-nographic \
-nodefaults \
-L . \
-net none \
-machine virt,accel=kvm,kernel_irqchip \
-smp sockets=1,cpus=4,cores=2,maxcpus=8 -cpu host \
-m 1G,slots=3,maxmem=4G \
-kernel /home/samuel/devlp/kernels/linux-nemu/arch/x86/boot/bzImage -append 'console=hvc0 iommu=false root=/dev/vda3 rw rootfstype=ext4 data=ordered rcupdate.rcu_expedited=1 tsc=reliable no_timer_check reboot=t noapictimer' \
-device virtio-serial-pci,id=virtio-serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev stdio,id=charconsole0 \
-drive file=/home/samuel/devlp/hypervisor/nemu/clear-cloud/clear-24570-kvm.img,if=virtio,format=raw \
-monitor telnet:127.0.0.1:55555,server,nowait
Hotplug is done through the QEMU monitor socket (-monitor telnet:127.0.0.1:55555,server,nowait from the above command lines).
The socket is reachable with a telnet client on port 55555:
# telnet 127.0.0.1 55555
Plug
(qemu) device_add host-x86_64-cpu,id=core4,socket-id=1,core-id=1,thread-id=0
Unplug
(qemu) device_del core4
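The guest kernel may or may not online a hot-added CPU automatically; a guest-side check could look like the sketch below. The cpu4 path is illustrative (it depends on which logical CPU number the new core gets), not part of the original instructions.

```shell
# Guest side: compare possible vs online CPUs after a device_add
cat /sys/devices/system/cpu/possible
cat /sys/devices/system/cpu/online
# If the new CPU shows up offline, bring it up manually (cpu4 illustrative):
# echo 1 > /sys/devices/system/cpu/cpu4/online
grep -c ^processor /proc/cpuinfo    # count of currently online CPUs
```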
Plug
(qemu) object_add memory-backend-ram,id=mem1,size=1G
(qemu) device_add pc-dimm,id=dimm1,memdev=mem1
Unplug
(qemu) device_del dimm1
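Similarly for memory, a hot-added pc-dimm only grows MemTotal once its memory blocks are online; the block number below is illustrative and the echo is commented out since it is guest- and size-specific.

```shell
# Guest side: check memory after plugging a pc-dimm
grep MemTotal /proc/meminfo
# Hot-added memory appears as blocks under /sys/devices/system/memory;
# online any block still marked offline (memory32 illustrative):
# echo online > /sys/devices/system/memory/memory32/state
```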
Plug
(qemu) device_add virtio-net-pci,id=virtio-net1
Unplug
(qemu) device_del virtio-net1
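Inside the guest, the hot-plugged NIC should show up as a new network interface once the virtio-net driver binds to it; a quick check (commands assumed available in a typical guest, not from the original text):

```shell
# Guest side: a new interface should appear after the device_add
ls /sys/class/net
# It should also be visible on the PCI bus:
# lspci | grep -i virtio    # requires pciutils in the guest
```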
# cd tools/CI/
# sudo -E ./minimal_ci.sh -cloudinit ~/devlp/hypervisor/nemu/tools/CI/cloud-init/ -hypervisor ~/devlp/hypervisor/nemu/build-x86-64/x86_64-softmmu/qemu-system-x86_64 -workloads ~/workloads/
We'll take the DSDT as an example, but this can be applied to all ACPI tables:
- Install the acpica-tools package from your distro
- From the guest command line:
cat /sys/firmware/acpi/tables/DSDT > dsdt.dat
- Copy dsdt.dat back to the host, for example:
scp dsdt.dat hostusername@host-tun-ip:~/
- From the host, disassemble the table:
iasl -d dsdt.dat
- Recompile the (possibly edited) table back into AML:
iasl -tc dsdt.dsl