Each bhyve virtual machine (guest) uses a machine image. A machine image is sometimes called a guest image. Bhyve machine images are quite similar to KVM machine images. In fact, it is sometimes possible to create hybrid images that work with bhyve or KVM.
Bhyve is different from KVM in a few ways that are quite relevant to machine images. In particular:
- Bhyve supports UEFI and emulates legacy BIOS via a Compatibility Support Module (CSM). The CSM is not as full-featured as SeaBIOS used with KVM.
- Network configuration via a per-instance DHCP server is not supported.
The following sections describe the relevance of these differences and how to overcome them in many cases.
The first thing that runs in a bhyve VM is the boot ROM (aka bootrom). Two boot ROMs are supported: `uefi-rom.bin` for UEFI support and `uefi-csm-rom.bin` for BIOS support via a Compatibility Support Module (CSM).
The UEFI and BIOS boot processes are different in important ways. They each put the boot loader at different places and have different partitioning requirements. While the requirements of each do not preclude the use of the other, it is uncommon for an OS installer to configure the system to simultaneously support both standards. Rather, the installer configures the system to support the standard used during installation. This means that instances based on a particular image must use the same boot ROM as was used during image creation.
While being able to support more recent technology is good, there are trade-offs. The BIOS emulation performed by the CSM is not as full-featured as SeaBIOS. The most critical limitation is that the CSM lacks robust support for graphical consoles. This means that machine images that require a graphical console must boot using UEFI (not BIOS) interfaces.
By default, instances are created with BIOS support. This can be overridden by the `bootrom` property in the `vmadm` payload or in the image manifest. Valid values are `uefi` and `bios`. When an image is created from an instance that specifies `bootrom`, the image will also have `bootrom` set to the same value.
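For illustration, a minimal sketch of the relevant portion of a `vmadm` payload that selects the UEFI boot ROM. All other required payload properties (memory, disks, NICs, and so on) are omitted here, so this fragment alone is not a complete payload:

```json
{
  "brand": "bhyve",
  "bootrom": "uefi"
}
```

An image manifest may carry the same `bootrom` property, in which case instances created from the image inherit it.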
While the VM console is not exposed via Triton, it may be used by operators while creating images and/or while debugging problems with instances. In the future, the VM console may also be available via Triton. For these reasons, the console support in images is important.
It is recommended that guest operating systems are configured to prefer the serial console. It is important to use only the first serial console, known in various operating systems as `COM1`, `ttyS0`, `ttya`, and probably other names. For operating systems that use GRUB 2, the following entries should appear in `/etc/default/grub` (or `/etc/default/grub.d/<NN>-<mumble>.cfg` on Ubuntu).
```
GRUB_TERMINAL="serial console"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
```
After modifying this file, it is important to run `grub2-mkconfig [options]` (or `update-grub2` on Ubuntu).
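As a sanity check before regenerating the GRUB configuration, a script can verify that all three serial-console settings are present. The sketch below operates on a temporary sample file; on a real guest, `grub_file` would point at `/etc/default/grub` instead:

```shell
# Sketch: check for the serial-console settings in a GRUB defaults file.
# A temporary sample file stands in for /etc/default/grub here.
grub_file=$(mktemp)
cat > "$grub_file" <<'EOF'
GRUB_TERMINAL="serial console"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
EOF

missing=0
for key in GRUB_TERMINAL GRUB_CMDLINE_LINUX GRUB_SERIAL_COMMAND; do
    grep -q "^${key}=" "$grub_file" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "serial console settings present"
rm -f "$grub_file"
```

After the settings are confirmed, run `grub2-mkconfig` (or `update-grub2` on Ubuntu) as described above.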
Graphical console support is quite limited. Reliance on it is discouraged.
Graphical consoles are accessible via some VNC clients, subject to the limitations of the bhyve VNC server. In particular, the bhyve VNC server does not support passwords, encryption, resizes, some color depths, etc. The negotiation of these features is incompatible with some VNC clients. A VNC client that is known to work is VNC Viewer from RealVNC.
While Windows does have some support for managing the system via a serial port, the installer relies on the graphical console. As such, bhyve Windows images and instances require a graphical console. As discussed in the UEFI and BIOS Emulation section, graphical consoles are not supported with the BIOS emulation offered by the CSM. Thus, Windows needs to use UEFI during image creation. This implies that Windows instances will also require UEFI.
Because most Intel-based operating systems assume that the graphical console will be better supported than a serial console, no special configuration is needed to use the graphical console.
In KVM, the network interfaces may be configured automatically by setting them to use DHCP. The DHCP packets are intercepted by QEMU, which contains an embedded DHCP server. This functionality is sacrificed with bhyve, as a trade-off for higher performance networking.
There are three ways to handle network configuration in bhyve guests. All of these mechanisms are compatible with KVM as well.
cloud-init can handle a variety of configuration tasks within virtual machines. As of cloud-init 18.3, it can reliably use the metadata API to fetch the network configuration and translate it into various formats found on the major Linux distributions.
If the distribution you care about does not have version 18.3 or later of cloud-init, you may build it with these instructions.
Aside from installing the cloud-init package, it is important to also configure it so that it only uses the SmartOS data source. Failure to do so will result in long delays as it tries to probe other data sources that have long timeouts. This configuration is accomplished with the following in `/etc/cloud/cloud.cfg.d/90_smartos.cfg`:

```
datasource_list: [ SmartOS ]
```
In some cases, cloud-init is not a practical consideration. An obvious case is for operating systems like Windows, which cloud-init does not support. Another case may be minimized images that strive to eliminate Python. In these cases, an alternate configuration mechanism is needed.
The expected technique in this case is to use `mdata-get` to retrieve `sdc:nics`, `sdc:routes`, and `sdc:resolvers`. Operating system-specific commands or APIs should then be used to transform this data into a running configuration. An example of this for Windows is `enable_networking.ps1`.
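As a rough illustration, the shell sketch below parses a `sdc:nics` value the way a minimal boot-time script might. The sample JSON, the `net0` interface name, and the `ip` commands in the trailing comments are illustrative assumptions; on a real guest the data would come from `mdata-get sdc:nics` and a proper JSON parser would be preferable:

```shell
# Sketch only: sample data stands in for live `mdata-get sdc:nics` output.
nics='[{"interface":"net0","ip":"10.0.0.5","netmask":"255.255.255.0","gateway":"10.0.0.1"}]'

# Extract the first "ip" and "gateway" values with sed (a real script
# should use a JSON parser rather than regular expressions).
ip=$(printf '%s' "$nics" | sed -n 's/.*"ip":"\([^"]*\)".*/\1/p')
gw=$(printf '%s' "$nics" | sed -n 's/.*"gateway":"\([^"]*\)".*/\1/p')
echo "ip=$ip gw=$gw"

# A boot-time script would then apply the configuration, e.g.:
#   ip addr add "$ip/24" dev net0
#   ip route add default via "$gw"
```

The same pattern applies to `sdc:routes` and `sdc:resolvers`, each of which returns JSON that the guest translates into its native configuration.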
Static configuration is not really practical in the typical Triton data center. Standalone SmartOS installations may find this scheme useful. With this approach, the machine image contains no network configuration. When an instance created from that machine image is booted for the first time, the administrator will log in at the console to perform static network configuration.
A hybrid machine image is a machine image that works with both KVM and bhyve. There are three key considerations as to whether a particular image will be compatible with both bhyve and KVM.
- It must use a BIOS boot ROM. This may be explicit in the image via `bootrom: "bios"` or implicit by leaving `bootrom` unspecified.
- The guest OS must configure networking using the metadata service. This will typically be done via cloud-init or a boot-time script, as described in the Network Configuration section.
- The device enumeration that happened during image creation must be compatible with the device enumeration that will happen in each instance. In particular, the guest OS should be tolerant of the root disk being at a different physical path than it was in a previous boot. Operating systems that use ZFS root are likely to be the most intolerant in this regard.
To determine whether an image is a hybrid image, examine `manifest.requirements.brand`. If it is not set, it is a hybrid image. If it is set, it is not a hybrid image and may only be used with the specified brand. This restriction is enforced by XXX, but not by `vmadm`.
```
# imgadm get ac99517a-72ac-44c0-90e6-c7ce3d944a0a | json manifest.requirements.brand
kvm
```
Linux images created by Joyent starting in October 2018 are hybrid unless otherwise noted. Windows images are not hybrid because KVM requires BIOS boot and bhyve requires UEFI boot.
XXX - The example above shows that our newest Ubuntu image is not a hybrid image. We need to fix `manifest.requirements.brand` on that image and probably others.
The following guest operating systems are supported with bhyve:
- CentOS 6.x beginning with CentOS 6.9. Requires custom build of cloud-init.
- CentOS 7.x beginning with CentOS 7.4. Requires custom build of cloud-init.
- Debian 8. Requires custom build of cloud-init.
- Ubuntu 16.04 beginning with Ubuntu 16.04.5
- Ubuntu 18.04
- Windows Server 2012 R2. Requires sdc-vmtools-windows.
- Windows Server 2016. Requires sdc-vmtools-windows.
Operating systems not listed may "just work." Support for other operating systems will be added as needs dictate.