virglrenderer is a library that gives emulators the necessary tools to implement a virtio-gpu device, in particular one with 3D support. See capability sets below for a summary of the APIs virglrenderer can implement for the guest.
It directly implements the logic behind some 3D commands like `GET_CAPSET`, `CTX_CREATE`, `CTX_SUBMIT_3D`, and `CREATE_RESOURCE_BLOB`, and though it closely follows the semantics of virtio-gpu in most cases, it is in theory independent of virtio or any other transport.
Its main user is qemu, but there also appears to be a standalone virtio-gpu vhost-user implementation in `qemu/contrib` that uses virglrenderer.
virglrenderer's public header is at `src/virglrenderer.h`. This document attempts to outline the public API and contract, but you should treat the header as the authoritative source of truth: the API is still evolving, so semantics described here could be slightly wrong or out of date. A small fraction of the APIs described below are labeled unstable and subject to break; you'll need to define `VIRGL_RENDERER_UNSTABLE_APIS` before including `virglrenderer.h` to access them.
The public API is not thread safe and must be called from a single thread, and callbacks (with the exception of async fence callbacks, described below) will not be invoked from other threads.
virglrenderer uses global state, so only one instance can operate in a process. Before calling any other method, initialize the renderer by calling `virgl_renderer_init` with the following parameters:

- A `void *` cookie that will be passed to callbacks.
- A set of global flags, discussed throughout the document. Most of them enable or disable features exposed to the guest (see capability sets), others tweak synchronization (see fencing), and others manage access to the underlying GPU resources (see GPU resource management).
- A set of optional callbacks. Some of them allow virglrenderer to notify you of events (see fencing), and others allow you to supply resources to virglrenderer yourself, such as setting up the GL context or opening a DRM node (see GPU resource management). The struct is versioned to allow new fields to be added without breaking ABI.
Like most functions, the return value is 0 on success and an errno code on failure. Example of use:
```c
void *cookie = my_state;
int virgl_flags = VIRGL_RENDERER_USE_EGL | VIRGL_RENDERER_USE_SURFACELESS;
struct virgl_renderer_callbacks virgl_cbs;
int ret;

memset(&virgl_cbs, 0, sizeof(virgl_cbs));
virgl_cbs.version = VIRGL_RENDERER_CALLBACKS_VERSION;
// TODO: set callbacks here

ret = virgl_renderer_init(cookie, virgl_flags, &virgl_cbs);
if (ret) {
    fprintf(stderr, "failed to initialize virgl renderer: %s\n", strerror(ret));
    return 1;
}
```
You can now call the rest of the methods as needed. It's important to call the non-blocking method `virgl_renderer_poll()` periodically, in order to carry out work such as checking for completed fences (see the next section). When the virtual GPU stops operating, call `virgl_renderer_cleanup(NULL)` to clean up resources (its argument is unused). To reset to a clean post-initialization state (for example, when the device is reset by the guest), use `virgl_renderer_reset()`.
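For illustration, a minimal device loop could look like the sketch below; `device_running` and `guest_requested_reset` are hypothetical placeholders for your emulator's own state:

```c
// Hypothetical emulator main loop using virglrenderer.
while (device_running) {
    // ... handle virtio-gpu commands from the guest here ...

    // Let virglrenderer check for completed fences and do other periodic work.
    virgl_renderer_poll();

    if (guest_requested_reset) {
        // Return to a clean post-initialization state.
        virgl_renderer_reset();
    }
}

// The virtual GPU is going away: release all resources.
virgl_renderer_cleanup(NULL);
```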
As usual for 3D APIs, virglrenderer fulfills commands asynchronously. Methods such as `virgl_renderer_submit_cmd()` only perform some validation, queue the command for execution, and return immediately. To facilitate synchronization, virglrenderer exposes an API to create fences, i.e. requests to be notified when all GPU operations (up to the point at which the fence was created) have been carried out.
When initializing virglrenderer, supply a `write_fence` callback. Then, to request a fence, call `virgl_renderer_create_fence(fence_id, ctx_id)`, where `fence_id` is a numeric identifier you can freely choose; it will be passed back to the `write_fence` callback to allow discerning which of the pending fences is being notified. For historic / compatibility purposes, the second argument is unused and the first argument is cast to `uint32_t`, which is what the callback receives:
```c
void write_fence(void *cookie, uint32_t fence);
```
Fences are notified in the same order they were created. Note, however, that if several pending fences have been fulfilled by the time virglrenderer checks, the callback is only invoked once, with the identifier of the last of these fences in creation order.
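As a concrete sketch (the callback name, `struct my_state` and its fields are made up for illustration), wiring up the callback and requesting a fence could look like this:

```c
// Hypothetical callback: invoked once all work up to `fence` has completed.
static void my_write_fence(void *cookie, uint32_t fence)
{
    struct my_state *state = cookie;
    // Every fence with an identifier <= `fence` (in creation order) is done.
    state->last_completed_fence = fence;
}

/* during initialization */
virgl_cbs.write_fence = my_write_fence;

/* later, after queueing some commands */
uint32_t fence_id = state->next_fence_id++;
virgl_renderer_create_fence(fence_id, 0 /* ctx_id, unused */);
```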
The previous API registers global fences, i.e. fences that cover all GPU commands in flight. The `virgl_renderer_context_create_fence()` method allows registering fences on a specific context (`ctx_id`) and on a specific timeline within that context (`ring_idx`). Fences created this way are notified through a different `write_context_fence` callback, which also gets passed these two parameters verbatim:
```c
void write_context_fence(void *cookie, uint32_t ctx_id, uint32_t ring_idx, uint64_t fence_id);
```
Commands can then be submitted on a specific timeline (identified by a `ctx_id` / `ring_idx` tuple), and this allows creating fences that only wait for a subset of the in-flight commands. Fences are only delivered in order relative to other fences on the same timeline.
Being a newer API, `virgl_renderer_context_create_fence()` has an additional `flags` parameter, which currently only accepts `VIRGL_RENDERER_FENCE_FLAG_MERGEABLE`. This flag is implied in the API for global fences; as explained above, it allows notifications for multiple fences (on the same timeline) to be coalesced into a single notification for the most recent fence.
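A sketch of per-context fencing follows; the parameter order of `virgl_renderer_context_create_fence()` is assumed to be `(ctx_id, flags, ring_idx, fence_id)`, so double-check it against `virglrenderer.h`:

```c
// Hypothetical callback for per-context fences.
static void my_write_context_fence(void *cookie, uint32_t ctx_id,
                                   uint32_t ring_idx, uint64_t fence_id)
{
    // Ordering is only guaranteed relative to fences on the same
    // (ctx_id, ring_idx) timeline.
    printf("context %u, ring %u: fence %llu signaled\n",
           ctx_id, ring_idx, (unsigned long long)fence_id);
}

/* during initialization */
virgl_cbs.write_context_fence = my_write_context_fence;

/* later: fence on context 1, timeline (ring) 0 */
virgl_renderer_context_create_fence(1 /* ctx_id */,
                                    VIRGL_RENDERER_FENCE_FLAG_MERGEABLE,
                                    0 /* ring_idx */,
                                    42 /* fence_id */);
```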
As explained before, virglrenderer requires `virgl_renderer_poll()` to be called often to carry out periodic work. Part of that work is (1) checking for fulfilled fences and (2) notifying them by invoking the supplied callback.
Passing the `VIRGL_RENDERER_THREAD_SYNC` flag causes some of the work of `virgl_renderer_poll()`, in particular checking for fulfilled fences, to be offloaded to a separate (persistent) thread that virglrenderer spawns during initialization. Note that the second part (dispatching callbacks) is still done by `virgl_renderer_poll()`, since callbacks must be invoked from the calling thread. Setting the environment variable `VIRGL_DISABLE_MT` causes the flag to be ignored. To check if thread synchronization is in effect, verify that `virgl_renderer_get_poll_fd()` returns a non-negative result.
In addition to offloading work, thread synchronization increases efficiency, because the worker thread can use blocking calls to the underlying APIs to wait for fences rather than polling periodically. It also allows you to wait for a fence notification using the standard `poll` or `select` APIs: `virgl_renderer_get_poll_fd()` returns an FD you can poll to be notified when virglrenderer could have work to do, meaning you should call `virgl_renderer_poll()`. Ownership of the returned FD remains with virglrenderer, which will close it when cleaning up.
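For example, a sketch of integrating that FD into a standard `poll()` loop (assuming `VIRGL_RENDERER_THREAD_SYNC` was passed at initialization):

```c
#include <poll.h>

int fd = virgl_renderer_get_poll_fd();
if (fd >= 0) {
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    // Block until virglrenderer signals that it may have work to do,
    // e.g. a fence was fulfilled by the GPU.
    if (poll(&pfd, 1, -1) > 0)
        virgl_renderer_poll();   // dispatch pending callbacks on this thread
}
```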
Note that even when thread synchronization is in effect, `virgl_renderer_poll()` still has to check for notifications from the thread and (if needed) dispatch the necessary callbacks. The `VIRGL_RENDERER_ASYNC_FENCE_CB` initialization flag causes callbacks to be dispatched directly from the worker thread, which in most cases allows notifications to be delivered without delay. Even in this case you must still call `virgl_renderer_poll()` periodically, as work may still need to be performed on the main thread.
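Since the callback may now run on virglrenderer's worker thread, your handler needs its own synchronization. A minimal sketch, revisiting the hypothetical `write_fence` handler from before with a mutex:

```c
#include <pthread.h>

static pthread_mutex_t fence_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t last_completed_fence;

// With VIRGL_RENDERER_ASYNC_FENCE_CB this may be invoked from
// virglrenderer's worker thread rather than the main thread.
static void my_write_fence(void *cookie, uint32_t fence)
{
    pthread_mutex_lock(&fence_lock);
    last_completed_fence = fence;
    pthread_mutex_unlock(&fence_lock);
    // ... wake up the main loop if needed (eventfd, pipe, condition variable) ...
}
```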
A single virtio-gpu device can offer support for more than one GPU API (see `VIRTIO_GPU_CMD_GET_CAPSET_INFO` for officially allocated capset IDs), and through the `VIRTIO_GPU_F_CONTEXT_INIT` feature it allows contexts of different APIs to be created and used concurrently. Though virglrenderer was originally created to implement the VirGL interface, it now offers other APIs if support for them is compiled in at build time and the appropriate flags are passed to `virgl_renderer_init()`.
Supported capabilities to date are:

- VirGL: Exposes OpenGL through encoding / semantics modelled around Gallium driver commands, making it easier to implement a Mesa driver for the guest (which is the VirGL Mesa driver). This is enabled by default and can't be left out of the build, but it can be disabled at runtime through the `VIRGL_RENDERER_NO_VIRGL` flag. It's exposed through the `VIRTIO_GPU_CAPSET_VIRGL` and `VIRTIO_GPU_CAPSET_VIRGL2` virtio-gpu capability sets.
- Video: virglrenderer can also expose accelerated video decoding / encoding through VA-API. This isn't a separate virtio-gpu capability set; instead it's carried as an optional part of VirGL (support for it is advertised inside the VirGL capability blob, see below). This is currently an unstable API. Pass `-Dvideo=enabled` when building, and the `VIRGL_RENDERER_USE_VIDEO` flag on initialization. See this comment for more info about the general architecture.
- Venus: Exposes the Vulkan API. Pass `-Dvenus=enabled` (`venus-validate` is also relevant for development) when building, and the `VIRGL_RENDERER_VENUS` flag on initialization. It's exposed through the `VIRTIO_GPU_CAPSET_VENUS` virtio-gpu capability set.
- DRM: Exposes low-level DRM operations directly. Pass `-Ddrm=enabled` when building, and the `VIRGL_RENDERER_DRM` flag on initialization. It's exposed through the `VIRGL_RENDERER_CAPSET_DRM` (not yet released in the virtio spec) virtio-gpu capability set. This is designed to host many platforms, but right now it only has experimental support for MSM chips (pass `-Ddrm-msm-experimental=enabled`).
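For instance, an emulator that wants to expose both VirGL and Venus could combine initialization flags along these lines (a sketch; which flags make sense depends on how virglrenderer was built):

```c
int virgl_flags = VIRGL_RENDERER_USE_EGL |
                  VIRGL_RENDERER_USE_SURFACELESS |
                  VIRGL_RENDERER_VENUS;   /* requires -Dvenus=enabled at build time */
```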
To discover the capabilities of a virtio-gpu device, the guest first asks for the capability sets it supports through a series of `VIRTIO_GPU_CMD_GET_CAPSET_INFO` commands. virglrenderer exposes this info to the emulator through the `virgl_renderer_get_cap_set()` method.
After enumerating the supported capability sets, the guest fetches each of them through the `VIRTIO_GPU_CMD_GET_CAPSET` command. The result is a blob of data (encoded in capset-specific ways) describing the precise capabilities supported by that API. virglrenderer exposes this info to the emulator through the `virgl_renderer_fill_caps()` method. For more details, see the virtio spec.
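For example, answering the guest's queries for the VirGL v2 capset could look roughly like the sketch below (the capset ID constant is assumed to be the one declared in `virglrenderer.h`; error handling and virtio plumbing are omitted):

```c
uint32_t max_ver = 0, max_size = 0;

// For VIRTIO_GPU_CMD_GET_CAPSET_INFO: report the newest supported version
// and the size of the capability blob for this capset.
virgl_renderer_get_cap_set(VIRGL_RENDERER_CAPSET_VIRGL2, &max_ver, &max_size);

// For VIRTIO_GPU_CMD_GET_CAPSET: fill the blob the guest will read.
void *caps = calloc(1, max_size);
virgl_renderer_fill_caps(VIRGL_RENDERER_CAPSET_VIRGL2, max_ver, caps);
// ... copy `caps` into the guest-visible response, then free(caps) ...
```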
The capability set also defines the encoding of the opaque command stream that flows from guest to host through `VIRTIO_GPU_CMD_SUBMIT_3D`.
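On the host side, the emulator simply hands those opaque buffers to virglrenderer. A sketch of forwarding a `VIRTIO_GPU_CMD_SUBMIT_3D` payload, where `cmd_buf`, `cmd_size` and `ctx_id` are assumed to come from your virtqueue handling:

```c
// The command stream is opaque to the emulator: its encoding is defined
// by the capset the guest context was created for.
int ret = virgl_renderer_submit_cmd(cmd_buf, ctx_id, cmd_size / 4 /* dwords */);
if (ret)
    fprintf(stderr, "command submission failed: %d\n", ret);
```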
VirGL has several modes of operation, which govern how it manages resources (context setup, buffer allocation, etc.):

- Fully virglrenderer managed: This uses one of several "winsys" backends. The backend choice and its operation can be customized by both build system flags and initialization flags. `-Dplatforms=foo,bar` allows choosing the platforms (`glx`, `egl` or `auto`) used to obtain OpenGL access, and a specific one can be selected at runtime through the `VIRGL_RENDERER_USE_EGL` or `VIRGL_RENDERER_USE_GLX` initialization flags. Independently of the backend, if GBM is available at build time, it will be used for buffer allocation (see also the `minigbm_allocation` option). If EGL is chosen, `VIRGL_RENDERER_USE_SURFACELESS` can be passed to choose a surfaceless platform.
- Sandboxed operation: If the process is sandboxed, the user should select EGL using the initialization flag above (to bypass discovery) and supply a `get_drm_fd` callback that will be invoked when virglrenderer needs an open FD for the DRM render device node (see the sketch after this list).
- Custom backend: To bypass the provided backends and take control of the logic that manages the underlying API contexts, the user may supply the `create_gl_context`, `destroy_gl_context`, `make_current` (and optionally `get_egl_display`) callbacks.
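A sketch of the sandboxed setup mentioned above, assuming the `get_drm_fd` callback only receives the cookie (check `virglrenderer.h` for the exact prototype) and that the sandbox pre-opens a render node before dropping privileges:

```c
// Hypothetical: the broker / sandbox setup opened /dev/dri/renderD128 earlier
// and stored the FD in my_state.
static int my_get_drm_fd(void *cookie)
{
    struct my_state *state = cookie;
    return state->render_node_fd;
}

/* during initialization */
virgl_cbs.get_drm_fd = my_get_drm_fd;
int virgl_flags = VIRGL_RENDERER_USE_EGL | VIRGL_RENDERER_USE_SURFACELESS;
```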
Venus doesn't offer that level of control, but it has a "proxy mode" where, to improve security, commands are forwarded for execution to another process called the render server. It needs no special build options and can be enabled at runtime through the `VIRGL_RENDERER_RENDER_SERVER` flag. By default this makes virglrenderer spawn a render server automatically, taking care to pass it an IPC socket to communicate over. For sandboxed operation, the user can supply a `get_server_fd` callback that returns the FD of a socket connected to a running server. Proxy mode is currently unsupported by VirGL.
The code for the render server lives under the `server` directory, and it supports several degrees of isolation among different contexts (see the `render-server-worker` option).
When not using the proxy backend, Venus and the rest of the capabilities support sandboxing through the `get_drm_fd` callback, which is why that callback may get called multiple times.