@Keenuts
Last active Mar 6, 2021
GSoC 2018 | Vulkan-ize VirGL

GSoC 2018 is coming to an end. This gist will present the project in its current state (August 10, 2018).

Current state

We can run a sample Vulkan compute application, read and write data to/from the server, and execute a compute shader.

Project links

The best way to test this project is to clone the helper repository. A run-demo.sh script will clone and compile the sub-projects and run the demo.

https://github.com/Keenuts/vulkan-virgl

The project code is split into several parts:

  • VirglRenderer fork (addition: Vtest front-end and a Vulkan extension to vrend_renderer)
  • Mesa fork (addition: Vulkan ICD for Vtest)
  • Vulkan test application
  • Second VirglRenderer fork (repository used to send PRs to VirglRenderer; it therefore lags behind and diverges)

https://github.com/Keenuts/virglrenderer
https://github.com/Keenuts/mesa
https://github.com/Keenuts/vulkan-compute
https://gitlab.freedesktop.org/Keenuts/virglrenderer

For those interested, here is a blog post about this project:
https://www.studiopixl.com/2018-07-12/vulkan-ize-virgl.html

Note: VirglRenderer is designed to bring 3D acceleration to QEMU guests. However, for this first implementation,
the choice has been made to only use it with Vtest. Vtest uses a UNIX socket to send commands between a client (the Vulkan/OpenGL application) and the server (VirglRenderer).
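To make this concrete, here is a minimal sketch of what the ICD-side connection to the vtest UNIX socket could look like. The function name and socket path are illustrative assumptions, not the actual vtest protocol:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Open a connection to the vtest server. Returns the socket fd,
 * or -1 if the server is not reachable. The path is hypothetical:
 * the real socket name is defined by the vtest protocol. */
int vtest_connect(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1; /* the ICD would then report itself as non-compatible */
    }
    return fd;
}
```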

There are currently three PRs submitted.

The Vtest part has been removed from these PRs.

Vtest will change soon (more VirtIO-like behavior). Since this modification will imply a rewrite of this part, it is not submitted.

The memory sharing and the command buffer creation are not submitted either.

Memory sharing is being discussed, and command creation is only valid for this PoC.

Even though these parts do not appear in a PR, they remain part of the PoC project presented in this email: https://lists.freedesktop.org/archives/virglrenderer-devel/2018-July/001143.html

How it works

ICD loader and instance creation

When a Vulkan application starts, it queries the ICD. Once both parts agree on an API and ICD version to use, Vulkan becomes usable. It is during this first step that we create the VirglRenderer instance. Before sending the final green light to the application, the ICD connects to VirglRenderer. If the connection fails, we can simply report a non-compatible Vulkan ICD.

When the Vtest server receives a new connection, it forks. The new child process will host our Vulkan instance. Using this model, every client-side Vulkan ICD instance is associated with its own server-side Vulkan instance.

Once connected, our ICD will fetch information about the physical devices and cache it. It can now report itself as a valid ICD driver which supports Vulkan 1.1.

A call to a Vulkan function is not direct. When calling vkCreateDevice for example, you actually call the ICD loader. This loader will either call validation layers or the correct ICD entry point. Calling the correct entry point is done in two steps:

  • Query the function's address using vk_icdGetInstanceProcAddr
  • Call the function

The loader can query functions that won't be used. Thus, for non-implemented functions, querying the address is not handled as an error. Instead, the ICD replies with the address of a dummy function, aborting only if the non-implemented function is actually called.
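A minimal sketch of this dummy-function scheme, with hypothetical names (the real entry-point table lives in the Mesa ICD):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef void (*icd_fn)(void);

/* Hypothetical implemented entry point. */
static void impl_vkCreateDevice(void) { /* real work would happen here */ }

/* Fallback: abort only if the unimplemented function is actually called. */
static void icd_unimplemented(void)
{
    fprintf(stderr, "vulkan-virgl: unimplemented entry point called\n");
    abort();
}

static const struct { const char *name; icd_fn fn; } entry_points[] = {
    { "vkCreateDevice", impl_vkCreateDevice },
};

/* Never returns NULL: the loader may query functions it will never use. */
icd_fn icd_get_proc_addr(const char *name)
{
    for (size_t i = 0; i < sizeof(entry_points) / sizeof(entry_points[0]); i++)
        if (strcmp(entry_points[i].name, name) == 0)
            return entry_points[i].fn;
    return icd_unimplemented;
}
```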

VkDevice & object creation

When creating a VkDevice on the client, a VkDevice is created on the server. Along with this VkDevice, VirglRenderer will store a hash table containing the Vulkan objects. Creating a Vulkan device is therefore similar to a virgl-context creation.

When destroyed, the device will also destroy all the objects attached to it. Object destruction can also be triggered by the client by calling a vkDestroy* function.

Object creation is mainly API forwarding. Adding some intelligence here would, in my opinion, go against the design of Vulkan.
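As an illustration of the per-device object tracking, here is a hypothetical C sketch. The real VirglRenderer code uses its own hash-table utilities, so every name below is an assumption; a fixed-size table stands in for the real hash table:

```c
#include <stdint.h>
#include <stddef.h>

#define TABLE_SIZE 64

struct vk_object {
    uint32_t handle;  /* id handed to the client */
    void *vk_ptr;     /* server-side Vulkan object */
    int used;
};

/* One context per client VkDevice: destroying it frees everything
 * created through it. */
struct virgl_device {
    struct vk_object objects[TABLE_SIZE];
    uint32_t next_handle;
};

uint32_t device_register(struct virgl_device *dev, void *vk_ptr)
{
    uint32_t handle = ++dev->next_handle;
    for (int i = 0; i < TABLE_SIZE; i++) {
        int slot = (handle + i) % TABLE_SIZE; /* linear probing */
        if (!dev->objects[slot].used) {
            dev->objects[slot] = (struct vk_object){ handle, vk_ptr, 1 };
            return handle;
        }
    }
    return 0; /* table full */
}

void *device_lookup(struct virgl_device *dev, uint32_t handle)
{
    for (int i = 0; i < TABLE_SIZE; i++) {
        int slot = (handle + i) % TABLE_SIZE;
        if (dev->objects[slot].used && dev->objects[slot].handle == handle)
            return dev->objects[slot].vk_ptr;
    }
    return NULL;
}

/* vkDestroyDevice: tear down every object still attached to the device. */
void device_destroy(struct virgl_device *dev)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        dev->objects[i].used = 0;
}
```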

Memory sharing

Usually, a Vulkan application will create a VkMemory object, ask Vulkan to map it, and directly read or write in it.

In our current scenario, the client and the server are two distinct userland processes. On the server side, Vulkan will map the memory, but we do not have any information about this pointer. Thus we cannot share it directly with the client.

To avoid headaches with memory-consistency issues, the ICD alters one bit in the exposed memory properties: VK_MEMORY_PROPERTY_HOST_COHERENT_BIT is set to 0. This forces the application to explicitly mark memory updates. When a flush command is issued, the memory content is sent through the UNIX socket and written to the mapped VkMemory object. The behavior is similar when an invalidate command is issued.
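The property filtering itself is a one-liner. The constant's value comes from vulkan_core.h; the function name is made up for this sketch:

```c
#include <stdint.h>

/* Value taken from vulkan_core.h. */
#define VK_MEMORY_PROPERTY_HOST_COHERENT_BIT 0x00000004u

/* Before reporting memory types to the application, clear the coherent
 * bit so the app must call flush/invalidate explicitly. */
uint32_t filter_memory_property_flags(uint32_t flags)
{
    return flags & ~VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
}
```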

There is no memory manager whatsoever. When a flush is issued, the server will map the memory, write to it, flush if needed, then unmap.
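A sketch of this server-side flush path, with a plain byte array standing in for the mapped VkDeviceMemory. All names are hypothetical; the real code would call vkMapMemory, copy the payload, call vkFlushMappedMemoryRanges if needed, then vkUnmapMemory:

```c
#include <stdint.h>
#include <string.h>

struct vk_memory {
    uint8_t backing[256]; /* stand-in for GPU-visible memory */
    uint64_t size;
};

/* Apply one flush message received from the socket: map, write, unmap.
 * The mapping is not cached between calls. */
int server_handle_flush(struct vk_memory *mem, uint64_t offset,
                        uint64_t size, const uint8_t *data)
{
    if (offset + size > mem->size)
        return -1;

    uint8_t *ptr = mem->backing + offset; /* vkMapMemory */
    memcpy(ptr, data, size);              /* copy payload from the socket */
    /* vkFlushMappedMemoryRanges + vkUnmapMemory would follow here */
    return 0;
}
```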

Commands

Command buffer related commands are only executed on the client. The ICD will keep track of the different modifications made to a command buffer. When vkEndCommandBuffer is called, the state is sent to the server, and a similar command buffer is created. For now, only some compute-related features are supported.

Command submission and fence polling are forwarded to the server.
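A hypothetical sketch of the client-side recording scheme. The actual ICD state tracking is more involved; the names and structures here are assumptions:

```c
#include <stdint.h>

/* Each vkCmd* call appends an entry; vkEndCommandBuffer would serialize
 * the list to the vtest socket so the server can replay it into a real
 * VkCommandBuffer. */
enum cmd_type { CMD_BIND_PIPELINE, CMD_BIND_DESCRIPTORS, CMD_DISPATCH };

struct recorded_cmd {
    enum cmd_type type;
    uint32_t args[3]; /* e.g. dispatch workgroup counts */
};

struct cmd_buffer {
    struct recorded_cmd cmds[32];
    uint32_t count;
};

int cmd_record(struct cmd_buffer *cb, enum cmd_type type,
               uint32_t a, uint32_t b, uint32_t c)
{
    if (cb->count >= 32)
        return -1;
    cb->cmds[cb->count++] = (struct recorded_cmd){ type, { a, b, c } };
    return 0;
}

/* vkEndCommandBuffer: returns how many commands would be serialized
 * and sent to the server in one message. */
uint32_t cmd_end(const struct cmd_buffer *cb)
{
    return cb->count;
}
```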

How to test it

There is a main repository used to build and test everything quickly. It contains a bash script and a Dockerfile (plus a README and a TODO).

The bash script by itself should be enough. But if the compilation fails for some reason, the Dockerfile can be used instead.

The README provided should be enough to make the sample app run.

What needs to be done

  • Rewrite the Vtest part to support the new VirtIO-like behavior (should be merged soon)
  • Rewrite the memory transfer part (methods to share memory pages are being discussed right now)
  • Try to merge the vrend part into VirglRenderer (see the VirglRenderer repo on freedesktop)
  • Support the whole Vulkan API...