@kainino0x
Last active September 13, 2019 17:39

partial interface GPUDevice {
    GPUSurface createSurface(GPUSurfaceDescriptor descriptor);
};

interface GPUSurface : GPUTexture {
    ImageBitmap transferToImageBitmap();
};

dictionary GPUSurfaceDescriptor : GPUObjectDescriptorBase {
    ImageBitmapRenderingContext context;
    GPUExtent3D size;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10; // GPUTextureUsage.OUTPUT_ATTACHMENT
};

Exactly one of context or size must be specified (never both). If context is specified, its canvas's size is used, and the surface may be optimized for display to that canvas.
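
A minimal sketch of the two descriptor forms (hedged: `device`, `canvas`, the 'bgra8unorm' format, and the explicit dimensions are illustrative assumptions, not part of the proposal text):

// Hypothetical usage; `device` (a GPUDevice) and `canvas` are assumed to exist.
const ibrc = canvas.getContext('bitmaprenderer');

// Form 1: pass a context; the canvas's size is used, and the surface
// may be optimized for display to that canvas.
const surfaceA = device.createSurface({
  context: ibrc,
  format: 'bgra8unorm',
});

// Form 2: pass an explicit size instead (never both).
const surfaceB = device.createSurface({
  size: { width: 640, height: 480, depth: 1 },
  format: 'bgra8unorm',
});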

Note: For example, this may mean the surface points at an image in an IDXGISwapChain created for this canvas. However, such optimizations are not observable. An ImageBitmap created from a GPUSurface can be used just like any other ImageBitmap, regardless of the context specified. The ImageBitmapRenderingContext can still receive any ImageBitmap.


Internally, any necessary copies-on-write or moves-on-write will occur.

For example, on D3D12/DXGI, with a canvas that has had control transferred to an OffscreenCanvas, and an ImageBitmapRenderingContext ibrc created from it:

  • A GPUSurface s could be allocated inside a swap chain buffer.
  • An ImageBitmap ib created from s would still point at the same allocation.

If this is done, then:

  • Transferring ib into ibrc would cause a swap chain present.
  • Transferring another unrelated ImageBitmap, ib2, into ibrc would cause ib to be moved into a new backing store and ib2 to be copied into the swap chain and presented.
  • Consuming ib in another way would cause the swap chain buffer to be freed for reuse by another surface.
  • Using ib in a non-consuming way would just read-back from its allocation in the swap chain buffer.
  • Creating another GPUSurface s2 would do a normal allocation (not inside the swap chain buffer, because that space is already used).
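
As a hedged code sketch of the fast path described above (names are illustrative; rendering is elided):

// A surface bound to ibrc may be allocated inside a swap chain buffer.
const s = device.createSurface({ context: ibrc, format: 'bgra8unorm' });
// ... render into s; as a GPUSurface it is usable as a GPUTexture ...
const ib = s.transferToImageBitmap(); // ib still points at the same allocation
ibrc.transferFromImageBitmap(ib);     // fast path: a swap chain present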

Any of these deoptimization cases could issue warnings.

Also note that any canvas which is synchronized with DOM rendering (i.e. any non-transferred-to-offscreen canvas) cannot be presented via IDXGISwapChain, because IDXGISwapChain::Present does not guarantee which frame the result will appear on.

Possible documentation bugs

[1] refers to the "swap chain's zero-index buffer", which seems to be wrong with D3D12 since it doesn't renumber the swap chain buffers on present.

[3] says that only FLIP_SEQUENTIAL can be used, but we are using DISCARD already in Dawn.

@austinEng

Austin's alternative idea:

interface GPUSurface {
    requestNextTexture(callback: (texture: GPUTexture) => void): void;
    transferToImageBitmap(texture: GPUTexture): ImageBitmap;
};

requestNextTexture is sort of like requestAnimationFrame in that it calls the provided callback when the GPUSurface has the next swapchain image available. If an application does not present frames and the GPUSurface is backed by a native swapchain, then the callback will not be invoked until swapchain images are presented or detached.

Unlike requestAnimationFrame, requestNextTexture may invoke the callback immediately (synchronously) if a swapchain image is already available.

Advantages
Applications can always stay on the fast path for presentation. The requestNextTexture callback allows applications to record frames ahead of time and decouples GPU command execution from presentation.

Disadvantages
It's harder for an application to do rendering that is not tied to a swapchain, e.g. rendering to 10 ImageBitmaps to transfer to image elements (sketched below).
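
For instance, collecting a batch of ImageBitmaps without presenting any of them would mean chaining callbacks (a hedged sketch; `surface` and the count of 10 are illustrative):

// Each texture only arrives through a requestNextTexture callback,
// so the requests must be chained one after another.
const bitmaps = [];
function renderOne() {
  surface.requestNextTexture(texture => {
    // ... render into texture ...
    bitmaps.push(surface.transferToImageBitmap(texture)); // detaches texture
    if (bitmaps.length < 10) renderOne();
  });
}
renderOne();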

Simple usage:

function frame() {
  surface.requestNextTexture(texture => {
    // Do rendering...

    // Present
    const imageBitmap = surface.transferToImageBitmap(texture);
    context.transferFromImageBitmap(imageBitmap);

    requestAnimationFrame(frame);
  });
}

requestAnimationFrame(frame);

Advanced Usage:

const frames = [];

function presentNextFrame() {
  // Present the oldest queued frame first; pop() would present in reverse order.
  context.transferFromImageBitmap(frames.shift());
  if (frames.length > 0) {
    requestAnimationFrame(presentNextFrame);
  }
}

function enqueueFrame(texture) {
  const imageBitmap = surface.transferToImageBitmap(texture);
  frames.push(imageBitmap);
  // Kick off the presentation loop when the queue transitions from empty.
  if (frames.length === 1) {
    requestAnimationFrame(presentNextFrame);
  }
}

function submitFrame(texture) {
  if (useWorkers) {
    doAsyncRendering().then(() => {
      // Present
      enqueueFrame(texture);
      surface.requestNextTexture(submitFrame);
    });
  } else {
    // Do rendering...

    // Present
    enqueueFrame(texture);
    surface.requestNextTexture(submitFrame);
  }
}

surface.requestNextTexture(submitFrame);

@fserb commented Sep 13, 2019

I'm not sure how I feel about adding a new RAF-like feature. It's really really hard to provide this signal without being attached to the document's animation (and probably will be a worse signal than RAF).

I like the initial proposal. There are a few cases you are missing here:

  1. What happens when the canvas element resizes? Do you need to say "its canvas's size at creation time is used"? What is the process to update the size?

  2. About this:

Also note that any canvas which is synchronized with DOM rendering (i.e. any non-transferred-to-offscreen canvas)
cannot be presented via IDXGISwapChain, because IDXGISwapChain::Present does not guarantee which frame the result will appear on.

Transferred-to-offscreen canvases are also synchronized with the DOM rendering (for some value of "synchronized"). I don't know if IDXGISwapChain is a special thing, but probably nothing that you can do will guarantee which frame the result will appear on. There will always be a compositing step, and you should embrace that in your definition. For both offscreen and regular canvases, IDXGISwapChain could mean "make the frame ready to be composited".

Of course, we have desynchronized canvases (hardware surfaces), in which your Present will present immediately. But this should be a property of the canvas, not of WebGPU, I think.

@kainino0x (Author)

  • What happens when the canvas element resizes? Do you need to say "its canvas's size at creation time is used"? What is the process to update the size?

Yes, I should say "at creation time". Other than that I think it's pretty straightforward: since you create a new surface every frame (not sure if that was clear above), you just end up with an ImageBitmap of the old canvas size, and what happens is exactly the same as when you present an improperly sized ImageBitmap to an ImageBitmapRenderingContext.
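
A hedged sketch of that per-frame pattern (names are illustrative):

function frame() {
  // A new surface is created every frame; with the context form, the
  // canvas's size at creation time is used, so a mid-resize frame simply
  // yields one ImageBitmap at the old size on the next present.
  const surface = device.createSurface({ context, format: 'bgra8unorm' });
  // ... render into surface ...
  context.transferFromImageBitmap(surface.transferToImageBitmap());
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);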

@fserb commented Sep 13, 2019

Sure, I meant... what if there's a resize and the developer WANTS to use the new size? Does it create a new GPUDevice? You need a way to do this resize that is not super complicated.
