WebGPU in Deno and the browser (with canvas-sketch)

This is a canvas-sketch demo that renders an image with WebGPU in both the browser and Deno.

requirements

This requires Deno 1.8 or higher (tested on 1.31.3) and a recent version of Node.js/npm to install canvas-sketch-cli. If you want to run the browser version, you'll need a browser with WebGPU support, which at the time of writing most likely means Chrome Canary with WebGPU enabled.
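
If you're not sure what you have installed, both tools can report their versions:

deno --version
node --version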

setup

Copy the deno.json and webgpu-draw.js files below into a new folder. In your terminal, cd into that new folder.
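
For example, assuming you name the folder webgpu-demo (any name works):

mkdir webgpu-demo
cd webgpu-demo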

run with deno

Run the following with Deno from the folder you created:

deno run --unstable --allow-write=. webgpu-draw.js     

(WebGPU is currently an unstable API in Deno, hence the --unstable flag.)

This will write a test.png file to the current directory.

run in the browser

To run in the browser, use:

npx canvas-sketch-cli webgpu-draw.js --open

This will install the latest canvas-sketch-cli, auto-install canvas-sketch and fast-png (the latter is only used by the Deno path), and open a page showing the image rendered with WebGPU.

notes

  • This script should work with other bundlers like esbuild (the canvas-sketch client-side API does not need the CLI to work); a sketch follows this list
  • The client-side API should be canvas-sketch@0.7.7 or higher, which fixes a bug that prevented raw WebGPU contexts
  • This could be wrapped up into a nicer library/tool that crosses both server and client use cases
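
As a rough sketch of the esbuild route mentioned in the first note (the output name bundle.js is just a placeholder, and canvas-sketch and fast-png would need to be npm-installed locally for esbuild to resolve the imports):

npx esbuild webgpu-draw.js --bundle --outfile=bundle.js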
deno.json

{
  "imports": {
    "fast-png": "npm:fast-png",
    "canvas-sketch": "npm:canvas-sketch"
  }
}
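
Since npm: specifiers accept a version suffix, you could also pin versions in this import map; for example, to guarantee the canvas-sketch fix called out in the notes above:

{
  "imports": {
    "fast-png": "npm:fast-png",
    "canvas-sketch": "npm:canvas-sketch@0.7.7"
  }
}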
webgpu-draw.js

import canvasSketch from "canvas-sketch";
import fastpng from "fast-png";

const settings = {
  dimensions: [1024, 1024],
};
const sketch = ({ context, data }) => {
  const { device } = data;
  const presentationFormat = navigator.gpu.getPreferredCanvasFormat();
  context.configure({
    device,
    format: presentationFormat,
    alphaMode: "opaque",
  });
  // WGSL shaders: a single triangle with interpolated color
  const shaderCode = `
struct VertexOut {
  @builtin(position) position : vec4<f32>,
  @location(0) @interpolate(linear) color : vec4<f32>,
};

@vertex
fn vs_main(@builtin(vertex_index) in_vertex_index: u32) -> VertexOut {
  let x = f32(i32(in_vertex_index) - 1);
  let y = f32(i32(in_vertex_index & 1u) * 2 - 1);
  var output : VertexOut;
  output.position = vec4<f32>(x, y, 0.0, 1.0);
  output.color = vec4<f32>(x * 0.5 + 0.5, y * 0.5 + 0.5, 1.0, 1.0);
  return output;
}

@fragment
fn fs_main(fragData: VertexOut) -> @location(0) vec4<f32> {
  return vec4<f32>(fragData.color.r, fragData.color.g, fragData.color.b, 1.0);
}
`;
  const shaderModule = device.createShaderModule({
    code: shaderCode,
  });
  const pipeline = device.createRenderPipeline({
    layout: "auto",
    vertex: {
      module: shaderModule,
      entryPoint: "vs_main",
    },
    fragment: {
      module: shaderModule,
      entryPoint: "fs_main",
      targets: [
        {
          format: presentationFormat,
        },
      ],
    },
  });
  // Render function: encodes and submits a single render pass
  return () => {
    const commandEncoder = device.createCommandEncoder();
    const textureView = context.getCurrentTexture().createView();
    const renderPassDescriptor = {
      colorAttachments: [
        {
          view: textureView,
          clearValue: { r: 0, g: 0, b: 0, a: 1 },
          loadOp: "clear",
          storeOp: "store",
        },
      ],
    };
    const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
    passEncoder.setPipeline(pipeline);
    passEncoder.draw(3, 1, 0, 0);
    passEncoder.end();
    device.queue.submit([commandEncoder.finish()]);
  };
};
// canvas-sketch runner, agnostic to web & deno
(async () => {
  const isWeb = typeof Deno === "undefined";
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // Deno has no real <canvas>, so emulate just enough of a WebGPU
  // canvas context for the sketch to render into an offscreen texture
  const createDenoContext = () => {
    // the following does not exist in deno
    navigator.gpu.getPreferredCanvasFormat = () => "rgba8unorm-srgb";

    let texture, outputBuffer;

    const canvas = {
      // reads back the rendered texture and encodes it as a PNG
      toBuffer: async () => {
        const dimensions = {
          width: canvas.width,
          height: canvas.height,
        };
        const encoder = device.createCommandEncoder();
        copyToBuffer(encoder, texture, outputBuffer, dimensions);
        device.queue.submit([encoder.finish()]);
        const pixels = await createPixelBuffer(outputBuffer, dimensions);
        return fastpng.encode({
          ...dimensions,
          data: pixels,
        });
      },
    };

    const context = {
      getCurrentTexture() {
        // lazily create the capture texture and readback buffer
        if (!texture) {
          const r = createCapture(device, canvas);
          texture = r.texture;
          outputBuffer = r.outputBuffer;
        }
        return texture;
      },
      canvas,
      configure: () => {},
    };

    return { canvas, context };
  };

  const env = {
    ...(isWeb ? { context: "webgpu" } : createDenoContext()),
    data: {
      adapter,
      device,
    },
  };

  const manager = await canvasSketch(sketch, {
    ...settings,
    ...env,
  });

  if (!isWeb) {
    const buffer = await manager.props.canvas.toBuffer();
    Deno.writeFileSync("test.png", buffer);
  }
})();
///////////////////
////// Utils //////
///////////////////
function getRowPadding(width) {
  // It is a WebGPU requirement that
  // BufferCopyView.layout.bytes_per_row % COPY_BYTES_PER_ROW_ALIGNMENT (256) == 0,
  // so we calculate padded_bytes_per_row by rounding unpadded_bytes_per_row
  // up to the next multiple of COPY_BYTES_PER_ROW_ALIGNMENT.
  // https://en.wikipedia.org/wiki/Data_structure_alignment#Computing_padding
  const bytesPerPixel = 4;
  const unpaddedBytesPerRow = width * bytesPerPixel;
  const align = 256;
  const paddedBytesPerRowPadding =
    (align - (unpaddedBytesPerRow % align)) % align;
  const paddedBytesPerRow = unpaddedBytesPerRow + paddedBytesPerRowPadding;
  return {
    unpadded: unpaddedBytesPerRow,
    padded: paddedBytesPerRow,
  };
}
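
// Worked example: width 1024 -> 4096 unpadded bytes per row, already a
// multiple of 256, so padded is also 4096; width 100 -> 400 unpadded,
// which rounds up to a padded 512.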
// Creates an offscreen render-target texture plus a buffer large enough
// to hold one padded copy of its pixels
function createCapture(device, dimensions) {
  const { padded } = getRowPadding(dimensions.width);
  const outputBuffer = device.createBuffer({
    label: "Capture",
    size: padded * dimensions.height,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
  });
  const texture = device.createTexture({
    label: "Capture",
    size: dimensions,
    format: "rgba8unorm-srgb",
    usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_SRC,
  });
  return { outputBuffer, texture };
}
// Copies the texture contents into the padded readback buffer
function copyToBuffer(encoder, texture, outputBuffer, dimensions) {
  const { padded } = getRowPadding(dimensions.width);
  encoder.copyTextureToBuffer(
    {
      texture,
    },
    {
      buffer: outputBuffer,
      bytesPerRow: padded,
      // if rowsPerImage is specified, it must cover the full copy height
      rowsPerImage: dimensions.height,
    },
    dimensions
  );
}
// Maps the readback buffer and strips the per-row padding, returning
// tightly packed RGBA pixels
async function createPixelBuffer(buffer, dimensions) {
  await buffer.mapAsync(GPUMapMode.READ);
  const inputBuffer = new Uint8Array(buffer.getMappedRange());
  const { padded, unpadded } = getRowPadding(dimensions.width);
  const outputBuffer = new Uint8Array(unpadded * dimensions.height);
  for (let i = 0; i < dimensions.height; i++) {
    // copy only the meaningful bytes of each padded row
    const slice = inputBuffer.subarray(i * padded, i * padded + unpadded);
    outputBuffer.set(slice, i * unpadded);
  }
  buffer.unmap();
  return outputBuffer;
}