// current behavior:
createCanvas(100, 100, WEBGL)
let myShader = createShader(vert, frag)
filter(myShader) // error: filter() only expects THRESHOLD|GRAY|OPAQUE|...
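For reference, the constant-based calls that work today:

filter(GRAY)
filter(BLUR, 3) // some constants take an optional parameter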
// proposed behavior:
createCanvas(100, 100, WEBGL)
loadPixels()
let oldPixels = pixels.slice() // copy before filtering
let myShader = createShader(vert, frag)
myShader.setUniform(...) // set any custom uniforms before the filter runs
filter(myShader) // feature parity with desktop Processing's filter(shader)
loadPixels()
let newPixels = pixels.slice()
// oldPixels and newPixels now differ: the shader modified the canvas
// later: constants keep working alongside shaders
filter(GRAY)
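For illustration, the vert/frag strings above could look like the following minimal sketch. It assumes p5's standard aPosition/aTexCoord attribute names and a tex0 uniform holding the canvas texture; the uniform name is an assumption, not a settled API:

const vert = `
attribute vec3 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;

void main() {
  vTexCoord = aTexCoord;
  vec4 pos = vec4(aPosition, 1.0);
  pos.xy = pos.xy * 2.0 - 1.0; // map p5's 0..1 positions to clip space
  gl_Position = pos;
}
`

const frag = `
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D tex0; // the canvas contents (uniform name assumed)

void main() {
  vec4 c = texture2D(tex0, vTexCoord);
  gl_FragColor = vec4(1.0 - c.rgb, c.a); // example effect: invert
}
`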
/**
 * @param {Constant|p5.Shader} filterConstantOrShaderObj either THRESHOLD, GRAY, ...
 *   or a shader object that's been made
 *   from vert/frag files or strings
 */
filter(filterConstantOrShaderObj) {...}
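A rough sketch of how that overload might branch internally; applyFilterShader and _applyConstantFilter are hypothetical names, not existing internals:

filter(filterConstantOrShaderObj, ...rest) {
  if (filterConstantOrShaderObj instanceof p5.Shader) {
    // new path: run the user's shader over the current canvas contents
    this._renderer.applyFilterShader(filterConstantOrShaderObj) // hypothetical
  } else {
    // existing path: THRESHOLD, GRAY, OPAQUE, INVERT, POSTERIZE, BLUR, ERODE, DILATE
    this._applyConstantFilter(filterConstantOrShaderObj, ...rest) // hypothetical
  }
}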
- WEBGL mode only, at least for now
Only one graphics layer, the main p5.RendererGL
that the user sees and draws on:
sketch/drawing        ┌────────────────────┐
functions             │      renderer      │  where all the
> > > > > > > > > >   │    coupled with    │  rendering logic
USER                  │     the canvas     │  happens
< < < < < < < < < <   │    and the user    │
view                  │       inputs       │
pixels                └────────────────────┘
Using the current architecture, doing filters on the main graphics layer:
sketch/drawing            ┌────────────────────┐
functions                 │      renderer      │
> > > > > > > > > >       │ 1. sketch/drawing  │
USER                      │    functions       │
< < < < <       ^         │    applied to main │
                ^         │    canvas pixels   │
pixels are      ^         │                    │  problem: pixels have to be
drawn, then     ^         │ 2. grab pixels     │  processed somewhere behind
processed       ^         │    for post        │  the scenes
pixels are      ^         │    processing      │
drawn           ^ < < < < │ 3. draw new pixels │  bad: two drawing operations
                          │    onto this same  │  in one frame / draw loop
                          │    canvas          │
                          └────────────────────┘
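For contrast, a simplified sketch of what a constant filter like GRAY effectively does in this architecture (illustrative only, not p5's actual internals):

loadPixels() // 2. grab pixels
for (let i = 0; i < pixels.length; i += 4) {
  // process on the CPU, e.g. a luminance-based GRAY
  const lum = 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2]
  pixels[i] = pixels[i + 1] = pixels[i + 2] = lum
}
updatePixels() // 3. second drawing operation in the same frame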
A second graphics layer is required somewhere in order to enable post processing, whether that's through p5 objects or webgl textures.
Post-processing by copying pixels to a secondary p5 renderer, like a p5.Graphics:
sketch/drawing
functions                ┌────────────────────┐
                         │   first renderer   │  main rendering logic
USER > > > > > > > > > > │     invisible      │  onto its own canvas
                         │  graphics context  │
            ^            └────────────────────┘
            ^              v  readPixelsWebGL()
view        ^              v  copy pixels
processed   ^              v
pixels      ^            ┌────────────────────┐
            ^            │ secondary renderer │  where new
< < < < < < < < < < < <  │   visible canvas   │  post-processing
                         │                    │  occurs
                         └────────────────────┘
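A minimal sketch of this approach at the user level, assuming the drawing happens on a hidden p5.Graphics and the shader samples it through a tex0 uniform (the uniform name is an assumption):

let pg, fx

function setup() {
  createCanvas(100, 100, WEBGL) // visible canvas: post-processing happens here
  pg = createGraphics(width, height, WEBGL) // invisible first renderer
  fx = createShader(vert, frag)
}

function draw() {
  // 1. all sketch drawing goes to the invisible graphics context
  pg.background(200)
  pg.rotateY(frameCount * 0.01)
  pg.box(30)
  // 2. copy it over as a texture and run the post-processing shader
  shader(fx)
  fx.setUniform('tex0', pg)
  rect(-width / 2, -height / 2, width, height) // full-canvas quad
}

Passing pg as a uniform re-uploads its canvas as a texture each frame, which is the copy cost flagged in the questions below.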
- Where does the secondary renderer live in the code? As a property of the first renderer, eg _renderer.secondaryGraphics?
- Will settings like camera, image mode, positioning, and antialiasing be easily preserved between renderers?
- Performance might suffer from passing pixels back and forth between the GPU and CPU
Using a framebuffer as the primary renderer, taking advantage of webgl texture targets and staying on the GPU:
sketch/drawing
functions                ┌────────────────────┐
                         │framebuffer renderer│  gl.bind(fbo.texture)
USER > > > > > > > > > > │                    │
                         │   webgl texture    │  // apply user drawing
            ^            │         v          │
            ^            │         v          │  // run post-processing shaders
view        ^            │         v          │
processed   ^            │         v          │  gl.bind(mainCanvas/null)
pixels      ^            │         v          │  render fbo.texture
            ^            │                    │
< < < < < < < < < < < <  │   visible canvas   │
                         │                    │
                         └────────────────────┘
See a rough demo (change line 67 of the sketch to view the other layer): https://editor.p5js.org/jwong/sketches/plBIMghYJ
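And a minimal sketch of the framebuffer version using the p5.Framebuffer API, where the texture never leaves the GPU (again assuming a tex0 uniform name):

let fbo, fx

function setup() {
  createCanvas(100, 100, WEBGL)
  fbo = createFramebuffer() // GPU-side texture target on the main renderer
  fx = createShader(vert, frag)
}

function draw() {
  // 1. apply user drawing into the framebuffer's texture
  fbo.begin()
  background(200)
  rotateY(frameCount * 0.01)
  box(30)
  fbo.end()
  // 2. run the post-processing shader while rendering to the visible canvas
  shader(fx)
  fx.setUniform('tex0', fbo)
  rect(-width / 2, -height / 2, width, height)
}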
- Where does the p5.Framebuffer live? Maybe _renderer.mainFramebuffer? Compare with the existing _renderer.framebuffers[].
- Framebuffer already manages and preserves most (all?) settings like camera and antialiasing from its parent renderer, so there's less challenge there
- Backwards compatibility: will it break any existing features?
- Arguing for accessibility
  - Does this help any other features besides filter(shaders)?
  - Is it worth the architecture change?
- Creating unit tests