ShaderBoi

ShaderBoi is a powerful rendering tool intended for producing immersive music videos, live streaming / VJing and art installations.
It loads shaders from shadertoy and has many features that no other tool has, such as the ability to produce 360° 8k video with ambisonic (spatial) audio, without any need for post-editing in another video editor.
It's your one-stop shop for next-level audiovisual digital art production, a whole pipeline in one tool!

Use cases

  • VJing with shadertoy shaders
  • shader live coding
  • 360° music video production with normal or ambisonic audio
  • 360° live streaming (shader live coding or VJing)
  • also normal (non-360°) music video production / live streaming

Who can benefit from this tool?

  • shader artists who are familiar with shadertoy's shader format
  • shader artists who want to get into VJing
  • musicians who want to add synesthetic visuals to their music before uploading to Youtube
  • DJs and live musicians who want to add dynamic live visuals to their performances
  • digital artists who do installations (with or without webcam post-processing)

Features

  • audio input from file, midi input from file or port (e.g. from DAW via virtual midi port like loopMIDI)
  • audio & midi input as textures & uniform parameters: shaders can react to every detail of the music
  • video rendering, including 360° rendering in equirectangular projection (suitable for Youtube). If you have fast internet, you can also livestream your audiovisual performances in 360° with this tool!
  • no need for post-editing: it does complete video finalization, encoding & embedding audio, optimizing video for Youtube, injecting metadata
  • stereoscopic rendering (separate image for each eye) to produce VR videos with depth perception
  • support for ambisonic audio (Youtube supports first-order ambisonics plus head-locked stereo)
  • custom resolution, up to 8K (the maximum Youtube supports)
  • shadertoy compatibility: loads shaders directly from shadertoy. (tested with complex multipass shaders)
  • auto caching of fetched shaders, force refetch with -c
  • cli subcommand to fetch & unpack a shader to a local folder for offline editing with automatic live reloading on save
  • on auto-reloading a shader: if there's an error, it pretty-prints the shader error with the offending source line highlighted, and the previous shader version keeps running
  • cli subcommand to repack shader after offline editing for re-import into shadertoy
  • shaders can get audio/FFT input the same way as on shadertoy through a texture. the FFT is computed the exact same way
  • in addition to the standard iChannel input assets, you can assign custom local inputs: webcams, video files, cubemaps, images. for webcams you can specify the desired combination of resolution and framerate
  • midi input through a texture similar to the audio/FFT texture
  • audio peak (beat) is detected via envelope follower and can be mapped to any shader or scene parameter
  • video recording in H.264, H.265 with arbitrary CRF or lossless with FFV1
  • etc.

Installation

  1. Just extract the zip anywhere you like. (But it can't be a write-protected location, because ShaderBoi needs to be able to write to its own directory for caching shaders and assets.)
  2. Add the folder where SBoi.exe is located to your PATH environment variable. The easiest way to manage your environment variables is Rapid Environment Editor (no affiliation, but I use it myself and it's free).
  3. If you want to make videos (or use audio files/webcams/videos as input), you need ffmpeg in your PATH (and ffprobe, which is usually bundled with ffmpeg). The easiest way to install ffmpeg on Windows is via scoop install ffmpeg. (This also installs ffprobe.)
    If you don't want to install scoop, you can download ffmpeg binaries directly from here or here.
    Remember to add the folder where ffmpeg.exe and ffprobe.exe are located to your PATH environment variable as well. (This isn't necessary when installing via scoop, which updates your PATH automatically.)
    If you want to use the high-quality libfdk_aac encoder for your Youtube videos, you need a build of ffmpeg that includes the fdk codec. The one installable via scoop doesn't include it. But this build does.
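After installation, you can verify that both tools are reachable from your PATH:

ffmpeg -version
ffprobe -version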

Note: If you get a warning from your antivirus software about the file ShaderBoi/spatialmedia/sm, it's a false positive. (This file was created by pyinstaller from a python script so that it works without python; its purpose is to add spatial metadata to the recorded video files. If your antivirus software removes this file, ShaderBoi still works, but cannot add spatial metadata to the 360° videos that you render with it. You could add the metadata manually, though.)

Quick Start (using a single shader)

ShaderBoi is very powerful, but there is no real GUI (apart from keyboard shortcuts in the window); everything is configured through command line flags (or a project file).
The normal workflow is to start out with command line flags; once your project becomes more complex, you generate a project file (matching your command line flags) and run that from then on, because project files expose features that the CLI doesn't (like chaining shaders, custom asset inputs and parameter mapping).
Don't worry if ShaderBoi doesn't start when you double-click the exe; you have to start it through the command line:
Open Windows Explorer and go to the folder where SBoi.exe is located. Hold SHIFT and right-click a free area inside that folder (NOT on any file). By holding SHIFT, the context menu will have an entry called "Open PowerShell window here" (or "Open command window here"); click that. (Inside the console, you're now in the same folder where SBoi.exe is located.)
Now run sboi help to confirm that it's working. (But I really recommend adding the folder where SBoi.exe is located to your PATH environment variable, so you don't need to be in its folder; it will work regardless of your current directory in the console.)
Note: Although the software is called ShaderBoi, the executable is named SBoi.exe for convenience when typing it in the console: you don't have to write the .exe part, and the console is not case-sensitive, so you can always just type sboi for all ShaderBoi invocations.
Now, copy & paste the following command to see a shader in action:

sboi -i https://www.shadertoy.com/view/ts2yWm

You run a single shader by invoking sboi -i <shader>, where <shader> is the shader's URL on shadertoy.
You can also try a VR shader (must be one that has a mainVR function in addition to mainImage), e.g.:

sboi -i https://www.shadertoy.com/view/llj3Rz --vr

You can press Space to toggle between first-person perspective and equirectangular projection.

Try this shader to see visualized audio input:

sboi -i https://shadertoy.com/view/llycWD -a audio.mp3

You can also try this one (it just visualizes the wave and FFT in a basic way):

sboi -i https://www.shadertoy.com/view/Xds3Rr -a audio.wav

Currently ShaderBoi can take audio input only from a file (live audio input is planned). The supported file formats are mp3 and wav. When you render video, use wav: the audio will be re-encoded during finalization, so start from the highest-quality source audio.
If your audio file is in a different format like m4a or ogg, convert it to wav with ffmpeg like this:

ffmpeg -i audio.m4a audio.wav

Midi input works live (through a port) or from a file. You use the same flag (-m) for both; ShaderBoi recognizes whether the argument is a port name or a file path.
E.g. to visualize midi input, you can run this shader:

sboi -i tlXyDf -m file.mid

Shader URLs have the format https://www.shadertoy.com/view/<shader_id>.
Instead of passing the full shader URL to ShaderBoi, you can pass only the id if it's more convenient for you. In this case, tlXyDf is the shader id.

To visualize midi note trails over time (in the most basic way), you can run this shader:

sboi -i WsKyRc -m file.mid

Note: If you open these two midi shaders on the shadertoy website, you'll notice they don't look like they do in ShaderBoi. That's because shadertoy doesn't support midi input; it's a ShaderBoi extension.
(More info about midi input.)

Editing shaders

When you create a shader on shadertoy for the purpose of using it with ShaderBoi, I recommend saving it as Unlisted if it's not supposed to be viewed standalone on shadertoy.
ShaderBoi caches the shaders it fetches from shadertoy in its local cache folder.
If you're working on a shader on shadertoy while ShaderBoi is running, you can refetch it from shadertoy by pressing F5. But after pressing the Save button (or Ctrl+S) on shadertoy, you need to wait until you see the border of the shader editor flash blue. This can take about 10 seconds or even longer!
If you press F5 in ShaderBoi before your shader has finished being saved to shadertoy's db, you will still get the old version, so keep that in mind.
Instead of refetching at runtime with F5, you can also force a refetch when starting ShaderBoi by passing the -c flag (which stands for "clear cache", though it only clears & refetches the shaders used in this invocation; it won't erase the whole cache).
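For example, this forces a refetch of the given shader on startup:

sboi -c -i <shader_id_or_url>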

Live coding / auto-reloading

You probably already noticed that it takes many seconds from pressing the Save button (or Ctrl+S) on your shader until the HTTP request completes. This is due to shadertoy's backend; I can't do anything about that, but I added the ability to edit your shaders locally with auto-reloading:
First, download & unpack your shader like this:

sboi dl <shader_id_or_url> [<out_dir>]

If you don't specify the output directory, the shader will be extracted to the newly created directory extracted_shaders/<id> (it will tell you). Each shader pass is in its own .frag file. Now you can open them in your favorite editor and make use of auto-reloading by starting ShaderBoi with the path to this folder instead of the shader id:

sboi -i extracted_shaders/<id>

Now, whenever you save one of the .frag files, ShaderBoi will automatically detect it and reload it.
Any shader compilation errors will be shown in the console with pretty printing like this:

error[C7563]: assignment to uniform iResolution
- llj3Rz Image:245:1
245 |   vec2 p = (--iResolution.xy + 2.0 * fragCoord.xy) / iResolution.y;
    |

It shows the id of your shader (if you have given an alias name to the shader with this id, it will also show that name), as well as the pass name and line number.

Command line options

Run sboi help to see all available command line options. There are a lot.
They can be combined, here is an example:

sboi -i <id> --vr -m midi.mid -a audio.mp3 -s hd -r 25

This runs the given VR shader with midi & audio input at FullHD resolution at 25 fps (the default would be 30).
You can also use -m with a midi port name like -m "loopMIDI Port" for live midi input when VJing or streaming (make sure to send midi from your DAW to this port). The midi is processed the same way, except that with live midi input the note lengths (stored in the midi texture) cannot be known in advance.
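For example, to run a shader with live midi input from a virtual midi port:

sboi -i <id> -m "loopMIDI Port"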

CLI subcommands

Each of the following subcommands has its own arguments that can follow after it.
The syntax is sboi <normal args..> <subcommand> <subcommand's args>.
You can see each subcommand's possible arguments with sboi help <subcommand>.

| subcommand | purpose |
| --- | --- |
| rec | record video |
| fin | finalize video for Youtube upload by adding audio. Video won't be re-encoded. Audio will be encoded to AAC, so better use a wav input audio file. For ambisonic audio, use a multi-channel wav file; it will encode as 24-bit PCM for best quality. |
| inj | inject spatial metadata into video that is already finalized (contains audio). |
| dl | fetch shader from shadertoy and unpack it to a local folder for offline editing with auto live reloading on save. |
| ul | upload given shader to shadertoy (to save changes after local editing). |
| pkg | package unpacked shader back into one json file, for manually importing it on shadertoy. Install the shadertoy browser extension so that you can simply import this json file by clicking the Import button that the extension adds. But CLI upload via ul is more convenient. |
| new | create a new shader on shadertoy (with a new id), then download and unpack it. Equivalent to manually creating a new shader on shadertoy followed by dl. |
| genproj | create a new folder containing a project.ron file based on the given args (requires at least specifying a shader with -i). This allows you to move from "configuring via CLI args" to "configuring via project file". Several features are not exposed to the CLI and are only available in a project file. |

Note: The commands new and ul can only work if you let ShaderBoi log in to shadertoy on your behalf. This requires storing your login credentials in ShaderBoi's global config file, named ShaderBoi.ron:

#![enable(implicit_some)]
(
	shadertoy_login: (
		username: "user",
		password: "pw",
	),
	// other config fields..
)

This file can be in the current working dir, or any parent of the folder where ShaderBoi's executable is located (or an arbitrary location if you pass it via --cfg <path>). (I know it's not the best idea to store credentials in plain text on disk; I'm thinking of a better way to handle this, e.g. by adding CLI flags to pass them on the fly.) Btw, ShaderBoi doesn't leak your credentials: it doesn't contact any servers other than shadertoy and uses HTTPS for all requests to shadertoy. If you don't want to let ShaderBoi log in for you, you can still save your local shader edits via pkg and manually import the json file on shadertoy.
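For example, to point ShaderBoi at a config file in a custom location (the path here is just an illustration):

sboi --cfg C:\configs\ShaderBoi.ron -i <id>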

Recording Video

E.g. to record a 360° video in 8k resolution, auto-finalize (-f) after recording and inject 360° metadata (-j), run this:

sboi -i <id> --vr -m midi.mid -a audio.wav -s 8k -r 30 rec -f -j

There are many more CLI options, e.g. to change the CRF or AAC bitrate; all options have reasonable defaults.
In 360° video playback you only see a quarter of the horizontal and half of the vertical resolution, so 8k will look more like FullHD normally looks. That's why I always recommend rendering 360° videos at 8k. (Youtube doesn't support higher resolutions.)
When recording, it will render as fast as possible, even faster than real-time if your GPU is fast enough.
After recording finishes, check the newly created ffmpeg.log file for any errors during encoding (usually everything goes fine and the file is empty).
Your video is automatically optimized for Youtube.
Check it in VLC player (version 3 also supports 360° videos) to make sure it looks correct, especially to ensure that you didn't forget to inject the metadata.
Upload it to Youtube and leave it private while it's being processed. For a large video, it can take a long time until it's fully processed. The 360° playback will only look correct after processing has finished.
Of course you can also produce normal (non-360°) videos, by omitting the --vr and -j flags.
You can also do the rendering, the finalizing (embedding audio) and injection steps separately with the appropriate CLI subcommands.

Logging Levels

You can set the logging level via environment variable BOI_LOG.
E.g. in cmd.exe: set BOI_LOG=trace. The levels are: error, warn, info, debug, trace. Default is info.
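In PowerShell, the equivalent is:

$env:BOI_LOG = "trace"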

Keyboard shortcuts

| shortcut | action |
| --- | --- |
| Esc | terminate. When video is being recorded, you need to press Ctrl+Esc to terminate (this is to prevent accidental cancellation of recording). Note: Whether you terminate normally or with Ctrl+C, the current video frame will finish being rendered and encoded into the video file. Your video file will not get corrupted. |
| F11 | toggle fullscreen |
| Ctrl+F11 | switch to the next available monitor |
| K | pause/continue (same shortcut as on youtube) |
| L | skip forwards 10s while paused (same shortcut as on youtube) |
| J | skip backwards 10s while paused (same shortcut as on youtube) |
| . | go to next frame when paused (same shortcut as on youtube) |
| , | go to previous frame when paused (same shortcut as on youtube) |
| Space | toggle between equirectangular projection and first-person perspective (only when running a VR shader) |
| Up/Down | zoom in/out when not in first-person perspective (decrease/increase fov) |
| 1-5 | set stereo rendering mode in this order: mono, top-bottom, left-right, right-left, anaglyph |
| W,A,S,D | pan camera around |
| Q,E | rotate camera left/right (roll) |
| F5 | reload specified shaders (shaders_to_reload) from shadertoy after editing there (when not using local editing with auto-reload) |
| F12 | take screenshot (will auto-increase file name, never overwrite) |

Going deeper (multi-shader projects, custom iChannel inputs, param mapping, video recording)

Using project files

In the quick start section you saw how to run a single shader with ShaderBoi.
The alternative is running a project file:
When running a project file (with -i project.ron) instead of a single shader (with -i <shader_id>), a lot more is possible, such as:

  • using multiple shaders (layering as chains, and routing of chains)
  • using custom inputs (images, videos, webcams, cubemaps) to override iChannel inputs of shader chains
  • parameter mapping, e.g. mapping midi CCs to your custom shader params
  • specifying other project-specific settings, such as using higher-resolution textures for shaders

Project files contain some settings that can be overridden by CLI arguments, such as -r for frame rate, -s for resolution or --vr for rendering the output in equirectangular projection.
The recommended workflow is to generate a project file using the genproj subcommand.
You can pass the same CLI arguments when invoking genproj as when running a single shader; it will generate a project file where the settings that have corresponding CLI args are already filled in from the args you passed. That way you don't start from an empty file and can just edit it to suit your project-specific needs.
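For example, this would generate a project file pre-filled for a VR shader with midi & audio input at FullHD resolution and 25 fps (the same flags as in the earlier example):

sboi -i <id> --vr -m midi.mid -a audio.mp3 -s hd -r 25 genproj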
The generated project file contains an unused (commented-out) chain & custom inputs to demonstrate routing and overriding iChannel inputs with custom inputs. Modify it to suit your project.
Note: Chains can have multiple inputs. Each input can be either one of your custom inputs or one of your chains (it can also be None, meaning it won't be overridden). It's also possible to route a chain into itself this way (for feedback effects), it will then get its output from last frame.
Project files use the RON format, which is more human-readable & editable than JSON and supports comments (even nested ones).
You can find a suitable package for your editor to get syntax highlighting, keyboard shortcuts for toggling comments, etc.
When piping the output of genproj into a file, make sure you use a filename with .ron extension, so that your editor recognizes it as a RON file and uses its RON syntax mode.
You can then run this project file the same way as you would run a single shader, by passing its path with -i:

sboi -i project.ron
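To give you a feel for the format, here is a minimal sketch of what a project file can look like, assembled from the snippets shown in the sections below (the exact structure that genproj emits may differ, so treat the generated file as the authoritative template):

(
    // custom input sources (see "Custom iChannel input sources" below)
    inputs: [
        ( id: "my_img", src: Img(r"folder\img.jpg") ),
    ],
    // shader chains, processed in the order they are defined
    chains: [
        (
            id: "main",
            inputs: [
                ( id: "my_img" ),
            ],
            layers: [
                ( fx: "<shader_id>" ),
            ],
        ),
    ],
    // which chain's output becomes the global output
    output: ( id: "main" ),
)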

Using a config file for global settings

TODO

Chaining shaders

Each frame, all chains are processed in the order in which they are defined in the project file.

Specifying chain inputs

You can specify inputs that will be passed to a chain like this:

        (
            id: "my_cam_post_fx",
            inputs: [
                ( id: "my_cam" ),
            ],
            layers: [
                // ...
            ],
        ),

This will assign my_cam to chain input 0.
Each chain has 4 inputs that are passed on to every layer, except input 0, which is replaced by each layer's output so that every layer receives the previous layer's output. The other chain inputs are passed on unchanged.
Chain inputs (and outputs) can be of type texture or cubemap. Images, videos and webcams are of type texture whereas cubemaps are of type cubemap. (Currently it's not possible to use custom volumes (3d textures) as inputs.)
The above is equivalent to:

            inputs: [
                ( id: "my_cam" ),
                None,
                None,
                None,
            ],

None means nothing is routed to this chain input.
The inputs array is automatically padded to length 4.
So if you wanted to assign my_cam to chain input 1, you would write:

            inputs: [
                None,
                ( id: "my_cam" ),
            ],

Which would be equivalent to:

            inputs: [
                None,
                ( id: "my_cam" ),
                None,
                None,
            ],

Chains can produce cubemap outputs if the last shader in the chain has a mainVR function (meaning it's a VR shader).
Note: Shaders with mainVR function always also have a mainImage function, so shaders that can produce cubemap output can always also produce texture output!
To request cubemap output from a chain when using that chain's output as input to another chain, write it like this:

            inputs: [
                ( id: "my_chain", cubemap: true ),
            ],

If you want to use a chain's cubemap output as the global output (to produce a 360° video), write:

    output: ( id: "my_chain", cubemap: true ),

This will render it in equirectangular projection.
This is equivalent to passing the --vr flag, which you can use to override this project setting.
You can combine 360° rendering with stereoscopic rendering by passing the --stereo arg. E.g. --stereo=top-bottom gives the best quality when rendering a video, since it makes the pixels square:
Due to the nature of the equirectangular projection, it divides the effective vertical resolution by 2 and the horizontal resolution by 4. Top-bottom stereo mode divides the vertical resolution by 2 again, so you get square pixels. You can still use --stereo=left-right for previewing your rendering with a VR headset before doing the final render, though.
If you don't have a VR headset but you have red-cyan glasses, you can use --stereo=anaglyph.
If you don't have that either, you can use --stereo=right-left and view it by crossing your eyes.
If you don't pass the --stereo arg, it defaults to monoscopic rendering.
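Putting the pieces together, a stereoscopic 360° recording of a project might be started like this (all flags shown here appear elsewhere in this document):

sboi -i project.ron --vr --stereo=top-bottom -s 8k rec -f -j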

Overriding iChannel inputs

To use a chain input as override for your shader iChannel, you need to specify in your shader pass which chain input should be routed to which iChannel input.
In a shader pass, write:

//! INPUT <iChannel>[:<input_idx>] (, <iChannel>[:<input_idx>])*

E.g.

//! INPUT 2:0

This means that iChannel 2 of this pass will take chain input 0.
If you omit :<input_idx>, the index is assigned automatically, starting at 0 and incrementing by one for each listed iChannel. E.g.

//! INPUT 1, 2, 3

is equivalent to:

//! INPUT 1:0, 2:1, 3:2

The most common case is that you just want to take the previous layer's output as input (which will be on chain input 0), so you'd write

//! INPUT n

where n is the iChannel<n> on which you want to receive the input.
(Note: This would be equivalent to //! INPUT n:0.)
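To make this concrete, here is an illustrative sketch of a minimal post-processing pass that receives the previous layer's output on iChannel0 (the mainImage signature and the iChannel0/iResolution uniforms are standard shadertoy; the pass body is just an example):

//! INPUT 0

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;
    vec3 prev = texture(iChannel0, uv).rgb; // previous layer's output (chain input 0)
    fragColor = vec4(1.0 - prev, 1.0);      // e.g. invert it
}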

Shader outputs with transparency

TODO

Custom iChannel input sources

In your project file you can define custom input sources that you can use to override iChannel inputs.

Images

Image inputs are specified like this:

    inputs: [
        ( id: "my_img", src: Img(r"folder\img.jpg") ),
    ],

Both relative paths and absolute paths work.
Note: If a path contains backslashes, you need to prepend it with r like here, to make it a raw string. Otherwise, you'd have to escape each backslash with another backslash (like Img("C:\\folder\\img.jpg") which would be inconvenient).

Cubemaps

Cubemap inputs are specified like this:

    inputs: [
        ( id: "my_cubemap", src: Cubemap(r"path\prefix\to\cubemap\face\images") ),
    ],

Cubemaps consist of 6 image files, one for each face. For cubemaps you don't need to specify all 6 paths, instead you specify a path prefix that must match exactly 6 image files. This can also be the path to a folder that contains the 6 image files, since that works as a valid prefix (as long as that folder only contains 6 image files). The image files must have suffixes to indicate which cubemap face they are.
Different suffixes are supported:

| face | suffixes |
| --- | --- |
| PositiveX | posx, px, right, r, 0 |
| NegativeX | negx, nx, left, l, 1 |
| PositiveY | posy, py, up, u, 2, top |
| NegativeY | negy, ny, down, d, 3, bottom |
| PositiveZ | posz, pz, front, f, 4 |
| NegativeZ | negz, nz, back, b, 5 |

E.g. if your files are in folder C:\foo\bar and are named front.jpg, back.jpg etc., you would write Cubemap(r"C:\foo\bar").

Video files

Video inputs are specified like this:

    inputs: [
        ( id: "my_vid", src: Vid(( path: r"folder\vid.mp4", size: (1920, 1080) )) ),
    ],

The size field can be omitted. It's useful if you want to rescale the video.

Note: You can also use an animated GIF as video input, or you can convert it to a video file first, using ffmpeg.

Webcams

Webcam inputs are specified like this:

    inputs: [
        ( id: "my_cam", src: Cam(( name: "Camera Name", size: (1920, 1080), fps: 30, idx: 0 )) ),
    ],

The idx field can be omitted and defaults to 0. It's useful if you have multiple webcams with the same name and you want to refer to the nth webcam with the given name.
The size and fps fields are mandatory: Since webcams support multiple combinations of resolution and fps, you must specify the one you want.
On Windows you can query your available webcam names like this:

ffmpeg -list_devices true -f dshow -i dummy

Then you can query the available combinations of resolution and fps like this:

ffmpeg -f dshow -list_options true -i video="Camera Name"

Midi input

There are 2 ways to use midi input.

  1. Defining mappings in the project file for custom shader params
  2. Through an iChannel texture, similar to the audio/FFT texture:
    On shadertoy, assign the audio track named X'TrackTure to your iChannel where you want to receive the midi texture (it's treated as special placeholder for the midi texture, since shadertoy has no native way to assign it).
    The midi texture has this format (a GLSL reading sketch follows below):
    width: 128, height: 5 * 16 (5 rows per midi channel)
    [0][pitch]: velocity of the last NoteOn event (1.0 corresponds to 127); never 0
    [1][pitch]: seconds since the last NoteOn event
    [2][pitch]: seconds since NoteOff (if less than the seconds since NoteOn, the note is currently off; a negative value is the remaining time until NoteOff)
    [3][cc]: cc values. Currently only the first 120 ccs are tracked; the remaining texels are 0
    [4]: [0]: program, [1]: pitch-bend in semitones, [2]: channel aftertouch, [3..]: poly aftertouch
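A sketch of reading this texture in GLSL, assuming (purely for illustration) that the midi texture is assigned to iChannel1 and that values are stored in the red channel:

// velocity of the last NoteOn for a given midi channel & pitch (row 0)
float noteVel(int ch, int pitch) {
    return texelFetch(iChannel1, ivec2(pitch, ch * 5 + 0), 0).x;
}

// current value of a cc on a given midi channel (row 3)
float ccValue(int ch, int cc) {
    return texelFetch(iChannel1, ivec2(cc, ch * 5 + 3), 0).x;
}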

Audio input

Shaders can get audio/FFT input just like on shadertoy through a texture. The FFT is computed the exact same way.
Note: Shadertoy only writes the lower half of the FFT to the texture (the lower 512 bins of the available 1024 bins), so you only have access to the lower half of the spectrum. ShaderBoi behaves the same, to stay compatible with shadertoy.
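As on shadertoy, row 0 of the audio texture holds the FFT magnitudes and row 1 holds the waveform. A minimal sampling sketch (assuming the audio texture is assigned to iChannel0):

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;
    int tx = int(uv.x * 512.0);
    float fft  = texelFetch(iChannel0, ivec2(tx, 0), 0).x; // spectrum magnitude
    float wave = texelFetch(iChannel0, ivec2(tx, 1), 0).x; // waveform sample
    fragColor = vec4(fft, wave, 0.0, 1.0);
}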

Ambisonic Audio

TODO

Parameter Mapping

Specifying Mappings

You can specify mappings for each layer in a chain like this:

    chains: [
        (
            id: "<chain_name>",
            layers: [
                (
                    fx: "<shader_id>",
                    cfg: {
                        "<param_name>": "<expr>",
                    },
                ),
            ],
        ),
    ],

E.g.

                    cfg: {
                        "param_a": "Ch0Vel64",
                        "param_b": "1 - Ch0Cc0",
                        "param_c": "Peak",
                        "param_d": "powi(1 - MidiBeat, 4)",
                    },

As you can see, arithmetic expressions are possible.
The following symbols are available in arithmetic expressions:

| symbol | description |
| --- | --- |
| PI | self-explanatory |
| E | self-explanatory |
| mod(x, y) | x % y |
| min(x, y) | self-explanatory |
| max(x, y) | self-explanatory |
| pow(x, y) | self-explanatory |
| powi(x, y) | faster pow when the exponent is an int (the arg is a float but will be cast to int) |
| sqrt(x) | self-explanatory |
| clamp(x, min, max) | self-explanatory |
| mix(x, y, a) | self-explanatory |
| step(edge, x) | self-explanatory |
| smoothstep(edge0, edge1, x) | self-explanatory |
| sin(x) | self-explanatory |
| cos(x) | self-explanatory |
| tan(x) | self-explanatory |
| exp(x) | self-explanatory |
| ln(x) | self-explanatory |
| abs(x) | self-explanatory |
| signum(x) | self-explanatory |
| floor(x) | self-explanatory |
| ceil(x) | self-explanatory |
| fract(x) | fractional part of x |
| in_range(a, b, x) | returns 1 if x is in the range [a, b] (also when equal to b) |
| if(cond, a, b) | if function that eagerly evaluates the "if" & "else" terms |
| cmp(x, y) | comparator function on two arguments: returns -1 if the first argument is less than the second, 1 if the first argument is greater, and 0 if they are equal |

Mapping Sources

Symbolic variables available in expressions assigned to params:

  • Ch{ch}Cc{cc}: midi channel ch, value of cc, normalized to range [0, 1]
  • Ch{ch}Vel{pitch}: midi channel ch, velocity of pitch, normalized to range [0, 1]
  • Peak(¹): audio peak from envelope follower, this corresponds to beat if your kick is on the beat, max 1.0
  • MidiBeat(²): normalized time since beginning of beat (0.0 on the beat, 0.5 on the off-beat)
  • MidiBar(²): normalized time since beginning of bar (0.0 on each bar boundary, 0.5 at half)
  • BarTime(²): fractional time (in bars) since starting the process

(¹): only useful when using audio input
(²): only useful when using midi input from file. When using midi port, value will always be 0.

Mapping Targets

  • Every shader pass can define its own parameters, simply by defining a variable in this format:
    float <name> = <default value>; //! PARAM
    Each parameter must be on a separate line. If multiple passes in the same shader define a parameter of the same name, they will all be mapped to the expression assigned to that parameter (so they will have the same value if there is a mapping; otherwise each keeps its own (possibly different) default value).
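For example, a pass could declare two mappable parameters like this (the names are illustrative):

float brightness = 0.5; //! PARAM
float speed = 1.0; //! PARAM

These names can then be used as keys in a layer's cfg block, e.g. "brightness": "Ch0Cc0".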

Producing videos for Youtube

Youtube supports the following (all mentioned elsewhere in this document): 360° video in equirectangular projection, stereoscopic VR video, resolutions up to 8K, and first-order ambisonic audio in ambiX format plus an optional head-locked stereo track.

The video will only play correctly on YT if it has the right metadata, so remember to pass the right flags, like --ambix when you have ambisonic audio!
E.g. to record a stereoscopic 360° video in 8k resolution at 30 fps with ambisonic audio and auto-finalize (-f) after recording and inject 360° metadata (-j), run this:

sboi -i <id> -m midi.mid -a audio.wav -r 30 -s 8k --vr rec --ambix --finalize --inject

(Instead of running a single shader with -i <id> you can also pass the path to a project file.)
For ambisonic audio, the audio file must be a multi-channel wav file, 48 kHz sample rate recommended.
First-order ambisonics in ambix format: W, Y, Z, X
If it has an additional head-locked stereo track: W, Y, Z, X, L, R
Note: VLC can't play this kind of video yet, don't worry if it crashes.
If you have a VR headset, you can view the video in Vive Cinema.
Upload the video to Youtube and leave it private or unlisted while it's being processed. For a large video, it can take a long time until it's fully processed. The 360° playback will only look correct after processing has finished.
Note: You might be tempted to render the video at 60 fps, but keep in mind that this will severely reduce the number of people who can watch it at 8k (on streaming sites like YT), due to the increased bandwidth and processing power required. In 360° playback you only see a quarter of the horizontal and half of the vertical resolution (8k looks more like FullHD normally does), so you should make it easy for people to view your video in 8k.

Going Further

Producing Ambisonic Audio

Free VSTs

Learning Material
