@aduffey
Last active April 25, 2025 22:54
Scanline simulation math

To model a scanline, we can start by modeling a CRT's spot and then model its translation across the screen.

What should we use to model the spot? Something based on a two-dimensional gaussian ($e^{-x^2}e^{-y^2}$) is the obvious answer, but that has a couple of downsides:

  1. A gaussian extends to infinity, so we would have to window or truncate it.
  2. Two overlapping gaussians form a sum with higher peaks than either one alone. Imagine two scanlines next to each other: near the peaks of the individual gaussians, their sum forms slightly taller peaks. If this were our spot model, we would have to deal with these by rescaling the gaussians or their sums to stay within the range of 0 to 1. See overlap-gaussian.png for a visual example.

Neither of these problems is insurmountable, but they make the math a bit more annoying. Instead, we can use a raised cosine:

$$ g(x) = \begin{cases} \frac{1}{2} + \frac{1}{2} \cos(\pi x) & \text{if } x \in [-1,1]\\ 0 & \text{otherwise} \end{cases} $$

When two adjacent raised cosines are added together, the sum never rises above the individual peaks as long as their centers are at least half of the spot width apart. I believe I got this idea from crt-lottes-fast, although that shader uses a different way of calculating scanlines. See overlap-cosine.png for a visual example.
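
As a quick numeric sanity check of that overlap property (a throwaway sketch, not part of any shader), two unit-height raised cosines whose centers sit one half-width apart sum to exactly 1 across the overlap region:

```python
import numpy as np

def raised_cosine(x):
    """g(x) = 1/2 + 1/2*cos(pi*x) on [-1, 1], zero elsewhere."""
    return np.where(np.abs(x) <= 1.0, 0.5 + 0.5 * np.cos(np.pi * x), 0.0)

# Centers at 0 and 1 (half of the support width of 2 apart).
x = np.linspace(-2.0, 3.0, 2001)
total = raised_cosine(x) + raised_cosine(x - 1.0)
print(total.max())  # ~1.0 -- the sum never exceeds the individual peaks
```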

We can generalize this to two dimensions:

$$ g(x, y) = \begin{cases} \frac{1}{4} (1 + \cos(\pi x)) (1 + \cos(\pi y)) & \text{if } x \in [-1,1] \text{ and } y \in [-1,1]\\ 0 & \text{otherwise} \end{cases} $$

Compare the images below: spot_gaussian.png and spot_cosine.png. The cosine spot is slightly squared off in comparison but is pretty similar.

The size and intensity of the spot also varies. We can consider $s$ to be the spot's intensity, with a range of 0 at full black to 1 at full white. The spot should:

  • Have an integral over its bounds equal to $s$ so that its overall brightness varies linearly.
  • Have a width $\sigma(s)$ that increases as the brightness increases. In other words, the spot should get larger as its intensity increases.

We can satisfy both of these constraints:

$$ g(x, y, s) = \begin{cases} \frac{s}{4 \sigma(s)^{2}} (1 + \cos(\frac{\pi x}{\sigma(s)})) (1 + \cos(\frac{\pi y}{\sigma(s)})) & \text{if } x \in [-\sigma(s),\sigma(s)] \text{ and } y \in [-\sigma(s),\sigma(s)] \\ 0 & \text{otherwise} \end{cases} $$

Now, imagine the spot being scanned across the screen horizontally over time. The areas it lights up are integrated by our eyes (helped by the persistence of the phosphor) to look like lines. We can model a scanline with an integral and the addition of a time element:

$$ \int_{t_0}^{t_1} \frac{s(t)}{4 \sigma(s(t))^{2}} (1 + \cos(\frac{\pi (x-t)}{\sigma(s(t))})) (1 + \cos(\frac{\pi y}{\sigma(s(t))})) \ \mathrm{d}t $$

(Note that the bounds of the spot function are omitted for visual clarity.)

We essentially scan the spot over the screen between time $t_0$ and $t_1$ (the start and end of the scanline), adding up all the light that was produced. This is similar to a convolution. The result of our integral is the sum of the light at a given point $(x,y)$, where $x=0$ is the start of the scanline and $y=0$ is the center of the scanline. Units are scanline widths (the distance from the center of one scanline to the center of the next).

For $\sigma(s)$, a square root bounded by a maximum, $\sigma_{max}$, and minimum, $\sigma_{min}$ (as a proportion of the maximum), seems appropriate:

$$ \sigma(s) = \sigma_{max} ((1 - \sigma_{min}) \sqrt{s} + \sigma_{min}) $$

I would like to measure an actual CRT to check how accurate to reality this width function is, but my JVC is currently relegated to the garage.
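
For reference, here is a minimal Python sketch of the width function and spot, assuming example values for $\sigma_{max}$ and $\sigma_{min}$ (these are tunable parameters, not measured values):

```python
import numpy as np

SIGMA_MAX = 0.8  # example tuning values, not measured from a CRT
SIGMA_MIN = 0.3  # minimum width as a proportion of the maximum

def sigma(s):
    """Spot width for intensity s in [0, 1]."""
    return SIGMA_MAX * ((1.0 - SIGMA_MIN) * np.sqrt(s) + SIGMA_MIN)

def spot(x, y, s):
    """2D spot g(x, y, s); integrates to s over its support [-sigma, sigma]^2."""
    sig = sigma(s)
    inside = (np.abs(x) <= sig) & (np.abs(y) <= sig)
    value = (s / (4.0 * sig**2)
             * (1.0 + np.cos(np.pi * x / sig))
             * (1.0 + np.cos(np.pi * y / sig)))
    return np.where(inside, value, 0.0)
```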

So now, with all of this together, we can solve our scanline integral either analytically or numerically. Solving it analytically is practical when the input $s(t)$ is the original pixel data, because we can treat it as a piecewise constant signal and integrate in pieces. If we prepend a low pass filter, we resample our data and need to solve the integral numerically: instead of a piecewise constant signal, we have a series of equally-spaced samples of $s(t)$, and we can use these samples to estimate the integral using the rectangle rule.
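
A rough sketch of that numerical path, reusing `spot()` from the sketch above (the midpoint sample placement and the exact argument conventions are my assumptions, not the gist's):

```python
def scanline(x, y, samples, t0, t1):
    """Rectangle-rule estimate of the scanline integral at (x, y).

    `samples` are equally spaced values of s(t) on [t0, t1]; x, y, and t are
    all measured in scanline widths, matching the convention above.
    """
    dt = (t1 - t0) / len(samples)
    ts = t0 + (np.arange(len(samples)) + 0.5) * dt  # midpoint of each rectangle
    return sum(spot(x - t, y, s) * dt for t, s in zip(ts, samples))
```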

For any given point on the screen, we can estimate its value by adding the estimates from the two nearest scanlines. As long as $\sigma_{max} \le 1$, no other scanline will contribute to its brightness.
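
Putting it together, a hypothetical evaluation of one screen point might sum the two nearest scanlines like this (the `rows` layout, with scanline centers at integer $y$ values, is an assumption for illustration):

```python
def pixel(x, y, rows, t0, t1):
    """Sum the contributions of the two scanlines nearest to y."""
    lower = int(np.floor(y))
    value = 0.0
    for row in (lower, lower + 1):
        if 0 <= row < len(rows):  # rows[row] holds that scanline's s(t) samples
            value += scanline(x, y - row, rows[row], t0, t1)
    return value
```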

Note that if we had instead projected our spot to two dimensions as $\frac{1}{2} + \frac{1}{2} \cos(\pi \sqrt{x^2 + y^2})$, adjacent, full-brightness scanlines would not add to 1 and would instead have peaks. To keep this property through the integral, we need to keep the $x$ and $y$ components separable.


mdrejhon commented Apr 25, 2025

Hello @aduffey !
I didn't realize you personally invited me to your private gist until I checked my gists -- I'm flattered you invited me here.

I presume it's because you found out about my temporal CRT electron beam simulator at www.github/blurbusters/crt-beam-simulator -- which can be used independently of (or in conjunction with) other spatial CRT electron beam simulators.

I'm also on Discord as @blurbusters as another chat venue if you need a Temporal Expert in CRT. I leave spatials (filters) to people like you; I mostly focus on the time dimension.

First, let me compliment you on your novel method of simulating a CRT electron beam. You're on the right path, spatially. And in theory it's possible to combine the two, but it's a bit challenging at the moment. As soon as I've upgraded TestUFO to run shaders (ETA: summer), I'm going to release CRT simulator version 2 which may be easier to combine/port into existing spatial filter simulators.


Now I would like to address a few things, since combining a spatial filter with the temporal dimension needs to be tweaked a bit towards Best Practices (item 4 below).

(1) The CRT beam spot is often 10,000 nits to 30,000+ nits at the raster dot. It falls off really rapidly though, in an inverse logarithmic curve. If you look at non-overexposed high-speed video (e.g. SuperSloMo Guys caliber, rather than other people's caliber), you will notice that only one scanline is super-bright.

(2) To perceptually match a CRT in the temporal domain, the leading edge still needs to be fairly sharp (not blurry), as a CRT is fast-rise, slow-fall.

(3) You can "cheat" by spreading brightness over multiple scanlines. 30,000 nits for a few pixels can be spread to 1000 nits over a few scanlines. This requires a bit of a "photon accelerate" at the top of the inverse logarithmic curve. I use an approximation of such an algorithm.

(4) If you photonspread "in a best-effort temporally-faithful way, as close to the original CRT as technologically possible", then definitely photonspread ONLY on the most recent pixels visited (aka the trailing few scanlines). But some minor practicalities occur. I have to blur the leading edge a bit, though, because a sharp edge creates a tearing artifact (VSYNC OFF style) during fast horizontal pans. So when you pause my shadertoy, you see a lot of concentrated light surging in as few refresh cycles as possible, with trailing blur bigger than leading blur -- just like a real CRT (fast rise, slow fall).

The tactic is to spew out photons quickly, then taper. For example, a half-bright (linear brightness) grey can be blasted out in the first refresh cycle of two, e.g. RGB(255,255,255) followed by RGB(0,0,0). Short pulsewidth is key, because motion blur is proportional to pixel visibility time, as demonstrated by https://www.testufo.com/blackframes#count=4 as seen on a 240Hz OLED. (For linear blur math, where 240Hz = 1/240sec camera shutter, you want GtG=0. GtG<>0 interferes with blur, so get GtG as close to 0.00 as possible; GtG is like a slowly-moving camera shutter before/after the refresh time, as explained at www.blurbusters.com/120vs480 ...)

Note: I do have a known side effect in my CRT simulator. Both Tim and I do the photonspreading (since a single refresh cycle can't possibly surge out the 30,000 nits, haha) on a per-channel basis in that filter. One problem with that is that it can cause chroma ghosting if some color channels can surge out quicker (1 refresh cycle) than others (2-3 refresh cycles). My CRTv2 will include a chroma ghosting adjustment to synchronize the photonspreading to the slowest color channel available. But it will be a continuum between the amount of motion blur (as per the Display Motion Blur science of Blur Busters fame) and the amount of chroma ghosting -- the user's own choice.

Talbot Plateau Law is Key With Light Modulation

If you plan to do motion blur reduction in software, you must become an expert in Talbot Plateau Law.

Now, the tricky part is the Talbot Plateau Law and linear-correctness. RGB(127,127,127) is not half the photon brightness of RGB(254,254,254) because of the gamma curve. So I have to do a gamma2linear() formula before doing my CRT simulator math, then a linear2gamma(). Otherwise I get problems like banding (more common on VA and TN LCDs); it helps to lift your black levels and lower your white levels, to make linearization easier without white/black clipping.
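
A minimal sketch of that linearize-then-reencode step, assuming a plain 2.2 power curve (real display transfer functions differ slightly); the two-cycle split is only to illustrate keeping the Talbot-Plateau average intact, not the actual shader:

```python
def gamma2linear(c):
    """Gamma-encoded channel value in [0, 1] -> linear light."""
    return c ** 2.2

def linear2gamma(c):
    """Linear light in [0, 1] -> gamma-encoded channel value."""
    return c ** (1.0 / 2.2)

def split_two_cycles(c):
    """Split a steady channel value across 2 refresh cycles, front-loaded.

    The average *linear* light of the two cycles equals the original steady
    value, so perceived brightness is preserved (Talbot-Plateau).
    """
    light = gamma2linear(c) * 2.0   # total linear light to emit over 2 cycles
    first = min(light, 1.0)         # surge as much as possible up front
    second = light - first          # remainder trails into the second cycle
    return linear2gamma(first), linear2gamma(second)
```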

The nice thing about the gamma curve is that everything from RGB(0,0,0) through RGB(188,188,188) can be squeezed into the first refresh cycle of two on a 120Hz OLED for 60fps CRT simulation (approx). So almost 75% of RGB values can be blur-reduced by a full, perfect 50%.
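
As a rough sanity check with a plain 2.2 power curve (the exact cutoff depends on the display's actual transfer function):

$$ \left(\frac{188}{255}\right)^{2.2} \approx 0.51 $$

i.e. a code value around 188 carries roughly half the linear light of full white, which is why it can be emitted entirely within one of the two refresh cycles.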

Now, you want extra margin, e.g. a 240Hz OLED, or 480Hz for CRT beam simulation, and keep persistence low, so that full whites don't take more than half of the refresh cycles of a CRT simulation. Plus, oversample for the Nyquist issues of spreading one simulated Hz over multiple native Hz. So a native:simulated Hz ratio of at least 4 works best with CRT electron beam simulation, but a ratio of 2 can still help some displays (to a limited extent). Viewing TestUFO Variable Persistence Black Frame Insertion is probably highly educational: fully software-based motion blur reduction is throttled by the native:simulated Hz ratio (a 480Hz OLED reduces 60fps 60Hz CRT simulation motion blur by 87.5%, down to just the 60:480 ratio). There is a bit of extra fuzz above that, though, due to the (necessary) alphablend overlaps between adjacent CRT-simulated refresh cycles.
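
The 87.5% figure follows directly from the ratio of simulated to native refresh rates:

$$ 1 - \frac{60\,\text{Hz}}{480\,\text{Hz}} = 1 - 0.125 = 87.5\% $$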

Incidentally, the ground-truth proof is a dynamically camera-sensor-range-equalized 480fps high-speed video of a 60fps CRT tube, played back in realtime on a 480Hz OLED: it kinda temporally looks like the original tube -- flickerfeel, lowblurfeel, and even the parallelogramming during eyerolls. So my CRT beam simulator aims to be a shader equivalent of simulating that. The more Hz oversampling and the closer to analog real life, the more of the temporal aspects of a CRT tube are preserved (including flickerfeel, zeroblurfeel, phosphor decayfeel, etc).

Hope this temporal advice helps!

Some additional notes/tips:

  • Note: I have tweaks that help people fix banding in my CRT simulator: blurbusters/crt-beam-simulator#4
    If you're having banding issues, try these first. The easiest fix is to buy an OLED of at least 240Hz, but a very good IPS 240Hz LCD will be adequate; TN 6-bit panels and slow VA pixel response create problems that are hard to fix in electron beam simulators. Also, slower older MiniLED backlights create problems, so if you go MiniLED, make sure it's superfast (e.g. a 2025-era Lenovo 240Hz MiniLED laptop worked, but a cheap MSI MAG MiniLED monitor lagged badly), since the backlight has to keep up with the quick flashing. So, YMMV, but in 5 years, most displays should reasonably comply with the Blur Busters Open Source Display Initiative ( www.blurbusters.com/open-source-display ). A few apps like RetroArch, WibbleWobbleCore, and Steam's Vint App have started getting ready to support future Blur Busters temporal shaders (including my CRTv2 and a plasma TV simulator). Right now I'm upgrading TestUFO first, as TestUFO 3.0 will support shaders.
  • TIP: HDR helps brighten BFI and CRT simulators, with some caveats. Some devices like Retrotink and software like RetroArch can upconvert SDR to HDR and use the HDR headroom to surge brightness. Many HDR displays can do 1000-1500 nits, so you can overcome the dimming a bit and do less photonspreading with the extra HDR headroom. The more HDR headroom, the less photonspreading you need to do. Also, HDR pixels can be brighter if only a few pixels are lit. Tomorrow's 960Hz OLEDs will only need to light up less than 10% of pixels brightly while doing a CRT beam simulation, making photon spreading much more accurate. It matters less whether human eyes can tell apart a 1 nanosecond CRT risetime or a 1 millisecond CRT risetime, as long as the WHOLE pulsewidth stays short (e.g. 1-2 milliseconds). Now, one problem with HDR is the nonlinear behavior that makes it hard to gamma2linear / linear2gamma. So you have to do things like pq2linear and linear2pq and stay within the linear region before it distorts (a brightness slider + calibration test pattern helps), but usually you can go up to 2x brighter than SDR before it distorts. (Even Digital Foundry was gawking over HDR's amazing brightening benefit to BFI.) So one good thing about HDR for CRT simulation is getting closer to CRT beam spot brightness and keeping the trailing photonspreading tighter... (In other words: it no longer becomes noticeable whether you photonspread over 1 trailing scanline or 30 trailing scanlines; it's within human perceptual error margin on a 960Hz OLED for retro-resolution content...)
  • Also, as an approximate rule of thumb (which I can scientifically explain, at your request), at the maximally optimal CRT simulation variables/settings/uniforms, scroll speeds will stay clear up to approximately 1x to 2x the refresh rate in pixels/sec. So low-resolution 60fps games like 320x240 will stay very sharp at 640 pixels/sec scrolling speeds on a 320Hz+ OLED running the CRT simulator (at 8 generated refresh cycles per 1 simulated refresh cycle), for 60fps content at 480Hz, at an extremely minimum GAIN_VS_BLUR setting (e.g. 0.125 for a 480Hz OLED). Bear in mind that sharper content (e.g. 1080p content) will still begin to show motion blur limitations again even at 480fps on a 480Hz OLED, and we'll still need 2000fps 2000Hz OLEDs to push things beyond human perceptual error margins yet again... Then 4K/8K material, which has a retina refresh rate well north of 2000Hz... Big rabbit hole, explained in various Coles Notes explainers at the purple Research button on my main website...

I am always happy to collab with other spatial experts with my temporal expert skills; we're just combining separate dimensions in a best-effort basis, given tech limitations of today (not enough refresh rate).
