To model a scanline, we can start by modeling the CRT's spot and then model its translation across the screen.
What should we use to model the spot? Something based on a two-dimensional gaussian seems like a natural first choice, but it has a couple of problems:
- A gaussian extends to infinity, so we would have to window or truncate it.
- Two overlapping gaussians form higher peaks than either one by itself. Imagine two scanlines next to each other: near the peaks of the individual gaussians, their sum forms slightly taller peaks. If this were our spot model, we would have to deal with these by rescaling the gaussians or their sums to stay within the range of 0 to 1. See overlap-gaussian.png for a visual example.
Neither of these problems is insurmountable, but they make the math a bit more annoying. Instead, we can use a raised cosine:
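As a sketch of what such a profile could look like in one dimension (the exact parameterization is my assumption, not necessarily the one used here), a raised cosine of half-width $w$ peaks at 1 and falls to exactly 0 at $\pm w$, so it needs no windowing:

$$
\mathrm{spot}_{1D}(x) =
\begin{cases}
\dfrac{1 + \cos(\pi x / w)}{2} & |x| \le w \\
0 & \text{otherwise}
\end{cases}
$$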
When two adjacent raised cosines are added together, the sum will never be above the peaks as long as they are at least half of their widths apart. I believe I got this idea from crt-lottes-fast, although that shader uses a different way of calculating scanlines. See overlap-cosine.png for a visual example.
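As a quick check of that overlap claim (using the one-dimensional form sketched above): if two such raised cosines are centered exactly $w$ apart, the cosine terms cancel in the overlap region and the sum is identically 1, never exceeding the individual peaks:

$$
\frac{1 + \cos(\pi x / w)}{2} + \frac{1 + \cos\!\bigl(\pi (x - w) / w\bigr)}{2}
= 1 + \frac{\cos(\pi x / w) - \cos(\pi x / w)}{2} = 1
\quad \text{for } 0 \le x \le w.
$$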
We can generalize this to two dimensions:
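One plausible two-dimensional form (an assumption on my part; the slightly squared-off look in the comparison below is consistent with it) is a separable product of two raised cosines:

$$
\mathrm{spot}(x, y) = \frac{\bigl(1 + \cos(\pi x / w)\bigr)\bigl(1 + \cos(\pi y / w)\bigr)}{4},
\qquad |x| \le w,\ |y| \le w
$$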
Compare the images below: spot_gaussian.png and spot_cosine.png. The cosine spot is slightly squared off in comparison but is pretty similar.
The size and intensity of the spot also vary with the input brightness. We want the spot to:
- Have an integral over its bounds equal to $s$, so that its overall brightness varies linearly.
- Have a width $\sigma(s)$ that increases as the brightness increases. In other words, the spot should get larger as its intensity increases.
We can satisfy both of these constraints:
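For example (a sketch under my own assumptions, with $\sigma(s)$ left as whichever increasing function of brightness is chosen), scaling the separable raised-cosine spot by $s/\sigma(s)^2$ makes its integral over its bounds equal to $s$ while its width follows $\sigma(s)$:

$$
\mathrm{spot}(x, y, s) = \frac{s}{\sigma(s)^2} \cdot
\frac{\bigl(1 + \cos(\pi x / \sigma(s))\bigr)\bigl(1 + \cos(\pi y / \sigma(s))\bigr)}{4},
\qquad
\iint \mathrm{spot}\,dx\,dy = s
$$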
Now, imagine the spot being scanned across the screen horizontally over time. The areas it lights up are integrated by our eyes (helped by the persistence of the phosphor) to look like lines. We can model a scanline with an integral and the addition of a time element:
(Note that the bounds of the spot function are omitted for visual clarity.)
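A hedged reconstruction of the kind of integral being described (the exact parameterization here is my own, with the spot's bounds likewise omitted), where $t$ sweeps the spot's center across the line and $s(t)$ is the input signal at that position:

$$
\mathrm{scanline}(x, y) = \int_0^1 \mathrm{spot}\bigl(x - t,\; y,\; s(t)\bigr)\,dt
$$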
We essentially scan the spot over the screen between the start and end of the integration interval.
As for the width function $\sigma(s)$, I would like to measure an actual CRT to check how accurate to reality it is, but my JVC is currently relegated to the garage.
So now, with all of this together, we can solve our scanline integral either analytically or numerically. Solving it analytically is practical when the input signal is simple enough to integrate in closed form.
For any given point on the screen, we can estimate its value by adding the contributions from the two nearest scanlines. As long as the spot stays narrow enough relative to the scanline spacing, those are the only two scanlines that contribute.
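A minimal numerical sketch of that estimate, assuming the hedged spot and scanline formulas above (`sigma` here is a placeholder width function, not the one actually used; distances are measured in scanline spacings, so scanline `n` is centered at `y = n`):

```python
import numpy as np

def sigma(s, sigma_max=0.9):
    """Placeholder width function: the spot widens as brightness increases."""
    return sigma_max * (0.5 + 0.5 * s)

def spot(dx, dy, s):
    """Separable raised-cosine spot centered at (0, 0), with integral equal to s."""
    w = sigma(s)
    if abs(dx) > w or abs(dy) > w:
        return 0.0
    return (s / w**2) * (1 + np.cos(np.pi * dx / w)) * (1 + np.cos(np.pi * dy / w)) / 4

def scanline_value(x, y, signal, y_line, width, samples_per_unit=8):
    """Numerically integrate the spot as it sweeps horizontally across one scanline."""
    n = int(width * samples_per_unit)
    total = 0.0
    for t in np.linspace(0.0, width, n):
        s = signal[min(int(t / width * len(signal)), len(signal) - 1)]
        total += spot(x - t, y - y_line, s)
    return total * width / n  # approximate the integral with a Riemann sum

def pixel_value(x, y, signal_rows, width):
    """Estimate a screen point by adding the contributions of the two nearest scanlines."""
    below = int(np.floor(y))
    value = 0.0
    for line in (below, below + 1):
        if 0 <= line < len(signal_rows):
            value += scanline_value(x, y, signal_rows[line], line, width)
    return value
```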
Note that if we had instead projected our spot to two dimensions as
Hello @aduffey !
I didn't realize you had personally invited me to your private gist until I checked my gists -- I'm flattered to be invited here.
I presume it's because you found out about my temporal CRT electron beam simulator at www.github/blurbusters/crt-beam-simulator -- which can be used independently of (or in conjunction with) other spatial CRT electron beam simulators.
I'm also on Discord as @blurbusters as another chat venue if you need a Temporal Expert in CRT. I leave spatials (filters) to people like you; I mostly focus on the time dimension.
First, let me compliment you on your novel method of simulating a CRT electron beam. You're on the right path, spatially. And in theory it's possible to combine the two, but it's a bit challenging at the moment. As soon as I've upgraded TestUFO to run shaders (ETA: summer), I'm going to release CRT simulator version 2, which may be easier to combine/port into existing spatial filter simulators.
Now I would like to address a few things, since combining a spatial simulation with the temporal dimension needs a bit of tweaking towards best practices (see bullet 4 below):
(1) The CRT beam spot is often 10,000 nits to 30,000+ nits at the raster dot. It falls off really rapidly though, in an inverse logarithmic curve. If you look at non-overexposed high-speed video (e.g. SuperSloMo Guys Caliber, rather than other people's caliber), you will notice that only one scanline is super-bright.
(2) To be perceptually correct in the temporal domain, the leading edge still needs to be fairly sharp (not blurry), since a CRT is fast-rise, slow-fall.
(3) You can "cheat" by spreading brightness over multiple scanlines. 30,000 nits for a few pixels can be spread to 1,000 nits over a few scanlines. This requires a bit of a "photon accelerate" at the top of the inverse logarithmic curve. I use an approximation of such an algorithm.
(4) If you photonspread "in a best-effort, temporally-faithful way, as close as technologically possible to the original CRT", then definitely photonspread ONLY onto the most recently visited pixels (aka the trailing few scanlines). But some minor practicalities occur. I do have to blur the leading edge a bit, because a sharp edge creates a tearing artifact (VSYNC OFF style) during fast horizontal pans. So when you pause my shadertoy, you see a lot of concentrated light surging in as few refresh cycles as possible, with trailing blur bigger than leading blur -- just like a real CRT (fast rise, slow fall).
The tactic is to spew out photons quickly, then taper. For example, a half-bright (linear brightness) grey can be blasted out in the first refresh cycle of two, e.g. RGB(255,255,255) followed by RGB(0,0,0). Short pulsewidth is key, because motion blur is proportional to pixel visibility time, as demonstrated by https://www.testufo.com/blackframes#count=4 as seen on a 240Hz OLED. (For linear blur maths, where 240Hz = a 1/240sec camera shutter, you want GtG=0. Having GtG<>0 interferes with blur, so experiment to get as close to GtG==0.00 as possible; GtG is like a slowly-moving camera shutter before/after refresh time, as explained at www.blurbusters.com/120vs480 ...)
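To spell out the half-bright example in linear light (a simplified illustration of the time-averaging idea, ignoring the gamma curve discussed below): full output for one refresh cycle out of two averages to the same luminance as a steady half-bright grey,

$$
\frac{L_{\max} + 0}{2} = \tfrac{1}{2} L_{\max}.
$$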
Note: I do have a known side effect in my CRT simulator. Both Tim and I do the photonspreading (since a single refresh cycle can't possibly surge out the 30,000 nits, haha) on a per-channel basis in that filter. One problem with that is that it can cause chroma ghosting if some color channels can surge out quicker (1 refresh cycle) than others (2-3 refresh cycles). My CRTv2 will include a chroma ghosting adjustment to synchronize the photonspreading to the slowest color channel available. But it will be a continuum between the amount of motion blur (as per the display motion blur science of Blur Busters fame) and the amount of chroma ghosting -- the user's own choice.
Talbot Plateau Law is Key With Light Modulation
If you plan to do motion blur reduction in software, you must become an expert in Talbot Plateau Law.
Now, the tricky part is the Talbot-Plateau law and linear-correctness. RGB(127,127,127) is not half the photon brightness of RGB(254,254,254) because of the gamma curve. So I have to apply a gamma2linear() formula before doing my CRT simulator maths, then a linear2gamma() afterwards. Otherwise I get problems like banding (more common on VA and TN LCDs); it helps to lift your black levels and lower your white levels, to make linearization easier without white/black clipping.
The nice thing about the gamma curve is that everything from RGB(0,0,0) through RGB(188,188,188) can be squeezed into the first refresh cycle of two on a 120Hz OLED for 60fps CRT simulation (approximately). So almost 75% of RGB values can be blur-reduced by a full, perfect 50%.
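A minimal sketch of that linearize, split, re-encode round trip (my own illustration, assuming the sRGB transfer function; the actual shaders may use a different curve):

```python
def srgb_to_linear(v):
    """8-bit sRGB value (0-255) to linear light (0.0-1.0)."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Linear light (0.0-1.0) back to an 8-bit sRGB value."""
    c = 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055
    return round(c * 255.0)

def split_over_two_cycles(v):
    """Emit one 60fps frame's light over two 120Hz refresh cycles:
    surge as much as possible into the first cycle and put the remainder
    into the second. Talbot-Plateau: the eye averages the two cycles
    back to the original brightness."""
    budget = 2.0 * srgb_to_linear(v)   # total linear light across both cycles
    first = min(budget, 1.0)           # first cycle as bright as the display allows
    second = budget - first            # leftover light goes into the second cycle
    return linear_to_srgb(first), linear_to_srgb(second)

print(split_over_two_cycles(188))  # -> (255, 17): fits almost entirely in one cycle
print(split_over_two_cycles(255))  # -> (255, 255): full white needs both cycles
```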
Now, you want extra margin, e.g. a 240Hz OLED, or 480Hz for CRT beam simulation, and keep persistence low, so that full whites don't take more than half of the refresh cycles of a CRT simulation. Plus, oversample for the Nyquist issues of spreading one simulated Hz over multiple native Hz. So a native:simulated Hz ratio of at least 4 works best with CRT electron beam simulation; but a ratio of 2 can still help some displays (to a limited extent). Viewing TestUFO Variable Persistence Black Frame Insertion is probably highly educational: fully software-based motion blur reduction is throttled by the native:simulated Hz ratio (a 480Hz OLED reduces 60fps 60Hz CRT-simulation motion blur by 87.5%, down to just a 60:480 ratio). There is a bit of extra fuzz above that, though, due to the alphablend overlaps between adjacent CRT-simulated refresh cycles (necessary).
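The 87.5% figure follows directly from that ratio: each simulated 60Hz refresh is lit for roughly one native refresh out of the eight available, so

$$
1 - \frac{60}{480} = 1 - \frac{1}{8} = 87.5\%.
$$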
Incidentally, the ground-truth proof is a dynamically camera-sensor-range-equalized 480fps high-speed video of a 60fps CRT tube, played back in realtime on a 480Hz OLED: it kinda temporally looks like the original tube -- flickerfeel, lowblurfeel, and even the parallelogramming during eyerolls. So, my CRT beam simulator aims to be a shader equivalent of simulating that. The more Hz oversampling and the closer to analog real life, the more of the temporal aspects of a CRT tube are preserved (including flickerfeel, zeroblurfeel, phosphor decayfeel, etc.).
Hope this temporal advice helps!
Some additional notes/tips:
If you're having banding issues, try these first. Easiest is to buy an OLED of at least 240Hz, but a very good 240Hz IPS LCD will be adequate; 6-bit TN and slow VA pixel response create problems that are hard to fix in electron beam simulators. Slower, older MiniLED backlights also create problems, so if you go MiniLED, make sure it's superfast (e.g. a 2025-era Lenovo 240Hz MiniLED laptop worked, but a cheap MSI MAG MiniLED monitor lagged badly), since the backlight has to keep up with the quick flashing. So, YMMV, but in 5 years most displays should reasonably comply with the Blur Busters Open Source Display Initiative ( www.blurbusters.com/open-source-display )... A few apps, like RetroArch, WibbleWobbleCore, and Steam's Vint App, have started getting ready to support future Blur Busters temporal shaders (including my CRTv2 and a plasma TV simulator). Right now I'm upgrading TestUFO first, as TestUFO 3.0 will support shaders.
I am always happy to collab with other spatial experts, bringing my temporal expertise; we're just combining separate dimensions on a best-effort basis, given the tech limitations of today (not enough refresh rate).