Fast Integer Lightness: brintness

brintness is an integer brightness/lightness/darkness calculation

This is part of an experiment in estimating a perceived brightness while remaining in integer math and using bitshifts to maximize performance.

The Issue

The traditional means of determining the perceived lightness or brightness of a given color value is to first normalize R, G, and B from 0-255 to 0.0-1.0, then linearize the values via an exponent or more exotic methods (we assume colors are in a gamma-encoded color space, such as sRGB), then create a linear luminance value by applying coefficients to each of the R, G, and B values and summing them, and finally apply an exponent or more exotic math to find a predicted lightness value.
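As a reference point, here is a minimal sketch of that traditional pipeline in C, assuming a simple 2.2 gamma rather than the IEC piecewise transform, and using only the main segment of CIE L* for the final lightness step:

    // Traditional pipeline sketch: normalize, linearize, weight & sum, lightness.
    // r, g, b are 0-255 sRGB values; returns an L*-style lightness of 0-100.
#include <math.h>

float traditionalLightness(int r, int g, int b) {
    float R = powf(r / 255.0f, 2.2f);                  // normalize & linearize
    float G = powf(g / 255.0f, 2.2f);
    float B = powf(b / 255.0f, 2.2f);
    float Y = R * 0.2126f + G * 0.7152f + B * 0.0722f; // linear luminance
    float L = 116.0f * cbrtf(Y) - 16.0f;               // CIE L*, main segment only
    return L < 0.0f ? 0.0f : L;                        // clamp near-black values
}

Every pixel pays for three pow calls and a cube root here, which is exactly the cost the rest of this Gist tries to avoid.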

This is computationally expensive. And even then, we generally miss factors such as the Helmholtz-Kohlrausch (HK) effect, and the method as described does not consider the importance of context. In other words, we may say "this is the accurate way" and yet it still lacks accuracy.

The Unbearable Lightness of Perception

So the commonly accepted methods lack inherent accuracy due to disregarding certain factors. And RGB color spaces are often encoded with a gamma or transfer curve of some type which, while different from a true lightness curve, is still "in the ballpark" in terms of perception.

And let's not forget that the human vision system has its own built-in gain control that makes measuring lightness perception a frustrating task that is still a matter of emerging science.

How Fast Does Red Weigh?

Light in the world follows simple linear math. That is, if you have 100 photons of light and triple it, you then have 300 photons of light. Human vision does not perceive light linearly, however: a given change in light value will result in a larger or smaller change in perceived lightness, depending on a number of contextual factors.

And, light does not have a "color", as color is only a perception of our vision system. But light does have different wavelengths or frequencies, like musical notes on a piano for want of an analogy. But also, human vision is most sensitive to a very narrow range of "middle notes", the middle wavelengths we identify as green, with sensitivity rapidly dropping off for shorter (blue) or longer (red) wavelengths.

Note

So to model the mixing of light, we often want to be in a linear space, but when we want to predict how we see a color or lightness, we want to be in a space that is curved per our perception in the given context.

Among the implications is that each of the red, green, and blue primaries in our display is weighted differently based on an averaged visual sensitivity to each, so that #ffffff, i.e. equal values of R, G, & B, appears as white or grey. Because these weights are being applied to light sources, they should ideally be applied in a linear space. If you apply spectral weighting to values that are gamma or TRC encoded, you'll get some errors, most noticeable in the middle ranges.

Never The Same Color: NTSC

With all of the above as some foundation, let's not forget that for decades, NTSC encoding applied the Luma ($Y^\prime$, i.e. Y prime) weighting to gamma-encoded signals. Luma is a gamma-encoded achromatic signal, and it is what black and white televisions displayed.

Note

In the examples below, we assume $R, G, \& B$ are normalized to $0.0-1.0$

The common NTSC weights for Luma are $Y^\prime = R^\prime \times 0.299 + G^\prime \times 0.587 + B^\prime \times 0.114$. The fact that they are applied to gamma-encoded values is not that problematic as long as the decoding at the set uses the inverse transform. The image seen on black and white televisions, however, while essentially compatible, does look a bit different when fed a Luma signal vs. an actual black and white signal.

sRGB, which uses different primaries, applies its weights to linearized values: $Y = R \times 0.2126 + G \times 0.7152 + B \times 0.0722$. Here $Y$ is linear luminance, not gamma-encoded Luma $Y^\prime$.
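To make the $Y^\prime$ vs $Y$ distinction concrete, here is an illustrative sketch in C of the two quantities side by side, again assuming a simple 2.2 gamma for linearization rather than the IEC piecewise transform:

    // Luma Y' vs luminance Y. Inputs are gamma-encoded R', G', B' in 0.0-1.0.
#include <math.h>

// NTSC luma: weights applied directly to the gamma-encoded values
float ntscLuma(float Rp, float Gp, float Bp) {
    return Rp * 0.299f + Gp * 0.587f + Bp * 0.114f;
}

// sRGB relative luminance: linearize first, then apply the sRGB/Rec709 weights
float srgbLuminance(float Rp, float Gp, float Bp) {
    return powf(Rp, 2.2f) * 0.2126f
         + powf(Gp, 2.2f) * 0.7152f
         + powf(Bp, 2.2f) * 0.0722f;
}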

Note

And to be very clear, $Y^\prime \neq Y$.

The First Rule of Bright Club is...

Important

The point of this Gist is a method for calculating an achromatic lightness that is accurate enough to be useful, yet computationally fast enough to be suitable for realtime image analysis.

If the image being analyzed is in a gamma encoded space, and the gamma value is "close enough" to that of human lightness perception for the given case, then we can probably apply coefficients and sum without linearizing. The middle range of lightness/darkness and saturation will be the least accurate, while highest or lowest saturation or brightness will be the least affected by our "cheating" here, assuming we use the standard weightings for sRGB/Rec709.

$pseudoLightness = sR^\prime \times 0.2126 + sG^\prime \times 0.7152 + sB^\prime \times 0.0722 $

Though we might improve the middle range a bit at the expense of the high and low end, splitting the difference, if you will, by adjusting the weightings to spread the errors more evenly across the range as a compromise.

$pseudoLightness = sR^\prime \times 0.25 + sG^\prime \times 0.66 + sB^\prime \times 0.09 $ (experimental weighting)

Warning

CAVEAT: The following is beta and not fully tested yet; implementation depends on language, so below is pseudocode. Also, and I'll mention this often, these are not intended to be "accurate" lightness calculations, they are just intended to be FAST yet still reasonable...

Now, if we are working with 8-bit int values for each primary (so each is 0-255), we'd like a lightness value that is 0-100, and the language and/or hardware is fastest working with ints, then we might optimize for speed with:

    // r,g,b are 0-255, brintness is 0-100
int brintness = (r * 25 + g * 66 + b * 9 + 100) >> 8;

So here, integer coefficients are applied to the int color values, 100 is added, and the result is bit-shifted by 8, which is the same as dividing by 256. The result is an integer lightness value of 0-100.

Tip

The coefficients add up to 100, so the maximum value is 25500. If that is bit-shifted >> 8 (i.e. divided by 256) we get a 0-99 range. Adding 100 before the bit shift gives us the full 0-100 range.
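For a quick sanity check of the endpoints: pure white gives $(25500 + 100) \div 256 = 100$, and pure black gives $(0 + 100) \div 256 = 0$ with integer truncation.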

Alternatively, if we want to construct a B&W image or otherwise want to output a 0-255 range for brintness, then we can pre-multiply the coefficients relative to the size of the bit-shift. In this case we bit-shift >> 10, which is the same as dividing by 1024. Here, we took the weights from the previous example, multiplied by 10.24, then rounded or truncated back to ints so that the total of the weights is exactly 1024:

    // r,g,b are 0-255, brintness is 0-255
int brintness = (r * 256 + g * 676 + b * 92) >> 10;

Tip

And again, to be abundantly clear, these coefficients may not be useful nor accurate enough for any given application. We're cheating in the name of fewer cycles.

One more; this one may be the fastest, depending on hardware/language factors. Add two red, five green, and one blue. The bit-shift of >> 3 is the same as dividing by 8:

    // r,g,b are 0-255, brintness is 0-255
int brintness = (r+r+g+g+g+g+g+b) >> 3;

This last one is essentially equivalent to:

$floatingLightness = sR^\prime \times 0.25 + sG^\prime \times 0.625 + sB^\prime \times 0.125 $

Notice the closeness to the traditional NTSC values of $0.3,\ 0.59,\ 0.11$, kind of splitting the difference toward the sRGB values of $0.213,\ 0.715,\ 0.072$, at least for red and green.

Which is close enough for some applications. Or we can add 4 red, 11 green, 1 blue and shift by 4:

    // r,g,b are 0-255, brintness is 0-255
int brintness = (r+r+r+r+g+g+g+g+g+g+g+g+g+g+g+b) >> 4;

This shifts the blue lower, green higher, so it's equivalent to:

$floatingLightness = sR^\prime \times 0.25 + sG^\prime \times 0.6875 + sB^\prime \times 0.0625 $

Which gets it closer to sRGB: $\ 0.213 \ \ 0.715 \ \ 0.072 \ $

More Tricky Bit Fiddling

As I think about some of the versions above, we can add bit shifts in the addition portion, and reduce the cycle count further.

    // r,g,b are 0-255, brintness is 0-255
int brintness = ((r << 1) + (g << 2) + g + b) >> 3;

Is essentially equivalent to:

$brintness = (sR^\prime \times 2 + sG^\prime \times 4 + sG^\prime + sB^\prime) / 8 $

Or

$brintness = (sR^\prime \times 2 + sG^\prime \times 5 + sB^\prime) \times 0.125 $

Or

$floatingLightness = sR^\prime \times 0.25 + sG^\prime \times 0.625 + sB^\prime \times 0.125 $

And we can make some adjustments to the relative weights, as shown here:

    // r,g,b are 0-255, brintness is 0-255
int brintness = ((r << 1) + (g << 2) + g + (g >> 1) + (b >> 1)) >> 3;

Is essentially equivalent to:

$brintness = (sR^\prime \times 2 + sG^\prime \times 5.5 + sB^\prime \times 0.5) \times 0.125 $

Or

$floatingLightness = sR^\prime \times 0.25 + sG^\prime \times 0.6875 + sB^\prime \times 0.0625 $

Or we can weight the red greater, as in:

    // r,g,b are 0-255, brintness is 0-255
int brintness = ((r << 1) + (r >> 1) + (g << 2) + g + (b >> 1)) >> 3;

Which is essentially equivalent to:

$brintness = (sR^\prime \times 2.5 + sG^\prime \times 5 + sB^\prime \times 0.5) \times 0.125 $

Or

$floatingLightness = sR^\prime \times 0.3125 + sG^\prime \times 0.625 + sB^\prime \times 0.0625 $
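Putting one of these shift-only variants to work, here is a hypothetical usage sketch in C that produces a 0-255 grayscale plane from a packed 8-bit RGB buffer; the buffer layout and function name are assumptions for illustration:

    // img is a packed 8-bit RGB buffer (3 bytes per pixel), n is the pixel count
    // out receives one 0-255 brintness byte per pixel
#include <stdint.h>
#include <stddef.h>

void brintnessPlane(const uint8_t *img, uint8_t *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        uint32_t r = img[i * 3 + 0];
        uint32_t g = img[i * 3 + 1];
        uint32_t b = img[i * 3 + 2];
        out[i] = (uint8_t)(((r << 1) + (g << 2) + g + b) >> 3); // 2R + 5G + 1B, then /8
    }
}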

Make a B-Line to Linear

The above assumed gamma-encoded color or image data, and creating a pseudoLightness in minimum CPU cycles.

But what if we need to be in linear light, not perceptual lightness? Some time ago, I presented "Andy's Down and Dirty Grayscale", which output a gamma encoded sRGB compatible grayscale from an RGB value.

             // ANDY'S DOWN AND DIRTY GRAYSCALE™
            // sR sG sB are 0-255 sRGB values. The ** replaces Math.pow and works with recent browsers.
           // For purists: Yea this is NOT the IEC piecewise, but it's fast and simple, hence 'down and dirty'

  let gray = Math.min(255,((sR/255.0)**2.2*0.2126+(sG/255.0)**2.2*0.7152+(sB/255.0)**2.2*0.0722)**0.4545*255); 

But if we strip off the conversion back to sRGB 0-255, we can be left with a linear luminance from the RGB value:

             // ANDY'S DOWN AND DIRTY LUMINANCE™ - Luminance in one line.
            // sR sG sB are 0-255 sRGB values. The ** replaces Math.pow and works with recent browsers.
           // For purists: Yea this is NOT the IEC piecewise, but it's fast and simple, hence 'down and dirty'

  let sY = (sR/255.0)**2.2*0.2126 + (sG/255.0)**2.2*0.7152 + (sB/255.0)**2.2*0.0722;

Can we do some of what we were doing earlier to simplify? One of the problems here is that to linearize the encoded values, they need to be normalized such that 0-255 maps to 0.0-1.0, and by definition that eliminates ints. But we can reduce some of the more expensive math: for instance, instead of dividing we can multiply by a pre-calculated $1/255$, and instead of raising to the power of 2.2, we can square by multiplying. And since all we are going to do is multiply and add, we can fold the constants into each coefficient by pre-calculating. Ultimately, we can avoid the normalize step entirely.

Step 0: $(sR/255.0)^{2.2} \times 0.2126$
Step 1: $(sR \times 0.003921568627451)^{2.2} \times 0.2126$
Step 2: $sR \times 0.003921568627451 \times sR \times 0.003921568627451 \times 0.2126$
Step 3: $sR \times sR \times 0.0000153787005 \times 0.2126$
Step 4: $sR \times sR \times 0.000003269511726$

So then our multiply and add-only version is:

$sY = sR \times sR \times 0.000003269511726 + sG \times sG \times 0.000010998846597 + sB \times sB \times 0.000001110342176$

Warning

This is not a "technically correct" linear luminance, and can be too light in the midrange.

It is simply intended as a minimal, pre-optimized way to "mostly" linearize sRGB values. The pre-multiplied coefficients shown are for sRGB or Rec709 only. As an example, the correct method returns 21.5 for rgb(128,128,128), but the following cheat method returns 25.2, which is about a 17% difference, though the error decreases for higher or lower RGB values.

      // sY is 0.0-1.0
float sY = sR * sR * 0.000003269511726 + sG * sG * 0.000010998846597 + sB * sB * 0.000001110342176;

But the point of this Gist is to end up with an int between 0 and 255. Obviously we can multiply sY by 255, but the more elegant solution is to distribute and premultiply the coefficients. The following creates a semi-linearized luminance that can be truncated into an int 0-255:

      // sY is 0-255
float sY = sR * sR * 0.000833725490196 + sG * sG * 0.002804705882353 + sB * sB * 0.000283137254902;

The only difference between the 0.0-1.0 and the 0-255 version is:

  • 0-255: premultiply each sRGB coefficient by $\ 0.003921568627451 $

  • 0.0-1.0: premultiply each sRGB coefficient by $\ 0.0000153787005 $

    • (Which is just $\ 0.003921568627451^2 \ $)
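If it's helpful, here is a small C sketch (names are just for illustration) that derives both premultiplied coefficient sets from the standard sRGB weights:

    // Derives the premultiplied coefficients used in the two versions above
#include <stdio.h>

int main(void) {
    const double inv255 = 1.0 / 255.0;              // 0.003921568627451
    const double inv255sq = inv255 * inv255;        // 0.0000153787005
    const double w[3] = { 0.2126, 0.7152, 0.0722 }; // sRGB/Rec709 weights
    for (int i = 0; i < 3; i++) {
        printf("0.0-1.0: %.15f    0-255: %.15f\n",
               w[i] * inv255sq,           // coefficient for a 0.0-1.0 result
               w[i] * inv255sq * 255.0);  // coefficient for a 0-255 result
    }
    return 0;
}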

CAVEATS & CENTIPEDES

Caution

Danger Will Robinson! The values shown above are sure to cause anxiety amongst all who find color sacred, including myself. The point of this Gist relates to applications where fidelity to image data or true lightness values is a lower priority than speed of computation.

In particular this may apply to machine environments where remaining as integer math is important, such as in embedded or low power applications (think: motion detection or gain control in remote security cameras as an example).

But another place it can be useful is in realtime user interfaces for color controls, where "accuracy" is not as important as speed, provided there is an accurate model behind it that aligns on control release.

Not Contrast

Important

I also feel I should point out that the gamma or TRC used in most image encodings is not a useful way to find accurate contrast. While the high gamma values used in image processing may give pleasing images, when it comes to predicting contrast, and especially the contrast of text and thin lines, the related lightness curves are essentially flatter when predicting contrast as a difference between encoded values.


Copyright © 2024 by Myndex. All Rights Reserved. Thank you for reading.
