Pixel art is traditionally a palette-limited image format: it uses a strict set of specific colors, and those colors can be easily swapped in an editor. But what about doing it at runtime, without the need to re-edit the images? For example, recoloring sprites to make them appear to belong to a specific team. That technique has been used for a long time, but with modern pixel-art games using PNGs it became harder to do, since those usually aren't stored as indexed colors.
Working with various people, I've seen that it's often done inefficiently: either by preparing a copy of the exact same sprite just to change the colors, or by making a limited version of a palette swap that recolors only one specific color (like magenta) at runtime. But what if we could just use a simple lookup table and swap any color we want during rendering, for a fairly cheap price? That's why this is a thing now.
Let's start with what we have. Usually people use 8-bit images, meaning 8 bits per color channel, so each channel has 256 (0-255) intensity values, and the RGB8 color space gives us exactly 16 777 216 colors (256^3). Coincidentally, a 4096x4096 (aka 4k) texture holds exactly the same number of pixels. Now we have only 2 questions to answer: how do we encode the palette onto a 4k texture, and how do we decode it on the GPU in a shader? Both are pretty simple to do.
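The sizing arithmetic is easy to sanity-check - the entire 8-bit RGB cube fits a 4k texture with no pixel to spare:

```python
# Every possible RGB8 color gets exactly one texel in a 4096x4096 texture.
colors = 256 ** 3   # 16,777,216 RGB8 colors
texels = 4096 ** 2  # 16,777,216 pixels in a 4k texture
print(colors == texels)
```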
If we imagine the color space as a 3D cube with 256x256x256 dimensions, we get something like this:
But our texture is 2D, meaning that instead of depth, Z-aka-Blue has to represent the index of a cube slice - 256 of them, to be precise. Each slice is a 256x256 tile, and laying those tiles out in a 16x16 grid fills our 4096x4096 texture exactly.
This is a mock example with 6-bit RGB, where each slice is a 64x64 table (64 slices total).
Meaning that for each pixel we can calculate a position on the texture, and the reverse, with the following formulas:
// From color to texture
X = R + mod(B, 16) * 256;
Y = G + floor(B / 16) * 256;
// From texture to color
Red = mod(X, 256);
Green = mod(Y, 256);
Blue = floor(X / 256) + floor(Y / 256) * 16;
As you can see, it's not that hard to do. In fact, we only really need the first formula.
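A quick way to convince yourself the mapping is lossless is to implement both directions and round-trip a color through them. This is a hypothetical Python sketch (the function names are mine), using the 16x16 slice grid that follows from 4096 / 256 = 16:

```python
def color_to_texel(r, g, b):
    # Blue selects one of 256 slices; slices are 256x256 tiles in a 16x16 grid.
    x = r + (b % 16) * 256
    y = g + (b // 16) * 256
    return x, y

def texel_to_color(x, y):
    r = x % 256
    g = y % 256
    b = (x // 256) + (y // 256) * 16
    return r, g, b

# Every RGB8 color maps to a unique texel inside 4096x4096 and back again.
assert texel_to_color(*color_to_texel(200, 13, 77)) == (200, 13, 77)
```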
In my case, I do it in 3 steps with a simple program.
- I take all the images I want the palette extracted from and feed them to my program, which just generates a palette file. In case you already have one (for example, you strictly use DawnBringer's palette), you can use it as-is and skip this processing step.
- I save the original palette file somewhere - it will be used as the coordinate reference for our LUT - and then edit a copy of it, changing the colors to the ones I want.
- Then I feed the reference palette and the edited one to my program, which generates a full 4k LUT texture. It takes each reference palette color and uses it as the coordinate at which to put the corresponding color from the edited palette. Simply:
// I first read all the colors of both palettes and generate a [reference => edited] map.
for (reference in colors) {
  putColor(
    reference.getRed() + (reference.getBlue() % 16) * 256, // X
    reference.getGreen() + Math.floor(reference.getBlue() / 16) * 256, // Y
    colors[reference]
  );
}
That's it, you save the resulting LUT texture and use it to swap colors.
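The generator loop above can be sketched in Python as well. This is a minimal sketch, assuming a dict of reference-to-edited colors; it stores texels in a plain dict instead of a real image, just to show the addressing:

```python
def build_lut(mapping):
    """mapping: {(r, g, b) reference color: (r, g, b) edited color}.
    Returns {(x, y) texel: edited color} for a 4096x4096 LUT."""
    lut = {}
    for (r, g, b), edited in mapping.items():
        # Same addressing as the generator loop: 16x16 grid of 256x256 slices.
        x = r + (b % 16) * 256
        y = g + (b // 16) * 256
        lut[(x, y)] = edited
    return lut

# Example: a one-entry palette that recolors pure red to pure blue.
lut = build_lut({(255, 0, 0): (0, 0, 255)})
```

In a real tool you'd fill all remaining texels with their identity color (so unmapped colors pass through unchanged) and write the result out as a PNG.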
So now that you've got your lookup table, you want to swap colors. In this example I use lut:Sampler2D and pixelColor:Vec4 as the input sources.
// Converts the RGB color into the LUT UV coordinates.
function lutUV(color:Vec4):Vec2 {
  // For optimisation purposes I don't call mod() but perform it manually, since I reuse the floored value that mod() computes internally.
  var z = floor(color.z * 255);
  var fz = floor(z / 16); // The slice grid is 16x16: 4096 / 256 = 16.
  // If you omit the division by 4096, you get the exact texel coordinate on the LUT, but we want it normalised into 0...1 UVs.
  // Note the 0.5 offset, since you want to point at the texel center instead of its edge.
  return (floor(color.xy * 255.0) + vec2(z - fz * 16, fz) * 256 + 0.5) / 4096;
}
function fragment() {
output.color = lut.get(lutUV(pixelColor));
}
As you can see - it's not that hard to do.
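If you want to sanity-check the shader math outside a shader, here's a hypothetical Python port of lutUV (the names mirror the shader above; none of this is part of the actual shader code):

```python
import math

def lut_uv(r, g, b):
    """r, g, b are normalised 0..1 floats, as in the shader.
    Returns the normalised UV pointing at the LUT texel center."""
    z = math.floor(b * 255)
    fz = z // 16  # slice row in the 16x16 grid
    u = (math.floor(r * 255) + (z - fz * 16) * 256 + 0.5) / 4096
    v = (math.floor(g * 255) + fz * 256 + 0.5) / 4096
    return u, v

# Black maps to the center of the very first texel; white to the last one.
print(lut_uv(0.0, 0.0, 0.0))  # (0.5 / 4096, 0.5 / 4096)
```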
As with many other approaches, this one is not without issues.
- If you apply it globally as a post-processing step and use semi-transparent elements - you're screwed.
- If you used the wrong color in your new sprite (i.e. #343435 instead of #343434) - you're screwed.
- It's a whole 4k texture just for a palette swap, and mobile still really doesn't like 4k textures. Plus, if you use a lot of different LUTs, you're in for some intense VRAM footprint. See the notes below about workarounds.
- It's possible to optimise the LUT size depending on the palette, but I didn't spend too much time on it, and a short test resulted in incorrect indexing, causing me to forgo the usage of compressed LUTs for now. I may revisit it later.
- As you may have noticed, I don't use a modulo call in my shader. That's because mod() internally unwraps into x - y * floor(x/y), but I also use floor(x/y) as the Y component; hence I store both z and the floored fz = floor(z/16) and calculate the modulo manually, making it a tiny bit more optimal to compute.
- Quite solid compression is possible as long as you don't have color overlap in your source palette, i.e. reducing to as little as a 64x64 texture (or even less), but with smaller resolutions I had issues getting the UV on the LUT from a color. Most likely because I made a mistake in the math.
- As a workaround to using many 4k LUTs, you can use a double-LUT approach. What does that mean? Simple: the first LUT just turns your color into an index, as you likely won't exceed 256 colors in your game if you do pixel-art (unless you are a filthy degenerate and don't adhere to a palette). Once you've got an index, you can use a secondary texture to get the color out of an indexed palette.
- If you've seen the gif with the showcase, there's an extra layer: the color swap didn't occur on the green part. It's as simple as making the mask input non-grayscale and using the red channel as the shadow mask and the green one as a "don't shadow here" mask. I also used 2 LUTs here, one for unshadowed areas and one for shadowed, because there's exactly one place where both cases should use the same color, but it already gets recolored in the shadow LUT; hence the initial image uses a different color, and I needed to restore it.
- Depending on your palette, you may not even need to care about one of the color channels, as the other 2 provide unique positioning for each color, so the texture can be reduced to 256x256 in size. But it's less generic. Hell, if you're doing something GameBoy-styled, just one channel is enough to color everything. But then there's a question of why you'd even use this approach.
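The double-LUT workaround mentioned above can be sketched like this. This is a minimal CPU-side sketch with hypothetical names (the real thing would be two texture fetches in the shader): the big LUT stores palette indices instead of final colors, and each team variant is just a tiny index-to-color strip.

```python
def color_to_index(index_lut, r, g, b):
    """index_lut: {(x, y) texel: palette index}, addressed like the color LUT."""
    x = r + (b % 16) * 256
    y = g + (b // 16) * 256
    return index_lut[(x, y)]

def swap(index_lut, palette, r, g, b):
    # Second lookup: a small 1D palette strip, one per team/variant.
    return palette[color_to_index(index_lut, r, g, b)]

# Example: pure red is palette index 0; the "blue team" strip recolors it to blue.
index_lut = {(255, 0): 0}
blue_team = [(0, 0, 255)]
```

The payoff: one shared 4k index LUT plus a 256x1 strip per variant, instead of a full 4k texture per variant.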
Obviously there are other possible solutions.
- Grayscale/indexed-to-color. That one is pretty simple: instead of having your sprites in color, you draw them in grayscale and use a separate palette texture, indexed by the gray value, to color them. It's used in games from time to time, but requires either extra work from the artist or preprocessing on the engine side.
- Be a madman: Reduce 3 color channels into 2 or even 1 with some obscure formula for direct 2D indexing. Good luck with that I suppose.
- Regular old color transforms. Can be done easily, but don't provide per-color precision.
- In-shader array of colors you check against in a loop. Very expensive. Don't do that. Shaders don't like loops in general.
- Just recolor your image on CPU and upload as a separate texture. Bigger VRAM footprint, but who cares nowadays?
- 3D textures. Basically the same, but you don't have to do the Z math and can use the color as UV directly.
- Some other tech I'm not aware of?
@LeFede I'm not sure what you mean. The technique described in that video is more about the way you draw your sprites, which are then turned into usable sprites by automated pre-processing. It's extremely easy to implement, even: once you have your UV map, the pre-processing step can be done simply:
And from shader perspective it's even simpler: