@castano
Created March 30, 2017 22:54
Least Squares Vertex Quantization
I didn't think there was much to gain from optimizing quantization intervals for vertex compression, but I thought I would give it a try and see for myself.
I have a set of vertices X that I want to quantize. The trivial way to do that is to transform them to the [0,1] range and store them using an integer UNORM format.
At runtime the vertices are reconstructed as follows:
X = Q * m + a
Where:
m = max(X) - min(X)
a = min(X)
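Here's a minimal sketch of that baseline path in C++ (the helper names are mine, just for illustration):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Range { float m, a; };   // reconstruction: X = Q * m + a

Range compute_range(const std::vector<float>& X) {
    auto [lo, hi] = std::minmax_element(X.begin(), X.end());
    return { *hi - *lo, *lo };   // m = max(X) - min(X), a = min(X)
}

uint8_t quantize_unorm8(float x, Range r) {
    float q = (x - r.a) / r.m;   // map to [0,1]
    return (uint8_t)std::lround(std::clamp(q, 0.0f, 1.0f) * 255.0f);
}

float dequantize_unorm8(uint8_t q, Range r) {
    return (q / 255.0f) * r.m + r.a;   // X = Q * m + a
}
```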
My idea was to tweak the m,a values to minimize the quantization error. My assumption is that under small adjustments the clustering of the values would not change, and if it did, we could re-run the optimization process.
This is basically a least squares problem with two unknowns and one linear equation for each vertex:
[XQ]   [QQ SQ] [m]
[SX] = [SQ  n] [a]

where:

XQ = dot(X, Q)
QQ = dot(Q, Q)
SX = Sum(X, 0, n-1)
SQ = Sum(Q, 0, n-1)
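In code, gathering those four sums is a single pass over the data. A sketch, again with made-up names, assuming Q is stored as 8-bit UNORM:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// The four sums that define the 2x2 normal equations.
struct Sums { double XQ, QQ, SX, SQ; };

Sums accumulate_sums(const std::vector<float>& X, const std::vector<uint8_t>& Q) {
    Sums s = {0, 0, 0, 0};
    for (size_t i = 0; i < X.size(); i++) {
        double q = Q[i] / 255.0;   // reconstructed Q in [0,1]
        s.XQ += X[i] * q;          // dot(X, Q)
        s.QQ += q * q;             // dot(Q, Q)
        s.SX += X[i];              // Sum(X, 0, n-1)
        s.SQ += q;                 // Sum(Q, 0, n-1)
    }
    return s;
}
```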
We can solve it using a standard solver, or directly:
XQ = QQ*m + SQ*a
SX = SQ*m + n*a
m = (XQ - SQ*SX/n) / (QQ - SQ*SQ/n)
a = (SX - SQ*m) / n
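The direct solve, continuing the hypothetical Sums struct from the previous sketch:

```cpp
#include <cstddef>

struct Sums { double XQ, QQ, SX, SQ; };   // as accumulated above

// Solve the 2x2 normal equations directly for the adjusted range.
void solve_range(const Sums& s, size_t count, double& m, double& a) {
    double n = (double)count;
    m = (s.XQ - s.SQ * s.SX / n) / (s.QQ - s.SQ * s.SQ / n);
    a = (s.SX - s.SQ * m) / n;
}
```

With the new m and a you would re-quantize the vertices and, as mentioned above, re-run the process if the clustering changed.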
When using byte_unorm values, the error reduction that I see is in the range of 1 to 10%. As expected, it's not much. The benefits of using higher precision formats are much lower, probably because the clustering changes much more under small adjustments of the quantization range.