Node is funky

The story of how I got excited for nothing, and why you should not over-optimize JS code.

The Commit in question (warning, ugly code ahead): https://github.com/fed135/compactr/commit/44d24edc48302b650e46cbb365dd64ec2dff8854

First a disclaimer

  • The optimizations you see here should not be part of any Node app. JS code, like all code, should be readable and easily maintainable. ALWAYS favor those values over cryptic optimizations... and in the case of JS, these optimizations are in most cases unreliable anyway.

  • If you must run micro-benchmarks, make sure you include the basic mechanisms that keep the compiler from optimizing the benchmarked work away (timestamps, no dummy swapping or allocating, etc.), try to run them on systems with specs similar to your target environment (e.g. server hardware, server OS, conf, etc.), and please try to come up with test cases that mimic typical use cases rather than artificial loads. A small sketch of such a harness follows this list.

  • Finally, look at your supported node versions and try your optimization on several of them before calling it a day... sometimes even minor versions drastically change how some algorithms behave.
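Here is a minimal micro-benchmark sketch in that spirit. The harness, the `./encoder` module and the payload are all assumptions for illustration; the point is simply to use high-resolution timestamps and to consume every result so the compiler cannot dead-code-eliminate the work being measured.

```js
// Minimal micro-benchmark sketch (illustrative harness, not the one used for
// the numbers below). The './encoder' module and the payload are assumptions.
const { encode } = require('./encoder');

function bench(label, fn, iterations) {
  let sink = 0; // consume each result so the measured work cannot be optimized away
  const start = process.hrtime();
  for (let i = 0; i < iterations; i++) {
    sink += fn().length;
  }
  const [s, ns] = process.hrtime(start);
  console.log(`${label}: ${(s * 1e3 + ns / 1e6).toFixed(2)} ms (sink: ${sink})`);
}

// Use a payload that mimics a typical use case rather than an artificial load.
const payload = { id: 42, name: 'user', active: true };
bench('encode', () => encode(payload), 100000);
```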

The changes

Tested with node 6.6.0

Replaced all constants defined at the top of the file with inline number declarations.

Theory: Declaring a number inline probably takes less time than fetching the value of a constant.

Result: Some minor improvement in speed (~5%)
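A hedged sketch of the kind of change this describes; the constant name and the functions are illustrative, not the actual compactr code.

```js
// Before: a constant defined at the top of the file is read on every call.
const SEP_CODE = 255;

function writeSeparator(buffer, caret) {
  buffer[caret] = SEP_CODE;
  return caret + 1;
}

// After: the number is declared inline at the call site.
function writeSeparatorInline(buffer, caret) {
  buffer[caret] = 255; // separator code
  return caret + 1;
}
```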

Replaced all compound assignments on let variables.

Theory: Docs say that functions with compound assignments on let variables cannot be optimized

Result: Some major speed up (~30%)
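One way to read this change, with illustrative names; whether the rewrite actually helps depends on the V8 version, which is the whole point of this post.

```js
// Before: compound assignment on a `let` variable inside a hot function.
function checksumCompound(bytes) {
  let total = 0;
  for (let i = 0; i < bytes.length; i++) {
    total += bytes[i]; // compound assignment on `total`
  }
  return total;
}

// After: the compound form is expanded into a plain assignment.
function checksumExpanded(bytes) {
  let total = 0;
  for (let i = 0; i < bytes.length; i++) {
    total = total + bytes[i];
  }
  return total;
}
```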

Changed the encoder method to a class, for better management of the result stream and the caret position.

Theory: Better control, but constructing a new Encoder object every time will make the program slower

Result: Some drop in performance (~20%)
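A minimal sketch of what an encoder class managing a result stream and a caret might look like; the shape and names are assumptions, not the actual compactr implementation.

```js
class Encoder {
  constructor(size) {
    this.result = new Array(size); // result stream
    this.caret = 0;                // current write position
  }

  writeByte(value) {
    this.result[this.caret++] = value;
    return this;
  }

  end() {
    return this.result.slice(0, this.caret);
  }
}

// Constructing a new Encoder on every call adds allocation overhead:
function encode(payload) {
  const encoder = new Encoder(512);
  // ... write the payload's fields through the encoder ...
  return encoder.end();
}
```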

Preallocating some Encoder instances and re-using them

Theory: Will occupy some memory, but will win back the instantiation overhead we just lost.

Result: Back to normal, with ~20% gains
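A hedged sketch of pre-allocating and reusing instances, building on the Encoder sketch above; the pool size and names are assumptions.

```js
const POOL_SIZE = 8;
const pool = [];
for (let i = 0; i < POOL_SIZE; i++) pool.push(new Encoder(512));

function acquire() {
  // Fall back to a fresh instance if the pool is momentarily exhausted.
  return pool.pop() || new Encoder(512);
}

function release(encoder) {
  encoder.caret = 0; // rewind so the instance can be reused
  if (pool.length < POOL_SIZE) pool.push(encoder);
}
```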

No resetting of the result stream between serializations

Theory: ArrayGrowStub is a known performance killer, but so is instantiating an empty array of {x} length.

Result: A decent (~10%) gain
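An illustrative contrast, still assuming the Encoder sketch above: the result stream is kept between serializations and only the caret is rewound.

```js
// Resetting between serializations means either growing a fresh empty array
// on every write (ArrayGrowStub) or allocating an empty array of a given
// length up front; both have a cost.
function resetHard(encoder) {
  encoder.result = new Array(512);
  encoder.caret = 0;
}

// Not resetting: only the caret is rewound. Stale values past the caret are
// never read, because end() slices the result up to the caret.
function resetSoft(encoder) {
  encoder.caret = 0;
}
```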

Round 1 results

Before

  • encode: 4607
  • decode: 3883

After

  • encode: 2301
  • decode: 1985

So, twice as fast. We win at everything, right? Not really. Here are the same builds run with node 6.0.0:

Before

  • encode: 2853
  • decode: 2944

After

  • encode: 2895
  • decode: 3469

Oh f--k. The optimizations are actually making it WORSE!

Conclusion

My code is uglier.

My code doesn't perform better in a consistent manner.

Is it worth it? I see very few cases where it would be; let me know what you guys think.

PS: I know I should have tested each optimization separately, as some may be viable across versions, but the point I want to make here is that even if an optimization works now, it may not in the future, and I don't think that's worth making your code uglier.
