- Base58 encoding is quadratic, O(n^2): in practice you can’t encode 1 MB of data with it. We discovered this with the DoS tests we run for scure-base and noble-hashes. See the README for more details
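A minimal base58 encode sketch (not scure-base's actual implementation) showing where the quadratic cost comes from: one base-58 digit is peeled off per iteration, and each bigint divmod walks the whole number.

```javascript
const ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';

function base58Encode(bytes) {
  // Interpret the whole input as one big integer.
  let num = 0n;
  for (const b of bytes) num = (num << 8n) | BigInt(b);
  // Peel off one base-58 digit per iteration. There are O(n) digits and
  // each bigint divmod costs O(n), hence O(n^2) overall; this is why
  // encoding 1 MB is infeasible.
  let out = '';
  while (num > 0n) {
    out = ALPHABET[Number(num % 58n)] + out;
    num /= 58n;
  }
  // Each leading zero byte maps to the first alphabet character, '1'.
  for (const b of bytes) {
    if (b !== 0) break;
    out = '1' + out;
  }
  return out;
}
```

Base64, by contrast, is O(n): each output character depends on a fixed 6-bit slice of input, so no big-integer arithmetic is needed.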
- Hashes are additionally tested against huge multi-gigabyte inputs, and scrypt/pbkdf2 are tested against all possible combinations of options. These tests take 2 hours to run on a decent machine
- The hashes are actually faster than many WASM alternatives. A single SHA-256 hash of 32 bytes of data takes 888 nanoseconds on an M1 Mac
- The last fact is remarkable, because we do not employ loop unrolling in the code. Loop unrolling is when code that could run in a loop like
for (let step = 0; step < 64) is instead written out iteration by iteration. This increases code size a lot and makes the code much harder to audit
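The difference in shape can be illustrated with a toy round function (hypothetical, just for the comparison):

```javascript
// Hypothetical round function, standing in for one round of a hash.
function round(state, i) {
  return (state + i) | 0;
}

// Looped version: compact, easy to audit.
function mixLoop(state) {
  for (let step = 0; step < 4; step++) state = round(state, step);
  return state;
}

// Hand-unrolled version: identical result, every step written out.
// Real hash cores have 64 or 80 rounds, so unrolled code balloons
// in size and becomes very hard to review line by line.
function mixUnrolled(state) {
  state = round(state, 0);
  state = round(state, 1);
  state = round(state, 2);
  state = round(state, 3);
  return state;
}
```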
- We’ve implemented the KangarooTwelve hash function just for fun. It’s based on SHA-3 and achieves a reasonable security level while being faster. It was designed by the same team that created the original SHA-3
- bip32 is terrible. Hierarchical wallets should not be this complicated.
- Did you know that network IDs (for different currencies) are taken from a single GitHub document called SLIP-0044? I know of cases where new projects did not yet have a SLIP entry, and exchanges added support for those projects to their cold wallets anyway. After the projects were added to SLIP-0044, the exchanges had to re-generate their cold wallets, which is a huge mess and a complicated task.
- Did you know that it’s unusable for newer technologies? For example, ETH2 uses the bls12-381 curve, and with bip32, 54% of generated keys would be invalid. So they’re using EIP-2333 as a replacement. It’s much better, but unfortunately it’s a BLS-only standard.
- It’s easy to shoot yourself in the foot with non-hardened keys, which can allow simple de-anonymization of all addresses
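A simplified sketch of the worst case: in BIP32, a non-hardened child key is child = (parent + IL) mod n, where IL is computed with HMAC-SHA512 from *public* data (parent public key, chain code, index). Anyone holding the xpub can compute IL, so leaking a single non-hardened child private key lets them subtract IL and recover the parent key. The numbers below are toy values (real n is the secp256k1 group order, and the HMAC step is omitted):

```javascript
// Toy stand-ins; not real curve parameters.
const n = 0xFFFFFFFF00000001n;           // stand-in for the curve order
const parentPriv = 0x1234567890ABCDEFn;  // secret parent key
const IL = 0xCAFEBABEn;                  // computable from the xpub alone

// Non-hardened derivation (simplified): child = (parent + IL) mod n.
const childPriv = (parentPriv + IL) % n;

// Attacker who has the xpub plus one leaked child private key:
const recovered = (childPriv - IL + n) % n; // equals parentPriv
```

Hardened derivation avoids this by feeding the parent *private* key into the HMAC, so IL cannot be computed from public data.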
- Tests are hard: significantly more effort has been spent on them than on anything else. On the other hand, almost all test cases caught bugs. Most JS libraries only check RFC vectors, which is barely enough
- JS integers are haunted. Not only do bitwise operations truncate a number like 2^52 to 32 bits, they also treat it as a signed integer, which is pointless for bitwise operations
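Both hauntings in a few lines:

```javascript
// Bitwise ops coerce their operands to 32-bit signed integers,
// silently dropping all higher bits.
const big = 2 ** 52;
const truncated = big | 0;         // high bits gone entirely

// The 32-bit result is then reinterpreted as *signed*:
const allOnes = 0xFFFFFFFF | 0;    // -1, not 4294967295

// The usual escape hatch is the unsigned right shift by zero:
const unsigned = 0xFFFFFFFF >>> 0; // back to 4294967295
```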
- TypedArrays use platform endianness (little-endian on x86 and ARM). Almost nobody runs JS on big-endian hardware; still, most JS libraries don’t check platform endianness at all. We check, and throw an exception instead of silently returning corrupted output
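The check itself is tiny. A sketch of how a library can detect endianness before touching multi-byte views:

```javascript
// On little-endian hardware, the low byte of 0x11223344 lands first
// in memory, so viewing the same buffer as bytes reveals the order.
function isLittleEndian() {
  const u32 = new Uint32Array([0x11223344]);
  const u8 = new Uint8Array(u32.buffer);
  return u8[0] === 0x44;
}

// Throw up front instead of silently producing corrupted output:
if (!isLittleEndian()) {
  throw new Error('Big-endian platforms are not supported');
}
```

DataView is the alternative: its getUint32/setUint32 take an explicit endianness flag, but it is slower than raw TypedArray access in hot loops.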
- Unfortunately, there is still no fast support for 64-bit integers in JS: BigUint64Array is too slow (~10x), which makes some parts of the code, like 64-bit SHA-512, harder to read than they could be. We split its state into two 32-bit arrays
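A sketch of what that hi/lo split looks like in practice: every 64-bit word lives in two 32-bit halves, and even plain addition has to propagate the carry from the low half into the high half by hand.

```javascript
// Add two 64-bit values, each given as (high 32 bits, low 32 bits).
// Returns [hi, lo]. Illustrative only, not noble-hashes' exact code.
function add64(ah, al, bh, bl) {
  // Low halves as unsigned values; the sum may exceed 2^32, which is
  // fine because JS numbers hold integers exactly up to 2^53.
  const lo = (al >>> 0) + (bl >>> 0);
  // Carry is the overflow past 32 bits, folded into the high half.
  const hi = (ah + bh + ((lo / 2 ** 32) | 0)) >>> 0;
  return [hi, lo >>> 0];
}
```

Rotations and shifts need the same treatment, which is why split-word SHA-512 code is so much noisier than its SHA-256 sibling.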
- JS has a significant performance advantage: you can dynamically create functions using
new Function, which get JIT-compiled by the engine. This allows writing code that is fast and easy to read. Think: loop unrolling without unrolling. However, it is often disabled by CSP policies that omit 'unsafe-eval', which is kinda pointless: you can always write a small interpreter instead, so the restriction only hurts performance, which is bad for optimizations, but doesn't affect security
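A sketch of the trick: the generator builds straight-line source in a loop, and the engine JIT-compiles the result, so the repo keeps a readable generator while the runtime gets unrolled code. (This is exactly what a CSP without 'unsafe-eval' forbids.)

```javascript
// Build an unrolled function at runtime. The toy body just adds the
// loop indices to x; a real use would emit hash rounds.
function makeUnrolledSum(times) {
  let body = 'let acc = x;\n';
  for (let i = 0; i < times; i++) {
    body += `acc = (acc + ${i}) | 0;\n`; // one straight-line step per round
  }
  body += 'return acc;';
  return new Function('x', body); // JIT-compiled like any other function
}

const sum4 = makeUnrolledSum(4); // acc = x + 0 + 1 + 2 + 3
```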