expandable buffers pool

a pool of expandable buffers

pool noodles, thanks @maxogden

Anyone who has played around with getting performant IO in node will tell you that to get really great performance you need to avoid unnecessary memory allocation and memory copying.

Working on pull-file I could get about 1gb/s read on a warm cache, but if I just reused the same buffer over and over I could get 2gb/s!
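To make "reusing the same buffer" concrete, here is a minimal sketch (not the actual pull-file benchmark) that reads a file in fixed-size chunks into one preallocated buffer; the file name and chunk size are made up for illustration.

```js
var fs = require('fs')

var CHUNK = 64 * 1024
var fd = fs.openSync('some-large-file', 'r')
var buf = Buffer.alloc(CHUNK) // allocated once, reused for every read

var pos = 0, bytes
while ((bytes = fs.readSync(fd, buf, 0, CHUNK, pos)) > 0) {
  // buf.slice(0, bytes) is the chunk just read; the next read will
  // overwrite it, so anything that keeps it longer has to copy it out.
  pos += bytes
}
fs.closeSync(fd)
```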

Reusing memory is easy to do in a benchmark, but in a real system you read data in order to do something with it: maybe you write it to another file or socket, maybe you encrypt it, or maybe you pass it to a multiplexer which adds framing and then writes that to a socket.

Normally something like this would be accomplished via a memory pool: instead of just allocating memory, you allocate a "pool" and then reuse items in that pool.
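As a minimal sketch of that idea (createPool and its fixed buffer size are illustrative, not a real module):

```js
// allocate buffers up front, hand them out, and take them back
// instead of letting them be garbage collected.
function createPool (size, count) {
  var free = []
  for (var i = 0; i < count; i++) free.push(Buffer.alloc(size))
  return {
    alloc: function () {
      // if the pool is exhausted, fall back to a fresh allocation
      return free.length ? free.pop() : Buffer.alloc(size)
    },
    free: function (buf) {
      free.push(buf)
    }
  }
}
```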

However, there is another problem.

There are many libraries written with performance in mind, such as libsodium, that support mutable memory use: you can encrypt directly over the buffer containing the plaintext... but here is the tricky thing: the encrypted data is slightly larger, because a Message Authentication Code (MAC) is written before the ciphertext! This means we need to get all up in the memory allocation, so that when we read some data, we read it into a buffer with a bit of extra space at the start that will later hold the MAC. Another case: sometimes we need to write framing around a buffer (say, a length or a checksum); if we don't have some space reserved, we need to copy that buffer into a larger buffer.
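To make the framing case concrete, this is the copy we end up doing when there is no reserved space in front of the payload (frame is a hypothetical helper, not part of any library):

```js
// without headroom: prefixing a buffer with a 4-byte length means a new
// allocation plus a full copy of the payload.
function frame (payload) {
  var framed = Buffer.alloc(4 + payload.length)
  framed.writeUInt32BE(payload.length, 0) // the framing header
  payload.copy(framed, 4)                 // the copy we'd like to avoid
  return framed
}
// with 4 spare bytes reserved in front of the payload, the same header
// could be written in place and no copy would be needed.
```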

Unfortunately, node doesn't currently support this - reading a file writes directly to the start of the buffer, so to get some more space, we'd need to copy that buffer. But node buffers do have one good feature: you can take a slice of a buffer (a subrange) just by making another reference to it with a new offset, without needing to copy that buffer.
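That feature is easy to demonstrate:

```js
// slicing a buffer does not copy: it creates a new Buffer object that
// points at the same memory, with a different offset and length.
var big = Buffer.alloc(3 * 1024)
var middle = big.slice(1024, 1024 + 2048) // a 2k window into the 3k buffer

middle[0] = 42
console.log(big[1024]) // 42 - writes through the slice show up in `big`
```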

proposed solution

A buffer pool library, which slightly over-allocates each buffer, and adds a method embiggen() that allows you to make a buffer slightly larger. It would have a pool of, say, 3k buffers, and then if you asked for a 2k buffer, it would give you something from the middle of a 3k buffer. Then, when you need to make that larger, it can just give you a new Buffer object which points to a larger window of the 3k buffer. But if you try to expand the buffer too much - say, asking for a 4k buffer - then it would fall back to allocating a new larger buffer and copying into it.
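A rough sketch of what that could look like. This is not an existing module: the names, sizes, and the _slab bookkeeping are all made up for illustration, and a real version would hand out slabs from a pool like the one above rather than allocating one per call.

```js
var SLAB = 3 * 1024      // each underlying allocation
var HEADROOM = 512       // spare bytes kept in front of the window
var WINDOW = 2 * 1024    // what the caller actually asked for

function alloc () {
  var slab = Buffer.alloc(SLAB) // a real pool would reuse these slabs
  var buf = slab.slice(HEADROOM, HEADROOM + WINDOW) // a view, no copy
  buf._slab = slab     // remember the slab so embiggen can grow in place
  buf._offset = HEADROOM
  return buf
}

function embiggen (buf, newLength) {
  var slab = buf._slab
  var end = buf._offset + buf.length
  if (slab && newLength <= end) {
    // enough headroom: return a wider view over the same memory,
    // extending towards the start of the slab (where a MAC or frame
    // header would go). the existing bytes do not move.
    var grown = slab.slice(end - newLength, end)
    grown._slab = slab
    grown._offset = end - newLength
    return grown
  }
  // not enough room (e.g. asking a 3k slab for 4k): fall back to
  // allocating a bigger buffer and copying, keeping the data at the end.
  var bigger = Buffer.alloc(newLength)
  buf.copy(bigger, newLength - buf.length)
  return bigger
}
```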

optimization

The question, though, is how much should you over-allocate? That would depend heavily on your application. One method would be to allow the user to tune it manually, which would be easy to implement, but you'd have to understand what is really going on to be able to tune it well. Another option might be to tune it dynamically: the pool could just try different ranges and guesstimate how much larger buffers need to get, based on what has been requested. It could average the sizes requested and create buffers that are a certain percentage larger, so that on average, 99% of the time (or whatever) buffers do not need to be expanded.

This would mean you'd get good memory performance but wouldn't need to tune anything!
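One way that self-tuning could look, as a naive sketch: the percentile bookkeeping and the 1024-sample window here are arbitrary choices, not a recommendation.

```js
// track the sizes buffers actually end up at, and suggest a slab size
// big enough that most requests never need to be expanded.
function createTuner (percentile) {
  var sizes = []
  return {
    record: function (finalSize) {
      sizes.push(finalSize)
      if (sizes.length > 1024) sizes.shift() // keep a rolling window
    },
    suggestedSize: function () {
      if (!sizes.length) return 2 * 1024 // arbitrary default
      var sorted = sizes.slice().sort(function (a, b) { return a - b })
      var i = Math.min(sorted.length - 1, Math.floor(sorted.length * percentile))
      return sorted[i]
    }
  }
}

var tuner = createTuner(0.99)
```

The pool would then pick its slab size from suggestedSize(), letting it drift up or down as the observed request sizes change.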
