
@stefan-wullems
Last active August 24, 2020 19:13
Exivity: Compression of request and response data

Primer

Glass needs to handle a lot of data, almost all of which comes from Proximity in the JSON:API format. A feature of this format is that every message has exactly the same structure, with only the data differing. Following the JSON:API format has many benefits, but one of its drawbacks is that JSON:API messages contain a lot of boilerplate (structure that carries no information). Because Glass has to handle a lot of data, this boilerplate slows down our requests more and more as the amount of transmitted data grows relative to the raw information. What I would like to propose is optional support for compressed messages in Proximity: a feature that strips the boilerplate from these responses, allowing us to sidestep some of the transmission overhead.
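To make the boilerplate concrete, here is a minimal sketch of the idea. The resource and field names are made up; the point is that the structural keys repeat for every record in plain JSON:API, while a stripped form could send them once:

```typescript
// Hypothetical JSON:API response: every record repeats the same
// structural keys ("type", "id", "attributes"), which carry no data.
const jsonApiResponse = {
  data: [
    { type: "account", id: "1", attributes: { name: "Acme", parent: null } },
    { type: "account", id: "2", attributes: { name: "Globex", parent: "1" } },
  ],
}

// A stripped ("compressed") equivalent could send the attribute names
// once and each record as a plain tuple:
const compressedResponse = {
  type: "account",
  keys: ["id", "name", "parent"],
  records: [
    ["1", "Acme", null],
    ["2", "Globex", "1"],
  ],
}
```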

How much work is this

I think this could be achieved within a cycle of 4 weeks. In Proximity we would have to set up a system to compress the data, and in Glass one to decompress it. The compression and decompression can be kept simple at first, and more sophisticated compression techniques can be added when needed. After the feature is finished I doubt much maintenance will be needed, since the transformations depend only on the JSON:API structure. This is also why I think it will easily scale to new resources: it only needs to be implemented once.
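As a rough illustration of the Glass side, a decompression step for the tuple format sketched above could look something like this (the names and the wire format are assumptions, not a spec):

```typescript
// Assumed shape of a stripped response, matching the Primer sketch.
interface CompressedResponse {
  type: string
  keys: string[]
  records: unknown[][]
}

// Rebuild JSON:API-style records from the compact tuple representation.
function decompress({ type, keys, records }: CompressedResponse) {
  return records.map((record) => {
    const attributes: Record<string, unknown> = {}
    keys.forEach((key, i) => {
      attributes[key] = record[i]
    })
    const { id, ...rest } = attributes
    return { type, id, attributes: rest }
  })
}
```

Because this only depends on the JSON:API structure, the same `decompress` would work unchanged for any new resource.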

How much of a performance gain will this provide

{MORE RESEARCH NEEDED} I think we can decrease the size of each record by 50% or more, so the transmission overhead will go down quite a lot. It will, however, require more processing, because the data needs to be compressed and then decompressed again. It's essentially a tradeoff between network and processing power, and the threshold at which it becomes worthwhile is still unknown. I'm not actually sure how much the content size affects total request duration; I posted this question on Stack Overflow to find out.
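One way to get a first estimate, assuming we simply compare the serialized sizes of the two encodings:

```typescript
// Rough size comparison for any two encodings of the same data
// (illustrative; real measurements should use production responses).
function sizeSavingPercent(plain: unknown, stripped: unknown): number {
  const plainSize = JSON.stringify(plain).length
  const strippedSize = JSON.stringify(stripped).length
  return 100 * (1 - strippedSize / plainSize)
}
```

Calling this with the two example objects from the Primer gives a first number, but it says nothing about processing cost, which would need separate measurement.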

@hongaar commented Aug 24, 2020

Good idea! Would be interesting to test what standard HTTP gzip compression already achieves on regular uncompressed payloads.
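For reference, a minimal way to run that test in Node, with a made-up payload standing in for a real Proximity response:

```typescript
import { gzipSync } from "zlib"

// Measure what standard gzip achieves on an uncompressed JSON:API-style
// payload. The payload here is a stand-in; a real test should use
// production-sized responses.
const payload = JSON.stringify({
  data: Array.from({ length: 1000 }, (_, i) => ({
    type: "account",
    id: String(i),
    attributes: { name: `account-${i}` },
  })),
})

console.log("plain:", payload.length, "bytes")
console.log("gzipped:", gzipSync(payload).length, "bytes")
```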
