@stephenlb
Created June 27, 2024 16:02
Images from 120ms to 30ms: Python 🐍 to Rust 🦀🚀 article

Chart A

[image: Chart A, end-to-end write latency per datacenter]

Chart A: Each line represents a separate datacenter serving traffic for connected devices that send event data. When event data reaches the datacenter, the event is validated for security and formatting. Once those checks pass, the timer starts. The timer stops after the data has been successfully written to the indexed time-series data store. The total end-to-end write latency is scraped from the Prometheus port of each container servicing the data write.
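A minimal std-only sketch of where that timer sits in the pipeline. `validate_event` and `write_to_store` are hypothetical stand-ins for the real checks and the time-series write, and the measured value would be exported through Prometheus rather than printed:

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for the security + formatting checks.
fn validate_event(event: &str) -> Result<(), String> {
    if event.is_empty() { Err("empty event".into()) } else { Ok(()) }
}

// Hypothetical stand-in for the indexed time-series store write.
fn write_to_store(_event: &str) -> Result<(), String> {
    Ok(())
}

// Returns the end-to-end write latency for one event, measured as the
// caption describes: the timer starts only after validation passes and
// stops once the write has succeeded.
fn handle_event(event: &str) -> Result<Duration, String> {
    validate_event(event)?;     // untimed: security + formatting validation
    let timer = Instant::now(); // timer starts after successful validation
    write_to_store(event)?;     // timed: write to the time-series store
    Ok(timer.elapsed())         // this is the value Prometheus would scrape
}

fn main() {
    let latency = handle_event("device-42:temp=21.5").expect("write failed");
    println!("write latency: {:?}", latency);
}
```

Validation time is deliberately excluded, so the metric isolates the storage-path latency that the charts compare.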

Chart B

[image: Chart B, average per-event send+recv time before and after the change]

Chart B: Our change to Rust also removed the old batching processor, which had seemed like it was helping! The numbers looked good: a low per-event average delivery time between the service sending the event and the service receiving it. However, it was significantly hurting our end-to-end latency. The batcher would combine events and send them as a bundle, so the average per-event send+recv was lower before our change. After removing it, we see an average increase in per-event transmission time, but the benefit is the dramatic end-to-end write latency improvement seen in Chart A.
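The trade-off can be sketched with a toy model (all costs and batch parameters below are assumed illustrative numbers, not measurements from the charts): a batcher amortizes transport cost across the bundle, lowering the average send+recv per event, but every event must wait for the bundle to fill, which raises end-to-end latency:

```rust
// Toy model of the batching trade-off. Assumed numbers: events arrive
// `gap_ms` apart; a per-event send pays full transport cost each time,
// while a bundle amortizes transport to `amortized_ms` per event but
// holds each event until `batch_size` events have arrived.
fn batched_avg_e2e(batch_size: usize, gap_ms: f64, amortized_ms: f64) -> f64 {
    // Event i waits for the remaining (batch_size - 1 - i) arrivals
    // before the bundle ships, then pays the amortized transport cost.
    (0..batch_size)
        .map(|i| (batch_size - 1 - i) as f64 * gap_ms + amortized_ms)
        .sum::<f64>()
        / batch_size as f64
}

fn main() {
    let per_event_e2e = 3.0; // assumed: full transport cost, paid per event
    let batched = batched_avg_e2e(10, 5.0, 1.0);

    println!("per-event   e2e: {per_event_e2e} ms"); // 3 ms
    println!("batched avg e2e: {batched} ms");       // 23.5 ms
    // The batcher wins on average send+recv (1 ms vs 3 ms) yet loses
    // badly on end-to-end latency: the queueing delay dominates.
}
```

With these assumed numbers, the bundle's queueing delay swamps the transport savings, which is the effect the rewrite removed.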
