@petehunt · October 7, 2016

Redis in Production at Smyte

To be clear, we continue to run many Redis services in our production environment. It’s a great tool for prototyping and small workloads. For our use case, however, we believe the cost and complexity of our setup justify urgently finding alternate solutions.

  • Each of our Redis servers is clearly numbered, with the current leader in one availability zone and a follower in another zone.
  • The servers each run ~16 individual Redis processes. This helps us utilize CPUs (as Redis is single-threaded), but it also means we only need an extra 1/16th of memory to safely perform a BGSAVE (due to copy-on-write), though in practice it’s closer to 1/8th because the data isn’t always evenly balanced. (A sketch of this per-port layout appears after this list.)
  • Our leaders never run BGSAVE unless we’re bringing up a new slave, which we do carefully by hand (see the attach sketch after this list). Since issues with a slave should not affect the leader, and a new slave connection might trigger an unsafe BGSAVE on the leader, slave Redis processes are set to not automatically restart on failure.
  • If the leader is too low on memory to BGSAVE, we must manually perform a few careful operations to move databases around. These are extremely time-sensitive and stressful, as the leader is likely still growing in size. The easiest (but bad) escape hatch is to temporarily turn off writes so it can complete its BGSAVE safely (one way to do that is sketched after this list).
  • The slaves are all configured not to BGSAVE automatically, since they don’t have enough memory to do it in parallel. Instead, a side process loops through the Redis processes sequentially and backs each one up; heartbeats ensure that every database is backed up at least once an hour (see the backup-loop sketch after this list).
  • Our clients get an up-to-date list of Redis servers from Consul, a strongly consistent data store. On a leader failover we can update these records to instantly promote the follower and send production traffic to it (a failover sketch follows this list).
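
To make the per-port layout concrete, here is a minimal sketch (not our actual tooling) of generating per-port config files and launching N Redis processes; the paths, the port range, and the `save ""` line disabling automatic snapshots are illustrative assumptions:

```python
import os
import subprocess

NUM_PROCESSES = 16       # one redis-server per core, since Redis is single-threaded
BASE_PORT = 6379         # assumed port layout: 6379..6394
CONF_DIR = "/etc/redis"  # hypothetical config directory

for i in range(NUM_PROCESSES):
    port = BASE_PORT + i
    data_dir = "/var/lib/redis/%d" % port
    os.makedirs(data_dir, exist_ok=True)
    conf_path = os.path.join(CONF_DIR, "redis-%d.conf" % port)
    with open(conf_path, "w") as f:
        f.write("port %d\n" % port)
        f.write("dir %s\n" % data_dir)  # separate RDB location per process
        f.write('save ""\n')            # never snapshot automatically
    # Each process holds ~1/16th of the data, so a copy-on-write BGSAVE
    # needs only ~1/16th of extra memory at a time.
    subprocess.Popen(["redis-server", conf_path])
```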
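
Attaching a new slave by hand might look like the following redis-py sketch, assuming hypothetical host names; pointing the slave at the leader is exactly the moment the one permitted BGSAVE happens on the leader:

```python
import time

import redis

LEADER_HOST = "redis-leader-01"  # hypothetical hosts
SLAVE_HOST = "redis-slave-01"
PORT = 6379

# The slave process was started by hand and is configured to never
# restart automatically, so a slave failure cannot surprise the leader.
slave = redis.Redis(host=SLAVE_HOST, port=PORT)
slave.slaveof(LEADER_HOST, PORT)  # leader forks a BGSAVE to seed the slave

# Watch the leader until that fork-driven snapshot finishes.
leader = redis.Redis(host=LEADER_HOST, port=PORT)
while leader.info("persistence")["rdb_bgsave_in_progress"]:
    time.sleep(1)
```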
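
One way to implement the “turn off writes” escape hatch, not necessarily how we did it, is to set `min-slaves-to-write` above the real slave count so the leader rejects writes while the snapshot runs:

```python
import time

import redis

leader = redis.Redis(host="redis-leader-01", port=6379)  # hypothetical host

# Require more slaves than actually exist, so writes fail fast.
leader.config_set("min-slaves-to-write", 99)
try:
    leader.bgsave()
    # Poll until the copy-on-write snapshot completes.
    while leader.info("persistence")["rdb_bgsave_in_progress"]:
        time.sleep(1)
finally:
    # Re-enable writes even if the snapshot fails.
    leader.config_set("min-slaves-to-write", 0)
```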
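
The side process that backs up the slaves sequentially can be sketched roughly like this; the heartbeat URL is a placeholder for whatever the monitoring system expects:

```python
import time
import urllib.request

import redis

PORTS = range(6379, 6379 + 16)  # assumed port layout
HEARTBEAT_URL = "https://monitoring.example.com/heartbeat/redis-backup-%d"  # hypothetical

while True:
    for port in PORTS:
        r = redis.Redis(host="localhost", port=port)
        r.bgsave()
        # One BGSAVE at a time: wait for this one to finish before moving
        # on, so we never need more copy-on-write headroom than one fork.
        while r.info("persistence")["rdb_bgsave_in_progress"]:
            time.sleep(5)
        # Report this database as backed up; an alert fires if any port
        # goes more than an hour without a heartbeat.
        urllib.request.urlopen(HEARTBEAT_URL % port)
```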
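
Discovery and failover through Consul can be sketched with its HTTP KV API; the key name and hosts below are assumptions:

```python
import base64
import json
import urllib.request

import redis

CONSUL = "http://localhost:8500"
KEY = "service/redis/shard-01/leader"  # hypothetical key per numbered server

def current_leader():
    # Consul returns KV values base64-encoded inside a JSON array.
    with urllib.request.urlopen("%s/v1/kv/%s" % (CONSUL, KEY)) as resp:
        entry = json.load(resp)[0]
    return base64.b64decode(entry["Value"]).decode()

def promote(follower_host, port=6379):
    # Stop replicating: redis-py's slaveof() with no args sends SLAVEOF NO ONE.
    redis.Redis(host=follower_host, port=port).slaveof()
    # Point clients at the new leader via Consul.
    req = urllib.request.Request(
        "%s/v1/kv/%s" % (CONSUL, KEY),
        data=("%s:%d" % (follower_host, port)).encode(),
        method="PUT",
    )
    urllib.request.urlopen(req)
```

Because Consul is strongly consistent, clients re-reading the key all agree on the new leader as soon as the write lands.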

Thanks to Josh Yudaken for writing this up.
