@ruebenramirez
Created March 13, 2015 16:15
maxmemory-key-eviction

# The ObjectRocket Customer Support Story

We had an RTG customer using Redis as a cache for their web app. Their application was receiving write errors from Redis whenever writes were attempted, which is usually a sign that Redis has run out of available memory. After diagnosing the issue, we were able to quickly update their maxmemory policy to resolve the problem. How, and why, did this solve their problem? There are a few things to know about key eviction and the various maxmemory policies available in Redis.

Redis can be configured to behave differently when the database gets full. Several options recycle memory used by less important keys (through eviction) to allow continued writes. Evicting a key has the same effect as deleting it: the key is no longer accessible and the memory it occupied is freed for reuse. What does it mean for the Redis database to be "full", though?

If we do not specify a maxmemory value in the Redis configuration, Redis will continue to allocate memory until none is left to consume. By setting a maxmemory conf value, we place a hard cap on the amount of RAM consumed. A maxmemory value is recommended, as it helps ensure that your OS and other critical processes on your server continue to function without being starved of memory. However, this still leaves open the question of what to do when you run out of that memory. This brings us to eviction policies.
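As a sketch, the cap is set with a single directive in redis.conf (the 1gb figure below is purely illustrative; pick a value that leaves headroom for the OS and other processes):

```conf
# redis.conf -- cap Redis memory usage (1gb is an illustrative value)
maxmemory 1gb
```

The same setting can be changed at runtime without a restart via `redis-cli CONFIG SET maxmemory 1gb`.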

Key Eviction Policies

Redis has several policies available to handle the low-or-no-memory condition. Which one is best is highly dependent on your use case and business requirements.

Policy Aspect: LRU

In caching applications, as was the case with our customer, the LRU (least recently used) algorithm provides a fantastic key eviction mechanism that is only getting better in newer releases of Redis. With LRU key eviction, Redis attempts to remove keys with the oldest access times before keys that have been accessed more recently. This is great for web app caching, since we want the most recently accessed keys (those on which recent page loads depend) to remain available in the system over older keys.

A main caveat with this policy type is that, by default, it only works against keys with an expiration. This protects persistent keys that you don't want deleted or evicted. There is an override option that evicts all keys, whether or not they have an expire value set.

Redis implements an approximate LRU algorithm that, while precise enough for most use cases, by default trades precision for speed and light resource utilization. A number of keys are sampled at random for eviction consideration, and the least recently used key of the sample set is evicted. Precision is tunable via the "maxmemory-samples" setting, from 3 (very fast, but not precise) to 10 (very precise, but not as fast), with a default sample size of 5.
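The sampling idea can be sketched in a few lines of Python. This is an illustrative model, not Redis's actual implementation; the dict of access times, the key names, and the sample_size default are all assumptions for the example:

```python
import random

def evict_one_lru(last_access, sample_size=5):
    """Approximate-LRU sketch: sample a few keys at random and evict
    the one with the oldest access time, as Redis's sampler does."""
    sample = random.sample(list(last_access), min(sample_size, len(last_access)))
    victim = min(sample, key=lambda k: last_access[k])  # oldest access loses
    del last_access[victim]
    return victim

# Larger samples make it more likely the true LRU key is evicted;
# sampling every key degenerates to exact LRU.
last_access = {f"key:{i}": i for i in range(10)}  # smaller value = older access
evicted = evict_one_lru(last_access, sample_size=10)  # full sample => exact LRU
```

With a full sample the least recently used key is always chosen; with a small sample you trade a little precision for much less work per eviction.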

Policy Aspect: TTL

When more control over key eviction is required, the "TTL" maxmemory policy can be used to prioritize evicting keys closer to expiration first. For example, if we know that certain cache items should remain in the system longer, we can configure higher expire/TTL values for those keys.

TTL algorithm precision is configurable with the same "maxmemory-samples" setting as the LRU algorithm. Eviction from the sample works like LRU, substituting the shortest remaining time to live for the oldest access time.
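The TTL policy can be sketched the same way, substituting expiry times for access times. Again, this is an illustrative model rather than Redis internals, and the key names and TTL values are made up for the example:

```python
import random

def evict_one_ttl(expires_at, sample_size=5):
    """volatile-ttl sketch: sample a few keys at random and evict
    the one whose expiration is nearest."""
    sample = random.sample(list(expires_at), min(sample_size, len(expires_at)))
    victim = min(sample, key=lambda k: expires_at[k])  # soonest-to-expire loses
    del expires_at[victim]
    return victim

expires_at = {"session:a": 30, "session:b": 300, "session:c": 5}  # seconds to expiry
evicted = evict_one_ttl(expires_at, sample_size=3)  # full sample => exact choice
```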

Policy Aspect: Random

Redis also provides a random maxmemory policy for scenarios where all keys are accessed with the same probability (typically via random reads or continuous scans of all keys). I've not found a use case for this type of Redis application, but perhaps someone in the community has an example they can share!

Policy Aspect: Noeviction

Just as noeviction sounds, no keys are ever evicted automatically; Redis clients must explicitly remove keys. Under this policy, once all usable memory is consumed, write operations fail with an error (along the lines of "OOM command not allowed when used memory > 'maxmemory'") until keys are removed to free up memory for more writes.

Key Volatility

Redis keys can optionally have an expire (TTL) value applied to them, after which the key is removed. Keys with no expire value are considered persistent. This is important because maxmemory policy options can apply only to "volatile" keys (those with an expire set) or to "allkeys" (both volatile and persistent keys):

|            | evict expire/TTL keys only | evict persistent keys too |
|------------|----------------------------|---------------------------|
| LRU        | volatile-lru               | allkeys-lru               |
| TTL        | volatile-ttl               | n/a                       |
| Random     | volatile-random            | allkeys-random            |
| noeviction | n/a                        | n/a                       |

Because our RTG customer was not setting expirations on their cache keys, and we needed a quick solution, we switched the key eviction policy to "allkeys-lru". This eliminated the Redis errors in their application and allowed writes to continue after keys were evicted. Redis was being used only as a cache layer in this application, so the change required no customer code changes and was very quick to implement. If Redis is used for purposes beyond caching (e.g., persistent keys you don't want auto-evicted, such as session storage or app settings), it is generally better to expire your cache keys and stick with the default "volatile-lru" key eviction.
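The fix itself is a one-line configuration change. In redis.conf it looks like this (it can also be applied live with `redis-cli CONFIG SET maxmemory-policy allkeys-lru`, assuming a default redis-cli connection):

```conf
# redis.conf -- evict the least recently used key, expiring or not
maxmemory-policy allkeys-lru
```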

RedisMaxOut tool

(TODO: include blurb about RedisMaxOut)
