
@antirez
Last active November 30, 2023 14:09

127.0.0.1:6379> memory doctor

"Hi Sam, I can't find any memory issue in your instance. I can only account for what occurs on this base."

127.0.0.1:6379> flushall

OK

127.0.0.1:6379> memory doctor

"Hi Sam, this instance is empty or is using very little memory, my issues detector can't be used in these conditions. Please, leave for your mission on Earth and fill it with some data. The new Sam and I will be back to our programming as soon as I finish rebooting."

127.0.0.1:6379> debug populate 5000000

OK (4.52s)

127.0.0.1:6379> flushall

OK (4.13s)

127.0.0.1:6379> debug populate 100000

OK

127.0.0.1:6379> memory doctor

Sam, I detected a few issues in this Redis instance memory implants:

Peak memory: In the past this instance used more than 150% of the memory it is currently using. The allocator is normally not able to release memory after a peak, so you can expect to see a large fragmentation ratio; however, this is actually harmless and is only due to the memory peak. If the Redis instance's Resident Set Size (RSS) is currently bigger than expected, that memory will be reused as soon as you fill the instance with more data. If the memory peak was only occasional and you want to try to reclaim memory, try the MEMORY PURGE command; otherwise the only other option is to shut down and restart the instance.
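The 150% peak rule above amounts to a simple ratio test on the `used_memory` and `used_memory_peak` fields reported by `INFO memory`. A minimal sketch, with a hypothetical helper name (the field names and the 1.5 factor come from the doctor's message; the function itself is not part of Redis):

```python
def peak_exceeds_threshold(used_memory: int, used_memory_peak: int,
                           threshold: float = 1.5) -> bool:
    """Return True when the historical memory peak exceeds current
    usage by more than the given factor (the doctor's 150% rule)."""
    if used_memory == 0:
        # An empty instance gives the doctor nothing to diagnose.
        return False
    return used_memory_peak / used_memory > threshold

# A 900 MB peak against 500 MB of current usage trips the check.
print(peak_exceeds_threshold(500_000_000, 900_000_000))  # True
```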

High fragmentation: This instance has a memory fragmentation ratio greater than 1.4 (this means that the Resident Set Size of the Redis process is much larger than the sum of the logical allocations Redis performed). This problem usually results either from a large peak memory (check if there is a peak memory entry above in the report) or from a workload that causes the allocator to fragment memory heavily. If the problem is a large peak memory, then there is no issue. Otherwise, make sure you are using the Jemalloc allocator and not the default libc malloc. Note: The currently used allocator is: libc

I'm here to keep you safe, Sam. I want to help you.
