By sampling keys from your Redis databases, this script tries to identify which key patterns occupy the most memory.
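For anyone wondering how to run it: a hedged sketch, assuming the script below is saved locally (the filename `redis_profile.rb` is hypothetical) and a Redis server is running on the same machine, since the script shells out to `redis-cli`:

```shell
# Hypothetical filename; requires the redis gem and a locally reachable
# Redis server, because the script calls `redis-cli info` directly.
gem install redis
ruby redis_profile.rb
```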
#!/usr/bin/env ruby
# Evaluates a sample of keys/values from each redis database, computing statistics for each key pattern:
# keys: number of keys matching the given pattern
# size: approximation of the associated memory occupied (based on size/length of value)
# percent: the proportion of this 'size' relative to the sample's total
# Copyright Weplay, Inc. 2010. Available for use under the MIT license.
require 'rubygems'
require 'redis'
require 'yaml'
SAMPLE_SIZE = 10_000 # number of keys to sample from each db before computing stats
# Naive approximation of memory footprint: size/length of value.
def redis_size(db, k)
  t = db.type(k)
  case t
  when 'string' then db.get(k).length
  when 'list' then db.lrange(k, 0, -1).size
  when 'zset' then db.zrange(k, 0, -1).size
  when 'set' then db.smembers(k).size
  else raise("Redis type '#{t}' not yet supported.") # TODO accommodate more types
  end
end

def array_sum(array)
  array.inject(0){ |sum, e| sum + e }
end
def redis_db_profile(db_name, sample_size = SAMPLE_SIZE)
  db = Redis.new(:db => db_name[/\d+/]) # db_name looks like 'db0'; SELECT wants the numeric index
  keys = []
  sample_size.times { |i| keys << db.randomkey }
  key_patterns = keys.group_by{ |key| key.gsub(/\d+/, '#') }
  data = key_patterns.map{ |pattern, keys|
    [pattern, {'keys' => keys.size, 'size' => array_sum(keys.map{ |k| redis_size(db, k) })}]
  }.sort_by{ |a| a.last['size'] }.reverse
  size_sum = data.inject(0){ |sum, d| sum + d.last['size'] }
  data.each { |d| d.last['percent'] = '%.2f%%' % (d.last['size'].to_f*100/size_sum) }
end
db_names = `redis-cli info | grep ^db[0-9]`.split("\n").map{ |line| line.scan(/^db\d+/).first }
db_names.each do |name|
  puts "\nProfiling \"#{name}\"...\n#{'-'*20}"
  y redis_db_profile(name)
end

puts "\nOverall statistics:\n#{'-'*20}"
puts `redis-cli info | grep memory`
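The heart of the script is the pattern-normalization step: every run of digits in a key name is replaced with `#`, so keys that differ only in numeric IDs fall into the same bucket before the per-pattern stats are computed. A minimal, server-free sketch (the key names are hypothetical):

```ruby
# Collapse digits to '#' and group: keys that differ only in numeric IDs
# end up under one pattern, which the per-pattern stats are built on.
keys = ['user:1:name', 'user:2:name', 'session:42']
patterns = keys.group_by { |key| key.gsub(/\d+/, '#') }
# patterns => {"user:#:name"=>["user:1:name", "user:2:name"], "session:#"=>["session:42"]}
```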
scvinodkumar commented Jan 25, 2013

Hi, I am new to Redis... could you please guide me on how to run this file, and where to run it?

syrnick commented Jul 20, 2013

Very nice! We have a somewhat more involved version that scans all keys, but using RANDOMKEY is better for analyzing a running master. We declare all our keys with something like this:

 redis_key :launch_finished, "jobs:JOB_ID:reports:ID:launch_finished"

where JOB_ID and ID are substituted at runtime. The analysis then aggregates the usage by group.

Here's what I've done for size estimates that can improve your version:

  def key_size( redis, key )
    case redis.type(key)
    when "none" then 0
    when "string" then redis.get(key).size
    when "list" then redis.lrange(key,0,-1).map{|m| 1+m.size}.sum || 1
    when "zset" then redis.zrange(key,0,-1).map{|m| 1+m.size}.sum || 1
    when "set" then redis.smembers(key).map{|m| 1+m.size}.sum || 1
    when "hash" then redis.hgetall(key).flatten.map{|m| 1+m.size}.sum || 1
    end
  end
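The per-group aggregation described above can be sketched like this, with the numeric segments (JOB_ID, ID) collapsed so each key rolls up to its declared template. The key names and sizes here are hypothetical, not taken from a real database:

```ruby
# Roll per-key size estimates up to their declared template: collapse the
# numeric segments and sum the sizes within each resulting group.
sizes = {
  'jobs:1:reports:9:launch_finished' => 120,
  'jobs:2:reports:3:launch_finished' => 80,
}
by_group = sizes.group_by { |key, _| key.gsub(/\d+/, 'ID') }
                .map { |group, pairs| [group, pairs.map { |_, s| s }.inject(0, :+)] }
                .to_h
# by_group => {"jobs:ID:reports:ID:launch_finished"=>200}
```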
