@sourcec0de
Last active August 31, 2023 07:43
Why I choose not to store sessions in redis.

Why I don't use redis as a session store

You can never rely on a system to be online 100% of the time. It's just the ephemeral nature of computing. Inevitably things break and shit happens.

My primary reasons for not using redis

  • Increased tail latency from added network overhead (yes, redis is fast, but you're still crossing the network on every request)
  • One slow operation (SORT, LREM, SUNION) slows down all other queries
    • this is because redis executes commands on a single thread
  • If you lose the connection to redis you've introduced a fault and can no longer serve requests
  • Replication has a performance cost and increases complexity
    • and if you're using redis to persist data and you aren't running a cluster, you're just asking for trouble
  • Finally, redis strikes a very delicate balance between speed and persistence

Depending on the persistence settings you choose, you'll get one of two things: speed or fault tolerance.

Snapshotting

This method keeps redis fast, but you lose fault tolerance. Take these settings for example: save 60 1000. This tells redis to snapshot to disk every 60 seconds if at least 1000 keys have changed. If redis crashes or a machine fails, you lose everything written since the last snapshot.
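For reference, save rules live in redis.conf and can stack; any one of them triggers a snapshot. An illustrative configuration:

```conf
# Snapshot if at least 1000 keys changed in the last 60 seconds
save 60 1000
# ...or if even 1 key changed in the last 900 seconds
save 900 1
```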

Append-only file (AOF)

This method gives you full fault tolerance across restarts and crashes. However, it doesn't come free: the AOF grows with every write operation. For example, if you increment a counter 100 times your cached item holds the final value, but you end up with 100 entries in your AOF, 99 of them unneeded. On a restart or crash, redis has to replay the entire log (VERY EXPENSIVE).

However, you can mitigate this by periodically issuing BGREWRITEAOF. This tells redis to rewrite the AOF on disk in the background using the shortest sequence of commands that reproduces the current dataset. If you're on redis 2.2 you'll need to schedule this yourself. On 2.4 or above, redis can be configured to rewrite the AOF automatically.
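On 2.4+ the automatic rewrite is controlled by two redis.conf directives (the values shown are redis's shipped defaults):

```conf
# Rewrite once the AOF has doubled in size since the last rewrite...
auto-aof-rewrite-percentage 100
# ...but don't bother for files smaller than this
auto-aof-rewrite-min-size 64mb
```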

You can change how often redis fsyncs to disk.

  • fsync every time a command comes in (THE NO NO OPTION): safest, but too slow for most workloads
  • fsync every second: the default, and the best balance of safety and speed
  • never fsync, letting the OS decide when to flush: the fastest and least safe
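These three options map to the appendfsync directive in redis.conf:

```conf
appendonly yes
# one of: always | everysec | no
appendfsync everysec
```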

Now on top of all this, you also need to keep a backup copy of your AOF: a server crash while writing the AOF is likely to corrupt the file. If that happens, redis has a built-in tool to repair the corruption: redis-check-aof --fix.

Why I like client sessions

With all the aforementioned complexity, why not just use a cookie (small data) or local storage (blobs)?

  • the data is always available
  • there is no stateful service to manage (redis)
  • you can focus on scaling other components of your stack (web servers!!!!)

The only issue is you can't trust the client. How do we resolve this? CRYPTOGRAPHY!!!! Sign (or encrypt) the session so the client can't tamper with it. A current web standard is JWT (JSON Web Tokens): easy to use, with a great spec and libraries out the ass.

If you wanna roll your own, just pick a cipher.

```
# The AES-256-CBC cipher is used to encrypt the session contents, with an HMAC-SHA-256
# authentication tag (via Encrypt-then-MAC composition). A random 128-bit Initialization
# Vector (IV) is generated for each encryption operation (this is the AES block size
# regardless of the key size). The CBC-mode input is padded with the usual PKCS#5 scheme.
encKey := HMAC-SHA-256(secret, 'cookiesession-encryption');
sigKey := HMAC-SHA-256(secret, 'cookiesession-signature');
```

The guys over at Mozilla who run Mozilla Persona (an authentication and identity service) have already done all the work for you if you're using node.js. https://www.npmjs.com/package/client-sessions

Invalidating sessions

Client sessions do leave one major hole in security: if a user's password is compromised (and you haven't built multi-factor authentication into your system), any cookies already issued to someone's browser are irrevocable on their own.

To solve this issue, simply extend your user cookie to contain a session token (UUID v4?). Since you're already loading the user on each request, pull the list of their session tokens from the DB as well. When they change their password, delete the other session tokens to ensure any sessions carrying the old values are invalidated.

Conclusion

All of these items should give you plenty of reason to avoid using redis as a session store. A lot of people in the node.js community fall into this trap, since the first thing newcomers see is tutorial after tutorial using redis as the session store.

It's unnecessary, it's additional complexity, it's only slightly fault tolerant, and it's only as fast as you scale it to be, which adds still more complexity to managing your system. Though the last argument carries less weight if you already plan on using redis for other things like... caching, pub-sub, or a distributed lock queue.

Ultimately it's up to you, and as always these are just suggestions and the ramblings of a web developer. Choose the best tool for the job. :)

@erlangparasu

The only problem with JWT is synchronizing the revoked tokens. With a centralized session store like Redis, it's easy to keep everything in sync.
