Created April 26, 2023 17:13
An async TTL cache with a lock that forces concurrent calls to wait for the first call to complete and use the cached result rather than recompute it
import asyncio
import time
import functools


def cache_result(seconds):
    """
    Cache the result of an async function, keyed on its arguments.

    No support for maxsize, only TTL for now.

    Locks access to the wrapped function so that if two calls are made close
    to each other (within `seconds`), the second invocation simply waits for
    the first call's result to be ready rather than concurrently computing
    the same value. This is unlike the behavior of async_cache and onecache.
    """
    def decorator(func):
        cache = {}
        # A single lock serializes all invocations (even ones with different
        # arguments); this is what lets a duplicate concurrent call wait for
        # the in-flight computation instead of starting its own.
        lock = asyncio.Lock()

        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            async with lock:
                # Key on the function name plus its positional and keyword
                # arguments (kwargs as a frozenset so the key is hashable).
                key = (func.__name__, args, frozenset(kwargs.items()))
                if key in cache and time.time() - cache[key]['time'] < seconds:
                    return cache[key]['value']
                cache[key] = {'time': time.time(),
                              'value': await func(*args, **kwargs)}
                return cache[key]['value']
        return wrapper
    return decorator
Demo usage:
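A minimal sketch of a driver, assuming the decorator above is in scope; the coroutine `slow_add` and its 3-second delay are made up for illustration:

import asyncio
import time

@cache_result(seconds=10)
async def slow_add(a, b):
    await asyncio.sleep(3)  # stand-in for an expensive computation
    return a + b

async def main():
    start = time.time()
    # Two identical calls issued concurrently: the second blocks on the
    # decorator's lock, then returns the cached value instead of recomputing.
    results = await asyncio.gather(slow_add(1, 2), slow_add(1, 2))
    print(results, f"in {time.time() - start:.1f}s")

asyncio.run(main())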
Will print something like:

[3, 3] in 3.0s

Rather than ~6 seconds with onecache or async-cache, where the second concurrent invocation would miss the cache and run the 3-second body itself instead of waiting for the first call's result.