@kolypto
Last active January 18, 2021 13:03
Benchmark: test the overhead of running functions in threadpool
import time
import asyncio
import functools
def runs_in_threadpool(function):
    """ Decorate a sync function to become an async one run in a threadpool """
    @functools.wraps(function)
    async def wrapper(*args, **kwargs):
        loop = asyncio.get_running_loop()
        # run_in_executor() only accepts positional arguments, so bind kwargs with functools.partial()
        return await loop.run_in_executor(None, functools.partial(function, *args, **kwargs))
    return wrapper
# This is our test function.
# It does nothing, but we'll run it many times to measure the pure overhead of running anything inside a threadpool.
@runs_in_threadpool
def sample_function():
    return 1
# Async entry point
async def main():
    for i in range(ITERATIONS):
        await sample_function()
# Run, measure the time
ITERATIONS = 10_000
t1 = time.monotonic()
asyncio.run(main())
t2 = time.monotonic()
total_ms = (t2-t1)*1000
per_iteration = total_ms / ITERATIONS
print(f'total: {total_ms:.2f}ms, per iteration: {per_iteration:.2f}ms')
# total: 781.93ms, per iteration: 0.08ms
# Result: the overhead is about 0.08ms per call
# Interpretation: it's really cheap to send your functions to the threadpool. Just don't do it thousands of times, because the overhead adds up :)
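# For comparison, a minimal sketch (not part of the original benchmark): on Python 3.9+
# the stdlib also offers asyncio.to_thread(), which runs a sync function in the default
# threadpool and, unlike run_in_executor(), accepts keyword arguments directly.
# Timing it the same way should give a comparable per-call overhead figure.
# The names plain_function and main_to_thread below are just for this illustration.
def plain_function():
    return 1

async def main_to_thread():
    for i in range(ITERATIONS):
        await asyncio.to_thread(plain_function)

t1 = time.monotonic()
asyncio.run(main_to_thread())
t2 = time.monotonic()
total_ms = (t2 - t1) * 1000
print(f'to_thread total: {total_ms:.2f}ms, per iteration: {total_ms / ITERATIONS:.2f}ms')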