@zhenyuan992
Last active December 5, 2022 06:55
Benchmarks Python's built-in multiprocessing against a serial baseline, using all the logical CPUs (threads) available on the system.
import multiprocessing as mp
import timeit


def f(v):
    # CPU-bound work: sum of all integers below v.
    return sum([v_ for v_ in range(v)])


if __name__ == "__main__":  # guard needed where the spawn start method is used (Windows, macOS)
    threads = mp.cpu_count()
    print(f"Number of threads: {threads}")
    vlist = range(15000)  # takes about 5 seconds to run serially

    # Parallel run across all worker processes.
    start_pool = timeit.default_timer()
    with mp.Pool(threads) as p:
        vals_pool = p.map(f, vlist)
    stop_pool = timeit.default_timer()

    # Serial baseline in the main process.
    start_serial = timeit.default_timer()
    vals_serial = [f(v) for v in vlist]
    stop_serial = timeit.default_timer()

    # Both versions must produce identical results.
    assert all([v1 == v2 for v1, v2 in zip(vals_serial, vals_pool)])
    print(f"Pool  : {stop_pool-start_pool:.5f} s (time taken)")
    print(f"Serial: {stop_serial-start_serial:.5f} s (time taken)")
    print(f"Speed up from multiprocessing: {(stop_serial-start_serial)/(stop_pool-start_pool):.5f} X")
@zhenyuan992 (author):
possible result:

#Number of threads: 56
#Pool  : 0.55056 s (time taken)
#Serial: 4.74138 s (time taken)
#Speed up from multiprocessing: 8.61188 X

Note that 56 threads do not translate into a 56X speed-up; message-passing overhead between the worker processes eats into the gain.
Also, the run-to-run variation in the speed-up can be larger than expected, so multiple runs may be needed for a reliable estimate.
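
One way to deal with that variation is to repeat the measurement several times and compare the best or median run rather than a single timing. A minimal sketch, assuming the same workload as the gist; the pool_run helper and the repeat count of 5 are my own choices, not part of the original code.

import multiprocessing as mp
import statistics
import timeit


def f(v):
    return sum([v_ for v_ in range(v)])


def pool_run(threads, vlist):
    # One parallel pass over the workload, same as the gist's Pool section.
    with mp.Pool(threads) as p:
        p.map(f, vlist)


if __name__ == "__main__":
    threads = mp.cpu_count()
    vlist = range(15000)

    # timeit.repeat runs the benchmark several times; min/median smooth out
    # run-to-run noise from pool start-up and OS scheduling.
    times = timeit.repeat(lambda: pool_run(threads, vlist), number=1, repeat=5)
    print(f"Pool over 5 runs: min={min(times):.5f} s, median={statistics.median(times):.5f} s")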
