@gromgull
Created May 16, 2017 18:43
concurrent.futures.ProcessPoolExecutor bug in 2.7.12
"""
This script breaks in various ways on Python 2.7 with the concurrent.futures backport.
On any given run, the script will either:
* run just fine and exit cleanly,
* do all the work, but never exit, or
* more rarely, not start any work at all.
In either error case, ctrl-C'ing the main process leaves zombie workers behind.
I ran this on:
Linux 4.4.0-77-generic #98-Ubuntu SMP x86_64
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
concurrent.futures 3.1.1
The script also runs under Python 3; with 3.5.2 it seems to always work.
"""
from concurrent.futures import ProcessPoolExecutor
from random import random
from time import sleep
import logging
import multiprocessing

multiprocessing.log_to_stderr()
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger()


def work(i, x):
    log.info("Hi from job %d, I will now sleep %.2f", i, x)
    sleep(x)
    log.info("%d done sleeping %.2f, returning!", i, x)
    return i


if __name__ == '__main__':
    N = 5
    POOL_SIZE = 4

    with ProcessPoolExecutor(POOL_SIZE) as pool:
        inp = [random() * 10. for _ in range(N)]
        done = [False] * N
        log.info("Doing %d jobs", N)

        def setdone(i):
            done[i] = True

        for i, x in enumerate(inp):
            log.info("adding job to sleep for %.2f", x)
            # The worker returns its own index; the done-callback marks it finished.
            pool.submit(work, i, x).add_done_callback(lambda f: setdone(f.result()))

        # Poll until every callback has fired.
        while True:
            log.info("Sleeping for a second ...")
            sleep(1)
            if all(done):
                break
            log.info("%d still to go", len([x for x in done if not x]))

    log.info('all done')
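For comparison, on Python 3 the same jobs can be driven without the shared `done` list, the done-callbacks, or the polling loop, by waiting on the futures directly with `concurrent.futures.as_completed`. This is a sketch of the alternative pattern, not part of the original repro; the `run_jobs` helper and its short sleep times are made up for illustration:

```python
from concurrent.futures import ProcessPoolExecutor, as_completed
from random import random
from time import sleep


def work(i, x):
    """Sleep briefly, then return the job index."""
    sleep(x)
    return i


def run_jobs(n=5, pool_size=4):
    # Submit all jobs up front, then block on each future as it
    # completes -- no shared state and no sleep-and-check loop.
    with ProcessPoolExecutor(pool_size) as pool:
        futures = [pool.submit(work, i, random() * 0.1) for i in range(n)]
        return sorted(f.result() for f in as_completed(futures))


if __name__ == '__main__':
    print(run_jobs())  # -> [0, 1, 2, 3, 4]
```

Because `as_completed` blocks inside the `with` block, the executor only shuts down after every result has been collected, which sidesteps the "all work done but no exit" hang this gist demonstrates.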