@rmcgibbo
Created February 1, 2013 19:22
PBS Python shim for embarrassingly parallel job execution
#!/bin/sh
# probably need to set up some PBS directives up here...
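# For example (illustrative values -- adjust the resource requests and
# job name to your own cluster; a PBS/Torque scheduler is assumed):
#PBS -N shim_job
#PBS -l nodes=2:ppn=2
#PBS -l walltime=00:10:00
#PBS -j oe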
N_PROCS=4
# Set up a little python "shim" using mpi4py that farms
# out a set of command line commands to multiple nodes
# using MPI. This shim gets written to a little file
rm -f ./temp_shim.py
cat > ./temp_shim.py <<'EOF'  # quoted delimiter: the shell won't expand anything inside the Python
import subprocess
from mpi4py import MPI
comm = MPI.COMM_WORLD
# these are commands we want to execute (with bash)
# The alternative would be to read this list from a file, which would
# be pretty easy as well
jobs = [
'echo "hello 1"',
'echo "hello 2"',
'sleep 1; echo "hello 3"',
'echo "hello 4"',
'echo "hello 5"',
'echo "hello 6"',
'echo "hello 7"',
]
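# One hypothetical alternative: read the commands from a plain-text file,
# one command per line (the file name 'joblist.txt' here is made up):
#
#   with open('joblist.txt') as f:
#       jobs = [line.strip() for line in f if line.strip()]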
# start up all of the jobs that I'm assigned
processes = []
for job in jobs[comm.rank::comm.size]:
    print("I'm rank %d and I'm going to execute job: %s\n" % (comm.rank, job))
    p = subprocess.Popen(job, shell=True)
    processes.append(p)
# wait for all my jobs to finish
for process in processes:
    process.wait()
EOF
# now, run the shim using mpirun
# If using PBS, we're going to need to provide the hostfile via --hostfile
# mpirun --bynode --hostfile $PBS_NODEFILE -np $N_PROCS python temp_shim.py
mpirun --bynode -np $N_PROCS python temp_shim.py
echo "Done!"
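The work division in the shim hinges on Python's extended slice `jobs[comm.rank::comm.size]`: rank r of n ranks takes jobs r, r+n, r+2n, and so on, so every job lands on exactly one rank. A minimal standalone sketch of that round-robin split, with made-up job names and no MPI required:

```python
# Demonstrate the jobs[rank::size] round-robin split used by the shim.
jobs = ['job%d' % i for i in range(7)]
size = 4  # pretend there are 4 MPI ranks

# Each rank takes every size-th job starting from its own rank index.
assignments = {rank: jobs[rank::size] for rank in range(size)}
for rank, mine in assignments.items():
    print(rank, mine)
```

With 7 jobs and 4 ranks, the first three ranks get two jobs each and the last gets one, so the load is balanced to within a single job.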