
@jcwright77
Created January 30, 2017 22:15
SLURM script for parallel jobs
#!/bin/bash
# submit with sbatch cpi_nse.slurm
# flags normally passed on the sbatch command line may instead be supplied here as #SBATCH <flag> <value>
# command-line arguments to sbatch override these values
# Number of nodes
#SBATCH -N 32
# Number of processor cores (32 nodes * 32 cores = 1024; psfc, mit, and emiliob nodes have 32 cores per node)
#SBATCH -n 1024
# Specify how long your job needs to run. Be HONEST: it affects how long the job may wait for its turn.
#SBATCH --time=0:04:00
# which partition or queue the job runs in
#SBATCH -p sched_mit_nse
#customize the name of the stderr/stdout file. %j is the job number
#SBATCH -o cpi_nse-%j.out
#load default system modules
. /etc/profile.d/modules.sh
#load modules your job depends on.
#better here than in your $HOME/.bashrc to make debugging and requirements easier to track.
#here we are using gcc under MPI mpich
module load mpich/ge/gcc/64/3.1
#I like to echo the running environment
env
#Finally, the command to execute.
#The job starts in the directory it was submitted from.
#Note that mpirun knows from SLURM how many processors we have.
#In this case, we use all of them.
mpirun ./cpi
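
A typical submit-and-check workflow for the script above might look like the following sketch (the job ID 12345 is a placeholder; sbatch prints the real one at submission):

```shell
# Submit the job script; sbatch replies with the assigned job ID.
sbatch cpi_nse.slurm
# Check the job's state in the queue (PD = pending, R = running).
squeue -u $USER
# After completion, read the output file named by the -o directive;
# %j expands to the job ID, e.g. cpi_nse-12345.out.
cat cpi_nse-12345.out
```

These commands only work on a host with the SLURM client tools installed (typically the cluster login node).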
#!/bin/bash
# Minimal serial job: a single task on a single node.
#SBATCH -N 1
#SBATCH -n 1
# memory in MB (comment kept off the #SBATCH line so SLURM parses the value cleanly)
#SBATCH --mem=10000
#SBATCH --time=0:04:00
#SBATCH -p sched_mit_psfc
#SBATCH -o myjob-%j.out
. /etc/profile.d/modules.sh
./pi_serial
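
The source of `pi_serial` is not included in this gist. As a hypothetical stand-in for what such a serial pi estimator could compute, here is a one-liner that sums the first million terms of the Leibniz series (pi = 4 - 4/3 + 4/5 - 4/7 + ...) in awk:

```shell
# Illustrative stand-in for pi_serial (not the actual program):
# sum 1,000,000 terms of the Leibniz series and print pi to 4 decimals.
awk 'BEGIN { s = 0; for (k = 0; k < 1000000; k++) s += (k % 2 ? -4 : 4) / (2 * k + 1); printf "%.4f\n", s }'
```

Any such single-process workload fits this job template: one node, one task, and a modest memory request.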