@smhr
Created August 28, 2020 09:09
An example NBODY6 Slurm script for submission on scicluster
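To use it, save the script under any name (nbody6.slurm below is just an illustrative choice) and hand it to sbatch from the directory that holds the model's input files:

    sbatch nbody6.slurm
    squeue -u $USER   # check the job's state in the queue

When the job finishes, the results are copied back into an outputs/ directory next to the submission point.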
#!/bin/bash -l
#############################
# example for an OpenMP job #
#############################
#SBATCH --job-name=N50R0.5d20S0
# we ask for 1 task with 20 cores
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=20
# pin the job to this specific node
#SBATCH -w compute-0-0
# exclusive access to the compute node
# (the default is to share nodes)
#SBATCH --exclusive
# run for seven days
# d-hh:mm:ss
#SBATCH --time=7-00:00:00
# select the partition
#SBATCH --partition=para
#SBATCH --output="stdout.txt"
#SBATCH --error="stderr.txt"
# you may not place bash commands before the last SBATCH directive
ml purge # it's good practice to first unload all modules
ml CUDA foss Boost # then load whatever modules you need, if any
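# optionally record the loaded modules in the job log, for reproducibility
ml list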
# the model name; input files in the submit directory are expected as ${modelName}.*
modelName='N50R0.5d20S0'
# define and create a unique scratch directory
SCRATCH_DIRECTORY=/scratch1/${USER}/TidalMF/${modelName}
mkdir -p ${SCRATCH_DIRECTORY}
cd ${SCRATCH_DIRECTORY} || exit 1 # abort if the scratch directory is unreachable
# we copy everything we need to the scratch directory
# ${SLURM_SUBMIT_DIR} points to the path where this script was submitted from
cp ${SLURM_SUBMIT_DIR}/${modelName}.* ./
mv ./${modelName}.fort.10 ./fort.10 # nbody6 expects this file under the fixed name fort.10
# we set OMP_NUM_THREADS to the number of available cores
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
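# optional sanity check: note in the log where and how wide the run executes
echo "Running ${modelName} on $(hostname) with ${OMP_NUM_THREADS} OpenMP threads"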
ulimit -s unlimited
# we execute the job and time it (the timing report goes to stderr.txt)
time nbody6.gpu_size_NSBH190_WD0 < ${modelName}.input &> out.log
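# optionally record the solver's exit status so a failed run is easy to spot
rc=$?
if [ ${rc} -ne 0 ]; then
    echo "nbody6 exited with non-zero status ${rc}" >&2
fi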
# after the job is done we copy our output back to $SLURM_SUBMIT_DIR
mkdir -p ${SLURM_SUBMIT_DIR}/outputs
cp -rv ${SCRATCH_DIRECTORY}/* ${SLURM_SUBMIT_DIR}/outputs
# we step out of the scratch directory (uncomment the rm line below to remove it)
cd ${SLURM_SUBMIT_DIR}
#rm -rf ${SCRATCH_DIRECTORY}
# happy end
exit 0