@BenWibking
Created May 17, 2023 16:34
SLURM script for Pawsey GPU nodes
#!/bin/bash
#SBATCH -A pawsey0807-gpu
#SBATCH -J quokka_benchmark
#SBATCH -o 1node_%x-%j.out
#SBATCH -t 00:10:00
#SBATCH -p gpu
#SBATCH --exclusive
#SBATCH --ntasks-per-node=8
#SBATCH --gpus-per-node=8
#SBATCH --cpus-per-task=8
##SBATCH --core-spec=8   # (disabled) reserve cores for system processes
#SBATCH -N 1
# load modules
module load craype-accel-amd-gfx90a
module load rocm/5.0.2
# this environment setting is currently needed to work around a
# known issue with libfabric
export FI_MR_CACHE_MAX_COUNT=0  # disable libfabric memory-registration caching
# always run with GPU-aware MPI
export MPICH_GPU_SUPPORT_ENABLED=1
# use correct NIC-to-GPU binding
export MPICH_OFI_NIC_POLICY=NUMA
# run
EXE="build/src/HydroBlast3D/test_hydro3d_blast"
INPUTS="tests/benchmark_unigrid_512.in"
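# Launch one MPI rank per GCD (GPU die). SLURM_LOCALID (the rank's index on
# this node) is mapped below to the GCD on the same NUMA domain as the rank's
# CPU cores; the CPU-to-GCD numbering on these nodes is not sequential, hence
# the explicit case statement.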
srun bash -c "
case \$((SLURM_LOCALID)) in
0) GPU=4;;
1) GPU=5;;
2) GPU=2;;
3) GPU=3;;
4) GPU=6;;
5) GPU=7;;
6) GPU=0;;
7) GPU=1;;
esac
export ROCR_VISIBLE_DEVICES=\$((GPU));
${EXE} ${INPUTS}"
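
To sanity-check the binding before a full benchmark run, one option is to temporarily replace the ${EXE} ${INPUTS} application line with a command that reports each rank's placement. This is a minimal sketch, not part of the original script; it assumes the same one-node, eight-rank allocation and environment as above:

srun bash -c "
case \$((SLURM_LOCALID)) in
0) GPU=4;; 1) GPU=5;; 2) GPU=2;; 3) GPU=3;;
4) GPU=6;; 5) GPU=7;; 6) GPU=0;; 7) GPU=1;;
esac
export ROCR_VISIBLE_DEVICES=\$((GPU))
# print this rank's host, rank IDs, and the GCD it was assigned
echo rank=\${SLURM_PROCID} localid=\${SLURM_LOCALID} host=\$(hostname) gpu=\${ROCR_VISIBLE_DEVICES}"

Each of the eight ranks should print one line with a distinct gpu value (0-7), confirming that every rank is bound to exactly one GCD.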