@berceanu
Last active November 3, 2021 08:57
Example parallel SLURM script
#!/bin/bash
#SBATCH --job-name="1PW_serie2_He"
#SBATCH --output=petawatt_%j.log # Standard output and error log
#SBATCH --account=ptomassini_a+
#SBATCH --mail-type=END,FAIL # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=paolo.tomassini@mail.eli-np.ro
#SBATCH --partition=gpu
#SBATCH --time=144:00:00 # Max allowed job runtime
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=31200m # Reserve ~31 GB of RAM per core
#SBATCH --gres=gpu:4 # Allocate four GPUs
#SBATCH --gres-flags=enforce-binding
module use $HOME/MyModules
module load mambaforge_pic/latest
export FBPIC_DISABLE_THREADING=1
export MKL_NUM_THREADS=1
export NUMBA_NUM_THREADS=1
export OMP_NUM_THREADS=1
srun --mpi=pmi2 -n 4 python 1PW_serie2_He.py
wait
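The memory request above is per CPU, so with four single-CPU tasks the scheduler reserves four times 31200 MB on the node. A minimal sketch of that arithmetic (the figures are taken from the script above; nothing else is assumed):

```shell
#!/bin/bash
# Total memory reserved by the 4-task job above:
# --mem-per-cpu=31200m, --cpus-per-task=1, --ntasks=4
mem_per_cpu_mb=31200
ntasks=4
cpus_per_task=1
total_mb=$(( mem_per_cpu_mb * ntasks * cpus_per_task ))
echo "Total job memory: ${total_mb} MB"   # 124800 MB, i.e. just under 122 GiB
```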

Example single-GPU SLURM script
#!/bin/bash
#SBATCH --job-name="fbpic_1pw"
#SBATCH --output=one_pw_%j.log # Standard output and error log
#SBATCH --account=ptomassini_a+
#SBATCH --mail-type=END,FAIL # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=paolo.tomassini@mail.eli-np.ro
#SBATCH --partition=gpu
#SBATCH --time=72:00:00 # Max allowed job runtime
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=31200m # Reserve ~31 GB of RAM
#SBATCH --gres=gpu:1 # Allocate a single GPU
#SBATCH --gres-flags=enforce-binding
module use $HOME/MyModules
module load mambaforge_pic/latest
export MPICH_GPU_SUPPORT_ENABLED=1
export FBPIC_ENABLE_GPUDIRECT=1
srun -n 1 python LB_ELINP_1PW_bubble.py
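The two exports above enable CUDA-aware MPI so FBPIC can pass GPU buffers directly between ranks, which only helps if SLURM actually bound a GPU to the task. A hedged sketch of a sanity check that could be run inside the job before the srun line (SLURM sets CUDA_VISIBLE_DEVICES when a --gres=gpu:N request is granted):

```shell
#!/bin/bash
# Sanity check: was a GPU bound to this task?
# SLURM exports CUDA_VISIBLE_DEVICES for jobs that request --gres=gpu:N.
if [ -n "${CUDA_VISIBLE_DEVICES:-}" ]; then
    echo "GPU(s) visible: ${CUDA_VISIBLE_DEVICES}"
else
    echo "No GPU bound to this task"
fi
```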