@talesa
Created October 28, 2020 12:04
Playing with Slurm on the ziz GPU cluster. Submit the job script slurm.sh (below) with:

sbatch --gres=gpu:1 --partition=ziz-gpu-small slurm.sh
#!/bin/bash
#SBATCH -A bigbayes # Account to be used, e.g. academic, acadrel, aims, bigbayes, opig, oxcsml, oxwasp, rstudent, statgen, statml, visitors
#SBATCH -J job01 # Job name; optional but useful for identifying the job
#SBATCH --time=00:00:30 # Walltime - run time of just 30 seconds
#SBATCH --mail-user=adamg@robots.ox.ac.uk # Email address for notifications; change this to your own address
#SBATCH --mail-type=ALL # Caution: fine for debugging, but not when handling hundreds of jobs!
#SBATCH --output="/data/ziz/agolinsk/slurm_stdout" # Write standard output to an explicit file so the job does not fail if it cannot write to the current directory
#SBATCH --error="/data/ziz/agolinsk/slurm_stderr" # Same for standard error
echo Starting on `hostname`
# Activate the conda environment and run the job
source /data/localhost/not-backed-up/agolinsk/utils/miniconda3/bin/activate amci
python /data/ziz/agolinsk/slurm_python.py
# Record which GPU(s) the job was allocated
nvidia-smi > /data/ziz/agolinsk/slurm_output.txt
echo Finishing
# slurm_python.py, in /data/ziz/agolinsk
# Check that PyTorch can see the GPU allocated by Slurm
import torch
print(torch.cuda.is_available())
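
If the basic check passes, a slightly fuller sketch (assuming the amci environment has a CUDA-enabled PyTorch build) can also report which device the job was given:

import torch

# Report CUDA availability and, if present, details of the allocated device
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device count:", torch.cuda.device_count())
    print("Device name:", torch.cuda.get_device_name(0))
    print("CUDA version:", torch.version.cuda)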