Load Jupyter Lab on an ETH Euler compute node
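The gist has three parts: an LSF job script (jupyter_job.sh) that starts Jupyter Lab on a compute node, a conda environment spec for the packages it needs, and a helper script that submits the job and prints SSH tunnelling instructions.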
#!/bin/bash
#BSUB -J jupyter_lab[8889]
#BSUB -n 1
#BSUB -W 480
#BSUB -R "rusage[mem=80G]"
#BSUB -o jupyter-%I.log
# Get tunnelling info
# Unset XDG_RUNTIME_DIR to avoid Jupyter runtime-dir permission errors on the compute node
XDG_RUNTIME_DIR=""
node=$(hostname -s)
# The LSF job array index doubles as the port number (set via -J jupyter_lab[<port>])
port=${LSB_JOBINDEX}
# eth_proxy gives the compute node outbound internet access
module load eth_proxy
# Run Jupyter
jupyter lab --no-browser --port=${port} --ip=${node}
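This job script can also be submitted on its own, without the launcher script further down. A minimal sketch, assuming it is saved as jupyter_job.sh (the port 8890 and memory request here are illustrative):

# Command-line options override the #BSUB defaults embedded in the script;
# the job array index (8890) becomes the port Jupyter listens on
bsub -J "jupyter_lab[8890]" -W 240 -R "rusage[mem=8G]" < jupyter_job.sh
# Check the job state and the compute node it was dispatched to
bjobs -J "jupyter_lab[8890]" -o "STAT EXEC_HOST"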
name: jupyter_lab # Although I'm putting this in a new conda env, I generally install these in the base environment.
channels:
- conda-forge
- anaconda
dependencies:
- python
- jupyterlab
- nb_conda_kernels # Access all conda environments from one Jupyter lab session. On other conda envs, ensure you have ipykernel installed so they are picked up
- nodejs # to install jupyterlab extensions
- widgetsnbextension # On other conda envs, ensure you have ipywidgets installed
## Once the environment is installed, run any of the following extension loaders
## (also see https://jupyterlab.readthedocs.io/en/stable/user/extensions.html):
# jupyter labextension install @jupyter-widgets/jupyterlab-manager
# optional: jupyter labextension install @jupyterlab/plotly-extension
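The environment spec above can be built with conda before submitting the job. A minimal sketch (the file name environment.yaml is an assumption; use whatever name you saved the spec under):

# Create and activate the 'jupyter_lab' environment defined above
conda env create -f environment.yaml
conda activate jupyter_lab
# Then run the extension installers listed in the comments above, e.g.
jupyter labextension install @jupyter-widgets/jupyterlab-manager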
#!/bin/bash
usage="$(basename "$0") [-h] [-p W R] -- program to run jupyter lab on a compute node, for remote access
where:
-h show this help text
-p set the tunnelling port (default = 8889)
-W set the runtime, in minutes (default = 960)
-R set the memory allocation (default = 'rusage[mem=80G]'). Use 'light' to use a 5 day (1GB) node"
set -e
# Default values
port="8889"
time="960"
mem="rusage[mem=80G]"
while getopts ':hp:W:R:' option; do
    case "$option" in
        h)  echo "$usage"
            exit
            ;;
        p)  port=${OPTARG}
            ;;
        W)  time=${OPTARG}
            ;;
        R)  mem=${OPTARG}
            ;;
        :)  echo "Missing argument for -${OPTARG}" >&2
            echo "$usage" >&2
            exit 1
            ;;
        \?) echo "Illegal option: -${OPTARG}" >&2
            echo "$usage" >&2
            exit 1
            ;;
    esac
done
# List the names of all current jobs (skip the bjobs header line)
jobs=$(bjobs -o "JOB_NAME" | tail -n +2)
if [[ $jobs == *"jupyter_lab[${port}]"* ]]; then
    echo "Jupyter lab instance already running or pending on this port! Exiting..."
    exit
elif [[ $jobs == *"jupyter_lab"* ]]; then
    echo "Warning: Jupyter Lab already running (${jobs}); be careful not to overwrite notebooks across your lab instances."
fi
bsub -J "jupyter_lab[${port}]" -W "${time}" -R "${mem}" < jupyter_job.sh
until [ "$(bjobs -J "jupyter_lab[${port}]" -o 'STAT' | tail -1)" = "RUN" ]; do
    echo "Waiting for job to run..."
    sleep 5
done
user=$(whoami)
cluster="euler"
echo "Job 'jupyter_lab[${port}]' is running!"
node=$(bjobs -J "jupyter_lab[${port}]" -o "EXEC_HOST" | tail -1)
# Print tunnelling instructions
echo -e "
Job is running on ${node}
Command to create ssh tunnel:
> ssh -N -f -L ${port}:${node}:${port} ${user}@${cluster}.ethz.ch
Or, since it can be a pain to kill the tunnel:
> ssh -N -f -M -S /path/to/temp/file -L ${port}:${node}:${port} ${user}@${cluster}.ethz.ch
Then to kill:
> ssh -S /path/to/temp/file -O exit ${user}@${cluster}.ethz.ch
Where '/path/to/temp/file' is e.g. '/tmp/jupyter_lab_euler'
Use a Browser on your local machine to go to:
localhost:${port} (prefix w/ https:// if using password)
Once the job is complete (timeout, killed, etc.), you can find the log at 'jupyter-${port}.log'
"