Imperial HPC Cluster Getting Started Tips/Notes

Some tips on getting started with the Imperial RDS/HPC cluster

Combiz Khozoie, Ph.D.

Mounting an RDS folder for local access

To mount an RDS folder locally in Nautilus, navigate to 'Other Locations' and enter the following address, where username is your Imperial username (e.g. ckhozoie): -

smb://rds.imperial.ac.uk/rds/user/username

In the dialog, enter: -

user: username
domain: IC.AC.UK
password: yourpassword

Or using the command line: -

#!/bin/bash
# Mount the RDS user folder over CIFS using credentials stored in a root-only file
CREDFILE="/root/.credentials"

sudo mount.cifs //rds.imperial.ac.uk/rds/user/username ~/mounts/RDS -o credentials="$CREDFILE",uid=youruid,gid=yourgid

Where the contents of /root/.credentials is: -

username=yourusername
password=yourpassword
domain=IC.AC.UK

Where uid and gid (given as mount options above, since the credentials file only holds the username, password and domain) are your local computer's user and group IDs; these can be obtained by running id at the terminal.
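
For example, assuming a local account named ck (illustrative), the numeric IDs can be read directly: -

id -u ck    # numeric user ID, e.g. 1000
id -g ck    # numeric primary group ID, e.g. 1000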

Connecting to the HPC

ssh -XY username@login.hpc.ic.ac.uk
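
Optionally, an entry in ~/.ssh/config saves retyping the full hostname (a sketch; the alias hpc is arbitrary): -

# ~/.ssh/config
Host hpc
    HostName login.hpc.ic.ac.uk
    User username
    ForwardX11 yes
    ForwardX11Trusted yes

After this, ssh hpc is equivalent to the command above.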

Transfer files to the RDS

If mounted, the storage can be accessed locally. Alternatively: -

sudo scp -r primaries/data/raw/bam/ username@login.hpc.ic.ac.uk:/rds/general/user/username/home

Or from the RDS: -

scp -r /rds/general/user/ckhozoie/home/ms-sc/data/ destination-ip-address:/home/ckhozoie/Documents/ms-sc/data/
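
For large or interruption-prone transfers, rsync can resume partial copies (a sketch; the paths mirror the scp example above): -

rsync -avP primaries/data/raw/bam/ username@login.hpc.ic.ac.uk:/rds/general/user/username/home/bam/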

Submit a script to the queue

It's important not to run any work-heavy script directly on the HPC terminal shell (head node). Work performed on the head node can cause major performance issues for all users of the HPC. Instead, commands should be incorporated into shell scripts (e.g. tobcf.sh) and submitted to the queue via qsub: -

/opt/pbs/bin/qsub -lselect=1:ncpus=32:mem=62Gb -l walltime=24:00:00 tobcf.sh
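
A submitted script is an ordinary shell script; a minimal sketch of what tobcf.sh might contain (the PBS directives mirror the qsub flags above, and the module name and commands are illustrative): -

#!/bin/bash
#PBS -l select=1:ncpus=32:mem=62gb
#PBS -l walltime=24:00:00

# Load required tools inside the job (see the modules section below)
module load samtools

# PBS jobs start in the home directory; move to the submission directory
cd "$PBS_O_WORKDIR"

# ...work-heavy commands go here, e.g.:
samtools sort -o sorted.bam input.bam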

Check the Imperial Job Sizing website (https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/computing/high-throughput-computing/job-sizing/) for job classes and node/cpu/memory details that may affect queue waiting times, etc.

Check status of a job in the queue

Use qstat for a summary of queued and running jobs, or qstat -f for full details of a specific job.
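
For example (the job ID shown is illustrative): -

qstat -u "$USER"       # list your own jobs
qstat -f 1234567       # full details for a specific job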

Multi-core processing

To take advantage of multi-core processing on the HPC, the GNU parallel command is typically used within the script submitted as a job, e.g.

ls *.bam | parallel "samtools sort -o ./sorted/sorted{.}.bam {}"
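
By default parallel starts one task per detected core; to match the resources requested from the queue, the number of concurrent tasks can be pinned with -j (a sketch using the ncpus=32 request from the qsub example above): -

ls *.bam | parallel -j 32 "samtools sort -o ./sorted/sorted{.}.bam {}"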

Install tools / applications on your HPC account

Many tools are available as environment modules. Use module avail to list available modules and module load to add them to your environment. Note that shell scripts submitted as jobs will need module load commands for the required tools within the script itself. More information on module is available here: http://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/support/applications/
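
For example (samtools is an illustrative module name): -

module avail               # list all available modules
module avail samtools      # filter for a particular tool
module load samtools       # load it into the current session (or job script)
module list                # show what is currently loaded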

For tools not available via module, a request can be made for them to be installed; such requests are typically approved and processed within a few days.

Install via Conda

To set up a custom Anaconda environment, use module load anaconda3/personal followed by anaconda-setup. After the initial setup, run module load anaconda3/personal on every login. Search for packages using conda search rstudio and install them using conda install rstudio.
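
Putting the steps together (rstudio is just the package used in the example above): -

# First login only:
module load anaconda3/personal
anaconda-setup

# Every subsequent login:
module load anaconda3/personal

# Search for and install a package:
conda search rstudio
conda install rstudio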

Illumina Genome Facility - Data transfer via iRODS

See: https://imperial-genomics-facility.github.io/igf-pipeline-help/data_access.html

Med-bio queue

Permitted job sizes: -

Private queue pqmedbio-tput:
Availability: https://selfservice.rcs.imperial.ac.uk/pqs/nodes/pqmedbio-tput
Permitted job configurations:
-lselect=1-20:ncpus=1-40:mem=128gb -lwalltime=168:00:00

Private queue pqmedbio-large:
Availability: https://selfservice.rcs.imperial.ac.uk/pqs/nodes/pqmedbio-large
Permitted job configurations:
-lselect=1:ncpus=20:mem=240gb -lwalltime=168:00:00
-lselect=1:ncpus=30:mem=360gb -lwalltime=168:00:00
...
-lselect=1:ncpus=990:mem=11880gb -lwalltime=168:00:00
-lselect=1:ncpus=1000:mem=12000gb -lwalltime=168:00:00
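
To submit to one of these private queues, name the queue with -q (a sketch using the pqmedbio-tput limits above; the script name is illustrative): -

/opt/pbs/bin/qsub -q pqmedbio-tput -lselect=1:ncpus=40:mem=128gb -lwalltime=168:00:00 tobcf.sh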
