To mount an RDS folder locally in Nautilus, navigate to 'Other Locations' and enter the following address, where username is your Imperial username (e.g. ckhozoie): -
smb://rds.imperial.ac.uk/rds/user/username
In the dialog, enter: -
user: username
domain: IC.AC.UK
password: yourpassword
Or using the command line: -
#!/bin/bash
# Mount the RDS share at ~/mounts/RDS; the mount point must already exist (mkdir -p ~/mounts/RDS)
CREDFILE="/root/.credentials"
sudo mount.cifs //rds.imperial.ac.uk/rds/user/username ~/mounts/RDS -o credentials=$CREDFILE,uid=$(id -u),gid=$(id -g)
Where the contents of /root/.credentials are: -
username=yourusername
password=yourpassword
domain=IC.AC.UK
The uid and gid mount options (passed with -o above, since mount.cifs does not read them from the credentials file) map the mounted files to your local computer's user and group IDs; these can be obtained using id at the terminal.
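As a minimal sketch, the credentials file can be created readable by root only like this (the values are placeholders): -
sudo sh -c 'umask 177; printf "username=yourusername\npassword=yourpassword\ndomain=IC.AC.UK\n" > /root/.credentials'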
To log in to the HPC (with X11 forwarding enabled): -
ssh -XY username@login.hpc.ic.ac.uk
Once mounted, the storage can be accessed locally. Alternatively, copy files to the RDS over ssh with scp: -
sudo scp -r primaries/data/raw/bam/ username@login.hpc.ic.ac.uk:/rds/general/user/username/home
Or from the RDS back to your local machine (destination-ip-address is your machine's address, which must be reachable over ssh): -
scp -r /rds/general/user/ckhozoie/home/ms-sc/data/ destination-ip-address:/home/ckhozoie/Documents/ms-sc/data/
It's important not to run any work-heavy script directly on the HPC terminal shell (head node). Work performed on the head node can cause major performance issues for all users of the HPC. Instead, commands should be incorporated into shell scripts (e.g. tobcf.sh) and submitted to the queue via qsub: -
/opt/pbs/bin/qsub -lselect=1:ncpus=32:mem=62Gb -l walltime=24:00:00 tobcf.sh
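As a sketch of what a script such as tobcf.sh might contain (the bcftools commands, reference, and filenames here are illustrative assumptions, not the actual script): -
#!/bin/bash
module load bcftools                # tools must be loaded inside the job script
cd "$PBS_O_WORKDIR"                 # start in the directory qsub was run from
# Call variants from a sorted BAM into a compressed BCF (placeholder inputs)
bcftools mpileup -f reference.fa sorted.bam | bcftools call -m -Ob -o out.bcf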
Check the Imperial Job Sizing website (https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/computing/high-throughput-computing/job-sizing/) for job classes and node/CPU/memory details that may affect queue waiting times.
Submitted jobs can be monitored with qstat; qstat -f shows full details for a single job.
To take advantage of multi-core processing on the HPC, the GNU parallel command is typically used within the script submitted as a job, e.g.
ls *.bam | parallel 'samtools sort -o sorted/sorted{.}.bam {}'
Here {} is the input filename and {.} is the filename with its extension removed; note that current samtools requires -o to name the output file.
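By default parallel runs one job per core it can see; to cap concurrency at the cores actually allocated to the job, something like the following can be used (assuming PBS Pro sets NCPUS, with nproc as a fallback): -
ls *.bam | parallel -j "${NCPUS:-$(nproc)}" 'samtools index {}'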
Many tools are available as modules. Use module avail to list available modules and module load to load them. Note that shell scripts submitted as jobs will require module load commands for the required tools within the script. More information on module is available here: http://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/support/applications/
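For example (the version string is illustrative; check module avail for what is actually installed): -
module avail samtools         # list available samtools versions
module load samtools/1.3.1    # load a specific version
module list                   # show currently loaded modules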
For tools not available via module, special requests can be made for them to be installed; these are typically approved and processed within a few days.
To generate a custom anaconda environment, use module load anaconda3/personal followed by anaconda-setup. After the initial setup, run module load anaconda3/personal on every login. Search for packages using conda search rstudio and install them using conda install rstudio.
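Beyond installing into the personal base environment, a named environment can keep projects isolated (a sketch; the environment name myenv is illustrative and goes beyond the steps above): -
module load anaconda3/personal
conda create -y -n myenv rstudio   # create an isolated environment containing rstudio
source activate myenv              # activate it (conda activate on newer conda)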
See the Imperial Genomics Facility data access guide: https://imperial-genomics-facility.github.io/igf-pipeline-help/data_access.html
Permitted job sizes: -

Private queue pqmedbio-tput:
Availability: https://selfservice.rcs.imperial.ac.uk/pqs/nodes/pqmedbio-tput
Permitted job configurations:
-lselect=1-20:ncpus=1-40:mem=128gb -lwalltime=168:00:00

Private queue pqmedbio-large:
Availability: https://selfservice.rcs.imperial.ac.uk/pqs/nodes/pqmedbio-large
Permitted job configurations:
-lselect=1:ncpus=20:mem=240gb -lwalltime=168:00:00
-lselect=1:ncpus=30:mem=360gb -lwalltime=168:00:00
...
-lselect=1:ncpus=990:mem=11880gb -lwalltime=168:00:00
-lselect=1:ncpus=1000:mem=12000gb -lwalltime=168:00:00
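To submit to one of these private queues, the standard PBS -q flag can be used, e.g. (a sketch reusing the tobcf.sh example; the resource selection fits within the pqmedbio-tput limits above): -
/opt/pbs/bin/qsub -q pqmedbio-tput -lselect=1:ncpus=40:mem=128gb -lwalltime=168:00:00 tobcf.sh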