How to set up Galaxy with TORQUE in Amazon's cloud
example_tool.py:

#!/usr/bin/python
import sys

'''
Just a simple tool that adds the line number at
the end of each line
'''

with open(sys.argv[1]) as f_in, open(sys.argv[2], 'w') as f_out:
    c = 0
    for l in f_in:
        c += 1
        f_out.write(l.replace('\n', '') + '\t%i' % c + '\n')
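Before wiring the script into Galaxy, you can sanity-check it from the shell (the output file name here is just for illustration):

python example_tool.py test_input.txt numbered.txt
cat numbered.txt

With the three-line test_input.txt shown further down, each line should come back with a tab and its line number appended:

a b c	1
e f g	2
h i j	3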
example_tool.xml:

<tool id="custom_tool_1" name="Custom Tool 1" version="0.1.0">
    <description>Experimental tool 1</description>
    <command interpreter="python">example_tool.py $input $output</command>
    <inputs>
        <param format="tabular" name="input" type="data" label="Source file"/>
    </inputs>
    <outputs>
        <data format="tabular" name="output" />
    </outputs>
    <tests>
        <test>
            <param name="input" value="test_input.txt"/>
            <output name="output" file="test_output.txt"/>
        </test>
    </tests>
    <help>
    The best tool ever.
    </help>
</tool>
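When Galaxy runs the tool it replaces $input and $output with the paths of the selected dataset and the newly created output dataset, so the command it actually executes looks roughly like this (the paths are illustrative, not literal):

python example_tool.py /home/ubuntu/galaxy/database/files/000/dataset_1.dat /home/ubuntu/galaxy/database/files/000/dataset_2.dat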

The location of the files:

  • example_tool.py -> tools/mytools/
  • example_tool.xml -> tools/mytools/
  • my_tool_conf.xml -> config/
  • my_job_conf.xml -> config/
my_job_conf.xml:

<?xml version="1.0"?>
<!-- A sample job config that explicitly configures job running the way it is configured by default (if there is no explicit config). -->
<job_conf>
    <plugins>
        <plugin id="local" type="runner"
                load="galaxy.jobs.runners.local:LocalJobRunner"
                workers="4"/>
        <plugin id="torque1" type="runner"
                load="galaxy.jobs.runners.pbs:PBSJobRunner"
                workers="2"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="local">
        <destination id="local" runner="local"/>
        <destination id="torque1_dst" runner="torque1"/>
    </destinations>
    <tools>
        <tool id="custom_tool_1" destination="torque1_dst"/>
    </tools>
</job_conf>
my_tool_conf.xml (note: Galaxy expects a tool config file to have a toolbox root element, so the section is wrapped accordingly):

<?xml version="1.0"?>
<toolbox>
    <section name="MyTools" id="mTools">
        <tool file="mytools/example_tool.xml" />
    </section>
</toolbox>
test_input.txt:

a b c
e f g
h i j
kantale commented Mar 4, 2016
This guide assumes that you have set up and SSH-ed into an EC2 instance. The instance I used was "Ubuntu Server 14.04 LTS (HVM), SSD Volume Type - ami-f95ef58a". I would love to see some comments.

Get packages:

sudo apt-get update
sudo apt-get -y install g++ make libssl-dev libxml2-dev libboost-dev git 

Install TORQUE

mkdir torque
cd torque

wget http://www.adaptivecomputing.com/index.php?wpfb_dl=2984 
mv "index.php?wpfb_dl=2984" torque-5.1.2-1448394813_f498aba.tar.gz 
tar zxvf torque-5.1.2-1448394813_f498aba.tar.gz 
cd torque-5.1.2-1448394813_f498aba/
./configure
make
sudo make install

Add to /etc/hosts

<IP> <HOSTNAME>

For example:

172.30.0.172 ip-172-30-0-172
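If you want to script this step, something like the following should work on a stock EC2 Ubuntu instance (assuming the instance has a single private IP, so the first field of hostname -I is the right one):

echo "$(hostname -I | awk '{print $1}') $(hostname)" | sudo tee -a /etc/hosts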

Continue:

sudo echo "/usr/local/lib" > ld.so.conf 
sudo mv ld.so.conf  /etc/ld.so.conf 
sudo ldconfig 
sudo ./torque.setup root 

This should look like:

initializing TORQUE (admin: root@ip-172-30-0-172)

You have selected to start pbs_server in create mode.
If the server database exists it will be overwritten.
do you wish to continue y/(n)?y
root     29986     1  0 13:11 ?        00:00:00 pbs_server -t create
Max open servers: 9
Max open servers: 9

continue with:

sudo killall pbs_server 

Add HOSTNAME to the following files (a scripted sketch of this and the next step follows the two lists):

/var/spool/torque/server_name
/var/spool/torque/server_priv/nodes  
/var/spool/torque/server_priv/acl_svr/acl_hosts
/var/spool/torque/mom_priv/config 

Add root@HOSTNAME to:

/var/spool/torque/server_priv/acl_svr/operators
/var/spool/torque/server_priv/acl_svr/managers
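If you prefer to script these edits, here is a minimal sketch (one assumption worth noting: mom_priv/config conventionally takes a "$pbsserver <hostname>" line rather than a bare hostname):

HOST=$(hostname)
echo "$HOST" | sudo tee /var/spool/torque/server_name
echo "$HOST" | sudo tee /var/spool/torque/server_priv/nodes
echo "$HOST" | sudo tee /var/spool/torque/server_priv/acl_svr/acl_hosts
echo "\$pbsserver $HOST" | sudo tee /var/spool/torque/mom_priv/config
echo "root@$HOST" | sudo tee /var/spool/torque/server_priv/acl_svr/operators
echo "root@$HOST" | sudo tee /var/spool/torque/server_priv/acl_svr/managers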

Start everything:

sudo pbs_server 
sudo pbs_sched 
sudo pbs_mom 
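Optionally, check that all three daemons stayed up:

ps -ef | grep pbs_

You should see one line each for pbs_server, pbs_sched and pbs_mom.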

Make sure that this command:

sudo pbsnodes -a

Produces something like this:

ip-172-30-0-172
     state = free
     power_state = Running
     np = 1
     ntype = cluster
     status = rectime=1457098876,macaddr=02:dd:30:22:59:4b,cpuclock=Fixed,varattr=,jobs=,state=free,netload=79993541,gres=ip:-172-30-0-172,loadave=0.00,ncpus=1,physmem=2048516kb,availmem=1870768kb,totmem=2048516kb,idletime=3461,nusers=1,nsessions=1,sessions=1327,uname=Linux ip-172-30-0-172 3.13.0-74-generic #118-Ubuntu SMP Thu Dec 17 22:52:10 UTC 2015 x86_64,opsys=linux
     mom_service_port = 15002
     mom_manager_port = 15003

Make a test job

cd
mkdir test
cd test

Make the file: test.sh

echo "YEAH" > /home/ubuntu/test/result.txt

Submit it:

qsub test.sh
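qsub prints the ID of the submitted job; on a fresh server the first job should look something like:

0.ip-172-30-0-172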

Make sure that everything went ok:

ubuntu@ip-172-30-0-172:~/test$ qstat
Job ID                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
0.ip-172-30-0-172          test.sh          ubuntu          00:00:00 C batch          
ubuntu@ip-172-30-0-172:~/test$ ls
result.txt  test.sh  test.sh.e0  test.sh.o0
ubuntu@ip-172-30-0-172:~/test$ cat result.txt 
YEAH

Install pbs-python

cd
wget https://bootstrap.pypa.io/get-pip.py 
sudo python get-pip.py  
git clone https://github.com/radik/pbs-python 
cd pbs-python 
sudo python setup.py install 
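As a quick sanity check (assuming the pbs module installed cleanly), pbs_default() should print the server name you configured earlier:

python -c "import pbs; print pbs.pbs_default()"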

Install galaxy

cd
git clone https://github.com/galaxyproject/galaxy/ 
cd galaxy
mv config/galaxy.ini.sample config/galaxy.ini

Edit config/galaxy.ini and add (make sure you make this change in the [server:main] section!):

host = 0.0.0.0
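For reference, the relevant part of config/galaxy.ini should end up looking roughly like this:

[server:main]
...
host = 0.0.0.0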

Make sure that it runs fine:

sh run.sh

Now is a good time to:

  • Create an account
  • Upload the file test_input.txt (Get Data --> Upload File)

Stop the server with Ctrl-C.
Let's add a tool (assuming you have a local copy of these files in ../):

mkdir tools/mytools 
cp ../example_tool.py   tools/mytools/example_tool.py
cp ../example_tool.xml tools/mytools/example_tool.xml
cp ../my_tool_conf.xml config/my_tool_conf.xml

Edit config/galaxy.ini and add the line:

tool_config_file = config/tool_conf.xml.main,config/my_tool_conf.xml 

Rerun server:

sh run.sh

You should be able to see the tool at the end of the tool bar:

[screenshot]

Make a workflow to test the tool installation:

[screenshot]

Save it and run it, giving the file test_input.txt as input:

[screenshot]

The results should look like this:

[screenshot]

Configure my_tool to run via TORQUE

Stop the server (Ctrl-C):

cp ../my_job_conf.xml config/my_job_conf.xml

Edit config/galaxy.ini and add the following lines:

job_config_file = config/my_job_conf.xml
cluster_files_directory = database/pbs

Create the cluster_files_directory

mkdir database/pbs

Rerun the server and run the same workflow again (exactly as before).
With qstat you should be able to confirm that the job was submitted to TORQUE:

ubuntu@ip-172-30-0-172:~$ qstat
Job ID                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
1.ip-172-30-0-172          ...@ics.forth.gr ubuntu          00:00:01 C batch          

The resulting file should also be the same as before.
