Nate Coraor (natefoo)
import logging

from galaxy.jobs.mapper import JobMappingException

log = logging.getLogger(__name__)

# Keys (apparently requested core counts, given 'slurm-2c') map to
# configured Galaxy job destination IDs.
DESTINATION_IDS = {
    1: 'slurm',
    2: 'slurm-2c',
}

FAILURE_MESSAGE = 'This tool could not be run because of a misconfiguration in the Galaxy job running system. Please report this error.'
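
The rule function itself is truncated in this capture. A minimal sketch of how the lookup might be completed, assuming the keys are requested core counts (the function name and the hard-coded cores value are illustrative, not the gist's actual code):

def dynamic_cores_rule(app, tool, job):
    # Illustrative: a real rule would derive the core count from the job's
    # resource parameters rather than hard-coding it.
    cores = 1
    try:
        destination_id = DESTINATION_IDS[cores]
    except KeyError:
        log.error('No destination configured for %s core(s)', cores)
        # Raising JobMappingException surfaces FAILURE_MESSAGE to the user.
        raise JobMappingException(FAILURE_MESSAGE)
    return destination_id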
natefoo / tutorial.md
Last active March 3, 2020 13:37
Installing Data Managers @ GAT 2020 Barcelona

Reference Genomes - Exercise

Adapted from Oslo training

Learning Outcomes

By the end of this tutorial, you should:

  1. Have an understanding of the way in which Galaxy stores and uses reference data
  2. Be able to download and use data managers to add a reference genome and its pre-calculated indices into the Galaxy reference data system
natefoo / 00-README.md
Last active February 27, 2020 20:08
uWSGI Zerg Mode + Mules

Background

Two commonly used [Galaxy][galaxy] server configurations are [uWSGI Zerg Mode][uwsgi-zerg-mode] and [uWSGI Mules][uwsgi-mules] as [Galaxy job handlers][galaxy-scaling]. These features are not easily compatible, because Galaxy job handlers rely heavily on having unique server names, and handlers' server names must persist across restarts. Because zerg mode results in running two Galaxy servers simultaneously (however briefly), using mules with zerg mode would necessarily mean running mules with overlapping server names.

Solution

In a typical Galaxy zerg mode setup, the newly started zergling (B) terminates the old zergling (A) once B is ready to serve requests. Zergling B then continues to serve requests until another zergling (C) is started and terminates B.

It is possible to get both zerg mode and mules working together by configuring zergling B to start without mules, and perform a double zerg dance on each restart:
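
The steps of the dance are truncated in this capture. A minimal sketch of the idea, with hypothetical paths and two mule handlers; zergling B's config would be identical minus the mule and farm lines:

[uwsgi]
master = true
; attach to the zerg server's socket instead of binding a new one
zerg = /srv/galaxy/var/zerg.sock
; two mule job handlers; omitting these in zergling B's config means the
; two briefly-overlapping zerglings never run handlers with the same names
mule = lib/galaxy/main.py
mule = lib/galaxy/main.py
farm = job-handlers:1,2

Each restart then does the dance twice: start B (no mules) to terminate A, then start A again (with mules) to terminate B, so the handlers are only ever down for the brief double handoff.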

#!/usr/bin/env python
import argparse
import sys

import boto3
from jinja2 import Environment
from s3pypi.exceptions import S3PyPiError
from s3pypi.package import Package
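
The script body is truncated in this capture. A hypothetical skeleton of how it might continue, using only the stdlib and boto3 pieces imported above (the argument names and bucket/key layout are assumptions; the real script presumably drives s3pypi's Package API and handles S3PyPiError):

def main(argv=None):
    parser = argparse.ArgumentParser(description='Publish a package to an S3-backed PyPI index')
    parser.add_argument('--bucket', required=True, help='S3 bucket backing the index')
    parser.add_argument('dist', help='path to a built sdist or wheel')
    args = parser.parse_args(argv)
    # Upload the distribution as-is; a real implementation would also
    # regenerate the package's index page (e.g. with the Jinja2 Environment
    # imported above) and choose a proper object key.
    boto3.client('s3').upload_file(args.dist, args.bucket, args.dist)
    return 0

if __name__ == '__main__':
    sys.exit(main())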
Exception in thread database_heartbeart_main.web.6.thread:
Traceback (most recent call last):
  File "/cvmfs/test.galaxyproject.org/deps/_conda/envs/_galaxy_/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/cvmfs/test.galaxyproject.org/deps/_conda/envs/_galaxy_/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "lib/galaxy/model/database_heartbeat.py", line 96, in send_database_heartbeat
    self.update_watcher_designation()
  File "lib/galaxy/model/database_heartbeat.py", line 85, in update_watcher_designation
    self.sa_session.flush()
<?xml version="1.0"?>
<toolbox>
    <!--
        This is Galaxy's integrated tool panel and should be modified directly
        only for reordering tools inside a section. Each time Galaxy starts up,
        this file is synchronized with the various tool config files: tools,
        sections, and labels added to one of those files will also be added
        here in the appropriate place, while elements removed from the tool
        config files will be correspondingly deleted from this file.
        To modify locally managed tools (e.g. from tool_conf.xml), modify that file
natefoo / notes.md
Last active February 26, 2020 19:15
Interactive Tools setup notes

  1. Can't start the uWSGI proxy before the proxy SQLite map exists.
  2. `interactivetools_map` needs to go in the `galaxy` section for the Galaxy side.
  3. Docker has to be installed on the Galaxy server even though it won't run there (ok).
  4. Nodes can't have the job dir root-squashed.
  5. Set `<param id="docker_set_user"></param>` on the destination to run the container as root (see the sketch after this list).
  6. Not implemented for the DRMAA runner. I added the call to get the ports, but container stopping doesn't work; I consider this a container resolver issue rather than a runner issue and am using an Epilog script to deal with it for the moment.
  7. Wildcard certs are only valid for a single level, so you need a cert for *.interactivetoolentrypoint.interactivetool.example.org, where example.org is your Galaxy server.
  8. uWSGI as a proxy seems to have the same problems I encountered when originally trying to set it up as a proxy (enabling offload-threads causes connections to fail after 3 connections are made, disabling `offlo
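
For note 5, a hypothetical job_conf.xml destination showing the empty docker_set_user param (the destination and runner IDs here are assumptions, not from the notes):

<destination id="docker_slurm" runner="slurm">
    <param id="docker_enabled">true</param>
    <!-- left empty so the container runs as root rather than as the Galaxy user -->
    <param id="docker_set_user"></param>
</destination>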
galaxy.web.framework.decorators ERROR 2019-10-21 13:42:17,701 [p:6121,w:2,m:0] [uWSGIWorker2Core1] Uncaught exception in exposed API method:
Traceback (most recent call last):
  File "lib/galaxy/web/framework/decorators.py", line 282, in decorator
    rval = func(self, trans, *args, **kwargs)
  File "lib/galaxy/webapps/galaxy/api/tool_shed_repositories.py", line 656, in uninstall_repository
    errors = irm.uninstall_repository(repository=repository, remove_from_disk=kwd.get('remove_from_disk', True))
  File "lib/tool_shed/galaxy_install/installed_repository_manager.py", line 773, in uninstall_repository
    uninstall=remove_from_disk)
  File "lib/tool_shed/galaxy_install/tools/tool_panel_manager.py", line 444, in remove_repository_contents
    self.app.install_model.context.add(repository)
(ansible-latest)nate@weyerbacher% ansible --version
ansible 2.8.5
  config file = /home/nate/ansible/usegalaxy-clone/ansible.cfg
  configured module search path = ['/home/nate/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/nate/.virtualenvs/ansible-latest/lib/python3.7/site-packages/ansible
  executable location = /home/nate/.virtualenvs/ansible-latest/bin/ansible
  python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
(ansible-latest)nate@weyerbacher% cat ansible.cfg
[defaults]
retry_files_enabled = False
Friday 27 September 2019 12:51:11 -0400 (0:00:00.004) 0:16:50.574 ******
===============================================================================
galaxyproject.galaxy : Run webpack ------------------------------------ 378.58s
galaxyproject.galaxy : Install packages with yarn --------------------- 278.82s
galaxyproject.galaxy : Install Galaxy base dependencies --------------- 187.06s
usegalaxy_cvmfs : Remove node_modules ---------------------------------- 43.87s
galaxyproject.galaxy : Update Galaxy to specified ref ------------------ 20.08s
galaxyproject.galaxy : Install yarn ------------------------------------ 12.77s
usegalaxy_cvmfs : Abort CVMFS transaction ------------------------------ 10.86s
usegalaxy_cvmfs : Fetch Galaxy version ---------------------------------- 9.12s