pcm32 / check_check_binary_on_datatypes.py
Last active February 21, 2021 22:06
Check check_binary on Galaxy datatypes tests
#!/usr/bin/env python3
# Run from the Galaxy lib directory, with the Galaxy virtualenv activated
import os
from io import (
    BytesIO,
    StringIO,
)
from galaxy import util
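The preview stops at the imports, so the actual test code is not shown. As a rough illustration of what a binary check of this kind does, here is a minimal standalone sketch. This is not Galaxy's implementation; the function name `looks_binary` and the null-byte heuristic below are assumptions for illustration only.

```python
# A minimal, standalone sketch of the kind of binary check the gist tests.
# NOT Galaxy's implementation -- it only illustrates the common null-byte
# heuristic: if the first chunk of a stream contains a NUL byte, treat the
# content as binary.
from io import BytesIO

CHUNK_SIZE = 1024


def looks_binary(stream) -> bool:
    """Return True if the first chunk of `stream` contains a NUL byte."""
    chunk = stream.read(CHUNK_SIZE)
    return b"\x00" in chunk


print(looks_binary(BytesIO(b"plain text\n")))           # False
print(looks_binary(BytesIO(b"\x89PNG\r\n\x1a\n\x00")))  # True
```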
pcm32 / job_conf.xml
Created May 28, 2020 16:36
Dynamic destinations for multiple resubmissions in Galaxy
<?xml version="1.0"?>
<!-- A sample job config that explicitly configures job running the way it is configured by default (if there is no explicit config). -->
<job_conf>
<handlers assign_with="db-skip-locked" />
<plugins>
<plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
<plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner" />
</plugins>
<!-- For dynamic webless handlers -->
<!-- <handlers assign_with="db-skip-locked" /> -->
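The preview cuts off before any destinations are shown. In Galaxy's job_conf.xml, resubmission chains are typically declared per destination with `<resubmit>` tags; a hedged sketch follows, where the destination ids, runners, and conditions are hypothetical, not taken from the gist.

```xml
<!-- Illustrative only: destination ids, runners and conditions below
     are hypothetical, not taken from the gist. -->
<destinations default="short">
  <destination id="short" runner="local">
    <!-- On hitting the walltime, resubmit to the next destination -->
    <resubmit condition="walltime_reached" destination="long" />
  </destination>
  <destination id="long" runner="cli">
    <resubmit condition="memory_limit_reached" destination="bigmem" />
  </destination>
  <destination id="bigmem" runner="cli" />
</destinations>
```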
pcm32 / intructions.md
Created May 24, 2020 19:48
Setting up Escher, Jupyter and Cobrapy

Requirements

  • Python 3: available in most modern macOS and Linux installations; please install it if it is not. You can check whether it is available by running which python3 in a terminal.
  • pip for Python 3: check whether it is available by running which pip3; it normally ships with the Python 3 installation. Otherwise, please visit https://pip.pypa.io/en/stable/installing/.

All commands mentioned here should be run sequentially in the same terminal.

1.- Install virtualenv for simplicity

Execute:

pcm32 / logs_with_issues_kombu_tools_installations.log
Created May 7, 2020 10:15
Initial issues with kombu in certain uWSGI processes that might explain the later lack of tool synchronisation
This file has been truncated, but you can view the full file.
Wed May 6 22:55:43 2020 - *** Starting uWSGI 2.0.18 (64bit) on [Wed May 6 22:55:43 2020] ***
Wed May 6 22:55:43 2020 - compiled with version: 8.2.1 20180905 (Red Hat 8.2.1-3) on 17 April 2019 16:51:59
Wed May 6 22:55:43 2020 - os: Linux-3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020
Wed May 6 22:55:43 2020 - nodename: ********
Wed May 6 22:55:43 2020 - machine: x86_64
Wed May 6 22:55:43 2020 - clock source: unix
Wed May 6 22:55:43 2020 - pcre jit disabled
Wed May 6 22:55:43 2020 - detected number of CPU cores: 32
Wed May 6 22:55:43 2020 - current working directory: *******/galaxy/server/galaxy-20.01
Wed May 6 22:55:43 2020 - detected binary path: /usr/bin/python3.6
pcm32 / error_job_cores.py
Last active October 31, 2019 20:18
job_cores error Galaxy
galaxy.tools ERROR 2019-10-31 17:12:51,776 [p:3704,w:0,m:3] [WorkflowRequestMonitor.monitor_thread] Exception caught while attempting tool execution:
Traceback (most recent call last):
  File "lib/galaxy/tools/__init__.py", line 1491, in handle_single_execution
    collection_info=collection_info,
  File "lib/galaxy/tools/__init__.py", line 1573, in execute
    return self.tool_action.execute(self, trans, incoming=incoming, set_output_hid=set_output_hid, history=history, **kwargs)
  File "lib/galaxy/tools/actions/__init__.py", line 289, in execute
    history, inp_data, inp_dataset_collections, preserved_tags, all_permissions = self._collect_inputs(tool, trans, incoming, history, current_user_roles, collection_info)
  File "lib/galaxy/tools/actions/__init__.py", line 256, in _collect_inputs
    inp_data, all_permissions = self._collect_input_datasets(tool, incoming, trans, history=history, current_user_roles=current_user_roles, collection_info=collection_info)
Started by user Pablo Moreno
Building remotely on conda-mulled (mulled conda) in workspace /home/ubuntu/jenkins/workspace/mulled-seurat
> git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/ebi-gene-expression-group/bioconda-recipes # timeout=10
Fetching upstream changes from https://github.com/ebi-gene-expression-group/bioconda-recipes
> git --version # timeout=10
> git fetch --tags --progress https://github.com/ebi-gene-expression-group/bioconda-recipes +refs/heads/*:refs/remotes/origin/* --depth=1
> git rev-parse refs/remotes/origin/r-seurat-workflow^{commit} # timeout=10
> git rev-parse refs/remotes/origin/origin/r-seurat-workflow^{commit} # timeout=10
pcm32 / seven_days.png
Last active November 29, 2017 16:10
InfluxDB/heapster/Grafana memory usage badly plotted examples

The following plots were obtained using the setup available here on a Kubernetes cluster. The only variation was that a Persistent Volume and a corresponding claim were added to provide persistent storage.

Querying over a "Last 2 days" period (two_days.png file on this gist), you can see that Individual Memory Usage for pod galaxy-k8s-b5zs6 goes from around 1.6 GiB on 27/11 (~15:12) to around 3.18 GiB on 29/11 (~16:00).

For the same pod, querying over a "Last 7 days" period instead (seven_days.png file on this gist), you can see that the Individual Memory Usage for pod galaxy-k8s-b5zs6 goes from around 8.2 GiB on 27/11 (~15:10) to around 11.25 GiB on 29/11 (~16:00).

Values reported by docker stats for the same container (the pod has a single container) differ again: 1.1 GiB on 29/11 at ~16:00.

CONTAINER    CPU %        MEM USAGE / LIMIT       MEM %        NET I/O   BLOCK I/O           PIDS
pcm32 / commands.sh
Last active November 26, 2016 10:20
example-volume-error-terraform
terraform apply -state=trystack.tfstate .
pcm32 / error.log
Created November 25, 2016 17:31
Failed volumes on terraform
2016/11/25 16:10:08 [DEBUG] root: eval: *terraform.EvalWriteState
2016/11/25 16:10:08 [DEBUG] root: eval: *terraform.EvalApplyProvisioners
2016/11/25 16:10:08 [DEBUG] root: eval: *terraform.EvalIf
2016/11/25 16:10:08 [DEBUG] root: eval: *terraform.EvalWriteState
2016/11/25 16:10:08 [DEBUG] root: eval: *terraform.EvalWriteDiff
2016/11/25 16:10:08 [DEBUG] root: eval: *terraform.EvalApplyPost
2016/11/25 16:10:08 [ERROR] root: eval: *terraform.EvalApplyPost, err: 1 error(s) occurred:
* openstack_blockstorage_volume_v2.glusterfs_volume.1: Error waiting for volume (74f4a343-dd61-415f-930f-8d29f4693dff) to become ready: unexpected state 'error', wanted target 'available'. last error: %!s(<nil>)
2016/11/25 16:10:08 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error(s) occurred:
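The error references openstack_blockstorage_volume_v2.glusterfs_volume.1, i.e. the second instance of a counted volume resource. A hedged sketch of what such a resource declaration looked like in Terraform of that era (resource name aside, the count, volume name, and size below are hypothetical):

```hcl
# Hypothetical reconstruction -- count, name and size are illustrative,
# not taken from the gist.
resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
  count = 2
  name  = "glusterfs-volume-${count.index}"
  size  = 100
}
```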