Rik blahah
blahah / 10kb.random
Created July 21, 2022 17:40
random 10kb text file
blahah /
Last active November 16, 2020 06:56
_Dendrosenecio_ molecular phylogeny (chloroplast genome)
{ "one": "hello", "two": "venus", "three": "help", "four": "I'm stuck here" }
{ "one": "hello", "two": "mercury", "three": "help", "four": "I'm stuck here" }
{ "one": "hello", "two": "earth", "three": "help", "four": "I'm stuck here" }
{ "one": "hello", "two": "mars", "three": "help", "four": "I'm stuck here" }
blahah /
Last active April 18, 2018 21:43
Resources for self-teaching around feminism, anti-racism, and Indigenous and land rights
blahah /
Last active December 5, 2017 20:44
INK in production

Installing INK

The quickest and most maintainable way to get a production INK instance up and running is with Docker. INK is provided as a dockerised service and can be run, along with all of its dependency services, using Docker Compose. This guide shows you how to do that, with commands written for a fresh Ubuntu LTS (16.04) installation.



You'll also need Node.js, preferably installed via a Node version manager.
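The Docker Compose setup described above might look something like the following minimal sketch. To be clear, the image names, service layout, and environment variables here are assumptions for illustration only, not INK's actual published configuration:

```yaml
# docker-compose.yml -- illustrative sketch only; image names, service
# names, and environment variables are placeholders, not INK's real config
version: "2"
services:
  ink:
    image: example/ink-api   # placeholder image name
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: changeme
```

With a file like this in place, `docker-compose up -d` starts the application and its dependency services together in the background.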

blahah /
Created July 24, 2017 06:25
mafintosh sciencefair todo

implement SLEEP-backed metadata sources

  • take a dir of json files and store them in hypercore
  • client integrated into sciencefair's datasource class, replacing the existing metadata feed
  • make a CLI for creating a datasource
  • add a way for the feed to update itself by pointing to a new feed key
  • feed creator should be able to specify new key
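The first item above, taking a directory of JSON files and storing them in an append-only feed, could be sketched as follows. This is in Python rather than the Node/hypercore stack the project actually uses, purely to illustrate the shape of the data flow; the feed format here is a plain line-delimited file standing in for a hypercore log:

```python
import json
from pathlib import Path

def append_json_dir_to_feed(src_dir, feed_path):
    """Append every JSON file in src_dir to a line-delimited feed file.

    Each feed entry records a sequence number and the parsed document,
    mimicking the append-only log semantics that hypercore provides.
    """
    feed_path = Path(feed_path)
    # resume sequence numbering if the feed already has entries
    seq = sum(1 for _ in feed_path.open()) if feed_path.exists() else 0
    with feed_path.open("a") as feed:
        for f in sorted(Path(src_dir).glob("*.json")):
            doc = json.loads(f.read_text())
            feed.write(json.dumps({"seq": seq, "doc": doc}) + "\n")
            seq += 1
    return seq  # total number of entries now in the feed
```

A real implementation would append to a hypercore instead of a file, which is what gives subscribing peers verifiable, incremental sync.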
blahah / busco_to_upset.R
Created July 17, 2017 09:48
Example of BUSCO results to upset plots
# example of loading busco sample data, preparing gene set data, and making plots
# if necessary, uncomment to install dependencies
# install.packages("readr")
# install.packages("UpSetR")
# function to load BUSCO results and label the first two columns
load_busco <- function(path) {
  # BUSCO full-table output is tab-separated with '#' comment lines
  res <- readr::read_tsv(path, comment = "#", col_names = FALSE)
  names(res)[1:2] <- c("busco_id", "status")
  res
}
blahah /
Last active February 29, 2020 17:50
Ways dat can be leveraged to transform science, #1 - the internet of data transforms

dat is an incredibly powerful technology for peer-to-peer sharing of versioned, secure, integrity-guaranteed data.

One thing it excels at is populating a live feed of data points from one source and allowing any number of peers to subscribe to that feed. The data can only originate from the original source (this is guaranteed using public-key cryptography), but the peers in the network can still sync the new data with one another. To subscribe to a given source you only need to know an alphanumeric key that uniquely identifies it; the key is generated automatically by dat.
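The integrity guarantee can be illustrated with a toy append-only feed in Python. dat itself uses ed25519 signatures and a Merkle tree rather than this simple hash chain, but the sketch shows the key property: tampering with any earlier entry is detectable by every subscriber.

```python
import hashlib
import json

def append_entry(feed, data):
    """Append data to the feed, chaining each entry to the previous hash."""
    prev = feed[-1]["hash"] if feed else ""
    entry = {
        "data": data,
        "hash": hashlib.sha256((prev + json.dumps(data)).encode()).hexdigest(),
    }
    feed.append(entry)
    return entry

def verify_feed(feed):
    """Recompute the chain; a tampered entry invalidates its own hash
    and, transitively, every hash after it."""
    prev = ""
    for entry in feed:
        expected = hashlib.sha256(
            (prev + json.dumps(entry["data"])).encode()
        ).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

In dat the analogous check is done against the source's public key, which is why only the original source can extend the feed even though peers replicate it among themselves.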

There are many ways that this simple system can be used to build a new infrastructure for science. This is the first in a series of posts in which I'll explain how.

Here I briefly describe some ways dat can be used to automate some aspects of scientific discovery, increase resource and information reuse efficiency, and help keep our information resources up to date with science (a topic I will expand on signif