rmporsch / crontab.sh
Last active July 15, 2019 17:37
taskwarrior checks
#!/bin/bash
# Register two cron jobs: the daily script at every reboot and the
# reminder script on the hour and half hour, 9:00-19:30, Monday-Friday.
# Note: the paths must be resolved with command substitution, not quoted
# literally, and crontab needs "-" to read the new table from stdin.
daily=$(readlink -f daily-startup.sh)
min=$(readlink -f minreminder.sh)
line="@reboot $daily"
(crontab -l; echo "$line") | crontab -
line="0,30 9-19 * * 1-5 $min"
(crontab -l; echo "$line") | crontab -
rmporsch / checkepisode.py
Created August 16, 2015 13:52
Episode Crawler
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import urllib2
from bs4 import BeautifulSoup
import ConfigParser

# open() does not expand '~', so resolve the home directory explicitly
configFile = os.path.expanduser('~/.episodeSearch.config')
config = ConfigParser.RawConfigParser()
config.readfp(open(configFile))
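
This gist is Python 2 (urllib2, ConfigParser). A minimal sketch of the same setup on Python 3, assuming the same config path; the fetched URL is a placeholder:

import os
import configparser
import urllib.request
from bs4 import BeautifulSoup

configFile = os.path.expanduser('~/.episodeSearch.config')
config = configparser.RawConfigParser()
with open(configFile) as fh:
    config.read_file(fh)  # read_file() replaces the removed readfp()

# urllib2.urlopen became urllib.request.urlopen
html = urllib.request.urlopen('http://example.com').read()
soup = BeautifulSoup(html, 'html.parser')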
rmporsch / TODOtaskwarrior.py
Created August 29, 2015 09:13
adds TODOs from script to taskwarrior
import taskw as tw
import argparse
parser = argparse.ArgumentParser(description='Import TODOs from a coding file and add them to taskwarrior')
parser.add_argument('--file', dest='f', help='input file')
parser.add_argument('--project', dest='project', help='project name')
args = parser.parse_args()
f = args.f
project = args.project
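
The preview cuts off before the taskwarrior call. A minimal sketch of how the parsed file might feed into taskwarrior via the taskw library; the TODO comment pattern is an assumption, not the original code:

import re
from taskw import TaskWarrior

w = TaskWarrior()
todo = re.compile(r'#\s*TODO[:\s]+(.+)')  # hypothetical comment format

with open(f) as source:
    for line in source:
        match = todo.search(line)
        if match:
            # task_add() creates a new pending task in taskwarrior
            w.task_add(match.group(1).strip(), project=project)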
rmporsch / vimrc
Last active October 1, 2015 14:40
speed up CtrlP
" Ignore some folders and files for CtrlP indexing
let g:ctrlp_custom_ignore = {
\ 'dir': '\.git$\|\.yardoc\|public$|log\|tmp$',
\ 'file': '\.so$\|\.dat$|\.DS_Store$'
\ }
" Use The Silver Searcher https://github.com/ggreer/the_silver_searcher
if executable('ag')
" Use Ag over Grep
set grepprg=ag\ --nogroup\ --nocolor
rmporsch / mail_git.py
Last active January 18, 2017 08:24
transforms emails from Dictionary.com into table-formatted entries
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re

newEntry = False
words = []
defs = []
definition = []
with open("dict.email", 'r') as f:
    for line in f:
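        # The preview is truncated here; a plausible continuation
        # (the marker string below is an assumption, not the original):
        line = line.strip()
        if line.startswith('Word of the Day'):
            # a new entry begins: flush the previous definition first
            if definition:
                defs.append(' '.join(definition))
                definition = []
            words.append(line.split(':', 1)[-1].strip())
            newEntry = True
        elif newEntry and line:
            definition.append(line)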
make.lists <- function (mat) {
  # Build every ordered pair from the two input columns
  traits <- mat[, 1]
  prs <- mat[, 2]
  d <- t(combn(c(traits, prs), 2))  # all unordered pairs
  d <- rbind(d, d[, c(2, 1)])       # append each pair reversed
  out <- vector()
  i <- 1
  for (i in 1:nrow(d)) {
    temp <- d[i, ]

Model explorations and hyperparameter search with W&B and Kubernetes

In every machine learning project we have to continuously tweak and experiment with our models, not only to further improve performance but also to explore underlying model characteristics. These constant experiments require rigorous logging and performance tracking. Hence, various providers have come up with solutions to facilitate this tracking, such as TensorBoard, Comet, and W&B. Here at Apoidea we make use of W&B.

Within this blog post we would like to give a practical overview of how we run machine learning experiments and track their performance: specifically, how we quickly set up clusters in the cloud and train our models. We hope this helps others, and that engaging in a discussion with the wider machine learning community will also improve our own practices.

Within this post we will outline our setup.
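
As a taste of the tracking side, here is a minimal sketch of how metrics end up in W&B; the project name and metric values are placeholders:

import wandb

# Start a run; config records the hyperparameters under test
run = wandb.init(project="demo-project", config={"lr": 1e-3, "epochs": 3})
for epoch in range(run.config.epochs):
    train_loss = 1.0 / (epoch + 1)  # stand-in for a real training metric
    wandb.log({"epoch": epoch, "train_loss": train_loss})
run.finish()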

rmporsch / install_libraries.py
Last active November 23, 2020 01:58
Jupyter Notebook #jupyter #pip
# Install a pip package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install numpy
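
For conda-backed kernels the same idea applies; targeting sys.prefix makes sure the package lands in the environment the kernel actually runs in:

# Install a conda package into the current Jupyter kernel's environment
import sys
!conda install --yes --prefix {sys.prefix} numpy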
rmporsch / batching.py
Last active December 11, 2020 05:57
[Simple Python Recipes] Just some Python recipes #python
# raw python
def batch(iterable, n=1):
    """Yield successive slices of size n from a sequence."""
    l = len(iterable)
    for ndx in range(0, l, n):
        yield iterable[ndx:min(ndx + n, l)]

def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):  # xrange in the original Python 2 version
        yield lst[i:i + n]
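
A quick usage check (the sample data is arbitrary):

for group in batch(list(range(7)), n=3):
    print(group)  # [0, 1, 2] then [3, 4, 5] then [6]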
rmporsch / woker.py
Created November 25, 2020 03:03
[Async worker] #python
import asyncio

async def queue(task_name):
    # Poll the queue five times, then signal completion
    i = 0
    while True:
        if i >= 5:
            return True
        print(f"checking queue for task {task_name}")
        i += 1
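
A minimal driver for the coroutine (the task names are placeholders; the truncated original presumably also awaits between polls):

async def main():
    # Run two pollers concurrently
    await asyncio.gather(queue("alpha"), queue("beta"))

asyncio.run(main())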