shippy / bitbucket-pipeline.yml
Created April 11, 2022 14:27
Slim CI/CD in Bitbucket Pipelines (files)
image: python:3.8

pipelines:
  # Continuous Integration pipeline
  pull-requests:
    '**': # run on any branch
      - step:
          name: Set up and build
          caches:
            - pip
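          # Assumed continuation -- the preview cuts off after the cache
          # declaration. The install commands and the dbt state path below
          # are illustrative sketches, not part of the gist.
          script:
            - pip install -r requirements.txt
            - dbt deps
            - dbt build --select state:modified+ --defer --state prod-run-artifacts/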
shippy / _test.py
Last active September 18, 2018 07:40
Using nested rules with Dragonfly
# A proof of concept for implementing re-usable nested rules in Dragonfly.
#
# See http://simon.podhajsky.net/blog/2017/executing-nested-rules-with-dragonfly/
# for the accompanying article.
try:
    import pkg_resources
    pkg_resources.require("dragonfly >= 0.6.5beta1.dev-r99")
except ImportError:
    pass
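A minimal sketch of the nested-rule pattern the article builds on, assuming dragonfly is installed; the grammar name, spoken forms, and rule classes are illustrative, not taken from the gist:

# Illustrative example only: a MappingRule nested inside a CompoundRule
# via RuleRef, which is the re-use mechanism the gist demonstrates.
from dragonfly import CompoundRule, Grammar, MappingRule, RuleRef

class ColorRule(MappingRule):
    exported = False                     # only usable inside another rule
    mapping = {"red": "red", "blue": "blue"}

class PaintRule(CompoundRule):
    spec = "paint <color>"               # nests ColorRule via RuleRef
    extras = [RuleRef(rule=ColorRule(), name="color")]

    def _process_recognition(self, node, extras):
        print("chosen color:", extras["color"])

grammar = Grammar("paint example")
grammar.add_rule(PaintRule())
grammar.load()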
#!/usr/bin/env bash
# A script to replicate VundleVim/Vundle.vim/issues/807
## Setup
cd /tmp
rm -rf bugtest
mkdir bugtest && cd bugtest && git init
git submodule add -b vundle_bug https://github.com/shippy/dotvim.git dotvim
git submodule update --init --recursive
hook_path=".git/modules/dotvim/hooks/post-checkout"
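# Assumed continuation -- the preview ends at the hook path. The hook body
# below is illustrative; the real gist presumably installs whatever hook
# triggers the Vundle issue.
cat > "$hook_path" <<'EOF'
#!/usr/bin/env bash
echo "post-checkout hook ran" >&2
EOF
chmod +x "$hook_path"

## Re-checkout the submodule so the hook fires
git submodule update --checkout --force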
shippy / 1-downloadFromGH.rb
Last active March 21, 2017 00:29
Mirroring a repository with issues, labels & milestones from one GitHub server to another
require 'pry'
require 'octokit'
require 'json'
# Part 0: Extract bare repo and push it to GH:
# Follow https://help.github.com/enterprise/2.2/admin/articles/moving-a-repository-from-github-com-to-github-enterprise/
# Part 1: Extract issues & everything else from the source repo
## Setup
Octokit.configure do |c|
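  # Assumed continuation (the preview stops inside the configure block):
  # the token source, the repo slug, and the output file below are
  # hypothetical, not from the gist.
  c.access_token = ENV['GITHUB_TOKEN']
  c.auto_paginate = true                  # fetch every page of results
end

repo = 'owner/repo'                        # hypothetical source repository
client = Octokit::Client.new

issues     = client.issues(repo, state: 'all')
labels     = client.labels(repo)
milestones = client.list_milestones(repo, state: 'all')

File.write('issues.json', JSON.pretty_generate(issues.map(&:to_h)))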
## Dependencies (get them with `install.packages(c("dplyr", "ggplot2", "ggthemes"))`)
library(dplyr)
library(ggplot2)
library(ggthemes)
options(repr.plot.width = 8, repr.plot.height = 5) # Default figure size
## Load + process data
x <- read.csv("gelman_cup_graphic_reporting_challenge_data.csv")
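## Assumed continuation -- the preview stops after the load step. A sketch
## of the kind of dplyr + ggplot2 pipeline the setup implies; the column
## names `group` and `value` are hypothetical stand-ins for the real ones.
x %>%
  group_by(group) %>%
  summarise(value = mean(value)) %>%
  ggplot(aes(x = group, y = value)) +
  geom_col() +
  theme_tufte()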
# The output of the script is meant to be piped to an output csv file.
# Usage:
# python foursquare-parse.py > foursquare.csv
import xml.etree.ElementTree as ET
from dateutil import parser
from datetime import *
tree = ET.parse('foursquare.kml') # or whatever the kml file is called
root = tree.getroot()
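# Assumed continuation -- the preview stops after parsing. The KML 2.2
# namespace is standard; the fields printed below are a guess at what a
# Foursquare check-in export contains.
ns = {'kml': 'http://www.opengis.net/kml/2.2'}
print('name,timestamp,coordinates')
for pm in root.iter('{http://www.opengis.net/kml/2.2}Placemark'):
    name = pm.findtext('kml:name', default='', namespaces=ns)
    when = pm.findtext('.//kml:when', default='', namespaces=ns)
    coords = pm.findtext('.//kml:coordinates', default='', namespaces=ns)
    ts = parser.parse(when).isoformat() if when else ''
    print('"%s",%s,%s' % (name.replace('"', '""'), ts, coords.strip()))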
shippy / crawler.py
Created April 29, 2014 04:06
Scripts for scraping Google results, gathering first-order links between them, and exporting them to Gephi-compatible CSV. Order of running: scrape.sh, crawler.py, remove-duplicates.py, export.py, and domain-network.py.
# Crawls results scraped by scrape.sh; saves resulting nodes and edges into pickles.
import pickle
import urlparse          # Python 2 stdlib; use urllib.parse on Python 3
from objects import *    # local helper module shipped with the gist
import pdb               # imported for interactive debugging

nodes = []               # nodes discovered during the crawl
edges = []               # first-order links between results
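# Assumed continuation -- the crawl loop itself is cut off in the preview.
# The pickle file names below are hypothetical.
with open('nodes.pickle', 'wb') as f:
    pickle.dump(nodes, f)
with open('edges.pickle', 'wb') as f:
    pickle.dump(edges, f)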
shippy / volby13-gdp.r
Created January 27, 2013 04:49
Explores the relationship between regional GDP per capita and the proportion of second-round votes that Milos Zeman, the Czech President-Elect, received.
library(XML)

reg_gdp_url <- "http://en.wikipedia.org/wiki/Regions_of_the_Czech_Republic"
reg_gdp <- readHTMLTable(reg_gdp_url, encoding = "UTF-8")
regions <- reg_gdp[[2]][1:14, 2]          # region names from the Wikipedia table
GDP <- as.numeric(gsub(",", "", as.character(reg_gdp[[2]][1:14, 8])))  # strip thousands separators
nuts <- c(1100, 2100, 3100, 3200, 4100, 4200, 5100, 5200, 5300, 6200, 8100, 7100, 7200, 6100)  # NUTS codes for the 14 regions
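## Assumed continuation -- the preview ends with the region codes. The
## votes.csv file and its nuts/zeman_share columns are hypothetical.
gdp_df <- data.frame(nuts = nuts, region = regions, gdp = GDP)
votes <- read.csv("votes.csv")                 # hypothetical second-round results
merged <- merge(gdp_df, votes, by = "nuts")
cor(merged$gdp, merged$zeman_share)            # strength of the relationship
plot(merged$gdp, merged$zeman_share,
     xlab = "Regional GDP per capita", ylab = "Zeman vote share")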