cyberbikepunk /
Created Nov 21, 2016

A Processor class for the datapackage-pipelines framework (halfway done).

"""Boilerplate functionality for processors."""

import logging
import copy
import inspect
import petl
import json
import itertools
import collections

from datapackage_pipelines.wrapper.wrapper import processor
"""A boilerplate class for row processing.

The `Processor` class fulfills 2 purposes:

1. Provide boilerplate functionality:
   - Log the processor parameters
   - Force the iteration over a given sample size
"""
"""This processor casts amounts and dates by sniffing data.

At this stage we assume that the data has gone through `reshape_data` and
`concatenate_identical_resources`. In other words, we assume that we have a
single resource with all the fiscal fields. The current schema differs
from a fiscal datapackage, however, in that all fields are strings. After this
processor, we can safely run the `load_fiscal_schema` processor. Values will
have the correct os-type and the data will pass validation tests.
"""
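The casting step described above might be sketched as follows. This is a minimal illustration, not the gist's actual implementation: the `sniff_cast` helper and the list of date formats are assumptions.

```python
from datetime import datetime

# Illustrative date formats to try, in order; the real processor's
# format list is an assumption here.
DATE_FORMATS = ['%Y-%m-%d', '%d/%m/%Y', '%Y']


def sniff_cast(value):
    """Cast a string to a date or a float by trial and error."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).date()
        except ValueError:
            pass
    try:
        # Amounts may carry thousand separators.
        return float(value.replace(',', ''))
    except ValueError:
        # Leave anything unrecognised untouched.
        return value
```

For example, `sniff_cast('1,234.5')` yields a float while `sniff_cast('2016-11-21')` yields a date.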
def is_test():
    """Return True if the runtime is a pytest session."""
    this_frame = inspect.currentframe()
    all_frames = inspect.getouterframes(this_frame)
    for frame in all_frames:
        if 'pytest' in str(frame):
            return True
    return False
cyberbikepunk /
Created Oct 13, 2016
def resource_generator(row_processor, test_me=False):
    """A decorator to loop over all resources.

    This convenience decorator is used inside processor modules to turn a
    function that processes a single row into a function that returns a
    generator of generators (all resources, then all rows in each resource).
    The decorator disables itself automatically in the context of pytests,
    so that the decorated function can be tested more easily.
    """
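The preview cuts off before the decorator body. A minimal sketch of what such a decorator might look like, where the `test_me` short-circuit and the nested-generator shape are assumptions inferred from the docstring:

```python
from functools import wraps


def resource_generator(row_processor, test_me=False):
    """Turn a row processor into a resources-of-rows generator (sketch)."""
    if test_me:
        # In tests, expose the undecorated row processor directly.
        return row_processor

    @wraps(row_processor)
    def wrapper(resources, *args, **kwargs):
        for resource in resources:
            # Yield one lazy generator of processed rows per resource.
            yield (row_processor(row, *args, **kwargs) for row in resource)

    return wrapper
```

With this sketch, decorating a function that doubles one row produces a function that walks every resource and doubles every row in each.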
"""Data-package validation report"""
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from logging import getLogger
from future import standard_library
"""Color print JSON objects"""
from collections import OrderedDict
from termcolor import colored, cprint
def quote(value):
    return '"' + str(value) + '"'
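The `quote` helper wraps values in double quotes so they can be assembled into JSON-like output. A hedged sketch of how it might be combined with `OrderedDict` to render an object line by line (the `render` function is illustrative; the gist itself pipes the pieces through termcolor's `colored`, which is omitted here):

```python
from collections import OrderedDict


def quote(value):
    return '"' + str(value) + '"'


def render(obj, indent=2):
    """Render a flat mapping as JSON-like lines (colors omitted)."""
    pad = ' ' * indent
    lines = ['{']
    for key, value in obj.items():
        # Quote both key and value; OrderedDict preserves key order.
        lines.append(pad + quote(key) + ': ' + quote(value) + ',')
    lines.append('}')
    return '\n'.join(lines)
```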
cyberbikepunk /
Created Jun 27, 2016 — forked from Chaser324/
GitHub Standard Fork & Pull Request Workflow

Whether you're trying to give back to the open source community or collaborating on your own projects, knowing how to properly fork and generate pull requests is essential. Unfortunately, when I started going through the process of forking and issuing pull requests, I had some trouble figuring out the proper method for doing so and made quite a few mistakes along the way. I found a lot of the information on GitHub and around the internet to be rather piecemeal and incomplete - part of the process described here, another there, common hangups in a different place, and so on.

In an attempt to collate this information for myself and others, this short tutorial covers what I've found to be fairly standard procedure for creating a fork, doing your work, issuing a pull request, and merging that pull request back into the original project.

Creating a Fork

Just head over to the GitHub page and click the "Fork" button. It's just that simple. Once you've done that, you can use your favorite git client to clone your fork.
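The steps above read, in command form, roughly as follows; the URLs and branch name are placeholders, not values from the tutorial:

```shell
# Clone your fork (placeholder URL).
git clone https://github.com/<you>/<repo>.git
cd <repo>

# Track the original project so you can keep your fork up to date.
git remote add upstream https://github.com/<original-owner>/<repo>.git

# Do your work on a feature branch, then push and open a pull request.
git checkout -b my-feature
git push -u origin my-feature
```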

cyberbikepunk /
Created Jun 26, 2016
Get command line scripts from
#! /usr/bin/env python
"""The project command line interface for developers."""
from __future__ import (absolute_import,
                        division,
                        print_function,
                        unicode_literals)
from io import open
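The preview ends after the imports. A minimal sketch of what a developer CLI entry point might look like, using the stdlib `argparse`; the `lint` and `test` subcommands are invented for illustration and are not from the gist:

```python
import argparse


def build_parser():
    """Build the developer CLI parser (illustrative subcommands)."""
    parser = argparse.ArgumentParser(description='Developer tasks.')
    subparsers = parser.add_subparsers(dest='command')
    subparsers.add_parser('lint', help='Run the linters.')
    subparsers.add_parser('test', help='Run the test suite.')
    return parser


if __name__ == '__main__':
    args = build_parser().parse_args()
    print(args.command)
```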
ZSH autocompletion
As an alternative to implementing native zsh completion, you can also use zsh's bash completion compatibility mode. To use it, add

autoload bashcompinit
bashcompinit
eval "$(_FOO_BAR_COMPLETE=source foo-bar)"

to your .zshrc. The last line assumes your click script is called foo-bar; see the click docs for more info.