David Yerrington dyerrington

@oanhnn
oanhnn / using-multiple-github-accounts-with-ssh-keys.md
Last active May 4, 2024 04:45
Using multiple github accounts with ssh keys

Problem

I have two GitHub accounts: oanhnn (personal) and superman (work). I want to use both accounts on the same computer, without typing a password every time I do a git push or pull.

Solution

Use SSH keys and define host aliases in the SSH config file (one alias per account).

How to?

  1. Generate an SSH key pair for each account and add the public keys to the corresponding GitHub accounts (see the config sketch below).
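
For illustration only, a minimal ~/.ssh/config along the lines this gist describes, with one host alias per account; the key file names below are assumptions, not taken from the original:

# Personal account (key file name assumed for illustration)
Host github.com-oanhnn
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_rsa_oanhnn

# Work account (key file name assumed for illustration)
Host github.com-superman
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_rsa_superman

A repository cloned through the alias, e.g. git clone git@github.com-superman:org/repo.git, then authenticates with the work key and no password prompt.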
@brizandrew
brizandrew / README.md
Created July 28, 2017 22:07
How to use node.js build routines and npm packages in Django.

Using Node.js With Django

When writing Django apps it's easy to ignore the organization of your front-end code. Often, backend developers will just write a static JS file and a CSS file, stick them in the static directory, and call it a day.

You can also build the front end as a completely independent project, with a complex gulp build routine that lives outside the Django app. But if you don't know gulp, Node, or similar tooling, getting started can be daunting.

Enter django-compressor-toolkit (the name doesn't quite roll off the tongue).

Setting Up Django-Compressor-Toolkit

Using django-compressor and django-compressor-toolkit, you can write ES6 JavaScript with its import/export syntax or style your pages with Sass instead of plain CSS, while leaving your deploy routine largely untouched.
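
As a rough sketch of the settings this setup involves (the precompiler class paths below are assumptions to verify against the django-compressor-toolkit README, not guaranteed API):

# settings.py -- sketch only; check class paths against the
# django-compressor-toolkit documentation.
INSTALLED_APPS = [
    # ... your apps ...
    "compressor",
    "compressor_toolkit",
]

STATICFILES_FINDERS = [
    "django.contrib.staticfiles.finders.FileSystemFinder",
    "django.contrib.staticfiles.finders.AppDirectoriesFinder",
    "compressor.finders.CompressorFinder",
]

# Precompilers that let {% compress %} blocks transpile ES6 and compile SCSS
COMPRESS_PRECOMPILERS = (
    ("module", "compressor_toolkit.precompilers.ES6Compiler"),
    ("css", "compressor_toolkit.precompilers.SCSSCompiler"),
)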

@juliasilge
juliasilge / installation.md
Last active April 22, 2024 21:04
Installing R + Tensorflow on M1
@dyerrington
dyerrington / subplots.py
Created March 29, 2017 21:33
Plotting multiple figures with seaborn and matplotlib using subplots.
import matplotlib.pyplot as plt

##
# Create a figure space matrix consisting of 3 columns and 2 rows
#
# Here is a useful template to use for working with subplots.
#
##################################################################
fig, ax = plt.subplots(figsize=(10, 5), ncols=3, nrows=2)
left = 0.125  # the left side of the subplots of the figure
right = 0.9   # the right side of the subplots of the figure
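
The preview cuts off here; as a hedged continuation of the template above, the ax grid returned by plt.subplots can be indexed by row and column, and plt.subplots_adjust applies the margin values just defined (the plotted data below is invented for illustration):

import numpy as np

# Draw a simple line into each panel of the 2 x 3 grid defined above
for row in range(2):
    for col in range(3):
        ax[row][col].plot(np.random.randn(50).cumsum())
        ax[row][col].set_title("panel {},{}".format(row, col))

# Apply the margin settings defined above
plt.subplots_adjust(left=left, right=right)
plt.show()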
@bgweber
bgweber / pandasUDF.py
Last active May 19, 2022 09:19
Distributing Feature Generation with Pandas UDFs
import featuretools as ft
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def apply_feature_generation(pandasInputDF):
    # create Entity Set representation
    es = ft.EntitySet(id="events")
    es = es.entity_from_dataframe(entity_id="events", dataframe=pandasInputDF)
    es = es.normalize_entity(base_entity_id="events", new_entity_id="users", index="user_id")
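
The preview stops before the UDF returns or is applied; under the Spark 2.x grouped-map API implied by the decorator, the function would end by returning a pandas DataFrame matching schema and would be invoked per group with groupBy().apply(). A sketch (the DataFrame name sparkDF and grouping column user_id are assumptions):

# Sketch only -- not part of the original gist.
featureDF = sparkDF.groupBy("user_id").apply(apply_feature_generation)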
@meiamsome
meiamsome / hn_search.js
Last active May 4, 2022 13:23 — forked from kristopolous/hn_seach.js
hn job query search
/* Hacker News Search Script
*
* Original Script by Kristopolous:
* https://gist.github.com/kristopolous/19260ae54967c2219da8
*
* Usage:
* First, copy the script into your browser's console whilst on the Hacker News
* jobs page. Then, you can use the query function to filter the results.
*
* For example,
@ines
ines / spacy_token_attrs.py
Last active May 2, 2021 04:32
spaCy v2.0 example: Get and set token text character spans as custom attribute extensions
# coding: utf8
from __future__ import unicode_literals
from spacy.lang.en import English
from spacy.tokens import Token
# Example of using spaCy v2.0's custom attribute extensions to get and set
# character spans of a token text as token attributes.
# Discussion: https://twitter.com/DavidWell/status/920245066404450304
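
The preview ends before the extension is registered; a minimal sketch of the technique the description refers to, using spaCy v2's Token.set_extension with a getter (the attribute name char_span is an assumption, not the gist's own naming):

# Register a getter that exposes each token's character span as
# (start_char, end_char). Attribute name chosen for illustration.
Token.set_extension("char_span", getter=lambda token: (token.idx, token.idx + len(token.text)))

nlp = English()
doc = nlp("Hello world")
print([token._.char_span for token in doc])  # [(0, 5), (6, 11)]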
@dyerrington
dyerrington / readme.md
Last active January 9, 2019 20:59
This is a very basic data generator for testing recommender systems. A future version may simulate the actual sparseness of ratings data with a simple bootstrap function, but for now a numpy random generator does the job.

RecData

To use this snippet, install faker:

pip install faker
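
The generator class itself is not shown in this preview; purely as an illustration of the approach described above (faker for user names, a numpy random generator for ratings), a minimal sketch might look like the following, where every name and parameter is an assumption:

import numpy as np
from faker import Faker

fake = Faker()
rng = np.random.default_rng(42)

n_users, n_items = 100, 20
users = [fake.name() for _ in range(n_users)]
items = ["item_{}".format(i) for i in range(n_items)]

# Dense integer ratings on a 1-5 scale; as noted above, real ratings data
# would be far sparser than this.
ratings = rng.integers(1, 6, size=(n_users, n_items))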

Parse Jupyter

This is a basic class that makes it convenient to parse notebooks. For a personal project I built a larger version of this that was used for clustering documents to create semantic indices linking related content together. You can use this to parse notebooks for things like NLP or preprocessing.

Usage

parser = ParseJupyter("./Untitled.ipynb")
parser.get_cells(source_only=True, source_as_string=True)
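
The class itself is not included in this preview; a minimal sketch that satisfies the usage above, reading the notebook's JSON and returning cell sources, could look like this (implementation details are assumptions, not the original class):

import json

class ParseJupyter:
    """Minimal sketch of a notebook parser matching the usage above."""

    def __init__(self, path):
        with open(path, "r", encoding="utf-8") as f:
            self.notebook = json.load(f)

    def get_cells(self, source_only=False, source_as_string=False):
        cells = self.notebook.get("cells", [])
        if not source_only:
            return cells
        sources = [cell.get("source", []) for cell in cells]
        if source_as_string:
            return ["".join(source) for source in sources]
        return sources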
@marcelcaraciolo
marcelcaraciolo / spearman.py
Created September 12, 2011 02:21
spearman coefficient
import datetime
import sys
import random

def _rank_dists(ranks1, ranks2):
    """Finds the difference between the values in ranks1 and ranks2 for keys
    present in both dicts. If the arguments are not dicts, they are converted
    from (key, rank) sequences.
    """