David Yerrington (dyerrington)

dyerrington / eth_cheatsheet.md
Last active January 3, 2018 20:20
Helpful eth snippets. I will update these as my understanding of Ethereum evolves.

Attach to console

After running geth for a while, I found that I could no longer attach to a running session. The message I encountered was Fatal: Unable to attach to remote geth: Timed out waiting for pipe '\\.\pipe\geth.ipc' to come available. I was able to resolve this by attaching to the RPC (?) endpoint.

geth attach http://127.0.0.1:8545

Check balance

After logging into the console (attaching to a running session), get the current amount of Ether:
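The balance command itself is cut off in this preview; a common way to check it from the attached geth JavaScript console (my note, not necessarily the gist's exact snippet) is to read the wei balance of the first account and convert it to Ether:

web3.fromWei(eth.getBalance(eth.accounts[0]), "ether")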

dyerrington / auto_reload.py
Created December 6, 2017 04:19
This Python snippet is a basic pattern to automatically reload the contents of a file after it has changed. You can also adapt this example to extend the "watching" list to include all files and directories within the current context if you want to trigger a reload on any file modification.
import os, sys
from time import sleep
## We could update this to all files in all subdirectories
watching = [__file__]
watched_mtimes = [(f, os.path.getmtime(f)) for f in watching]
while True:
    print("Idle...")
    sleep(2) ## So not to kill our CPU cycles
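The preview cuts off before the reload logic itself. Below is a minimal sketch of the complete pattern, assuming the intent is to re-exec the script whenever a watched file's mtime changes (my completion, not the original gist's code):

import os, sys
from time import sleep

watching = [__file__]
watched_mtimes = {f: os.path.getmtime(f) for f in watching}

while True:
    print("Idle...")
    sleep(2)  ## so as not to burn CPU cycles
    for f in watching:
        mtime = os.path.getmtime(f)
        if mtime != watched_mtimes[f]:
            print("Change detected in %s, reloading..." % f)
            ## Replace the current process with a fresh interpreter running this script.
            os.execv(sys.executable, [sys.executable] + sys.argv)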
dyerrington / service.py
Created November 18, 2017 01:04
This service.py file demonstrates a variety of possible solutions. The "boosted" solution loads the model at the beginning of the file but doesn't predict probabilities.
from flask import Flask, jsonify, request
from sklearn.linear_model import LogisticRegression
## additional imports
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.externals import joblib
from sklearn.datasets import load_iris
import numpy as np
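Only the imports survive in this preview. A rough sketch of what the "boosted" solution described above might look like, under my own assumptions (fit once at module load, return a class label rather than probabilities); the route name, parameter names, and defaults are guesses, not the gist's exact code:

from flask import Flask, jsonify, request
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import load_iris

app = Flask(__name__)

## The model is fit once when the module loads, not on every request.
iris = load_iris()
boosted_model = AdaBoostClassifier().fit(iris['data'], iris['target'])

@app.route('/boosted')
def boosted():
    features = [[
        float(request.args.get("sepal_len", 0)),
        float(request.args.get("sepal_width", 0)),
        float(request.args.get("petal_lengh", 0)),
        float(request.args.get("petal_width", 0)),
    ]]
    ## Returns a class label only -- no predict_proba call here.
    prediction = boosted_model.predict(features)[0]
    return jsonify({"species": iris['target_names'][prediction]})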
dyerrington / inline_model_flask_route_solution.py
Created November 18, 2017 01:02
This particular solution fits a model inline on each request to the endpoint. This is not ideal; the model should be fit (or loaded) outside the Flask route method before prediction. Ideally, the model would be persisted and loaded through a serialization library like joblib.
@app.route('/predict-iris')
def predict_iris():
    # Load data
    iris = load_iris()
    # print("Loaded iris", iris)
    # Fit our model
    logreg = LogisticRegression()

@app.route('/boosted')
def boosted():
    input_sepal_len = request.args.get("sepal_len")
    input_sepal_width = request.args.get("sepal_width")
    input_petal_lengh = request.args.get("petal_lengh")
    input_petal_width = request.args.get("petal_width")
    if input_sepal_len:
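As the description notes, the better pattern is to fit the model once, persist it, and load it outside the route. Here is a minimal sketch of that approach with joblib, under my own assumptions about file names and sample input (the sklearn.externals import matches this 2017-era code; in current scikit-learn you would use `import joblib` instead):

from flask import Flask, jsonify
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib

## One-time step, ideally run offline rather than inside the web app:
iris = load_iris()
joblib.dump(LogisticRegression().fit(iris['data'], iris['target']), 'iris_logreg.pkl')

app = Flask(__name__)
model = joblib.load('iris_logreg.pkl')  ## loaded once at startup, reused by every request

@app.route('/predict-iris')
def predict_iris():
    ## Hard-coded sample input for illustration; in practice these come from request.args.
    sample = [[5.1, 3.5, 1.4, 0.2]]
    probabilities = model.predict_proba(sample)[0]
    return jsonify(dict(zip(iris['target_names'].tolist(), probabilities.tolist())))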
dyerrington / ian_flask_setosa.py
Created November 18, 2017 00:58
Flask route solution from a past student "Ian" that works with a loaded model based on a DecisionTreeClassifier instance.
@app.route('/ian')
def ian():
    from sklearn.datasets import load_iris
    import pandas as pd
    data = load_iris()
    df = pd.DataFrame(data['data'], columns=['sepal_len', 'sepal_width', 'petal_lengh', 'petal_width'])
    y = data['target']
dyerrington / k_neighbors_rec.py
Created November 13, 2017 08:40
Extremely terse example using an out-of-context last.fm artist dataset. If you're interested in the full example, message me directly or comment here and I will update this.
import pandas as pd
from sklearn.neighbors import NearestNeighbors

## artist_genre and artist_sim come from the full (unpublished) lastfm prep steps
nn = NearestNeighbors(n_neighbors=150, radius=10)
model = nn.fit(artist_genre)
A = model.radius_neighbors_graph(artist_genre)
artist_neighbors = pd.DataFrame(A.toarray(), columns=artist_sim.index, index=artist_sim.index)
artist_neighbors['2Pac'].sort_values(ascending=False).head(150)
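Since artist_genre and artist_sim come from unpublished prep steps, here is a self-contained sketch of what such an input might look like: a one-hot artist-by-genre matrix built with pd.crosstab. The artists, genres, and parameter values below are made up purely for illustration.

import pandas as pd
from sklearn.neighbors import NearestNeighbors

## Hypothetical tag data standing in for the lastfm prep steps (not the original dataset).
tags = pd.DataFrame({
    "artist": ["2Pac", "2Pac", "Nas", "Nas", "Radiohead", "Radiohead", "Portishead"],
    "genre":  ["hip-hop", "west-coast", "hip-hop", "east-coast", "rock", "electronic", "electronic"],
})

## One-hot artist x genre matrix, indexed by artist name.
artist_genre = pd.crosstab(tags["artist"], tags["genre"])

## Same pattern as the gist, scaled down to the toy data.
nn = NearestNeighbors(n_neighbors=3, radius=10)
model = nn.fit(artist_genre)
A = model.radius_neighbors_graph(artist_genre)

artist_neighbors = pd.DataFrame(A.toarray(), columns=artist_genre.index, index=artist_genre.index)
print(artist_neighbors["2Pac"].sort_values(ascending=False))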
dyerrington / lasso_mse_paths.py
Created November 10, 2017 04:19
Just a basic example of loading a baseline Lasso model without validation. Putting this here for students who just need to load up the Boston Housing dataset and start building a basic regression pipeline.
import pandas as pd
from sklearn.linear_model import ElasticNetCV
from sklearn.datasets import load_boston

data = load_boston()
df = pd.DataFrame(data['data'], columns=data['feature_names'])
X, y = df, data['target']
elastic = ElasticNetCV(cv=5)
model = elastic.fit(X, y)
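Given the lasso_mse_paths.py filename, a natural next step is to look at the cross-validated MSE along the regularization path. Here is a small sketch under my own assumptions (LassoCV rather than the ElasticNetCV shown above; load_boston was still available in scikit-learn of this era):

import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.datasets import load_boston

data = load_boston()
X = pd.DataFrame(data['data'], columns=data['feature_names'])
y = data['target']

## LassoCV fits the full regularization path with k-fold cross-validation.
lasso = LassoCV(cv=5).fit(X, y)

## mse_path_ has shape (n_alphas, n_folds); average across folds for one MSE per alpha.
mse_paths = pd.DataFrame(lasso.mse_path_, index=lasso.alphas_)
print(mse_paths.mean(axis=1).head())
print("Chosen alpha:", lasso.alpha_)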
dyerrington / parse_scores.py
Created November 9, 2017 23:45
Copied and pasted from a student project: paste in review text and find the extracted scores in a convenient location.
import ipywidgets as widgets
from IPython.display import clear_output, display
import re
raw_review = widgets.Textarea(
value='',
placeholder='Paste review text here',
layout=widgets.Layout(width='50%', height='80px'),
)
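The preview ends before the parsing step; below is a minimal sketch of how the pasted text might be turned into scores. The button, the callback, and the score pattern (e.g. "8/10" or "4 stars") are my assumptions, not the original gist's code.

import re
import ipywidgets as widgets
from IPython.display import clear_output, display

raw_review = widgets.Textarea(
    value='',
    placeholder='Paste review text here',
    layout=widgets.Layout(width='50%', height='80px'),
)
parse_button = widgets.Button(description="Parse scores")
output = widgets.Output()

## Assumed score formats: "8/10", "4.5 stars" -- adjust the pattern to the real reviews.
score_pattern = re.compile(r"(\d+(?:\.\d+)?)\s*(?:/\s*10|stars?)", re.IGNORECASE)

def on_parse(_button):
    with output:
        clear_output()
        scores = [float(match) for match in score_pattern.findall(raw_review.value)]
        print("Extracted scores:", scores)

parse_button.on_click(on_parse)
display(raw_review, parse_button, output)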