Nick Walsh nmwalsh

import boto3
# instantiate local comprehend client
comprehend_client = boto3.client('comprehend')
text = "A classic love heart emoji, used for expressions of love. Displayed in various shades of red on most platforms. A similar emoji exists for the heart suit in a deck of playing cards. On Snapchat, this emoji displays next to a friend when you have been #1 BFs with each other for two consecutive weeks."
# Invoke the detect_key_phrases endpoint and get the response
comprehend_response = comprehend_client.detect_key_phrases(Text=text, LanguageCode='en')
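The returned dict's 'KeyPhrases' list holds one entry per detected phrase, each with its text and a confidence score; a minimal way to inspect it (the print formatting here is just illustrative) is:

# Print each detected key phrase with its confidence score
for phrase in comprehend_response['KeyPhrases']:
    print(phrase['Text'], round(phrase['Score'], 3))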
nmwalsh / charge_credit_card.py
Boilerplate credit card charge function using the Authorize.Net python SDK
from authorizenet import apicontractsv1
from authorizenet.apicontrollers import *
from decimal import *
import credentials # importing our credentials from credentials.py
# Authentication steps using Authorize.Net API credentials
merchantAuth = apicontractsv1.merchantAuthenticationType()
merchantAuth.name = credentials.api_login_name
merchantAuth.transactionKey = credentials.transaction_key
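The preview stops at authentication; a minimal continuation that actually issues a charge (the card number, expiration date, amount, and variable names below are placeholder values, not from the gist) would typically build a transaction request and run it through the SDK's createTransactionController:

# Build the payment payload (test card number and expiration are placeholders)
creditCard = apicontractsv1.creditCardType()
creditCard.cardNumber = "4111111111111111"
creditCard.expirationDate = "2035-12"
payment = apicontractsv1.paymentType()
payment.creditCard = creditCard

# Assemble an auth-and-capture transaction request for an illustrative amount
transactionRequest = apicontractsv1.transactionRequestType()
transactionRequest.transactionType = "authCaptureTransaction"
transactionRequest.amount = Decimal('10.00')
transactionRequest.payment = payment

# Wrap the request with the merchant authentication and execute the charge
createRequest = apicontractsv1.createTransactionRequest()
createRequest.merchantAuthentication = merchantAuth
createRequest.transactionRequest = transactionRequest
controller = createTransactionController(createRequest)
controller.execute()
response = controller.getResponse()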
nmwalsh / select a language version
(1) py27
(2) py35
Please select one of the above environment language (e.g. py27):
nmwalsh / select an environment
(1) data-analytics : has libraries such as xgboost, lightgbm, sklearn etc.
(2) mxnet : has libraries for mxnet(v1.1.0) along with sklearn, opencv etc.
(3) caffe2 : has libraries for caffe2(v0.8.0) along with sklearn, opencv etc.
(4) keras-tensorflow : has libraries for keras(v2.1.6) and tensorflow(v1.9.0) along with sklearn, opencv etc.
(5) kaggle : has the environment provided by kaggle
(6) pytorch : has libraries for pytorch(v0.4.0) along with sklearn, opencv etc.
(7) python-base : has base python image with no libraries installed
(8) r-base : has base R image with no libraries installed. Use this environment for rstudio workspace
Please select one of the above environments (e.g. 1 or data-analytics):
nmwalsh / select system drivers
(1) gpu
(2) cpu
Please select one of the above environment type (e.g. 1 or gpu):
if [[ `uname` == 'Linux' ]]; then
echo 'Removing old Torch files from your Linux...'
# Removing folders
sudo rm -rf /usr/local/lib/{luarocks/,lua/,torch/,torchrocks/}
sudo rm -rf /usr/local/share/{torch,cmake/torch/,lua}
sudo rm -rf /usr/local/etc/{luarocks/,torchrocks/}
sudo rm -rf /usr/local/include/{torch,TH,THC,lauxlib.h,lua.h,lua.hpp,luaT.h,luaconf.h,luajit.h,lualib.h,qtlua}
sudo rm -rf ~/.luarocks
sudo rm -rf ~/.cache/luarocks*
# Removing files
import falcon

# falcon.API instances are callable WSGI apps. Never change this.
app = falcon.API()
# Resources are represented by long-lived class instances. Each Python class becomes a different "URL directory"
info = InfoResource()
predicts = PredictsResource()
# info and predicts handle all requests to the '/info' and '/predicts' URL paths, respectively
app.add_route('/info', info)
app.add_route('/predicts', predicts)
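Since app is a plain WSGI application, it can be served with any WSGI server; assuming the file is saved as falcon_gateway.py (as the data_handler.py comments below suggest), one way to run it locally is:

# Serve the Falcon app with gunicorn (binds to 127.0.0.1:8000 by default)
gunicorn falcon_gateway:app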
class PredictsResource(object):
def on_get(self, req, resp):
"""Handles GET requests"""
resp.status = falcon.HTTP_200 # This is the default status
resp.body = ('\nThis is the PREDICT endpoint. \n'
'Both requests and responses are served in JSON. \n'
'\n'
'INPUT: Flower Lengths (in cm) \n'
' "sepal_length":[num] \n'
' "sepal_width": [num] \n'
class InfoResource(object):
def on_get(self, req, resp):
"""Handles GET requests"""
resp.status = falcon.HTTP_200 # This is the default status
resp.body = ('\nThis is an API for a deployed Datmo model, '
'where it takes flower lengths as input and returns the predicted Iris species.\n'
'To learn more about this model, send a GET request to the /predicts endpoint or visit the repository online at: \n\n'
'https://datmo.com/nmwalsh/falcon-api-model\n\n')
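With the server running, this endpoint answers a plain GET request; the host and port below assume a local gunicorn default and are illustrative only:

# Query the info endpoint of the locally served model
curl http://localhost:8000/info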
# data_handler.py
#
# Argument handler that does four things:
#
# 1. Decode: deserialize the raw input from the API POST request received in `falcon_gateway.py`
# 2. Preprocess: convert the input data into the form required by the model, as specified in `predict.py`
# 3. Postprocess: convert the model's prediction (from `predict.py`) into a form that can be serialized for the API response
# 4. Encode: serialize the postprocessed data into valid JSON for the API response, and pass it back to `falcon_gateway.py`
import json
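The handler functions themselves are not shown in this preview; a minimal sketch of the decode and encode steps (the function names and arguments here are illustrative assumptions, not the gist's actual code) could look like:

def decode(raw_body):
    # Deserialize the raw request bytes passed in from falcon_gateway.py into a Python dict
    return json.loads(raw_body.decode('utf-8'))

def encode(payload):
    # Serialize the postprocessed prediction into a JSON string for the API response
    return json.dumps(payload)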