Yin Huang changebio

  • Columbia University
  • New York
changebio / A_simple_Django_API_for_ML_model
Last active April 4, 2019 08:29
A simple Django API for ML model
Replace views.py and urls.py in the api folder.
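The gist body itself isn't shown in this listing, so as a hedged sketch, here is the kind of request-handling logic such a views.py endpoint typically wraps: parse a JSON request body, feed it to the loaded model, and return a JSON result. All names here are hypothetical, and a trivial stand-in replaces the pickled model.

```python
import json

# Hypothetical stand-in for a model loaded from model.pkl:
# here the "prediction" is just the sum of the feature values.
def predict(features):
    return sum(features.values())

def handle_predict_request(body):
    """The logic a Django view could wrap: parse a JSON request
    body, run the model, and return a JSON response body."""
    payload = json.loads(body)
    result = predict(payload)
    return json.dumps({"prediction": result})
```

In a real views.py this function body would sit inside a view returning a JsonResponse, with the route to it registered in urls.py.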
changebio / Readme
Created April 4, 2019 07:09
A simple Flask API of the model of Machine Learning
The code is based on: https://www.datacamp.com/community/tutorials/machine-learning-models-api-python
First, in the command line, run (this produces model.pkl and model_columns.pkl):
python trainmodel.py
Then, run:
python api.py
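Per the linked DataCamp tutorial, trainmodel.py pickles the fitted model and its training columns so that api.py can reload them at startup. A minimal stdlib-only sketch of that persistence pattern, with a plain dict standing in for the scikit-learn model (the file names match the gist; everything else is illustrative):

```python
import pickle

# Stand-in for a fitted scikit-learn model (the tutorial trains a
# LogisticRegression); any picklable object follows the same pattern.
model = {"coef": [0.4, -1.2], "intercept": 0.1}
model_columns = ["age", "income"]

# trainmodel.py side: persist both artifacts to disk.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
with open("model_columns.pkl", "wb") as f:
    pickle.dump(model_columns, f)

# api.py side: reload them at startup, before serving predictions.
with open("model.pkl", "rb") as f:
    loaded_model = pickle.load(f)
with open("model_columns.pkl", "rb") as f:
    loaded_columns = pickle.load(f)
```

Keeping the column list alongside the model matters because incoming requests must be reindexed to the exact columns the model was trained on.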
changebio / multiple-push-urls.md
Created May 2, 2018 13:24 — forked from bjmiller121/multiple-push-urls.md
Add multiple push URLs to a single git remote

Sometimes you need to keep two upstreams in sync with each other. For example, you might need to push both to your testing environment and to your GitHub repo at the same time. To do this simultaneously with one git command, here's a little trick: add multiple push URLs to a single remote.

Once you have a remote set up for one of your upstreams, run these commands:

git remote set-url --add --push [remote] [original repo URL]
git remote set-url --add --push [remote] [second repo URL]

Once set up, git remote -v should show two (push) URLs and one (fetch) URL. Something like this:

[remote]  [original repo URL] (fetch)
[remote]  [original repo URL] (push)
[remote]  [second repo URL] (push)

changebio / finetune.py
Created March 16, 2018 08:39 — forked from panovr/finetune.py
Fine-tuning pre-trained models with PyTorch
import argparse
import os
import shutil
import time
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim
changebio / data_loader.py
Created March 15, 2018 11:49 — forked from kevinzakka/data_loader.py
Train, Validation and Test Split for torchvision Datasets
"""
Create train, valid, test iterators for CIFAR-10 [1].
Easily extended to MNIST, CIFAR-100 and Imagenet.
[1]: https://discuss.pytorch.org/t/feedback-on-pytorch-for-kaggle-competitions/2252/4
"""
import torch
import numpy as np
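The core of this gist is bookkeeping over dataset indices: shuffle them, split off a validation slice, and hand each part to a torch.utils.data.SubsetRandomSampler. A sketch of just that index split using numpy (the function and parameter names here are assumptions, not the gist's own):

```python
import numpy as np

def split_indices(num_samples, valid_fraction=0.1, seed=0):
    """Shuffle indices 0..num_samples-1 and split off a validation
    slice, mirroring the train/valid index split the gist feeds to
    torch.utils.data.SubsetRandomSampler."""
    indices = np.arange(num_samples)
    rng = np.random.RandomState(seed)
    rng.shuffle(indices)
    split = int(np.floor(valid_fraction * num_samples))
    valid_idx, train_idx = indices[:split], indices[split:]
    return train_idx, valid_idx
```

For CIFAR-10's 50,000 training images with valid_fraction=0.1, this yields 45,000 training and 5,000 validation indices with no overlap.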
This post provides an overview of performing diagnostic and performance evaluation on logistic regression models in R. After training a statistical model, it's important to understand how well that model did with regard to its accuracy and predictive power. The following content provides the background and theory needed to ensure that the right techniques are used for evaluating logistic regression models in R.
Logistic Regression Example
We will use the GermanCredit dataset from the caret package for this example. It contains 62 characteristics and 1000 observations, with a target variable (Class) that is already defined. The response variable is coded 0 for a bad consumer and 1 for a good one. It is always recommended to look at the coding of the response variable to ensure that it is a factor variable coded accurately with a 0/1 scheme, or with two factor levels in the right order. The first step is to partition the data into training and testing sets.
```
library(caret)
data(GermanCredit)
Train <- cr