Zhuangfang Yi 依庄防 Geoyi

@Geoyi
Geoyi / figures.md
Last active February 15, 2021 17:28

Figures: COVID-19 patient flow diagram, COVID patient flow rates, JHU COVID-19 confirmed cases, and COVID-19 patient flow PPE needs.
'''This script goes along the blog post
"Building powerful image classification models using very little data"
from blog.keras.io.
It uses data that can be downloaded at:
https://www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
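The folder layout described above can be created with a short script (a sketch; the `data/` root and the `cats`/`dogs` class names follow the snippet's own convention):

```python
from pathlib import Path

# Build the layout described above:
# data/{train,validation}/{cats,dogs}
root = Path("data")
for split in ("train", "validation"):
    for label in ("cats", "dogs"):
        (root / split / label).mkdir(parents=True, exist_ok=True)

# List the class folders that were created
print(sorted(p.as_posix() for p in root.glob("*/*")))
```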
@Geoyi
Geoyi / git-feature-workflow.md
Created December 4, 2018 16:45 — forked from blackfalcon/git-feature-workflow.md
Git basics - a general workflow

There are many Git workflows out there; I strongly suggest also reading the atlassian.com [Git Workflow][article] article, as it covers more detail than is presented here.

The two prevailing workflows are [Gitflow][gitflow] and [feature branches][feature]. IMHO, as more of a subscriber to continuous integration, I find the feature branch workflow better suited.

Bash at the command line leaves a bit to be desired when it comes to awareness of Git state. I suggest following these instructions on [setting up Git Bash autocompletion][git-auto].

Basic branching

When working with a centralized workflow the concepts are simple: master represents the official history and is always deployable. With each new scope of work, aka feature, the developer creates a new branch. For clarity, make sure to use descriptive names like transaction-fail-message or github-oauth for your branches.
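In practice, the branching step above looks like this (a sketch; the branch name is an example, and a throwaway repository is created so the commands run anywhere):

```shell
set -e
# throwaway repo so the commands below run anywhere
tmp=$(mktemp -d) && cd "$tmp" && git init -q

# create and switch to a descriptively named feature branch
git checkout -q -b transaction-fail-message

# confirm the current branch
git branch --show-current   # prints "transaction-fail-message"
```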

| Model       | GPUs | GPU Memory | vCPUs | Main Memory | EBS Bandwidth | Price   |
|-------------|------|------------|-------|-------------|---------------|---------|
| g3.4xlarge  | 1    | 8 GiB      | 16    | 122 GiB     | 3.5 Gbps      | $1.14   |
| g3.8xlarge  | 2    | 16 GiB     | 32    | 244 GiB     | 7 Gbps        | $2.28   |
| g3.16xlarge | 4    | 32 GiB     | 64    | 488 GiB     | 14 Gbps       | $4.56   |
| p2.xlarge   | 1    | 12 GiB     | 4     | 61 GiB      | High          | $0.900  |
| p2.8xlarge  | 8    | 96 GiB     | 32    | 488 GiB     | 10 Gbps       | $7.200  |
| p2.16xlarge | 16   | 192 GiB    | 64    | 732 GiB     | 20 Gbps       | $14.400 |
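A quick way to compare these instances is price per GPU-hour, computed from the hourly prices and GPU counts in the table above:

```python
# Hourly price (USD) and GPU count, taken from the table above
instances = {
    "g3.4xlarge": (1.14, 1),
    "g3.8xlarge": (2.28, 2),
    "g3.16xlarge": (4.56, 4),
    "p2.xlarge": (0.90, 1),
    "p2.8xlarge": (7.20, 8),
    "p2.16xlarge": (14.40, 16),
}

for name, (price, gpus) in instances.items():
    print(f"{name}: ${price / gpus:.2f}/GPU-hour")
```

All g3 sizes work out to the same $1.14 per GPU-hour, and all p2 sizes to $0.90, so within a family the larger instances buy scale rather than a discount.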
@Geoyi
Geoyi / np_to_tfrecords.py
Created January 26, 2018 12:27 — forked from swyoon/np_to_tfrecords.py
From numpy ndarray to tfrecords
import numpy as np
import tensorflow as tf
__author__ = "Sangwoong Yoon"
def np_to_tfrecords(X, Y, file_path_prefix, verbose=True):
"""
Converts a Numpy array (or two Numpy arrays) into a tfrecord file.
For supervised learning, feed training inputs to X and training labels to Y.
For unsupervised learning, only feed training inputs to X, and feed None to Y.
@Geoyi
Geoyi / python_environment_setup.md
Created January 20, 2018 19:01 — forked from wronk/python_environment_setup.md
Setting up your python development environment (with pyenv, virtualenv, and virtualenvwrapper)

Overview

When you're working on multiple coding projects, you might want a couple of different versions of Python and/or modules installed. That way you can keep each project in its own sandbox instead of trying to juggle multiple projects (each with different dependencies) on your system's version of Python. This intermediate guide covers one way to handle multiple Python versions and Python environments on your own (i.e., without a package manager like conda). See the Using the workflow section to view the end result.

Use cases

  1. Working on 2+ projects that each have their own dependencies; e.g., a Python 2.7 project and a Python 3.6 project, or developing a module that needs to work across multiple versions of Python. It's not reasonable to uninstall/reinstall modules every time you want to switch environments.
  2. If you want to execute code on the cloud, you can set up a Python environment that mirrors the relevant
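The isolation idea behind virtualenv is also available in the standard library's venv module; a minimal sketch (the directory name is an example, and pip is skipped so it runs even where ensurepip is unavailable):

```python
import venv
from pathlib import Path

# Create an isolated environment with its own interpreter link
# and site-packages, separate from the system Python.
env_dir = Path("demo_env")
venv.EnvBuilder(with_pip=False).create(env_dir)

# Each environment carries a pyvenv.cfg pointing back at the base Python.
print((env_dir / "pyvenv.cfg").read_text().splitlines()[0])
```

Activating the environment (`source demo_env/bin/activate` on Linux/macOS) then puts its interpreter first on your PATH, which is exactly the sandboxing this guide builds with pyenv and virtualenvwrapper.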

Train a model with MXNet SageMaker

Amazon SageMaker is a new service from Amazon Web Services (AWS) that enables users to build, train, deploy, and scale up machine learning models. It is pretty straightforward to use. Here are a few steps to follow if you are interested in using it to train an image classifier with MXNet:

  • Go to your AWS console;
  • Log in to your account, and go to the SageMaker home page;
  • Create a notebook instance. You will have three instance options, ml.t2.medium, ml.m4.xlarge and ml.p2.xlarge, to choose from. We recommend using the p2 machine (a GPU machine) to train this image classifier.

Once you have your p2 instance notebook set up, congratulations, you are now ready to train a building classifier. Specifically, you are going to learn h

@Geoyi
Geoyi / install virtualenv ubuntu 16.04.md
Created September 16, 2017 12:19 — forked from frfahim/install virtualenv ubuntu 16.04.md
How to install virtual environment on ubuntu 16.04

How to install virtualenv:

Install pip first

sudo apt-get install python3-pip

Then install virtualenv using pip3

sudo pip3 install virtualenv 
@Geoyi
Geoyi / iccv2015.md
Created April 17, 2017 15:43 — forked from myungsub/iccv2015.md
upload candidates to awesome-deep-vision

Vision & Language

  • Ask Your Neurons: A Neural-Based Approach to Answering Questions About Images

    • Mateusz Malinowski, Marcus Rohrbach, Mario Fritz
  • Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books

    • Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler
  • Learning Query and Image Similarities With Ranking Canonical Correlation Analysis

    • Wah Ngo