- Team scrum updates: the fifth update
'''This script goes along the blog post
"Building powerful image classification models using very little data"
from blog.keras.io.
It uses data that can be downloaded at:
https://www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
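
For context on how a layout like this gets consumed, here is a minimal sketch using Keras's ImageDataGenerator; the image size and batch size are illustrative values, not necessarily the ones from the blog post:

```python
from keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values to [0, 1].
train_datagen = ImageDataGenerator(rescale=1. / 255)
val_datagen = ImageDataGenerator(rescale=1. / 255)

# Each subfolder of data/train (cats/, dogs/) becomes one class.
train_generator = train_datagen.flow_from_directory(
    'data/train',
    target_size=(150, 150),   # resize every image
    batch_size=16,
    class_mode='binary')      # two classes -> binary labels

validation_generator = val_datagen.flow_from_directory(
    'data/validation',
    target_size=(150, 150),
    batch_size=16,
    class_mode='binary')
```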
There are many Git workflows out there; I strongly suggest also reading the atlassian.com [Git Workflow][article] article, as it goes into more detail than is presented here.
The two prevailing workflows are [Gitflow][gitflow] and [feature branches][feature]. IMHO, being more of a subscriber to continuous integration, I feel that the feature branch workflow is better suited.
Bash at the command line leaves a bit to be desired when it comes to awareness of Git state, so I would suggest following these instructions on [setting up Git Bash autocompletion][git-auto]; a quick sketch of the usual approach follows.
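
As a rough illustration of what those instructions boil down to (the script location and shell config file here are assumptions; follow the linked article for specifics):

```sh
# Download Git's official Bash completion script and load it on shell startup.
# Assumes curl and ~/.bashrc; adjust for your shell setup.
curl -o ~/.git-completion.bash \
  https://raw.githubusercontent.com/git/git/master/contrib/completion/git-completion.bash
echo 'source ~/.git-completion.bash' >> ~/.bashrc
source ~/.bashrc
```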
When working with a centralized workflow the concepts are simple: master represents the official history and is always deployable. With each new scope of work, aka feature, the developer creates a new branch. For clarity, make sure to use descriptive names like `transaction-fail-message` or `github-oauth` for your branches. A typical command sequence is sketched below.
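
As a rough sketch of that cycle (the branch name comes from the example above; the remote name origin and the commit message are placeholders):

```sh
# Branch off the latest master for a new feature.
git checkout master
git pull origin master
git checkout -b transaction-fail-message

# Commit work and publish the branch for review/merge.
git add .
git commit -m "Show a helpful message when a transaction fails"
git push -u origin transaction-fail-message
```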
Model | GPUs | GPU Memory | vCPUs | Main Memory | EBS Bandwidth | Price (per hour) |
---|---|---|---|---|---|---|
g3.4xlarge | 1 | 8 GiB | 16 | 122 GiB | 3.5 Gbps | $1.14 |
g3.8xlarge | 2 | 16 GiB | 32 | 244 GiB | 7 Gbps | $2.28 |
g3.16xlarge | 4 | 32 GiB | 64 | 488 GiB | 14 Gbps | $4.56 |
p2.xlarge | 1 | 12 GiB | 4 | 61 GiB | High | $0.900 |
p2.8xlarge | 8 | 96 GiB | 32 | 488 GiB | 10 Gbps | $7.200 |
p2.16xlarge | 16 | 192 GiB | 64 | 732 GiB | 20 Gbps | $14.400 |
import numpy as np
import tensorflow as tf

__author__ = "Sangwoong Yoon"

def np_to_tfrecords(X, Y, file_path_prefix, verbose=True):
    """
    Converts a Numpy array (or two Numpy arrays) into a tfrecord file.
    For supervised learning, feed training inputs to X and training labels to Y.
    For unsupervised learning, only feed training inputs to X, and feed None to Y.
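
The excerpt above cuts off before the function body. As a rough, minimal sketch of the same idea (not the original author's implementation; it assumes TensorFlow 2.x's tf.io API and float-valued rows), the conversion boils down to writing one tf.train.Example per row:

```python
import numpy as np
import tensorflow as tf

def np_to_tfrecords_minimal(X, Y, file_path_prefix):
    """Write each row of X (and optionally Y) as one tf.train.Example."""
    def _float_feature(row):
        return tf.train.Feature(float_list=tf.train.FloatList(value=row))

    with tf.io.TFRecordWriter(file_path_prefix + '.tfrecords') as writer:
        for i in range(X.shape[0]):
            feature = {'X': _float_feature(X[i])}
            if Y is not None:
                feature['Y'] = _float_feature(Y[i])
            example = tf.train.Example(features=tf.train.Features(feature=feature))
            writer.write(example.SerializeToString())

# Example usage with dummy data.
X = np.random.rand(10, 5).astype(np.float64)
Y = np.random.rand(10, 1).astype(np.float64)
np_to_tfrecords_minimal(X, Y, 'dummy')
```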
When you're working on multiple coding projects, you might want a couple of different versions of Python and/or modules installed. That way you can keep each project in its own sandbox instead of trying to juggle multiple projects (each with different dependencies) on your system's version of Python. This intermediate guide covers one way to handle multiple Python versions and Python environments on your own (i.e., without a package manager like conda). See the Using the workflow section to view the end result, and the brief command sketch after the list below. This kind of setup helps when you are:

- Working on 2+ projects that each have their own dependencies; e.g., a Python 2.7 project and a Python 3.6 project, or developing a module that needs to work across multiple versions of Python. It's not reasonable to uninstall/reinstall modules every time you want to switch environments.
- Executing code on the cloud; you can set up a local Python environment that mirrors the relevant cloud environment.
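
The commands themselves aren't shown in this excerpt; as a rough sketch of one common conda-free setup (pyenv plus the pyenv-virtualenv plugin, which is an assumption here, not necessarily the guide's exact tooling), it looks something like this:

```sh
# Install two interpreter versions side by side (assumes pyenv and
# pyenv-virtualenv are already installed; names/versions are illustrative).
pyenv install 2.7.18
pyenv install 3.6.15

# Create an isolated environment per project and pin it to a directory.
pyenv virtualenv 3.6.15 my-py36-project
cd ~/code/my-py36-project
pyenv local my-py36-project     # auto-activates whenever you cd here
pip install -r requirements.txt
```

With `pyenv local`, the matching interpreter and packages are picked up automatically whenever you work inside that project directory.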
Amazon SageMaker is a new service from Amazon Web Services (AWS) that enables users to build, train, deploy, and scale up machine learning models. It is pretty straightforward to use. Here are a few steps to follow if you are interested in using it to train an image classification model with MXNet:

- Go to your AWS console;
- Log in to your account and go to the SageMaker home page;
- Create a Notebook Instance. You will have three instance options to choose from: `ml.t2.medium`, `ml.m4.xlarge`, and `ml.p2.xlarge`. We recommend the p2 machine (a GPU machine) for training this image classifier.

Once you have your p2 notebook instance set up, congratulations, you are ready to train a building classifier. Specifically, you are going to learn how to train an image classification model with MXNet on SageMaker; a rough sketch of the training call is shown below.
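
As a very rough sketch of what the training setup can look like from the notebook, assuming the SageMaker Python SDK (v2-style API) and SageMaker's built-in image-classification algorithm; the role ARN, bucket paths, and hyperparameter values are placeholders, not values from the original post:

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder role ARN

# Built-in image-classification algorithm container for this region.
container = image_uris.retrieve("image-classification", session.boto_region_name)

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.p2.xlarge",          # the GPU instance recommended above
    output_path="s3://my-bucket/output",   # placeholder bucket
    sagemaker_session=session,
)

# Minimal hyperparameters; real values depend on your dataset.
estimator.set_hyperparameters(num_classes=2, num_training_samples=1000, epochs=5)

# Training/validation channels prepared in S3 beforehand (placeholders).
estimator.fit({
    "train": "s3://my-bucket/train",
    "validation": "s3://my-bucket/validation",
})
```

Exact channel formats and hyperparameters depend on which image-classification recipe the post follows.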
- Ask Your Neurons: A Neural-Based Approach to Answering Questions About Images (Mateusz Malinowski, Marcus Rohrbach, Mario Fritz)
- Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books (Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler)
- Learning Query and Image Similarities With Ranking Canonical Correlation Analysis (Ting Yao, Tao Mei, Chong-Wah Ngo)