- Ask Your Neurons: A Neural-Based Approach to Answering Questions About Images
  Mateusz Malinowski, Marcus Rohrbach, Mario Fritz
- Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books
  Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler
- Learning Query and Image Similarities With Ranking Canonical Correlation Analysis
  Ting Yao, Tao Mei, Chong-Wah Ngo
from keras.layers import Input, Conv2D, MaxPooling2D

def unet_model(batch_size, npix_in, n_channels, n_filters, n_classes, activation='relu'):
    # Encoder block 1: two 3x3 convolutions, then 2x2 max-pooling.
    input_layer = Input(batch_shape=(batch_size, npix_in, npix_in, n_channels), name='input')
    dblk1_conv1 = Conv2D(n_filters, (3, 3), activation=activation, name='dblk1_conv1')(input_layer)
    dblk1_conv2 = Conv2D(n_filters, (3, 3), activation=activation, name='dblk1_conv2')(dblk1_conv1)
    dblk1_pool = MaxPooling2D(pool_size=(2, 2), name='dblk1_pool')(dblk1_conv2)
    # Encoder block 2: the same pattern with twice as many filters.
    dblk2_conv1 = Conv2D(n_filters * 2, (3, 3), activation=activation, name='dblk2_conv1')(dblk1_pool)
    dblk2_conv2 = Conv2D(n_filters * 2, (3, 3), activation=activation, name='dblk2_conv2')(dblk2_conv1)
When you're working on multiple coding projects, you might want a couple of different versions of Python and/or modules installed. That way you can keep each project in its own sandbox instead of trying to juggle multiple projects (each with different dependencies) on your system's version of Python. This intermediate guide covers one way to handle multiple Python versions and Python environments on your own (i.e., without a package manager like conda). See the Using the workflow section to view the end result.
- Working on 2+ projects that each have their own dependencies; e.g., a Python 2.7 project and a Python 3.6 project, or developing a module that needs to work across multiple versions of Python. It's not reasonable to uninstall/reinstall modules every time you want to switch environments.
- If you want to execute code on the cloud, you can set up a local Python environment that mirrors the relevant cloud environment.
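Which tools the guide itself uses isn't visible in this excerpt (pyenv plus virtualenv is a common combination for exactly this). As a minimal, stdlib-only illustration of the sandbox idea, Python 3's venv module can create an isolated environment whose packages never touch the system interpreter; the directory name below is arbitrary:

# A minimal sketch using only the standard library; 'proj-a-env' is an
# arbitrary directory name, not something from the guide.
import venv

venv.create('proj-a-env', with_pip=True)

# Activate it from a shell with `source proj-a-env/bin/activate` (POSIX) or
# proj-a-env\Scripts\activate (Windows); `pip install` is then sandboxed
# inside this directory.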
import numpy as np
import tensorflow as tf

__author__ = "Sangwoong Yoon"

def np_to_tfrecords(X, Y, file_path_prefix, verbose=True):
    """
    Converts a NumPy array (or two NumPy arrays) into a tfrecord file.
    For supervised learning, feed training inputs to X and training labels to Y.
    For unsupervised learning, only feed training inputs to X, and feed None to Y.
    """
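    # The original body is truncated in this excerpt; what follows is a hedged
    # sketch rather than the author's implementation: serialize each row of X
    # (and of Y, when given) as a tf.train.Example with FloatList features.
    writer = tf.io.TFRecordWriter(file_path_prefix + '.tfrecords')
    for i in range(X.shape[0]):
        feature = {'X': tf.train.Feature(
            float_list=tf.train.FloatList(value=X[i].ravel().tolist()))}
        if Y is not None:
            feature['Y'] = tf.train.Feature(
                float_list=tf.train.FloatList(value=Y[i].ravel().tolist()))
        example = tf.train.Example(features=tf.train.Features(feature=feature))
        writer.write(example.SerializeToString())
    writer.close()
    if verbose:
        print('Wrote %d examples to %s.tfrecords' % (X.shape[0], file_path_prefix))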
There are many Git workflows out there; I strongly suggest also reading the atlassian.com [Git Workflow][article] article, as it goes into more detail than is presented here.
The two prevailing workflows are [Gitflow][gitflow] and [feature branches][feature]. IMHO, being more of a subscriber to continuous integration, I feel that the feature branch workflow is better suited.
Bash on the command line leaves a bit to be desired when it comes to awareness of Git state, so I suggest following these instructions on [setting up Git Bash autocompletion][git-auto].
When working with a centralized workflow, the concepts are simple: `master` represents the official history and is always deployable. With each new scope of work, aka a feature, the developer creates a new branch. For clarity, make sure to use descriptive names like `transaction-fail-message` or `github-oauth` for your branches.
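To make the branch-per-feature flow concrete, here is a hedged sketch of the underlying git commands, wrapped in Python's subprocess only to match the code style of the rest of this document (the branch name is one of the examples above; typing the same commands into a shell works just as well):

# Sketch of the feature-branch flow: branch off an up-to-date master, do the
# work, then publish the branch for review. Only standard git commands.
import subprocess

def git(*args):
    subprocess.run(['git', *args], check=True)

git('checkout', 'master')
git('pull', 'origin', 'master')                    # master: official, deployable history
git('checkout', '-b', 'transaction-fail-message')  # one descriptive branch per feature
# ... edit files, `git add`, and `git commit` as usual, then:
git('push', '-u', 'origin', 'transaction-fail-message')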
'''This script goes along the blog post
"Building powerful image classification models using very little data"
from blog.keras.io.
It uses data that can be downloaded at:
https://www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
'''
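The setup the docstring describes is easy to script. Below is a hedged sketch that creates that directory layout and copies the first thousand cat images; the source directory name kaggle_train/ and the cat.N.jpg file naming are assumptions of this sketch, not details from the blog post:

# Hedged sketch of the data/ layout from the docstring above; 'kaggle_train'
# and the 'cat.%d.jpg' naming are assumptions, not from the original script.
import os
import shutil

for split in ('train', 'validation'):
    for label in ('cats', 'dogs'):
        os.makedirs(os.path.join('data', split, label), exist_ok=True)

# Cat pictures index 0-999 go to data/train/cats, per the setup above.
for i in range(1000):
    shutil.copy(os.path.join('kaggle_train', 'cat.%d.jpg' % i),
                os.path.join('data', 'train', 'cats'))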