See how a minor change to your commit message style can make you a better programmer.
Format: <type>(<scope>): <subject>
The <scope> is optional.
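For example, messages in this style look like the following (feat and fix are the standard type names; the scopes and subjects here are made up):

git commit -m "feat(parser): add support for nested arrays"
git commit -m "fix(auth): refresh expired tokens before retrying"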
g.set_titles("")                   # clear the auto-generated facet titles
g.set(yticks=[])                   # remove y-axis ticks
g.despine(bottom=True, left=True)  # remove all spines (top/right by default, plus bottom and left)
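For context, a minimal sketch of where these calls fit, assuming g is a seaborn FacetGrid (the dataset and column names here are illustrative):

import seaborn as sns

tips = sns.load_dataset("tips")     # example dataset bundled with seaborn
g = sns.FacetGrid(tips, col="day")  # one facet per category of "day"
g.map(sns.histplot, "total_bill")   # draw a histogram into each facet
# ...then apply the three calls above to strip titles, ticks, and spines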
def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float('Inf')):
    """ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering
        Args:
            logits: logits distribution shape (vocabulary size)
            top_k >0: keep only top k tokens with highest probability (top-k filtering).
            top_p >0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering).
                Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
    """
    assert logits.dim() == 1  # batch size 1 for now - could be updated for more but the code would be less clear
    top_k = min(top_k, logits.size(-1))  # Safety check
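The snippet cuts off after the safety check; a sketch of the rest of the filtering, assuming PyTorch (import torch and import torch.nn.functional as F at the top of the file), following the nucleus-sampling procedure from Holtzman et al.:

    if top_k > 0:
        # Remove tokens whose logit is below the k-th largest logit
        indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
        logits[indices_to_remove] = filter_value
    if top_p > 0.0:
        sorted_logits, sorted_indices = torch.sort(logits, descending=True)
        cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        # Remove tokens with cumulative probability above the threshold
        sorted_indices_to_remove = cumulative_probs > top_p
        # Shift right so the first token that crosses the threshold is kept too
        sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
        sorted_indices_to_remove[..., 0] = 0
        indices_to_remove = sorted_indices[sorted_indices_to_remove]
        logits[indices_to_remove] = filter_value
    return logits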
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple S3 Bucket with SNS Trigger
Parameters:
  BucketName:
    Type: String
    Description: The name of the S3 Bucket to create
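The template is cut off after the Parameters block; a sketch of what the Resources section might look like for the bucket-to-SNS trigger described above (resource names are illustrative, and a real template also needs an AWS::SNS::TopicPolicy granting the bucket permission to publish):

Resources:
  NotificationTopic:
    Type: AWS::SNS::Topic
  TriggerBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        TopicConfigurations:
          - Event: s3:ObjectCreated:*
            Topic: !Ref NotificationTopic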
conda uninstall --force pillow -y
# install libjpeg-turbo to $HOME/turbojpeg
git clone https://github.com/libjpeg-turbo/libjpeg-turbo
pushd libjpeg-turbo
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX:PATH=$HOME/turbojpeg
make
make install
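A possible follow-up step, assuming the goal of removing stock Pillow is to replace it with pillow-simd built against the libjpeg-turbo prefix installed above (the CFLAGS/LDFLAGS approach is one common way to point a source build at a custom prefix):

popd
CFLAGS="-I$HOME/turbojpeg/include" LDFLAGS="-L$HOME/turbojpeg/lib" \
  pip install --no-cache-dir --force-reinstall pillow-simd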
# Keras 1.x-era imports (merge, Convolution2D, and np_utils predate the Keras 2 API)
from __future__ import print_function
from keras.datasets import cifar10
from keras.layers import merge, Input
from keras.layers.convolutional import Convolution2D, ZeroPadding2D, AveragePooling2D
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.normalization import BatchNormalization
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import np_utils
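A minimal continuation sketch, assuming the script goes on to train on CIFAR-10 (which the cifar10 import suggests), using the same Keras 1.x-era API as the imports above:

# load CIFAR-10 and prepare it for training
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
Y_train = np_utils.to_categorical(y_train, 10)  # one-hot labels, 10 classes
Y_test = np_utils.to_categorical(y_test, 10)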
The following recipes are sampled from a trained neural net. You can find the repo to train your own neural net here: https://github.com/karpathy/char-rnn Thanks to Andrej Karpathy for the great code! It's really easy to set up.
The recipes I used for training the char-rnn are from a recipe collection called ffts.com. And here is the actual zipped data (uncompressed ~35 MB) I used for training. The ZIP is also archived at archive.org in case the original links become invalid in the future.
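If you want to reproduce this, training and sampling in char-rnn work roughly as its README describes: put your corpus in data/<name>/input.txt, then run something like the following (the folder and checkpoint names here are placeholders):

th train.lua -data_dir data/recipes
th sample.lua cv/some_checkpoint.t7 -temperature 0.8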