Code for Keras plays catch blog post
python qlearn.py
- Generate figures
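The qlearn.py script trains a network to approximate the Q-function. As a point of reference, the tabular Q-learning update that the network approximates can be sketched as follows (a minimal sketch, not the blog post's actual code; all names are illustrative):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9, terminal=False):
    # Q-learning target: reward plus discounted best next-state value
    target = r if terminal else r + gamma * np.max(Q[s_next])
    # Move Q(s, a) a step of size alpha toward the target
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

In the deep variant, the table lookup `Q[s]` is replaced by a forward pass of the network, and the step toward the target becomes a gradient step on the squared error.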
'''Trains a multi-output deep NN on the MNIST dataset using crossentropy and
policy gradients (REINFORCE).
The goal of this example is twofold:
* Show how to use policy gradients for training
* Show how to use generators with multi-output models
# Policy gradients
This is a Reinforcement Learning technique [1] that trains the model
following the gradient of the log-probability of the action taken, scaled by the
advantage (reward - baseline) of that action.
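That update is equivalent to minimizing a surrogate loss. A minimal numpy sketch of the idea (illustrative names only, not this example's actual code):

```python
import numpy as np

def reinforce_loss(log_probs, rewards, baseline):
    # Advantage of each action: how much better it did than the baseline
    advantages = rewards - baseline
    # Minimizing -advantage * log pi(a|s) follows the REINFORCE gradient
    return -np.mean(advantages * log_probs)
```

Actions that beat the baseline get their log-probability pushed up; actions that underperform get pushed down.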
# Generators
Follow the general instructions at https://github.com/simonschellaert/spotify2am .
This will export your Spotify playlist to CSV and add the songs to your iTunes library.
Notes:
For the proxy part I used https://mitmproxy.org/ instead of Charles, since Charles isn't free.
mitmproxy is fairly easy to use; just don't forget to set up your HTTPS proxy server at 127.0.0.1, port 8080 (Settings -> Network -> Advanced -> Proxies).
If the insert_songs.py step doesn't work for you, try the one in this gist.
Note the TODO:REPLACE_THIS placeholder; you will get that info in mitmproxy when you add a new song from Apple Music to your library.
# NOTE: I'm not sure if this is right
from keras.layers.recurrent import LSTM


class LSTMpeephole(LSTM):
    def __init__(self, **kwargs):
        super(LSTMpeephole, self).__init__(**kwargs)

    def build(self):
        # TODO: add the peephole weight vectors (one per gate) here,
        # then use them in the step function
        super(LSTMpeephole, self).build()
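For reference, peephole connections (Gers & Schmidhuber) let each gate see the previous cell state directly. A minimal numpy sketch of one such gate, independent of Keras (all names here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_gate(x, h_prev, c_prev, W, U, p, b):
    # Unlike a vanilla LSTM gate, the peephole term p * c_prev lets the
    # gate condition on the previous cell state (elementwise weights p)
    return sigmoid(W @ x + U @ h_prev + p * c_prev + b)
```

The subclass above would need to create one such weight vector `p` per gate in `build` and add the `p * c_prev` term inside the recurrent step.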
import numpy as np
import opensfm
import cv2
from opensfm import dataset
import pylab
import scipy.interpolate
import opensfm.features


def project(shot, point):
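The body of project() is elided above. As a standalone sketch of what projecting a 3D point into a shot involves, here is a bare pinhole model in numpy (not opensfm's implementation, which also handles camera distortion; all names are illustrative):

```python
import numpy as np

def project_pinhole(R, t, focal, point):
    # World point -> camera frame via the shot's rotation R and translation t
    p_cam = R @ np.asarray(point) + t
    # Perspective divide by depth, scaled by the focal length
    return focal * p_cam[:2] / p_cam[2]
```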
"""
Possibly correct implementation of an all-convolutional neural network using a single residual module.
This code was written for instructional purposes; no attempt to get the best results was made.
References:
Deep Residual Learning for Image Recognition: http://arxiv.org/pdf/1512.03385v1.pdf
Striving for Simplicity: The All Convolutional Net: http://arxiv.org/pdf/1412.6806v3.pdf
A video walking through the code and main ideas: https://youtu.be/-N_zlfKo4Ec
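The core idea of the residual module referenced above is that the layers learn a residual F(x) that is added back to the input through an identity shortcut. A minimal numpy sketch of the shortcut connection (the convolutional details are omitted):

```python
import numpy as np

def residual_block(x, transform):
    # Output is transform(x) + x: the identity shortcut means the layers
    # only need to learn the residual F(x) = y - x
    return transform(x) + x

# With a transform near zero, the whole block is close to the identity map,
# which is what makes very deep stacks of these blocks easy to optimize
y = residual_block(np.ones((2, 2)), lambda t: 0.0 * t)
```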
from __future__ import division
import os
import time
from glob import glob

import tensorflow as tf
from six.moves import xrange
from scipy.misc import imresize

from subpixel import PS
from ops import *
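The imported PS op is the periodic shuffling (phase shift) operator from the subpixel-CNN paper, which trades channel depth for spatial resolution. A numpy sketch of the single-image case (the real PS operates on batched tensors; this is the same rearrangement as tf.depth_to_space):

```python
import numpy as np

def phase_shift(x, r):
    # Rearrange (H, W, C*r*r) -> (H*r, W*r, C)
    H, W, Crr = x.shape
    C = Crr // (r * r)
    x = x.reshape(H, W, r, r, C)        # split channels into an r x r grid
    x = x.transpose(0, 2, 1, 3, 4)      # interleave grid rows/cols spatially
    return x.reshape(H * r, W * r, C)
```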
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
from keras.engine import Layer, InputSpec
from keras import backend as K, regularizers, constraints, initializations, activations


class Deconv2D(Layer):
    def __init__(self, nb_filter, nb_row, nb_col,
                 init='glorot_uniform', activation='linear', weights=None,
                 border_mode='valid', subsample=(1, 1), dim_ordering='tf',