Eder Santana (EderSantana)

EderSantana / Spotify2AppleMusic.md
Last active June 29, 2020 08:16
Updated insert_songs.py from spotify2am

Follow the general instructions at https://github.com/simonschellaert/spotify2am.
This will export your Spotify playlists to CSV and add the songs to your iTunes library.

Notes:
For the proxy step I used https://mitmproxy.org/ instead of Charles, since Charles isn't free.

mitmproxy is fairly easy to use; just don't forget to set up your HTTPS proxy at 127.0.0.1, port 8080 (Settings -> Network -> Advanced -> Proxies).

If the insert_songs.py step doesn't work for you, try the version in this gist.
Note the TODO:REPLACE_THIS placeholders; you get those values from mitmproxy when you add a new song from Apple Music to your library (see the sketch below).
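Concretely, insert_songs.py works by replaying the add-to-library request that iTunes itself makes. A hypothetical sketch of that replay, assuming the requests library; the URL, header names, and payload field below are placeholders to fill with values captured in mitmproxy, not the real Apple endpoint:

import requests

CAPTURED_URL = "TODO:REPLACE_THIS"        # endpoint observed in mitmproxy
CAPTURED_HEADERS = {                      # headers copied from the captured request
    "Cookie": "TODO:REPLACE_THIS",
    "User-Agent": "TODO:REPLACE_THIS",
}

def add_song(song_id):
    # Replay the captured add-to-library call for one Apple Music song id.
    resp = requests.post(CAPTURED_URL, headers=CAPTURED_HEADERS,
                         data={"song-id": song_id})  # placeholder field name
    resp.raise_for_status()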

EderSantana / sliding_widget.ipynb
Last active June 18, 2017 18:51
Jupyter notebook: slide to change images
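The notebook preview can't be rendered in this listing. A minimal sketch of the idea it demonstrates, assuming ipywidgets and matplotlib (illustrative names, not the gist's actual code):

import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact

images = np.random.rand(10, 64, 64)  # stand-in for the image stack

@interact(i=(0, len(images) - 1))
def show(i=0):
    # Redraw the displayed image whenever the slider moves.
    plt.imshow(images[i], cmap='gray')
    plt.axis('off')
    plt.show()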
# Opening imports of a TensorFlow super-resolution script built on the
# subpixel repo's PS (phase shift / pixel shuffle) op.
from __future__ import division
import os
import time
from glob import glob
import tensorflow as tf
from six.moves import xrange
from scipy.misc import imresize
from subpixel import PS
from ops import *
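For reference, the PS op imported above does subpixel upscaling (periodic pixel shuffle). A minimal TensorFlow sketch of the same idea, assuming tf.nn.depth_to_space is an acceptable stand-in (not the repo's exact implementation):

import tensorflow as tf

def pixel_shuffle(x, r):
    # Rearrange a (batch, H, W, C*r*r) tensor into (batch, H*r, W*r, C).
    return tf.nn.depth_to_space(x, r)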
EderSantana / warp_opensfm.py
Created September 27, 2016 20:20
Test structure from motion pipeline by warping images with estimated parameters
import numpy as np
import opensfm
import cv2
from opensfm import dataset
import pylab
import scipy.interpolate
import opensfm.features

def project(shot, point):
    # Project a 3D point into the shot's image plane: move the point into the
    # camera frame, then apply the camera model (completion is a sketch
    # assuming OpenSfM's Pose.transform and Camera.project APIs).
    return shot.camera.project(shot.pose.transform(point))
'''Trains a multi-output deep NN on the MNIST dataset using crossentropy and
policy gradients (REINFORCE).
The goal of this example is twofold:
* Show how to use policy gradients for training
* Show how to use generators with multi-output models
# Policy gradients
This is a Reinforcement Learning technique [1] that trains the model by
following the gradient of the log-probability of the action taken, scaled by
the advantage (reward - baseline) of that action (see the sketch after this
preview).
# Generators
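The preview cuts off here. A minimal sketch of the advantage-weighted loss described above, written against the Keras backend API (illustrative names, not the gist's exact code):

from keras import backend as K

def reinforce_loss(advantage):
    # Returns a loss whose gradient is -advantage * grad log pi(a|s).
    def loss(action_taken, action_probs):
        # Log-probability of the action actually taken (action_taken is one-hot).
        log_prob = K.sum(action_taken * K.log(action_probs + 1e-8), axis=-1)
        # Minimizing the negative ascends the advantage-weighted log-prob.
        return -advantage * log_prob
    return loss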
# Deconv2D gist: a custom transposed-convolution ("deconvolution") layer for
# Keras 1 with the TensorFlow backend; the constructor is truncated in this preview.
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
from keras.engine import Layer, InputSpec
from keras import backend as K, regularizers, constraints, initializations, activations

class Deconv2D(Layer):
    def __init__(self, nb_filter, nb_row, nb_col,
                 init='glorot_uniform', activation='linear', weights=None,
                 border_mode='valid', subsample=(1, 1), dim_ordering='tf',
# VERSION: 0.1
# DESCRIPTION: PLE (Pygame Learning Environment)
# AUTHOR: Eder Santana <edercsjr@gmail.com>
# COMMENTS:
# Pygame learning environment
# SETUP:
# # Download PLE Dockerfile
# wget ...
#
# # Build atom image
from keras.models import Sequential
from keras.layers.recurrent import Recurrent, GRU, LSTM
from keras import backend as K
# from seya.utils import rnn_states

tol = 1e-4

def _wta(X):
    # Winner-take-all: zero out everything but the maximum activation along
    # the last axis (the preview cuts off after the max; this completion is a
    # sketch).
    M = K.max(X, axis=-1, keepdims=True)
    return X * K.cast(K.equal(X, M), K.floatx())
EderSantana / resnet_all_conv.py
Last active March 30, 2019 14:43
Resnet + all conv implementation example
"""
Possibly correct implementation of an all conv neural network using a single residual module
This code was written for instruction purposes and no attempt to get the best results were made.
References:
Deep Residual Learning for Image Recognition: http://arxiv.org/pdf/1512.03385v1.pdf
STRIVING FOR SIMPLICITY, THE ALL CONVOLUTIONAL NET: http://arxiv.org/pdf/1412.6806v3.pdf
A video walking through the code and main ideas: https://youtu.be/-N_zlfKo4Ec
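The file body is truncated in this preview. A minimal sketch of the core idea, a single residual module built only from convolutions, using the modern Keras API (illustrative, not the gist's exact code):

from keras.layers import Input, Conv2D, Add, Activation
from keras.models import Model

inp = Input(shape=(32, 32, 16))
# Residual branch: two 3x3 convolutions.
x = Conv2D(16, (3, 3), padding='same', activation='relu')(inp)
x = Conv2D(16, (3, 3), padding='same')(x)
# Shortcut: add the input back, then apply the nonlinearity.
out = Activation('relu')(Add()([inp, x]))
model = Model(inp, out)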
EderSantana / CATCH_Keras_RL.md
Last active October 16, 2023 08:32
Keras plays catch - a single file Reinforcement Learning example
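The gist body isn't shown in this listing. As a generic sketch of the main ingredient such an example needs, here is a deep Q-learning target computed from replayed experience (illustrative code, not the gist's):

import numpy as np

def q_targets(model, batch, gamma=0.9):
    # batch: list of (state, action, reward, next_state, game_over) tuples.
    states = np.array([b[0] for b in batch])
    targets = model.predict(states)
    for i, (s, a, r, s_next, game_over) in enumerate(batch):
        # Bellman target: r + gamma * max_a' Q(s', a'); just r at terminal states.
        q_next = np.max(model.predict(s_next[np.newaxis])[0])
        targets[i, a] = r if game_over else r + gamma * q_next
    return states, targets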