Aly Shmahell (AlyShmahell)

💭 I may be slow to respond.
  • European Union
| Task | Time required | Assigned to | Current Status | Finished |
| --- | --- | --- | --- | --- |
| Calendar Cache | > 5 hours | @georgehrke | in progress | - [x] ok? |
| Object Cache | > 5 hours | @georgehrke | in progress | [x] item1 <br> [ ] item2 |
| Object Cache | > 5 hours | @georgehrke | in progress | <ul><li>- [x]</li><li>item2</li></ul> |
| Object Cache | > 5 hours | @georgehrke | in progress | <ul><li>item1</li><li>item2</li><li>works</li><li>works too</li></ul> |
@AlyShmahell
AlyShmahell / min-char-rnn.py
Created May 19, 2018 23:15 — forked from karpathy/min-char-rnn.py
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
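The preview stops at the data statistics. As a brief sketch of how the full gist continues (abbreviated here, so treat the exact values as assumptions), it builds char/index lookups and the vanilla-RNN weight matrices:

char_to_ix = {ch: i for i, ch in enumerate(chars)}  # char -> index
ix_to_char = {i: ch for i, ch in enumerate(chars)}  # index -> char
hidden_size = 100  # size of the hidden layer of neurons
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input to hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden to hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden to output
bh = np.zeros((hidden_size, 1))  # hidden bias
by = np.zeros((vocab_size, 1))   # output bias
# each timestep then applies the vanilla-RNN recurrence:
#   h[t] = np.tanh(np.dot(Wxh, x[t]) + np.dot(Whh, h[t-1]) + bh)
#   y[t] = np.dot(Why, h[t]) + by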
@AlyShmahell
AlyShmahell / tf_lstm.py
Created May 19, 2018 18:40 — forked from siemanko/tf_lstm.py
Simple implementation of LSTM in Tensorflow in 50 lines (+ 130 lines of data generation and comments)
"""Short and sweet LSTM implementation in Tensorflow.
Motivation:
When Tensorflow was released, adding RNNs was a bit of a hack - it required
building separate graphs for every number of timesteps and was a bit obscure
to use. Since then TF devs added things like `dynamic_rnn`, `scan` and `map_fn`.
Currently the APIs are decent, but none of the tutorials I am aware of
make the best use of them.
Advantages of this implementation:
"""
from __future__ import print_function, division
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
num_epochs = 100
total_series_length = 50000
truncated_backprop_length = 15
state_size = 4
num_classes = 2
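For orientation, here is a minimal sketch of how these hyperparameters could wire up an LSTM with the TF1-era `dynamic_rnn` API the docstring alludes to; the `batch_size` value and the final dense layer are assumptions, not part of the gist:

batch_size = 5  # hypothetical value, not from the gist
inputs = tf.placeholder(tf.float32,
                        [batch_size, truncated_backprop_length, 1])
cell = tf.nn.rnn_cell.BasicLSTMCell(state_size)
# dynamic_rnn builds the time loop inside the graph, so a single graph
# handles any sequence length (the pain point noted above).
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
# outputs has shape [batch_size, truncated_backprop_length, state_size]
logits = tf.layers.dense(outputs, num_classes)  # per-timestep class scores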
@AlyShmahell
AlyShmahell / tf.py
Created May 12, 2018 13:13 — forked from koaning/tf.py
tensorflow layer example
import tensorflow as tf
import numpy as np
import uuid
x = tf.placeholder(shape=[None, 3], dtype=tf.float32)
nn = tf.layers.dense(x, 3, activation=tf.nn.sigmoid)
nn = tf.layers.dense(nn, 5, activation=tf.nn.sigmoid)
encoded = tf.layers.dense(nn, 2, activation=tf.nn.sigmoid)
nn = tf.layers.dense(encoded, 5, activation=tf.nn.sigmoid)
nn = tf.layers.dense(nn, 3, activation=tf.nn.sigmoid)
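The preview ends mid-network. A hedged sketch of how this autoencoder could be closed out and trained follows; the reconstruction loss and training loop are assumptions for illustration, not part of the gist:

decoded = tf.layers.dense(nn, 3)  # linear output, same width as x
loss = tf.losses.mean_squared_error(labels=x, predictions=decoded)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(32, 3)  # toy data, for illustration only
    for _ in range(100):
        _, mse = sess.run([train_op, loss], feed_dict={x: batch})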
@AlyShmahell
AlyShmahell / basic_conv1d.py
Created March 27, 2018 19:15 — forked from talolard/basic_conv1d.py
An example of how to do conv1d ourselves in Tensorflow
import tensorflow as tf

def conv1d(input_, output_size, width, stride):
    '''
    :param input_: A tensor of embedded tokens with shape [batch_size,max_length,embedding_size]
    :param output_size: The number of feature maps we'd like to calculate
    :param width: The filter width
    :param stride: The stride
    :return: A tensor of the convolved input with shape [batch_size,max_length,output_size]
    '''
    inputSize = input_.get_shape()[-1]  # How many channels on the input (the size of our embedding, for instance)
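    # Hypothetical continuation (the preview cuts off here): one way to
    # finish the body, assuming tf.nn.conv1d. The full gist may differ.
    filter_ = tf.get_variable(
        'conv_filter', shape=[width, inputSize, output_size],
        initializer=tf.glorot_uniform_initializer())
    # 'SAME' padding preserves max_length; stride moves along the time axis.
    return tf.nn.conv1d(input_, filter_, stride=stride, padding='SAME')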

Keybase proof

I hereby claim:

  • I am AlyShmahell on github.
  • I am alyshmahell (https://keybase.io/alyshmahell) on keybase.
  • I have a public key whose fingerprint is 32E1 E008 73DF 66D7 057C 25E6 657B 9B80 54B5 8616

To claim this, I am signing this object: