I hereby claim:
- I am AlyShmahell on github.
- I am alyshmahell (https://keybase.io/alyshmahell) on keybase.
- I have a public key whose fingerprint is 32E1 E008 73DF 66D7 057C 25E6 657B 9B80 54B5 8616
To claim this, I am signing this object:
```python
import tensorflow as tf

def conv1d(input_, output_size, width, stride):
    '''
    :param input_: A tensor of embedded tokens with shape [batch_size, max_length, embedding_size]
    :param output_size: The number of feature maps we'd like to calculate
    :param width: The filter width
    :param stride: The stride
    :return: A tensor of the convolved input with shape [batch_size, max_length, output_size]
    '''
    inputSize = input_.get_shape()[-1]  # How many channels on the input (the size of our embedding, for instance)
    # A learnable filter; 'SAME' padding preserves max_length along the time axis.
    filter_ = tf.get_variable("conv_filter", shape=[width, inputSize, output_size])
    convolved = tf.nn.conv1d(input_, filter_, stride=stride, padding='SAME')
    return convolved
```
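The shape contract of `conv1d` above (with 'SAME' padding, `max_length` is preserved and only the channel count changes) can be sanity-checked with a plain NumPy sketch. This is a simplified stride-1 re-implementation for illustration, not the TF kernel itself:

```python
import numpy as np

def conv1d_same(x, filt):
    """Stride-1 'SAME' 1-D convolution.

    x:    [batch, length, in_channels]
    filt: [width, in_channels, out_channels]
    Returns [batch, length, out_channels], matching conv1d's contract.
    """
    batch, length, cin = x.shape
    width, _, cout = filt.shape
    pad = width // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (0, 0)))
    out = np.zeros((batch, length, cout))
    for t in range(length):
        window = xp[:, t:t + width, :]  # [batch, width, cin]
        # Contract the (width, cin) axes of the window against the filter.
        out[:, t, :] = np.tensordot(window, filt, axes=([1, 2], [0, 1]))
    return out

x = np.random.randn(2, 7, 4)    # batch=2, max_length=7, embedding_size=4
f = np.random.randn(3, 4, 5)    # width=3, in_channels=4, 5 feature maps
print(conv1d_same(x, f).shape)  # (2, 7, 5)
```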
```python
import tensorflow as tf
import numpy as np
import uuid

# A tiny 3-5-2-5-3 autoencoder (TensorFlow 1.x API); the 2-unit layer is the bottleneck.
x = tf.placeholder(shape=[None, 3], dtype=tf.float32)
nn = tf.layers.dense(x, 3, activation=tf.nn.sigmoid)
nn = tf.layers.dense(nn, 5, activation=tf.nn.sigmoid)
encoded = tf.layers.dense(nn, 2, activation=tf.nn.sigmoid)
nn = tf.layers.dense(encoded, 5, activation=tf.nn.sigmoid)
nn = tf.layers.dense(nn, 3, activation=tf.nn.sigmoid)
# The final 3-unit layer reconstructs the input; train on the reconstruction error.
loss = tf.reduce_mean(tf.square(nn - x))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)
```
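The layer shapes above can be traced with a NumPy forward-pass sketch (random weights, sigmoid activations; an illustration of the 3-5-2-5-3 architecture, not the TF graph itself):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
sizes = [3, 3, 5, 2, 5, 3]  # input -> ... -> bottleneck (2) -> ... -> reconstruction
weights = [rng.normal(size=(a, b)) for a, b in zip(sizes, sizes[1:])]

x = rng.normal(size=(4, 3))  # a batch of 4 inputs
h = x
for w in weights:
    h = sigmoid(h @ w)
print(h.shape)  # (4, 3): the reconstruction has the input's shape
```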
```python
from __future__ import print_function, division
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

num_epochs = 100
total_series_length = 50000
truncated_backprop_length = 15
state_size = 4
num_classes = 2
```
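These constants match the common "echo sequence" RNN tutorial setup: a random binary series whose target is the same series shifted by a few steps. A plausible data generator for them (the `echo_step` and `batch_size` values are assumptions, not part of the snippet) looks like:

```python
import numpy as np

total_series_length = 50000
echo_step = 3    # assumed shift between input and target
batch_size = 5   # assumed batch size

def generate_data():
    # Random binary input series; the target is the input shifted by echo_step.
    x = np.random.choice(2, total_series_length)
    y = np.roll(x, echo_step)
    y[:echo_step] = 0
    # Reshape into batch_size parallel streams for truncated backprop.
    x = x.reshape((batch_size, -1))
    y = y.reshape((batch_size, -1))
    return x, y

x, y = generate_data()
print(x.shape)  # (5, 10000)
```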
| """Short and sweet LSTM implementation in Tensorflow. | |
| Motivation: | |
| When Tensorflow was released, adding RNNs was a bit of a hack - it required | |
| building separate graphs for every number of timesteps and was a bit obscure | |
| to use. Since then TF devs added things like `dynamic_rnn`, `scan` and `map_fn`. | |
| Currently the APIs are decent, but all the tutorials that I am aware of are not | |
| making the best use of the new APIs. | |
| Advantages of this implementation: |
| """ | |
| Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy) | |
| BSD License | |
| """ | |
| import numpy as np | |
| # data I/O | |
| data = open('input.txt', 'r').read() # should be simple plain text file | |
| chars = list(set(data)) | |
| data_size, vocab_size = len(data), len(chars) |
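The original file next builds the character/index lookup tables from `chars`. On a tiny inline string (a stand-in used here so the sketch doesn't need `input.txt`; `sorted` is added for a deterministic ordering) they look like:

```python
data = "hello"             # stand-in for the contents of input.txt
chars = sorted(set(data))  # ['e', 'h', 'l', 'o']
data_size, vocab_size = len(data), len(chars)
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}
print(vocab_size, char_to_ix['h'])  # → 4 1
```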
| Task | Time required | Assigned to | Current Status | Finished |
|---|---|---|---|---|
| Calendar Cache | > 5 hours | @georgehrke | in progress | - [x] ok? |
| Object Cache | > 5 hours | @georgehrke | in progress | [x] item1 [ ] item2 |
| Object Cache | > 5 hours | @georgehrke | in progress | |
```latex
\documentclass{standalone}
%% for compilation with htlatex (to produce an svg image),
%% uncomment the line below:
% \def\pgfsysdriver{pgfsys-tex4ht.def}
\usepackage{tikz}
\tikzstyle{tensor}=[rectangle,draw=blue!50,fill=blue!20,thick]
```
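A minimal compilable use of the `tensor` style defined above (the node names and labels are illustrative, not taken from the original figure):

```latex
\documentclass{standalone}
\usepackage{tikz}
\tikzstyle{tensor}=[rectangle,draw=blue!50,fill=blue!20,thick]
\begin{document}
\begin{tikzpicture}
  % Two tensor nodes connected by an edge, drawn with the style above.
  \node[tensor] (x) at (0,0) {$x$};
  \node[tensor] (h) at (2,0) {$h$};
  \draw[->,thick] (x) -- (h);
\end{tikzpicture}
\end{document}
```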
This is an unfinished list of remarks on how to write good pseudocode.
Pseudocode is a loosely defined way of transmitting the concept of an algorithm from a writer to a reader. What matters is the efficiency of this communication, not the interpretability of the code by an automated program (e.g., a parser).
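For instance, a reader-oriented rendering of binary search (names and notation chosen for the human, not for a compiler) might look like:

```
procedure BinarySearch(A[1..n] sorted ascending, target)
    lo ← 1; hi ← n
    while lo ≤ hi do
        mid ← ⌊(lo + hi) / 2⌋
        if A[mid] = target then return mid
        else if A[mid] < target then lo ← mid + 1
        else hi ← mid − 1
    return "not found"
```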
```c
#include <Python.h>
#include <numpy/arrayobject.h>
#include "chi2.h"

/* Docstrings */
static char module_docstring[] =
    "This module provides an interface for calculating chi-squared using C.";
static char chi2_docstring[] =
    "Calculate the chi-squared of some data given a model.";
```