import tensorflow as tf

# Input placeholder: flattened 28x28 images, i.e. 784 features per example.
input = tf.placeholder(tf.float32, shape=[None, 784])

# Hidden layer 1: 784 -> 500 units with ReLU activation.
layer1_weights = tf.Variable(tf.random_normal([784, 500]))
layer1_bias = tf.Variable(tf.zeros([500]))
layer1_output = tf.nn.relu(tf.matmul(input, layer1_weights) + layer1_bias)
# Hidden layer 2: 500 -> 500 units with ReLU activation.
layer2_weights = tf.Variable(tf.random_normal([500, 500]))
layer2_bias = tf.Variable(tf.zeros([500]))
layer2_output = tf.nn.relu(tf.matmul(layer1_output, layer2_weights) + layer2_bias)
# Output layer: 500 -> 10 classes; raw logits, softmax is applied by the loss.
layer3_weights = tf.Variable(tf.random_normal([500, 10]))
layer3_bias = tf.Variable(tf.zeros([10]))
logits = tf.matmul(layer2_output, layer3_weights) + layer3_bias
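The snippet stops at the logits, so here is a minimal sketch of how the network could be trained, assuming a 10-class (MNIST-style) one-hot labels placeholder, a softmax cross-entropy loss, and plain gradient descent; the labels tensor, the 0.01 learning rate, and the batch variables are assumptions for illustration, not part of the original gist.

# Assumed one-hot label placeholder for the 10 output classes.
labels = tf.placeholder(tf.float32, shape=[None, 10])

# Softmax cross-entropy over the raw logits, averaged over the batch.
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))

# Plain gradient descent; the 0.01 learning rate is an arbitrary example value.
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # batch_images, batch_labels would come from an input pipeline (not shown):
    # sess.run(train_step, feed_dict={input: batch_images, labels: batch_labels})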