@teamdandelion
Last active February 6, 2024 08:33
TensorBoard: TF Dev Summit Tutorial
labels_1024.tsv — digit labels for the first 1,024 MNIST test images, one label per line, used as metadata by the embedding projector (first ten of 1,024 lines shown):

7
2
1
0
4
1
4
9
5
9
...
# Copyright 2017 Google, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import os
import tensorflow as tf
import urllib  # Python 2; on Python 3, use urllib.request.urlretrieve instead

LOGDIR = '/tmp/mnist_tutorial/'
# Note the trailing slash: the raw-gist URL is joined with a filename below.
GIST_URL = 'https://gist.githubusercontent.com/dandelionmane/4f02ab8f1451e276fea1f165a20336f1/raw/dfb8ee95b010480d56a73f324aca480b3820c180/'

### MNIST EMBEDDINGS ###
mnist = tf.contrib.learn.datasets.mnist.read_data_sets(train_dir=LOGDIR + 'data', one_hot=True)

### Get a sprite and labels file for the embedding projector ###
urllib.urlretrieve(GIST_URL + 'labels_1024.tsv', LOGDIR + 'labels_1024.tsv')
urllib.urlretrieve(GIST_URL + 'sprite_1024.png', LOGDIR + 'sprite_1024.png')
def conv_layer(input, size_in, size_out, name="conv"):
  with tf.name_scope(name):
    w = tf.Variable(tf.truncated_normal([5, 5, size_in, size_out], stddev=0.1), name="W")
    b = tf.Variable(tf.constant(0.1, shape=[size_out]), name="B")
    conv = tf.nn.conv2d(input, w, strides=[1, 1, 1, 1], padding="SAME")
    act = tf.nn.relu(conv + b)
    tf.summary.histogram("weights", w)
    tf.summary.histogram("biases", b)
    tf.summary.histogram("activations", act)
    return tf.nn.max_pool(act, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
def fc_layer(input, size_in, size_out, name="fc"):
  with tf.name_scope(name):
    w = tf.Variable(tf.truncated_normal([size_in, size_out], stddev=0.1), name="W")
    b = tf.Variable(tf.constant(0.1, shape=[size_out]), name="B")
    # Return the pre-activation output; callers apply ReLU where appropriate.
    # Softmax cross-entropy expects raw logits, so the final layer must not
    # be ReLU'd (see the discussion in the comments below).
    act = tf.matmul(input, w) + b
    tf.summary.histogram("weights", w)
    tf.summary.histogram("biases", b)
    tf.summary.histogram("activations", act)
    return act
def mnist_model(learning_rate, use_two_conv, use_two_fc, hparam):
  tf.reset_default_graph()
  sess = tf.Session()

  # Setup placeholders, and reshape the data
  x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
  x_image = tf.reshape(x, [-1, 28, 28, 1])
  tf.summary.image('input', x_image, 3)
  y = tf.placeholder(tf.float32, shape=[None, 10], name="labels")

  if use_two_conv:
    conv1 = conv_layer(x_image, 1, 32, "conv1")
    conv_out = conv_layer(conv1, 32, 64, "conv2")
  else:
    conv1 = conv_layer(x_image, 1, 64, "conv")
    conv_out = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")

  flattened = tf.reshape(conv_out, [-1, 7 * 7 * 64])

  if use_two_fc:
    fc1 = fc_layer(flattened, 7 * 7 * 64, 1024, "fc1")
    relu = tf.nn.relu(fc1)  # the hidden layer gets its ReLU here, not inside fc_layer
    embedding_input = relu
    embedding_size = 1024
    logits = fc_layer(relu, 1024, 10, "fc2")  # raw logits, no ReLU
  else:
    embedding_input = flattened
    embedding_size = 7 * 7 * 64
    logits = fc_layer(flattened, 7 * 7 * 64, 10, "fc")

  with tf.name_scope("xent"):
    xent = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(
            logits=logits, labels=y), name="xent")
    tf.summary.scalar("xent", xent)

  with tf.name_scope("train"):
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(xent)

  with tf.name_scope("accuracy"):
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    tf.summary.scalar("accuracy", accuracy)

  summ = tf.summary.merge_all()

  embedding = tf.Variable(tf.zeros([1024, embedding_size]), name="test_embedding")
  assignment = embedding.assign(embedding_input)
  saver = tf.train.Saver()

  sess.run(tf.global_variables_initializer())
  writer = tf.summary.FileWriter(LOGDIR + hparam)
  writer.add_graph(sess.graph)

  config = tf.contrib.tensorboard.plugins.projector.ProjectorConfig()
  embedding_config = config.embeddings.add()
  embedding_config.tensor_name = embedding.name
  embedding_config.sprite.image_path = LOGDIR + 'sprite_1024.png'
  embedding_config.metadata_path = LOGDIR + 'labels_1024.tsv'
  # Specify the width and height of a single thumbnail.
  embedding_config.sprite.single_image_dim.extend([28, 28])
  tf.contrib.tensorboard.plugins.projector.visualize_embeddings(writer, config)

  for i in range(2001):
    batch = mnist.train.next_batch(100)
    if i % 5 == 0:
      [train_accuracy, s] = sess.run([accuracy, summ], feed_dict={x: batch[0], y: batch[1]})
      writer.add_summary(s, i)
    if i % 500 == 0:
      sess.run(assignment, feed_dict={x: mnist.test.images[:1024], y: mnist.test.labels[:1024]})
      saver.save(sess, os.path.join(LOGDIR, "model.ckpt"), i)
    sess.run(train_step, feed_dict={x: batch[0], y: batch[1]})
def make_hparam_string(learning_rate, use_two_fc, use_two_conv):
  conv_param = "conv=2" if use_two_conv else "conv=1"
  fc_param = "fc=2" if use_two_fc else "fc=1"
  return "lr_%.0E,%s,%s" % (learning_rate, conv_param, fc_param)
def main():
  # You can try adding some more learning rates
  for learning_rate in [1E-4]:
    # Include "False" as a value to try different model architectures
    for use_two_fc in [True]:
      for use_two_conv in [True]:
        # Construct a hyperparameter string for each one (example: "lr_1E-03,conv=2,fc=2")
        hparam = make_hparam_string(learning_rate, use_two_fc, use_two_conv)
        print('Starting run for %s' % hparam)

        # Actually run with the new settings
        mnist_model(learning_rate, use_two_fc, use_two_conv, hparam)

if __name__ == '__main__':
  main()
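As the comments in main() suggest, the loops can be widened to compare several configurations in one TensorBoard instance. A sketch (the particular values are illustrative, not part of the original gist):

# Illustrative sweep: each combination writes to its own subdirectory of
# LOGDIR, named by its hyperparameter string.
for learning_rate in [1E-3, 1E-4, 1E-5]:
  for use_two_fc in [False, True]:
    for use_two_conv in [False, True]:
      hparam = make_hparam_string(learning_rate, use_two_fc, use_two_conv)
      print('Starting run for %s' % hparam)
      mnist_model(learning_rate, use_two_fc, use_two_conv, hparam)

Because every run gets its own subdirectory, launching tensorboard --logdir /tmp/mnist_tutorial overlays all of the runs for side-by-side comparison.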
@GoingMyWay commented Jun 30, 2017

Great job. After learning how to use TensorBoard, I can easily see the performance of the algorithm in a web browser.

@arnaldog12

The slides file is broken for me too.

@shekhovt commented Sep 20, 2017

Hi,

With this version of the code I am getting very poor training results, not at all like in the video.
[screenshot of training-accuracy plot omitted]

I have no idea why. This is with the default settings: 2 conv layers, 2 fc layers, learning rate 1e-4, Adam. Different runs may land at very different training accuracies, but more often a poor one, and never close to 1.

OK, after reading the other comments the problem is clear:
it is the ReLU + softmax activation on the output. The moved tutorial repository does not have this problem. Maybe you should take this one down.
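The issue being described, as a minimal before/after sketch: applying tf.nn.relu inside fc_layer clips the final layer's output to [0, inf), while tf.nn.softmax_cross_entropy_with_logits expects raw, unbounded logits. Keeping the output layer linear restores training:

# Before: fc_layer ended in tf.nn.relu(...), so the "logits" fed to the
# softmax cross-entropy were already clipped to [0, inf) -- training stalls.
logits = fc_layer(fc1, 1024, 10, "fc2")

# After: fc_layer returns tf.matmul(input, w) + b; ReLU is applied only to
# the hidden layer, and the logits stay linear.
relu = tf.nn.relu(fc1)
logits = fc_layer(relu, 1024, 10, "fc2")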

@Steven0706

This is an amazing TensorBoard example! Love it!

@bluesammer

Beautiful, relatable example for humans to comprehend the power of TensorBoard. Switching to my own data and use cases will be cool.

@cnzero commented Nov 14, 2017

@shekhovt Yes, the same problem for me. Adding a dropout layer between the two fully-connected layers makes the results better; give it a try, as in the sketch below.
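A minimal sketch of that suggestion, slotting into the use_two_fc branch of mnist_model (the keep_prob placeholder and the 0.5 rate are illustrative, not part of the original gist):

# Hypothetical dropout between fc1 and fc2; keep_prob would be fed as 0.5
# during training steps and 1.0 during evaluation.
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
fc1 = fc_layer(flattened, 7 * 7 * 64, 1024, "fc1")
relu = tf.nn.relu(fc1)
dropped = tf.nn.dropout(relu, keep_prob=keep_prob)
logits = fc_layer(dropped, 1024, 10, "fc2")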

@psvrao commented Jan 14, 2018

Embedding visualisation is not working for me. I can see both the label and sprite image files, but TensorBoard is unable to load them; it just says "loading" forever. I have downloaded the files from https://github.com/dandelionmane/tf-dev-summit-tensorboard-tutorial
The labels file does not have a header in the first line; it simply has a label (digit) in each row. Could that be the problem?
I am able to see all the other graphs without any issue. Any help appreciated.
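For what it's worth, a single-column metadata TSV is expected to have no header row (a header is only used with multiple columns), so that by itself should be fine. One thing worth checking is the projector_config.pbtxt that visualize_embeddings writes into the run directory: the sprite and metadata paths recorded there must resolve from wherever TensorBoard is launched. A hypothetical sanity check (the run-directory name is assumed):

# Print the projector config for one run and confirm the referenced
# sprite/metadata files actually exist on disk.
import os

run_dir = '/tmp/mnist_tutorial/lr_1E-04,conv=2,fc=2'  # assumed run directory
with open(os.path.join(run_dir, 'projector_config.pbtxt')) as f:
    print(f.read())

for fname in ('labels_1024.tsv', 'sprite_1024.png'):
    path = os.path.join('/tmp/mnist_tutorial/', fname)
    print(path, 'exists:', os.path.exists(path))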
