@eladshabi
Last active February 10, 2019 14:33
Create a model using the float16 data type
# source: https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html
import tensorflow as tf

def create_simple_model(nbatch, nin, nout, dtype):
    """A simple softmax model."""
    data = tf.placeholder(dtype, shape=(nbatch, nin))
    weights = tf.get_variable('weights', (nin, nout), dtype)
    biases = tf.get_variable('biases', nout, dtype,
                             initializer=tf.zeros_initializer())
    logits = tf.matmul(data, weights) + biases
    # Note: the softmax should be computed in float32 precision,
    # so the float16 logits are cast up before the loss.
    target = tf.placeholder(tf.float32, shape=(nbatch, nout))
    loss = tf.losses.softmax_cross_entropy(target, tf.cast(logits, tf.float32))
    return data, target, loss
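The cast to float32 before the softmax is the numerically important step: exponentiating and summing in float16 can overflow or lose precision. A minimal NumPy sketch (not part of the gist, and independent of TensorFlow) of the same pattern, computing softmax cross-entropy on float16 logits after casting them up:

```python
import numpy as np

def softmax_cross_entropy(labels, logits):
    # Compute in float32 for numerical stability, mirroring the
    # tf.cast(logits, tf.float32) in the model above.
    logits = logits.astype(np.float32)
    # Subtract the row max so exp() cannot overflow.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Mean cross-entropy over the batch.
    return -(labels * log_probs).sum(axis=1).mean()

# Hypothetical inputs: float16 logits, one-hot float32 targets.
logits = np.array([[2.0, 1.0, 0.1]], dtype=np.float16)
labels = np.array([[1.0, 0.0, 0.0]], dtype=np.float32)
loss = softmax_cross_entropy(labels, logits)
```

The same storage-in-float16 / reduce-in-float32 split is what the TensorFlow code achieves by keeping `weights` and `biases` in `dtype` while handing `tf.losses.softmax_cross_entropy` a float32 tensor.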