@sayakpaul
Created November 20, 2019 13:14
@tlkh commented Nov 20, 2019

Note that just calling `opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, "dynamic")` alone does not activate mixed precision. Moving forward, you should call `tf.keras.mixed_precision.experimental.set_policy('mixed_float16')` before constructing the model. To fix the current "dtype" error, set the dtype of the softmax layer to `tf.float32` manually.
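
A minimal sketch of what that suggestion could look like, assuming the experimental mixed-precision API referenced above (TF 2.x, circa 2019); the model architecture, optimizer, and loss here are placeholders, not the gist's actual code:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Set the global policy *before* building the model so layers are created
# with float16 compute and float32 variables.
tf.keras.mixed_precision.experimental.set_policy('mixed_float16')

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10),
    # Keep the final softmax in float32 for numerical stability,
    # which also avoids the dtype error mentioned above.
    layers.Activation('softmax', dtype='float32'),
])

# Wrap the optimizer with dynamic loss scaling.
opt = tf.keras.optimizers.Adam()
opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, 'dynamic')

model.compile(optimizer=opt,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```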

@sayakpaul (Author)

Thanks for passing this along @tlkh. Will incorporate your suggestions and report back.

@tlkh commented Nov 20, 2019

@sayakpaul can you also test with larger images? CIFAR's 32x32 images are "too small".

@sayakpaul (Author)

@tlkh will do! Your materials here (https://github.com/NVAITC/pycon-sg19-tensorflow-tutorial) are just fantastic. Do you mind if I use them in my decks with a proper citation?

@tlkh commented Nov 20, 2019

@sayakpaul no problem! Please feel free to modify and use for your own purposes.
