@ziwenhan
ziwenhan / gist:89b76b00417eff428a6bc9a635487b52
Created November 14, 2018 06:26
Deep-Learning NaN loss reasons
https://stackoverflow.com/questions/40050397/deep-learning-nan-loss-reasons
There are lots of things I have seen that can make a model diverge.
Too high a learning rate. You can often tell this is the case if the loss begins to increase and then diverges to infinity.
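
For illustration, a minimal sketch (plain NumPy, hypothetical values, not any particular model) of how an oversized learning rate makes the loss grow instead of shrink, using gradient descent on a one-parameter quadratic loss:

import numpy as np

def run(lr, steps=8):
    w = np.float64(5.0)           # single parameter, loss = w**2
    for step in range(steps):
        grad = 2.0 * w            # d(w**2)/dw
        w = w - lr * grad         # plain gradient descent update
        print(f"lr={lr:<4} step={step} loss={w * w:.3e}")

run(0.1)   # converges: loss shrinks toward 0
run(1.5)   # diverges: |w| doubles every step, loss heads toward inf

With lr=1.5 the parameter flips sign and doubles in magnitude on every update, which is the same runaway pattern that shows up as a loss curve racing to infinity (and eventually NaN) in a real model.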
I am not too familiar with the DNNClassifier, but I am guessing it uses the categorical cross-entropy cost function. This involves taking the log of the prediction, which diverges as the prediction approaches zero. That is why people usually add a small epsilon value to the prediction to prevent this divergence. I am guessing the DNNClassifier probably does this or uses the TensorFlow op for it. Probably not the issue.
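
As a sketch of the log-of-zero problem (plain NumPy, not the actual DNNClassifier internals), clipping the prediction away from zero with a small epsilon keeps the cross-entropy finite:

import numpy as np

def cross_entropy(y_true, y_pred, eps=0.0):
    if eps:
        y_pred = np.clip(y_pred, eps, 1.0 - eps)   # keep log() away from 0
    return float(-np.sum(y_true * np.log(y_pred)))

y_true = np.array([0.0, 1.0])
y_pred = np.array([1.0, 0.0])                      # confident and completely wrong

print(cross_entropy(y_true, y_pred))               # inf, since log(0) = -inf (NumPy warns about the divide by zero)
print(cross_entropy(y_true, y_pred, eps=1e-7))     # ~16.1, large but finite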
Other numerical stability issues can exist, such as division by zero, where adding the epsilon can help. Another less obvious one is the square root, whose derivative can diverge if it is not properly simplified when dealing with finite-precision numbers. Yet again, I doubt this is the issue in the case of the DNNClassifier.
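
A small sketch of the square-root case (plain NumPy, the derivative written out by hand, not what any particular framework does internally): d/dx sqrt(x) = 1 / (2*sqrt(x)), which blows up as x approaches zero, and a small epsilon inside the root keeps it finite:

import numpy as np

x = np.array([1.0, 1e-12, 0.0])

grad = 1.0 / (2.0 * np.sqrt(x))             # analytic derivative of sqrt(x)
print(grad)                                 # [0.5, 5e+05, inf]  <- inf at x = 0

eps = 1e-7
grad_safe = 1.0 / (2.0 * np.sqrt(x + eps))  # epsilon keeps the denominator > 0
print(grad_safe)                            # all finite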