RNN_Addition_1stgrade
I found the documentation deep in the TensorFlow ops code.
Got it. Thank you so much!
Hmm, it's a good question. This was one of my first RNNs and I just grabbed code from other projects. I'm thinking it would work like dropout generally: it would help against overfitting and push the network toward a better sense of how addition works. If you have the time, I'd be curious whether playing around with the dropout rate confirms that.
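For anyone following along, the "inverted dropout" idea mentioned above can be sketched in a few lines of NumPy. This is a hedged illustration of the general technique, not the code from this gist; the `dropout` function and its parameters are mine for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, training=True):
    # Inverted dropout: zero a fraction `rate` of units during training
    # and rescale survivors by 1/(1-rate) so the expected activation
    # is unchanged. At inference time the input passes through as-is.
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

h = np.ones(1000)          # stand-in for an RNN hidden activation
out = dropout(h, rate=0.5)  # roughly half zeroed, survivors scaled to 2.0
```

Because a different random subset of units is silenced on every step, no single unit can be relied on, which is what discourages overfitting.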
Hi Rajiv, thank you so much for your example.
May I know the reason for using seq2seq.rnn_decoder() in your code? I've tried searching the web, but most of the examples relate to language translation, and I really cannot find documentation for the TensorFlow seq2seq class.
Thanks a lot.
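In case it helps later readers: conceptually, an rnn_decoder-style function just loops over the decoder inputs, feeding each input plus the evolving hidden state through the RNN cell and collecting the outputs. A minimal NumPy sketch of that idea follows; the toy `cell` and the function signature are illustrative assumptions, not the actual TensorFlow API:

```python
import numpy as np

def rnn_decoder(decoder_inputs, initial_state, cell):
    """Sketch of a decoder loop: run `cell` once per input step,
    threading the hidden state through, and collect each output."""
    state = initial_state
    outputs = []
    for x in decoder_inputs:
        output, state = cell(x, state)
        outputs.append(output)
    return outputs, state

def cell(x, state, w_in=0.5, w_rec=0.5):
    # Toy vanilla-RNN cell: the new state mixes the current input
    # with the previous state; the output is the state itself.
    new_state = np.tanh(w_in * x + w_rec * state)
    return new_state, new_state

inputs = [np.array([1.0]), np.array([0.5]), np.array([-1.0])]
outputs, final_state = rnn_decoder(inputs, np.zeros(1), cell)
```

That same loop underlies both translation models and this addition example; only the cell and the inputs differ.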