@rajshah4
Last active March 4, 2018 01:43
RNN_Addition_1stgrade
@genho

genho commented Apr 21, 2016

Hi Rajiv, thank you so much for your example.
May I know the reason for using seq2seq.rnn_decoder() in your code? I've tried searching the web, and most of the examples I find relate to language translation. I really can't find documentation that covers the TensorFlow seq2seq class.
Thanks a lot.

@rajshah4
Author

I found the documentation deep in the TensorFlow ops code. It explains how the decoder operates. Does this help?
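Roughly, the wiring looks like this. This is only a sketch against the old tf.nn.seq2seq API from TF 0.x (it later moved to tf.contrib.legacy_seq2seq), and the sizes here (batch_size, seq_len, num_units, input_size) are placeholder values, not the ones from the gist:

```python
import tensorflow as tf

batch_size, seq_len, num_units, input_size = 32, 10, 128, 12

# rnn_decoder expects a Python list of [batch_size x input_size]
# tensors, one per time step, rather than a single 3-D tensor.
decoder_inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
                  for _ in range(seq_len)]

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
initial_state = cell.zero_state(batch_size, tf.float32)

# Runs the cell once per input and returns one output per time step
# plus the final state. With the default loop_function=None it simply
# feeds each provided input to the cell (teacher forcing); pass a
# loop_function to feed the previous output back in instead.
outputs, final_state = tf.nn.seq2seq.rnn_decoder(
    decoder_inputs, initial_state, cell)
```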

@genho

genho commented May 9, 2016

Got it. Thank you so much 👍

@JackMedley

Hi Rajiv,
Can I ask what the purpose of the dropout layer is in a problem such as this? When training for something like addition, don't we need to know all of the inputs?
Thanks,
Jack

@rajshah4
Author

Hmm, it's a good question. This was one of my first RNNs and I just grabbed code from other projects. My guess is that it works like dropout generally does: it helps against overfitting, pushing the network toward learning how addition works rather than memorizing examples. If you have the time, I'd be curious whether playing around with the dropout rate bears that out.
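If you do try it, the usual pattern in the old TF API is to wrap the cell in a DropoutWrapper and feed a keep probability below 1.0 only during training (the placeholder here is illustrative):

```python
import tensorflow as tf

num_units = 128

# Feed e.g. 0.5 while training and 1.0 at evaluation time, so dropout
# only perturbs the network while it is learning.
keep_prob = tf.placeholder(tf.float32)

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
cell = tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=keep_prob)
```

Note that this drops activations on the cell's outputs, not on the recurrent state, so the network still sees every input digit; it just can't lean too hard on any single hidden unit.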
