@awjuliani
Last active July 18, 2023 19:18
An implementation of a Deep Recurrent Q-Network in Tensorflow.
@chhung3
chhung3 commented Jul 28, 2017

Thank you very much for the code.

I have a question. It seems that the "deep" part is in the CNN rather than in the recurrent part. I also found that multi-layer RNNs don't seem to be very popular; I could hardly find any examples on the web. Is that true?
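For anyone wondering what a deeper recurrent stack would look like: the idea is simply to feed each layer's hidden state at time t as the next layer's input at the same timestep. The sketch below is not from the gist; it uses plain numpy with simplified tanh cells (the gist uses an LSTM cell, and in its TensorFlow 1.x API stacking would be done with `tf.contrib.rnn.MultiRNNCell`).

```python
import numpy as np

def rnn_step(x, h, W_x, W_h):
    """One tanh RNN cell step (a simplified stand-in for an LSTM cell)."""
    return np.tanh(x @ W_x + h @ W_h)

def stacked_rnn(inputs, weights):
    """Run a stack of RNN layers over a sequence.

    inputs:  array of shape (T, input_dim)
    weights: list of (W_x, W_h) pairs, one per layer
    Stacking is what would make the *recurrent* part deep: each
    layer's hidden state at time t is the next layer's input at time t.
    """
    T = inputs.shape[0]
    states = [np.zeros(W_h.shape[0]) for _, W_h in weights]
    outputs = []
    for t in range(T):
        layer_in = inputs[t]
        for i, (W_x, W_h) in enumerate(weights):
            states[i] = rnn_step(layer_in, states[i], W_x, W_h)
            layer_in = states[i]      # feed upward to the next layer
        outputs.append(layer_in)      # top layer's state is the output
    return np.stack(outputs)

rng = np.random.default_rng(0)
weights = [(rng.standard_normal((4, 8)) * 0.1, rng.standard_normal((8, 8)) * 0.1),
           (rng.standard_normal((8, 8)) * 0.1, rng.standard_normal((8, 8)) * 0.1)]
seq = rng.standard_normal((5, 4))   # 5 timesteps, 4 features each
out = stacked_rnn(seq, weights)
print(out.shape)  # (5, 8)
```

In practice a single LSTM layer on top of a deep CNN is the common DRQN configuration; stacked recurrent layers are used less often, which may be why examples are hard to find.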

@samsenyang

Thanks for this amazing code!
Why did you define the LSTM cell outside the Qnetwork class rather than inside it?

@samsenyang

samsenyang commented Apr 27, 2019

And why did you split the outputs of the recurrent layer? Can I use the outputs of the recurrent layer directly as the inputs of the advantage layer and the value layer?
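The split is part of the dueling-DQN head: the recurrent output is divided into two halves, one feeding the advantage stream and one feeding the value stream, which are then recombined into Q-values. The numpy sketch below mirrors that pattern (in the gist this is `tf.split` on the LSTM output); all shapes here are illustrative assumptions, not the gist's actual sizes.

```python
import numpy as np

rng = np.random.default_rng(1)
h_size, n_actions = 8, 4
rnn_out = rng.standard_normal((1, h_size))   # one timestep of LSTM output

# Split the features into two halves, one per dueling stream
# (this mirrors tf.split(rnn_out, 2, axis=1) in TF 1.x).
streamA, streamV = np.split(rnn_out, 2, axis=1)
AW = rng.standard_normal((h_size // 2, n_actions))  # advantage weights
VW = rng.standard_normal((h_size // 2, 1))          # value weights

advantage = streamA @ AW   # shape (1, n_actions)
value = streamV @ VW       # shape (1, 1)

# Dueling combination: Q = V + (A - mean(A))
q = value + (advantage - advantage.mean(axis=1, keepdims=True))
print(q.shape)  # (1, 4)
```

You could feed the full recurrent output into both streams instead (with full-width weight matrices); splitting simply gives each stream its own disjoint half of the features, which is how the dueling architecture is often written.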

@samsenyang

I don't see the connection between targetQN and targetOps in the code, so I would really like to know how exactly updateTarget(targetOps, sess) updates the parameters of targetQN.
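If the gist follows the usual pattern for this, the connection is positional: because mainQN is constructed before targetQN, the first half of `tf.trainable_variables()` belongs to the main network and the second half to the target network. `updateTargetGraph` pairs them up and builds assign ops that write `tau * main + (1 - tau) * target` into each target variable, and `updateTarget` just runs those ops in the session. A minimal numpy sketch of that soft update (my own illustration, not the gist's code):

```python
import numpy as np

def update_target_graph(variables, tau):
    """Pair variables positionally: first half = main net, second half = target net.
    Returns (main_var, target_var, tau) triples standing in for TF assign ops."""
    half = len(variables) // 2
    return [(variables[i], variables[half + i], tau) for i in range(half)]

def update_target(ops):
    """Apply each 'op': soft-update the target variable toward the main one."""
    for main_v, target_v, tau in ops:
        target_v[:] = tau * main_v + (1.0 - tau) * target_v  # in-place update

# Toy variable list: one main weight array followed by one target weight array.
main_w = np.ones(3)
target_w = np.zeros(3)
ops = update_target_graph([main_w, target_w], tau=0.5)
update_target(ops)
print(target_w)  # [0.5 0.5 0.5]
```

So targetQN is never referenced by name: targetOps holds assign ops whose targets *are* targetQN's variables, and running them in the session mutates those variables directly.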
