@awjuliani
Last active October 25, 2022 07:57
Q-Table learning in OpenAI grid world.
axb2035 commented May 7, 2019

Thanks for the code. Question: FrozenLake returns r=1 when the agent reaches the goal square and r=0 in all other cases. If rAll is meant to be a running sum of total reward, shouldn't it be initialized once before the episode loop starts? Otherwise it is redundant, since you could just write rList.append(r)...
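
For reference, the bookkeeping in question looks roughly like this (a minimal sketch reconstructed from the gist's FrozenLake Q-table loop, using the old gym step API of the era; exact hyperparameters and details may differ):

```python
import gym
import numpy as np

env = gym.make('FrozenLake-v0')
Q = np.zeros([env.observation_space.n, env.action_space.n])
lr, y, num_episodes = 0.8, 0.95, 2000

rList = []                          # one total reward per episode
for i in range(num_episodes):
    s = env.reset()
    rAll = 0                        # re-initialized at the top of every episode
    for j in range(99):
        # greedy action plus decaying exploration noise
        a = np.argmax(Q[s, :] + np.random.randn(1, env.action_space.n) * (1.0 / (i + 1)))
        s1, r, done, _ = env.step(a)
        # tabular Q-learning update
        Q[s, a] = Q[s, a] + lr * (r + y * np.max(Q[s1, :]) - Q[s, a])
        rAll += r                   # running sum within this episode only
        s = s1
        if done:
            break
    rList.append(rAll)

# Because FrozenLake's only nonzero reward arrives on the terminal step,
# rAll ends each episode equal to the final r, which is why rList.append(r)
# would record the same values here.
```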

j-w-yun commented May 26, 2019

> Thanks for the code. Question: FrozenLake returns r=1 when the agent reaches the goal square and r=0 in all other cases. If rAll is meant to be a running sum of total reward, shouldn't it be initialized once before the episode loop starts? Otherwise it is redundant, since you could just write rList.append(r)...

rAll is used to calculate the average reward per episode at the end of training, which I don't think is helpful on its own. It makes more sense to calculate moving averages instead, to track how the rate of success changes over the training episodes. That information is especially useful when coupled with the historical values of j (the number of steps in each episode) to visualize how episode length changes during training; I don't know why that part was commented out in the code. Anyway, to calculate moving averages of the model's success rate, you need either to keep track of j or to add up the rewards in each episode (here the total is just 0 or 1, but other problems may be more complex: in chess, say, the episode reward might vary with pieces captured, draw/loss, etc.), and to be able to look up a given episode by its index in rList.
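
As a concrete sketch of that idea (assuming rList as in the gist, plus a hypothetical jList that records j at the end of each episode; the window size is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

window = 100  # arbitrary smoothing window

# Rolling mean of per-episode reward; since FrozenLake pays 0 or 1,
# this is the success rate over the last `window` episodes.
success_rate = np.convolve(rList, np.ones(window) / window, mode='valid')

# Same treatment for per-episode step counts (jList is assumed to be
# filled with jList.append(j) at the end of each episode).
avg_steps = np.convolve(jList, np.ones(window) / window, mode='valid')

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(success_rate)
ax1.set_ylabel('success rate')
ax2.plot(avg_steps)
ax2.set_ylabel('steps per episode')
ax2.set_xlabel('episode')
plt.show()
```

Plotting both on a shared episode axis makes it easy to see whether improvements in the success rate coincide with changes in episode length.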
