Q-Table learning in OpenAI grid world.
OscarBalcells commented Sep 29, 2018

Hello Juliani, thanks for the nice post on Medium. I know this code is already quite old, but I still wanted to ask you a question anyway. In the update Q[s,a] = Q[s,a] + lr*( r + y*np.max(Q[s1,:]) - Q[s,a] ), gamma is in theory multiplied by the expected future reward after taking action a, yet in the code it multiplies the largest value among the next state's Q-values, np.max(Q[s1,:]). Am I misunderstanding "plus the maximum discounted (γ) future reward expected according to our own table for the next state (s’) we would end up in", or is there a mistake in the code? (I'm probably wrong, haha)
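For reference, a minimal sketch of the update being discussed (variable names lr and y follow the gist; the dimensions and hyperparameter values here are assumptions for illustration):

```python
import numpy as np

n_states, n_actions = 16, 4  # FrozenLake 4x4 grid: 16 states, 4 actions
Q = np.zeros((n_states, n_actions))
lr, y = 0.8, 0.95  # learning rate and discount factor (assumed values)

def q_update(Q, s, a, r, s1):
    # np.max(Q[s1, :]) is the table's current estimate of the best
    # achievable return from the next state s1; discounting it by y is
    # exactly the "maximum discounted future reward" term in the target.
    Q[s, a] = Q[s, a] + lr * (r + y * np.max(Q[s1, :]) - Q[s, a])
    return Q

# one update from a transition (s=0, a=1) -> (r=1.0, s1=1)
Q = q_update(Q, s=0, a=1, r=1.0, s1=1)
print(Q[0, 1])  # 0.8, since the table was all zeros before the update
```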

alexandervandekleut commented Jan 4, 2019

Hey! I was trying to figure out why my implementation of this wasn't working, and I found that this code only works if you add noise. Even epsilon-greedy approaches fail to collect any reward; removing + np.random.randn(1,env.action_space.n)*(1./(i+1)) results in 0 reward. I understand the importance of visiting as many (s, a) pairs as possible, but it seems strange to me that the whole process depends so heavily on noise.

tykurtz commented Jan 14, 2019

@alexandervandekleut

It makes sense that the randomness is necessary. Without it, a = np.argmax(Q[s,:]) always returns 0 (move left), because Q is initialized to all zeros in this setup. Since the reward is only given when the goal is reached, with no intermediate rewards, there is never any feedback to update Q unless the agent reaches the goal at least once, which can't happen if it never tries to move right.
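The tie-breaking point can be illustrated directly (a sketch, not code from the gist; the table shape and noise term mirror the gist's setup):

```python
import numpy as np

Q = np.zeros((16, 4))  # zero-initialized table, as in the gist
s = 0

# np.argmax breaks ties by returning the first maximal index, so on an
# all-zero row the greedy action is always 0 ("move left").
a_greedy = np.argmax(Q[s, :])

# Adding decaying noise breaks ties randomly, so the agent eventually
# samples every action and can stumble onto the goal at least once.
i = 0  # episode index
a_noisy = np.argmax(Q[s, :] + np.random.randn(1, 4) * (1.0 / (i + 1)))

print(a_greedy)  # always 0 on a zero-initialized table
```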

axb2035 commented May 7, 2019

Thanks for the code. Question: FrozenLake returns r=1 when the agent reaches the last square and r=0 in all other cases. If rAll is meant to be the running sum of total reward, shouldn't it be initialized before starting all the episodes? Otherwise it is redundant, as you could just have rList.append(r)...

Jaewan-Yun commented May 26, 2019

> Thanks for the code. Question: FrozenLake returns r=1 when the agent reaches the last square and r=0 in all other cases. If rAll is meant to be the running sum of total reward, shouldn't it be initialized before starting all the episodes? Otherwise it is redundant, as you could just have rList.append(r)...

rAll is used to calculate the average reward per episode at the end of training, which I don't think is very helpful. It makes more sense to calculate moving averages to track how the success rate changes over training episodes. That information is especially useful alongside the historical values of j (the number of steps in each episode) for visualizing how the number of steps changes during training; I don't know why that part was commented out in the code. Anyway, to compute a moving average of the success rate, you either need to keep track of j or sum the rewards within each episode (here the total is just 0 or 1, but other problems are more complex: in chess, for example, the episode reward may vary with pieces captured or a draw versus a loss), and you need to be able to refer to each episode's total by its index through rAll.
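A moving average over per-episode totals could be computed like this (a sketch; r_list here stands in for the list of per-episode reward sums the gist already collects in rList, and the window size is an assumption):

```python
import numpy as np

def moving_average(rewards, window=100):
    # Sliding-window mean over per-episode totals; for FrozenLake each
    # total is 0 or 1, so this is the fraction of recent successful runs.
    rewards = np.asarray(rewards, dtype=float)
    if len(rewards) < window:
        # not enough episodes yet: fall back to a running mean
        return rewards.cumsum() / (np.arange(len(rewards)) + 1)
    kernel = np.ones(window) / window
    return np.convolve(rewards, kernel, mode="valid")

# e.g. 200 episodes where the agent starts succeeding halfway through
r_list = [0] * 100 + [1] * 100
ma = moving_average(r_list, window=100)
print(ma[0], ma[-1])  # 0.0 at the start, 1.0 once every recent run succeeds
```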
