@awjuliani
Created August 25, 2016 20:30
Basic Q-Learning algorithm using Tensorflow

matlabninja commented Feb 23, 2021

I've come across an oddity that I'm having trouble understanding. Running this code as written in TensorFlow works just fine for me, but re-implementing it in another framework (PyTorch, Keras) left me with a network that seemed unable to learn the game. It looks to me like the randomly initialized weights in the linear layer pass on bogus future-reward estimates when the agent loses the game.
Put another way, when the agent lost the game, the target for that action was not just the reward of 0: it was 0 plus the discounted max future reward of a "next step" that never happens, because the episode had already ended. I was able to get the agent to learn the game with this modification:

        if d:
            # Terminal step: the target is just the reward, no bootstrapping
            targetQ[0,a] = r
        else:
            targetQ[0,a] = r + y*maxQ1

Based on the explanation I've come up with, this modification makes perfect sense to me, but I'm left wondering why the example here does not have the same issue. Thoughts?
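
In other words, the fix is just the standard Q-learning target with the terminal step masked out. Written as a standalone sketch (the helper name and signature here are mine, purely illustrative):

def bellman_target(r, max_q_next, done, gamma=0.99):
    # Q-learning target: bootstrap from the next state's max Q only while the
    # episode is still running; at a terminal step the target is just the reward.
    return r + gamma * max_q_next * (0.0 if done else 1.0)

# e.g. the update above becomes: targetQ[0,a] = bellman_target(r, maxQ1, d, gamma=y)
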
Reference code:

import numpy as np
import gym
import torch
import torch.nn as nn
import torch.optim as optim

# FrozenLake environment (assuming the classic 'FrozenLake-v0' id and the old
# gym API used in the gist, where env.reset() returns an int state and
# env.step() returns a 4-tuple)
env = gym.make('FrozenLake-v0')

# Linear Q-network: one-hot state vector (16) in, one Q-value per action (4) out
class FrozenLakeNet(nn.Module):
    def __init__(self):
        super(FrozenLakeNet,self).__init__()
        self.fc = nn.Linear(16,4,bias=False)
        self.fc.weight.data.uniform_(0,.01)
    def forward(self,xIn):
        return self.fc(xIn)

# Create list of state vectors on the device
states = []
device = torch.device('cuda:0')
for s in range(16):
    sv = torch.tensor(np.identity(16)[s:s+1].astype(np.float32))
    svG = sv.to(device)
    states.append(svG)

# Initialize network
net = FrozenLakeNet()
net.to(device)
# Setup loss
criterion = nn.MSELoss(reduction='sum')
# Setup optimizer
optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0)

# Set learning parameters
y = .99
num_episodes = 50000
#create lists to contain total rewards and steps per episode
jList = []
rList = []
e = .25
randSel = 0
tot = 0
for i in range(num_episodes):
    #Reset environment and get first new observation
    s = env.reset()
    rAll = 0
    d = False
    j = 0
    sSeq = []
    # Set random epsilon for episode
    #eps = 1-(i/num_episodes)
    #The Q-Table learning algorithm
    while j < 99:
        j+=1
        # Zero gradients
        optimizer.zero_grad()
        #Choose an action greedily (with probability e of a random action) from the Q-network
        #a,allQ = sess.run([predict,Qout],feed_dict={inputs1:np.identity(16)[s:s+1]})
        allQ = net(states[s])
        # Convert the state to an action
        a = int(torch.argmax(allQ).cpu().detach())
        tot += 1
        if np.random.rand(1) < e:
            randSel+=1
            a = env.action_space.sample()
        #Get new state and reward from environment
        s1,r,d,_ = env.step(a)
        # Get predicted Q values for the new state, kept out of the autograd graph
        # so the target acts as a constant (like the nextQ placeholder in the TF version)
        #Q1 = sess.run(Qout,feed_dict={inputs1:np.identity(16)[s1:s1+1]})
        with torch.no_grad():
            Q1 = net(states[s1])
        # Get the value of the 'best' action from the network
        maxQ1 = torch.max(Q1)
        # Build the target Q from the current state's predictions
        targetQ = allQ.clone().detach()
        # Update the target Q with new information
        if d:
            targetQ[0,a] = r
        else:
            targetQ[0,a] = r + y*maxQ1 ### Using this reward regardless of "done" output results in not learning the game.
        #Train our network using target and predicted Q values
        #_,W1 = sess.run([updateModel,W],feed_dict={inputs1:np.identity(16)[s:s+1],nextQ:targetQ})
        # Compute the loss between predicted and target Q values
        loss = criterion(allQ,targetQ)
        # Compute gradients
        loss.backward()
        # Apply learnings
        optimizer.step()
        rAll += r
        s = s1
        
        if d:
            e = 1./((i/50.) + 4)
            break
    jList.append(j)
    rList.append(rAll)
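
To check whether the agent actually learned anything, the lists built above give a quick success-rate readout (FrozenLake only returns a reward of 1 on a winning episode, so the mean of rList over a window is a success rate):

# Fraction of the last 1000 episodes that reached the goal
print("Success rate over last 1000 episodes:", sum(rList[-1000:]) / 1000.0)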
