# Solution of the OpenAI Gym environment "CartPole-v0" (https://gym.openai.com/envs/CartPole-v0) using DQN and PyTorch.
# It is a slightly modified version of the PyTorch DQN tutorial from
# http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html.
# The main difference is that it does not take the rendered screen as input but simply uses the observation
# values returned by the environment.
# For results see: https://gym.openai.com/evaluations/eval_KYrmuUX4TWGOWYsJl8i6Kg
import gym
from gym import wrappers
import random
import math
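
# The remainder of the gist is truncated in this preview. Below is a minimal sketch of the
# idea described in the header comment: a Q-network that takes the 4 CartPole observation
# values as input (instead of a rendered screen) and outputs one Q-value per action, plus
# epsilon-greedy action selection. Layer sizes, hidden_size, and the helper names Network
# and select_action are illustrative assumptions, not necessarily the gist's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Network(nn.Module):
    """Fully-connected Q-network: 4 observation values in, 2 Q-values out."""

    def __init__(self, hidden_size=64):
        super().__init__()
        self.fc1 = nn.Linear(4, hidden_size)   # CartPole-v0 observation has 4 components
        self.fc2 = nn.Linear(hidden_size, 2)   # one Q-value per action (push left, push right)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return self.fc2(x)


def select_action(model, state, epsilon):
    """Epsilon-greedy action selection over the network's Q-values."""
    if random.random() < epsilon:
        return random.randrange(2)            # explore: random action
    with torch.no_grad():
        q_values = model(torch.tensor(state, dtype=torch.float32))
        return int(q_values.argmax().item())  # exploit: action with highest Q-value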