Paco Nathan ceteri

rllib rollout \
    tmp/ppo/cart/checkpoint_40/checkpoint-40 \
    --config "{\"env\": \"CartPole-v1\"}" \
    --run PPO \
    --steps 2000
__________________________________________________________________________________________________
Layer (type)                    Output Shape        Param #     Connected to
==================================================================================================
observations (InputLayer)       [(None, 4)]         0
__________________________________________________________________________________________________
fc_1 (Dense)                    (None, 256)         1280        observations[0][0]
__________________________________________________________________________________________________
fc_value_1 (Dense)              (None, 256)         1280        observations[0][0]
__________________________________________________________________________________________________
fc_2 (Dense)                    (None, 256)         65792       fc_1[0][0]
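As a quick sanity check on the Param # column above: a Dense layer holds input_dim * units weights plus units biases. A small stand-alone computation (my own helper, not part of the notebook) reproduces the counts shown for the 4-dimensional CartPole observation:

```python
def dense_params(input_dim, units):
    """Parameter count of a fully connected layer: weights + biases."""
    return input_dim * units + units

# CartPole-v1 observations are 4-dimensional
print(dense_params(4, 256))    # fc_1 and fc_value_1: 1280
print(dense_params(256, 256))  # fc_2: 65792
```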
N_ITER = 40
s = "{:3d} reward {:6.2f}/{:6.2f}/{:6.2f} len {:6.2f} saved {}"

for n in range(N_ITER):
    # one training iteration, then checkpoint the policy
    result = agent.train()
    file_name = agent.save(CHECKPOINT_ROOT)

    # min/mean/max episode reward, mean episode length, checkpoint path
    print(s.format(
        n + 1,
        result["episode_reward_min"],
        result["episode_reward_mean"],
        result["episode_reward_max"],
        result["episode_len_mean"],
        file_name
    ))
import os
import shutil
import ray
from ray.rllib.agents import ppo

ray.init(ignore_reinit_error=True)

SELECT_ENV = "CartPole-v1"
config = ppo.DEFAULT_CONFIG.copy()
config["log_level"] = "WARN"
agent = ppo.PPOTrainer(config, env=SELECT_ENV)

CHECKPOINT_ROOT = "tmp/ppo/cart"
shutil.rmtree(CHECKPOINT_ROOT, ignore_errors=True, onerror=None)
ray_results = os.getenv("HOME") + "/ray_results/"
shutil.rmtree(ray_results, ignore_errors=True, onerror=None)
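For reference, the status line built by the training loop's format string renders like this (the numbers below are illustrative, not real training results):

```python
s = "{:3d} reward {:6.2f}/{:6.2f}/{:6.2f} len {:6.2f} saved {}"

# iteration, reward min/mean/max, mean episode length, checkpoint path
line = s.format(1, 9.0, 22.5, 68.0, 22.5,
                "tmp/ppo/cart/checkpoint_1/checkpoint-1")
print(line)
```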
rllib rollout \
    tmp/ppo/froz/checkpoint_10/checkpoint-10 \
    --config "{\"env\": \"FrozenLake-v0\"}" \
    --run PPO \
    --steps 2000
_____________________________________________________________________________
Layer (type)                    Output Shape        Param #     Connected to
=============================================================================
observations (InputLayer)       [(None, 16)]        0
_____________________________________________________________________________
fc_1 (Dense)                    (None, 256)         4352        observations[0][0]
_____________________________________________________________________________
fc_value_1 (Dense)              (None, 256)         4352        observations[0][0]
_____________________________________________________________________________
fc_2 (Dense)                    (None, 256)         65792       fc_1[0][0]
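The [(None, 16)] input shape reflects that FrozenLake-v0's Discrete(16) observation gets one-hot encoded into a 16-vector before reaching the network, which also explains fc_1's 16 * 256 + 256 = 4352 parameters. A minimal stand-alone sketch of that encoding (plain Python, not RLlib's actual preprocessor):

```python
def one_hot(state, n=16):
    """Encode a discrete state id as an n-dimensional one-hot vector."""
    vec = [0.0] * n
    vec[state] = 1.0
    return vec

print(one_hot(5))  # 16 floats, a single 1.0 at index 5
```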
N_ITER = 10
s = "{:3d} reward {:6.2f}/{:6.2f}/{:6.2f} len {:6.2f} saved {}"

for n in range(N_ITER):
    # one training iteration, then checkpoint the policy
    result = agent.train()
    file_name = agent.save(CHECKPOINT_ROOT)

    # min/mean/max episode reward, mean episode length, checkpoint path
    print(s.format(
        n + 1,
        result["episode_reward_min"],
        result["episode_reward_mean"],
        result["episode_reward_max"],
        result["episode_len_mean"],
        file_name
    ))
import os
import shutil
import ray
from ray.rllib.agents import ppo

ray.init(ignore_reinit_error=True)

SELECT_ENV = "FrozenLake-v0"
config = ppo.DEFAULT_CONFIG.copy()
config["log_level"] = "WARN"
agent = ppo.PPOTrainer(config, env=SELECT_ENV)

CHECKPOINT_ROOT = "tmp/ppo/froz"
shutil.rmtree(CHECKPOINT_ROOT, ignore_errors=True, onerror=None)
ray_results = os.getenv("HOME") + "/ray_results/"
shutil.rmtree(ray_results, ignore_errors=True, onerror=None)
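The checkpoint paths passed to rllib rollout above follow the pattern <root>/checkpoint_<n>/checkpoint-<n> for training iteration n. A tiny helper illustrating that layout (the naming convention is read off the rollout commands in this gist, not taken from the RLlib API):

```python
def checkpoint_path(root, n):
    """Build the checkpoint path matching the rollout commands above."""
    return "{}/checkpoint_{}/checkpoint-{}".format(root, n, n)

print(checkpoint_path("tmp/ppo/cart", 40))  # tmp/ppo/cart/checkpoint_40/checkpoint-40
print(checkpoint_path("tmp/ppo/froz", 10))  # tmp/ppo/froz/checkpoint_10/checkpoint-10
```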