Paco Nathan ceteri

rllib rollout \
    tmp/ppo/taxi/checkpoint_30/checkpoint-30 \
    --config "{\"env\": \"Taxi-v3\"}" \
    --run PPO \
    --steps 2000
___________________________________________________________________________
Layer (type)               Output Shape     Param #   Connected to
===========================================================================
observations (InputLayer)  [(None, 500)]    0
___________________________________________________________________________
fc_1 (Dense)               (None, 256)      128256    observations[0][0]
___________________________________________________________________________
fc_value_1 (Dense)         (None, 256)      128256    observations[0][0]
___________________________________________________________________________
fc_2 (Dense)               (None, 256)      65792     fc_1[0][0]
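As a sanity check on the summary above, a Keras Dense layer has input_dim * units weights plus units biases. A quick pure-Python check (no TensorFlow needed) reproduces the Param # column:

```python
# Parameter count for a Dense layer: weights (in_dim * units) plus biases (units).
def dense_params(in_dim, units):
    return in_dim * units + units

# fc_1 and fc_value_1 both map the 500-dim one-hot Taxi observation to 256 units.
print(dense_params(500, 256))  # 128256, matching fc_1 and fc_value_1
# fc_2 maps the 256-dim hidden layer to another 256 units.
print(dense_params(256, 256))  # 65792, matching fc_2
```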
# inspect the policy network that PPO trained
policy = agent.get_policy()
model = policy.model
model.base_model.summary()
tensorboard --logdir=$HOME/ray_results/
N_ITER = 30
s = "{:3d} reward {:6.2f}/{:6.2f}/{:6.2f} len {:6.2f} saved {}"

for n in range(N_ITER):
    result = agent.train()
    file_name = agent.save(CHECKPOINT_ROOT)
    print(s.format(
        n + 1,
        result["episode_reward_min"],
        result["episode_reward_mean"],
        result["episode_reward_max"],
        result["episode_len_mean"],
        file_name
    ))
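The format string `s` above packs min/mean/max episode reward, mean episode length, and the checkpoint path into one status line per iteration. With illustrative made-up values (not real training results), it renders like this:

```python
s = "{:3d} reward {:6.2f}/{:6.2f}/{:6.2f} len {:6.2f} saved {}"

# Hypothetical numbers for one Taxi-v3 iteration, just to show the layout.
line = s.format(1, -200.0, -150.5, -100.0, 195.2, "tmp/ppo/taxi/checkpoint_1")
print(line)
# →   1 reward -200.00/-150.50/-100.00 len 195.20 saved tmp/ppo/taxi/checkpoint_1
```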
SELECT_ENV = "Taxi-v3"
config = ppo.DEFAULT_CONFIG.copy()
config["log_level"] = "WARN"
agent = ppo.PPOTrainer(config, env=SELECT_ENV)
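The `.copy()` on `DEFAULT_CONFIG` matters because RLlib's default config is a shared module-level dict; mutating it in place would leak the change into every trainer created afterwards. A minimal illustration with a plain dict standing in for the real config:

```python
# Stand-in for ppo.DEFAULT_CONFIG: a shared, module-level settings dict.
DEFAULT_CONFIG = {"log_level": "INFO", "num_workers": 2}

# Copy before customizing, so the shared defaults stay untouched.
config = DEFAULT_CONFIG.copy()
config["log_level"] = "WARN"

print(DEFAULT_CONFIG["log_level"])  # still "INFO"
print(config["log_level"])          # "WARN"
```

Note that `dict.copy()` is shallow; the real RLlib config nests dicts (e.g. model settings), so nested values would still be shared unless deep-copied.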
import os
import shutil

# clear out any checkpoints and TensorBoard logs from previous runs
CHECKPOINT_ROOT = "tmp/ppo/taxi"
shutil.rmtree(CHECKPOINT_ROOT, ignore_errors=True, onerror=None)

ray_results = os.getenv("HOME") + "/ray_results/"
shutil.rmtree(ray_results, ignore_errors=True, onerror=None)
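The `ignore_errors=True` flag makes this cleanup safe on a first run, when the directories don't exist yet. A self-contained sketch using a throwaway temp path:

```python
import os
import shutil
import tempfile

scratch = os.path.join(tempfile.mkdtemp(), "tmp", "ppo", "taxi")

# Removing a directory that doesn't exist would normally raise FileNotFoundError;
# ignore_errors=True turns it into a no-op, so the script reruns cleanly.
shutil.rmtree(scratch, ignore_errors=True)

os.makedirs(scratch)
shutil.rmtree(scratch, ignore_errors=True)  # removes it for real this time
print(os.path.exists(scratch))  # False
```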
print("Dashboard URL: http://{}".format(ray.get_webui_url()))
import ray
import ray.rllib.agents.ppo as ppo
ray.shutdown()
ray.init(ignore_reinit_error=True)
@ceteri
ceteri / rl1.py
Created July 7, 2020 05:12
rl1.py
pip install "ray[rllib]"
pip install gym
pip install tensorflow