@Tonylin1998
Created June 28, 2022 08:31
mtrl error
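
Summary: running the MTRL state-SAC baseline on Metaworld MT10 (command below) crashes while building the environments with `AttributeError: can't set attribute`, raised from mtenv's environment registration. The full Hydra config dump and tracebacks follow; a tentative diagnosis and a workaround sketch are at the end.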
$ PYTHONPATH=. python3 -u main.py setup=metaworld agent=state_sac env=metaworld-mt10 agent.multitask.num_envs=10 agent.multitask.should_use_disentangled_alpha=True
/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/gym/envs/registration.py:416: UserWarning: WARN: The `registry.env_specs` property along with `EnvSpecTree` is deprecated. Please use `registry` directly as a dictionary instead.
  "The `registry.env_specs` property along with `EnvSpecTree` is deprecated. Please use `registry` directly as a dictionary instead."
setup:
  seed: 42
  setup: metaworld
  base_path: /home/tony/lab/mtrl
  save_dir: ${setup.base_path}/logs/${setup.id}
  device: cuda:0
  id: 90f2497ff4cee27c0d30fbc66e6ba205f94808ba4ea16e057df58e73_issue_None_seed_42
  description: Sample Task
  tags: null
  git:
    commit_id: e282c3f5a205970b4ad7d1c1ebae7aa9b4d56218
    has_uncommitted_changes: null
    issue_id: null
  date: '2022-06-28 16:30:39'
  slurm_id: '-1'
  debug:
    should_enable: false
experiment:
  name: metaworld
  builder:
    _target_: mtrl.experiment.${experiment.name}.Experiment
  init_steps: 1500
  num_train_steps: 1000000
  eval_freq: 10000
  num_eval_episodes: 10
  should_resume: true
  save:
    model:
      retain_last_n: 1
    buffer:
      should_save: true
      size_per_chunk: 10000
      num_samples_to_save: -1
  save_dir: ${setup.save_dir}
  save_video: false
  envs_to_exclude_during_training: null
agent:
  name: state_sac
  encoder_feature_dim: 50
  num_layers: 0
  num_filters: 0
  builder:
    _target_: mtrl.agent.sac.Agent
    actor_cfg: ${agent.actor}
    critic_cfg: ${agent.critic}
    multitask_cfg: ${agent.multitask}
    alpha_optimizer_cfg: ${agent.optimizers.alpha}
    actor_optimizer_cfg: ${agent.optimizers.actor}
    critic_optimizer_cfg: ${agent.optimizers.critic}
    discount: 0.99
    init_temperature: 1.0
    actor_update_freq: 1
    critic_tau: 0.005
    critic_target_update_freq: 1
    encoder_tau: 0.05
  actor:
    _target_: mtrl.agent.components.actor.Actor
    num_layers: 3
    hidden_dim: 400
    log_std_bounds:
    - -20
    - 2
    encoder_cfg: ${agent.encoder}
    multitask_cfg: ${agent.multitask}
  critic:
    _target_: mtrl.agent.components.critic.Critic
    hidden_dim: ${agent.actor.hidden_dim}
    num_layers: ${agent.actor.num_layers}
    encoder_cfg: ${agent.encoder}
    multitask_cfg: ${agent.multitask}
  encoder:
    type_to_select: identity
    identity:
      type: identity
      feature_dim: ${agent.encoder_feature_dim}
    feedforward:
      type: feedforward
      hidden_dim: 50
      num_layers: 2
      feature_dim: ${agent.encoder_feature_dim}
      should_tie_encoders: true
    film:
      type: film
      hidden_dim: 50
      num_layers: 2
      feature_dim: ${agent.encoder_feature_dim}
      should_tie_encoders: true
    moe:
      type: moe
      encoder_cfg:
        type: feedforward
        hidden_dim: 50
        num_layers: 2
        feature_dim: ${agent.encoder_feature_dim}
        should_tie_encoders: true
      num_experts: 9
      task_id_to_encoder_id_cfg:
        mode: cluster
        num_envs: ${env.num_envs}
        gate:
          embedding_dim: 50
          hidden_dim: 50
          num_layers: 2
          temperature: 1.0
          should_use_soft_attention: false
          topk: 2
          task_encoder_cfg:
            should_use_task_encoding: true
            should_detach_task_encoding: true
        attention:
          embedding_dim: 50
          hidden_dim: 50
          num_layers: 2
          temperature: 1.0
          should_use_soft_attention: true
          task_encoder_cfg:
            should_use_task_encoding: true
            should_detach_task-encoding: true
        cluster:
          env_name: ${env.name}
          task_description: ${env.description}
          ordered_task_list: ${env.ordered_task_list}
          mapping_cfg: ${agent.task_to_encoder_cluster}
          num_eval_episodes: ${experiment.num_eval_episodes}
          batch_size: ${replay_buffer.batch_size}
        identity:
          num_eval_episodes: ${experiment.num_eval_episodes}
          batch_size: ${replay_buffer.batch_size}
        ensemble:
          num_eval_episodes: ${experiment.num_eval_episodes}
          batch_size: ${replay_buffer.batch_size}
    factorized_moe:
      type: fmoe
      encoder_cfg: ${agent.encoder.feedforward}
      num_factors: 2
      num_experts_per_factor:
      - 5
      - 5
    pixel:
      type: pixel
      feature_dim: ${agent.encoder_feature_dim}
      num_filters: ${agent.num_filters}
      num_layers: ${agent.num_layers}
  transition_model:
    _target_: mtrl.agent.components.transition_model.make_transition_model
    transition_cfg:
      type: ''
      feature_dim: ${agent.encoder_feature_dim}
      layer_width: 512
    multitask_cfg: ${agent.multitask}
  mask:
    num_tasks: ${env.num_envs}
    num_eval_episodes: ${experiment.num_eval_episodes}
    batch_size: ${replay_buffer.batch_size}
  multitask:
    num_envs: 10
    should_use_disentangled_alpha: true
    should_use_task_encoder: false
    should_use_multi_head_policy: false
    should_use_disjoint_policy: false
    task_encoder_cfg:
      model_cfg:
        _target_: mtrl.agent.components.task_encoder.TaskEncoder
        pretrained_embedding_cfg:
          should_use: false
          path_to_load_from: /private/home/sodhani/projects/mtrl/metadata/task_embedding/roberta_small/${env.name}.json
          ordered_task_list: ${env.ordered_task_list}
        num_embeddings: ${agent.multitask.num_envs}
        embedding_dim: 50
        hidden_dim: 50
        num_layers: 2
        output_dim: 50
      optimizer_cfg: ${agent.optimizers.actor}
      losses_to_train:
      - critic
      - transition_reward
      - decoder
      - task_encoder
    multi_head_policy_cfg:
      mask_cfg: ${agent.mask}
    actor_cfg:
      should_condition_model_on_task_info: false
      should_condition_encoder_on_task_info: true
      should_concatenate_task_info_with_encoder: true
      moe_cfg:
        mode: soft_modularization
        num_experts: 4
        should_use: false
    critic_cfg: ${agent.multitask.actor_cfg}
  gradnorm:
    alpha: 1.0
  task_to_encoder_cluster:
    mt10:
      cluster:
        action_close:
        - close
        action_default:
        - insert
        - pick and place
        - press
        - reach
        action_open:
        - open
        action_push:
        - push
        object_default:
        - button
        - door
        - peg
        - revolving joint
        object_drawer:
        - drawer
        object_goal:
        - goal
        object_puck:
        - puck
        object_window:
        - window
    mt50:
      cluster:
        action_close:
        - close
        action_default:
        - insert
        - pick and place
        - press
        - reach
        action_open:
        - open
        action_push:
        - push
        object_default:
        - button
        - door
        - peg
        - revolving joint
        object_drawer:
        - drawer
        object_goal:
        - goal
        object_puck:
        - puck
        object_window:
        - window
  optimizers:
    actor:
      _target_: torch.optim.Adam
      lr: 0.0003
      betas:
      - 0.9
      - 0.999
    alpha:
      _target_: torch.optim.Adam
      lr: 0.0003
      betas:
      - 0.9
      - 0.999
    critic:
      _target_: torch.optim.Adam
      lr: 0.0003
      betas:
      - 0.9
      - 0.999
    decoder:
      _target_: torch.optim.Adam
      lr: 0.0003
      betas:
      - 0.9
      - 0.999
      weight_decay: 1.0e-07
    encoder:
      _target_: torch.optim.Adam
      lr: 0.0003
      betas:
      - 0.9
      - 0.999
env:
  name: metaworld-mt10
  num_envs: 10
  benchmark:
    _target_: metaworld.MT10
  builder:
    make_kwargs:
      should_perform_reward_normalization: true
  dummy:
    _target_: metaworld.MT1
    env_name: pick-place-v1
  description:
    reach-v1: Reach a goal position. Randomize the goal positions.
    push-v1: Push the puck to a goal. Randomize puck and goal positions.
    pick-place-v1: Pick and place a puck to a goal. Randomize puck and goal positions.
    door-open-v1: Open a door with a revolving joint. Randomize door positions.
    drawer-open-v1: Open a drawer. Randomize drawer positions.
    drawer-close-v1: Push and close a drawer. Randomize the drawer positions.
    button-press-topdown-v1: Press a button from the top. Randomize button positions.
    peg-insert-side-v1: Insert a peg sideways. Randomize peg and goal positions.
    window-open-v1: Push and open a window. Randomize window positions.
    window-close-v1: Push and close a window. Randomize window positions.
  ordered_task_list: null
replay_buffer:
  _target_: mtrl.replay_buffer.ReplayBuffer
  env_obs_shape: null
  action_shape: null
  capacity: 1000000
  batch_size: 128
logger:
  _target_: mtrl.logger.Logger
  logger_dir: ${setup.save_dir}
  use_tb: false
metrics:
  train:
  - - episode
    - E
    - int
    - average
  - - step
    - S
    - int
    - average
  - - duration
    - D
    - time
    - average
  - - episode_reward
    - R
    - float
    - average
  - - success
    - Su
    - float
    - average
  - - batch_reward
    - BR
    - float
    - average
  - - actor_loss
    - ALOSS
    - float
    - average
  - - critic_loss
    - CLOSS
    - float
    - average
  - - ae_loss
    - RLOSS
    - float
    - average
  - - ae_transition_loss
    - null
    - float
    - average
  - - reward_loss
    - null
    - float
    - average
  - - actor_target_entropy
    - null
    - float
    - average
  - - actor_entropy
    - null
    - float
    - average
  - - alpha_loss
    - null
    - float
    - average
  - - alpha_value
    - null
    - float
    - average
  - - contrastive_loss
    - MLOSS
    - float
    - average
  - - max_rat
    - MR
    - float
    - average
  - - env_index
    - ENV
    - str
    - constant
  - - episode_reward_env_index_
    - R_
    - float
    - average
  - - success_env_index_
    - Su_
    - float
    - average
  - - env_index_
    - ENV_
    - str
    - constant
  - - batch_reward_agent_index_
    - null
    - float
    - average
  - - critic_loss_agent_index_
    - AGENT_
    - float
    - average
  - - actor_distilled_agent_loss_agent_index_
    - null
    - float
    - average
  - - actor_loss_agent_index_
    - null
    - float
    - average
  - - actor_target_entropy_agent_index_
    - null
    - float
    - average
  - - actor_entropy_agent_index_
    - null
    - float
    - average
  - - alpha_loss_agent_index_
    - null
    - float
    - average
  - - alpha_value_agent_index_
    - null
    - float
    - average
  - - ae_loss_agent_index_
    - null
    - float
    - average
  eval:
  - - episode
    - E
    - int
    - average
  - - step
    - S
    - int
    - average
  - - episode_reward
    - R
    - float
    - average
  - - env_index
    - ENV
    - str
    - constant
  - - success
    - Su
    - float
    - average
  - - episode_reward_env_index_
    - R_
    - float
    - average
  - - success_env_index_
    - Su_
    - float
    - average
  - - env_index_
    - ENV_
    - str
    - constant
  - - batch_reward_agent_index_
    - AGENT_
    - float
    - average
logbook:
  _target_: ml_logger.logbook.make_config
  write_to_console: false
  logger_dir: ${setup.save_dir}
  create_multiple_log_files: false
[2022-06-28 16:30:40,084][default_logger][INFO] - {"setup": {"seed": 42, "setup": "metaworld", "base_path": "/home/tony/lab/mtrl", "save_dir": "${setup.base_path}/logs/${setup.id}", "device": "cuda:0", "id": "90f2497ff4cee27c0d30fbc66e6ba205f94808ba4ea16e057df58e73_issue_None_seed_42", "description": "Sample Task", "tags": null, "git": {"commit_id": "e282c3f5a205970b4ad7d1c1ebae7aa9b4d56218", "has_uncommitted_changes": null, "issue_id": null}, "date": "2022-06-28 16:30:39", "slurm_id": "-1", "debug": {"should_enable": false}}, "experiment": {"name": "metaworld", "builder": {"_target_": "mtrl.experiment.${experiment.name}.Experiment"}, "init_steps": 1500, "num_train_steps": 1000000, "eval_freq": 10000, "num_eval_episodes": 10, "should_resume": true, "save": {"model": {"retain_last_n": 1}, "buffer": {"should_save": true, "size_per_chunk": 10000, "num_samples_to_save": -1}}, "save_dir": "${setup.save_dir}", "save_video": false, "envs_to_exclude_during_training": null}, "agent": {"name": "state_sac", "encoder_feature_dim": 50, "num_layers": 0, "num_filters": 0, "builder": {"_target_": "mtrl.agent.sac.Agent", "actor_cfg": "${agent.actor}", "critic_cfg": "${agent.critic}", "multitask_cfg": "${agent.multitask}", "alpha_optimizer_cfg": "${agent.optimizers.alpha}", "actor_optimizer_cfg": "${agent.optimizers.actor}", "critic_optimizer_cfg": "${agent.optimizers.critic}", "discount": 0.99, "init_temperature": 1.0, "actor_update_freq": 1, "critic_tau": 0.005, "critic_target_update_freq": 1, "encoder_tau": 0.05}, "actor": {"_target_": "mtrl.agent.components.actor.Actor", "num_layers": 3, "hidden_dim": 400, "log_std_bounds": [-20, 2], "encoder_cfg": "${agent.encoder}", "multitask_cfg": "${agent.multitask}"}, "critic": {"_target_": "mtrl.agent.components.critic.Critic", "hidden_dim": "${agent.actor.hidden_dim}", "num_layers": "${agent.actor.num_layers}", "encoder_cfg": "${agent.encoder}", "multitask_cfg": "${agent.multitask}"}, "encoder": {"type_to_select": "identity", "identity": {"type": "identity", "feature_dim": "${agent.encoder_feature_dim}"}, "feedforward": {"type": "feedforward", "hidden_dim": 50, "num_layers": 2, "feature_dim": "${agent.encoder_feature_dim}", "should_tie_encoders": true}, "film": {"type": "film", "hidden_dim": 50, "num_layers": 2, "feature_dim": "${agent.encoder_feature_dim}", "should_tie_encoders": true}, "moe": {"type": "moe", "encoder_cfg": {"type": "feedforward", "hidden_dim": 50, "num_layers": 2, "feature_dim": "${agent.encoder_feature_dim}", "should_tie_encoders": true}, "num_experts": 9, "task_id_to_encoder_id_cfg": {"mode": "cluster", "num_envs": "${env.num_envs}", "gate": {"embedding_dim": 50, "hidden_dim": 50, "num_layers": 2, "temperature": 1.0, "should_use_soft_attention": false, "topk": 2, "task_encoder_cfg": {"should_use_task_encoding": true, "should_detach_task_encoding": true}}, "attention": {"embedding_dim": 50, "hidden_dim": 50, "num_layers": 2, "temperature": 1.0, "should_use_soft_attention": true, "task_encoder_cfg": {"should_use_task_encoding": true, "should_detach_task-encoding": true}}, "cluster": {"env_name": "${env.name}", "task_description": "${env.description}", "ordered_task_list": "${env.ordered_task_list}", "mapping_cfg": "${agent.task_to_encoder_cluster}", "num_eval_episodes": "${experiment.num_eval_episodes}", "batch_size": "${replay_buffer.batch_size}"}, "identity": {"num_eval_episodes": "${experiment.num_eval_episodes}", "batch_size": "${replay_buffer.batch_size}"}, "ensemble": {"num_eval_episodes": "${experiment.num_eval_episodes}", "batch_size": "${replay_buffer.batch_size}"}}}, "factorized_moe": {"type": "fmoe", "encoder_cfg": "${agent.encoder.feedforward}", "num_factors": 2, "num_experts_per_factor": [5, 5]}, "pixel": {"type": "pixel", "feature_dim": "${agent.encoder_feature_dim}", "num_filters": "${agent.num_filters}", "num_layers": "${agent.num_layers}"}}, "transition_model": {"_target_": "mtrl.agent.components.transition_model.make_transition_model", "transition_cfg": {"type": "", "feature_dim": "${agent.encoder_feature_dim}", "layer_width": 512}, "multitask_cfg": "${agent.multitask}"}, "mask": {"num_tasks": "${env.num_envs}", "num_eval_episodes": "${experiment.num_eval_episodes}", "batch_size": "${replay_buffer.batch_size}"}, "multitask": {"num_envs": 10, "should_use_disentangled_alpha": true, "should_use_task_encoder": false, "should_use_multi_head_policy": false, "should_use_disjoint_policy": false, "task_encoder_cfg": {"model_cfg": {"_target_": "mtrl.agent.components.task_encoder.TaskEncoder", "pretrained_embedding_cfg": {"should_use": false, "path_to_load_from": "/private/home/sodhani/projects/mtrl/metadata/task_embedding/roberta_small/${env.name}.json", "ordered_task_list": "${env.ordered_task_list}"}, "num_embeddings": "${agent.multitask.num_envs}", "embedding_dim": 50, "hidden_dim": 50, "num_layers": 2, "output_dim": 50}, "optimizer_cfg": "${agent.optimizers.actor}", "losses_to_train": ["critic", "transition_reward", "decoder", "task_encoder"]}, "multi_head_policy_cfg": {"mask_cfg": "${agent.mask}"}, "actor_cfg": {"should_condition_model_on_task_info": false, "should_condition_encoder_on_task_info": true, "should_concatenate_task_info_with_encoder": true, "moe_cfg": {"mode": "soft_modularization", "num_experts": 4, "should_use": false}}, "critic_cfg": "${agent.multitask.actor_cfg}"}, "gradnorm": {"alpha": 1.0}, "task_to_encoder_cluster": {"mt10": {"cluster": {"action_close": ["close"], "action_default": ["insert", "pick and place", "press", "reach"], "action_open": ["open"], "action_push": ["push"], "object_default": ["button", "door", "peg", "revolving joint"], "object_drawer": ["drawer"], "object_goal": ["goal"], "object_puck": ["puck"], "object_window": ["window"]}}, "mt50": {"cluster": {"action_close": ["close"], "action_default": ["insert", "pick and place", "press", "reach"], "action_open": ["open"], "action_push": ["push"], "object_default": ["button", "door", "peg", "revolving joint"], "object_drawer": ["drawer"], "object_goal": ["goal"], "object_puck": ["puck"], "object_window": ["window"]}}}, "optimizers": {"actor": {"_target_": "torch.optim.Adam", "lr": 0.0003, "betas": [0.9, 0.999]}, "alpha": {"_target_": "torch.optim.Adam", "lr": 0.0003, "betas": [0.9, 0.999]}, "critic": {"_target_": "torch.optim.Adam", "lr": 0.0003, "betas": [0.9, 0.999]}, "decoder": {"_target_": "torch.optim.Adam", "lr": 0.0003, "betas": [0.9, 0.999], "weight_decay": 1e-07}, "encoder": {"_target_": "torch.optim.Adam", "lr": 0.0003, "betas": [0.9, 0.999]}}}, "env": {"name": "metaworld-mt10", "num_envs": 10, "benchmark": {"_target_": "metaworld.MT10"}, "builder": {"make_kwargs": {"should_perform_reward_normalization": true}}, "dummy": {"_target_": "metaworld.MT1", "env_name": "pick-place-v1"}, "description": {"reach-v1": "Reach a goal position. Randomize the goal positions.", "push-v1": "Push the puck to a goal. Randomize puck and goal positions.", "pick-place-v1": "Pick and place a puck to a goal. Randomize puck and goal positions.", "door-open-v1": "Open a door with a revolving joint. Randomize door positions.", "drawer-open-v1": "Open a drawer. Randomize drawer positions.", "drawer-close-v1": "Push and close a drawer. Randomize the drawer positions.", "button-press-topdown-v1": "Press a button from the top. Randomize button positions.", "peg-insert-side-v1": "Insert a peg sideways. Randomize peg and goal positions.", "window-open-v1": "Push and open a window. Randomize window positions.", "window-close-v1": "Push and close a window. Randomize window positions."}, "ordered_task_list": null}, "replay_buffer": {"_target_": "mtrl.replay_buffer.ReplayBuffer", "env_obs_shape": null, "action_shape": null, "capacity": 1000000, "batch_size": 128}, "logger": {"_target_": "mtrl.logger.Logger", "logger_dir": "${setup.save_dir}", "use_tb": false}, "metrics": {"train": [["episode", "E", "int", "average"], ["step", "S", "int", "average"], ["duration", "D", "time", "average"], ["episode_reward", "R", "float", "average"], ["success", "Su", "float", "average"], ["batch_reward", "BR", "float", "average"], ["actor_loss", "ALOSS", "float", "average"], ["critic_loss", "CLOSS", "float", "average"], ["ae_loss", "RLOSS", "float", "average"], ["ae_transition_loss", null, "float", "average"], ["reward_loss", null, "float", "average"], ["actor_target_entropy", null, "float", "average"], ["actor_entropy", null, "float", "average"], ["alpha_loss", null, "float", "average"], ["alpha_value", null, "float", "average"], ["contrastive_loss", "MLOSS", "float", "average"], ["max_rat", "MR", "float", "average"], ["env_index", "ENV", "str", "constant"], ["episode_reward_env_index_", "R_", "float", "average"], ["success_env_index_", "Su_", "float", "average"], ["env_index_", "ENV_", "str", "constant"], ["batch_reward_agent_index_", null, "float", "average"], ["critic_loss_agent_index_", "AGENT_", "float", "average"], ["actor_distilled_agent_loss_agent_index_", null, "float", "average"], ["actor_loss_agent_index_", null, "float", "average"], ["actor_target_entropy_agent_index_", null, "float", "average"], ["actor_entropy_agent_index_", null, "float", "average"], ["alpha_loss_agent_index_", null, "float", "average"], ["alpha_value_agent_index_", null, "float", "average"], ["ae_loss_agent_index_", null, "float", "average"]], "eval": [["episode", "E", "int", "average"], ["step", "S", "int", "average"], ["episode_reward", "R", "float", "average"], ["env_index", "ENV", "str", "constant"], ["success", "Su", "float", "average"], ["episode_reward_env_index_", "R_", "float", "average"], ["success_env_index_", "Su_", "float", "average"], ["env_index_", "ENV_", "str", "constant"], ["batch_reward_agent_index_", "AGENT_", "float", "average"]]}, "logbook": {"_target_": "ml_logger.logbook.make_config", "write_to_console": false, "logger_dir": "${setup.save_dir}", "create_multiple_log_files": false}, "status": "RUNNING", "logbook_id": "0", "logbook_timestamp": "04:30:40PM CST Jun 28, 2022", "logbook_type": "metadata"}
Starting Experiment at Tue Jun 28 16:30:40 2022
torch version = 1.7.1+cu110
/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/gym/envs/registration.py:416: UserWarning: WARN: The `registry.env_specs` property along with `EnvSpecTree` is deprecated. Please use `registry` directly as a dictionary instead.
  "The `registry.env_specs` property along with `EnvSpecTree` is deprecated. Please use `registry` directly as a dictionary instead."
/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/gym/spaces/box.py:112: UserWarning: WARN: Box bound precision lowered by casting to float32
  logger.warn(f"Box bound precision lowered by casting to {self.dtype}")
Traceback (most recent call last):
  File "/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/hydra/utils.py", line 63, in call
    return _instantiate_class(type_or_callable, config, *args, **kwargs)
  File "/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/hydra/_internal/utils.py", line 500, in _instantiate_class
    return clazz(*args, **final_kwargs)
  File "/home/tony/lab/mtrl/mtrl/experiment/metaworld.py", line 20, in __init__
    super().__init__(config, experiment_id)
  File "/home/tony/lab/mtrl/mtrl/experiment/multitask.py", line 25, in __init__
    super().__init__(config, experiment_id)
  File "/home/tony/lab/mtrl/mtrl/experiment/experiment.py", line 33, in __init__
    self.envs, self.env_metadata = self.build_envs()
  File "/home/tony/lab/mtrl/mtrl/experiment/metaworld.py", line 48, in build_envs
    config=self.config, benchmark=benchmark, mode=mode, env_id_to_task_map=None
  File "/home/tony/lab/mtrl/mtrl/env/builder.py", line 51, in build_metaworld_vec_env
    from mtenv.envs.metaworld.env import (
  File "/home/tony/lab/mtrl/src/mtenv/mtenv/envs/__init__.py", line 15, in <module>
    "invalid_env_kwargs": [],
  File "/home/tony/lab/mtrl/src/mtenv/mtenv/envs/registration.py", line 74, in register
    return mtenv_registry.register(id, **kwargs)
  File "/home/tony/lab/mtrl/src/mtenv/mtenv/envs/registration.py", line 66, in register
    self.env_specs[id] = MultitaskEnvSpec(id, **kwargs)
  File "/home/tony/lab/mtrl/src/mtenv/mtenv/envs/registration.py", line 47, in __init__
    kwargs=kwargs,
  File "<string>", line 10, in __init__
AttributeError: can't set attribute

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/hydra/_internal/utils.py", line 350, in <lambda>
    overrides=args.overrides,
  File "/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 112, in run
    configure_logging=with_log_configuration,
  File "/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/hydra/core/utils.py", line 128, in run_job
    ret.return_value = task_function(task_cfg)
  File "main.py", line 15, in launch
    return run(config)
  File "/home/tony/lab/mtrl/mtrl/app/run.py", line 35, in run
    experiment_utils.prepare_and_run(config=config)
  File "/home/tony/lab/mtrl/mtrl/experiment/utils.py", line 24, in prepare_and_run
    config.experiment.builder, config
  File "/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/hydra/utils.py", line 70, in call
    raise HydraException(f"Error calling '{cls}' : {e}") from e
hydra.errors.HydraException: Error calling 'mtrl.experiment.metaworld.Experiment' : can't set attribute

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 19, in <module>
    launch()
  File "/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/hydra/main.py", line 37, in decorated_main
    strict=strict,
  File "/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/hydra/_internal/utils.py", line 347, in _run_hydra
    lambda: hydra.run(
  File "/home/tony/miniconda3/envs/mtrl/lib/python3.7/site-packages/hydra/_internal/utils.py", line 237, in run_and_report
    assert mdl is not None
AssertionError
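
Tentative diagnosis (my read; not verified against this exact environment): the gym deprecation warnings above (`registry.env_specs` / `EnvSpecTree`) come from a gym release well past the 0.21-era API that mtrl/mtenv appear to target, where `EnvSpec` was rewritten as a dataclass and some spec attributes became read-only properties. mtenv's `MultitaskEnvSpec` still drives the old constructor path, so the dataclass-generated `__init__` (the `File "<string>", line 10, in __init__` frame) assigns to a name that now resolves to a property with no setter, which Python 3.7 reports as the bare `AttributeError: can't set attribute`. A minimal, self-contained sketch of that failure mode, using hypothetical stand-in classes rather than gym's actual code:

from dataclasses import dataclass, field

@dataclass
class EnvSpecLike:
    # Stand-in for a newer gym EnvSpec: a dataclass whose generated
    # __init__ (compiled into "<string>") runs `self.kwargs = kwargs`.
    kwargs: dict = field(default_factory=dict)

class MultitaskSpecLike(EnvSpecLike):
    # Stand-in for a subclass in whose MRO the same name is a
    # read-only property (no @kwargs.setter is defined).
    @property
    def kwargs(self):
        return self._kwargs

    def __init__(self, **kw):
        super().__init__(**kw)  # parent's dataclass __init__ raises here

MultitaskSpecLike(kwargs={})  # AttributeError: can't set attribute

(On Python 3.11+ the same failure reads "property 'kwargs' ... has no setter"; 3.7 only prints the terse message seen above.) If that is what is happening here, pinning gym back to the release mtrl/mtenv were developed against should sidestep it; gym==0.21.0 is an assumed pin based on similar reports, so confirm it against mtrl's own requirements:

$ pip install "gym==0.21.0"

The trailing `AssertionError` (`assert mdl is not None` inside Hydra's `run_and_report`) looks like Hydra's error reporter failing while formatting the original exception, so the `AttributeError` above is the real failure to chase.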