import numpy as np
import gym

env = gym.make('FetchReach-v0')

# Simply wrap the goal-based environment using FlattenDictWrapper
# and specify the keys that you would like to use.
env = gym.wrappers.FlattenDictWrapper(
    env, dict_keys=['observation', 'desired_goal'])

# From now on, you can use the wrapped env as per usual:
ob = env.reset()
print(ob.shape)  # is now just an np.array
I tried to use this example. First, I noticed that the current version of the gym library uses 'FetchReach-v1' instead of 'v0'. I'm very new to RL and machine learning, but reading the gym documentation got me to the point where I understand how to use the classic control environments, though not the more complex ones like Robotics. I hoped your example would give me a simple idea of how the robotics environments work, but I ran from one error to another. Maybe you are not affiliated with OpenAI, but some information is missing, such as what I should learn before starting with these environments.
You need to change FlattenDictWrapper(env, dict_keys=['observation', 'desired_goal'])
to FlattenObservation(FilterObservation(env, ["observation", "desired_goal"]))
You may also want to remove the numpy import, because it's never used.
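Putting that together, here is a minimal sketch of the updated snippet. It assumes a newer gym release where FetchReach-v1 and the FilterObservation/FlattenObservation wrappers are available, and that a working MuJoCo installation is present (the robotics environments require it).

import gym
from gym.wrappers import FilterObservation, FlattenObservation

env = gym.make('FetchReach-v1')

# Keep only the dict keys we care about, then flatten them into a single array.
env = FlattenObservation(FilterObservation(env, ['observation', 'desired_goal']))

ob = env.reset()
print(ob.shape)  # a flat np.array, as in the original example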