Last active
October 14, 2020 01:43
import numpy as np
import gym

env = gym.make('FetchReach-v0')

# Simply wrap the goal-based environment using FlattenDictWrapper
# and specify the keys that you would like to use.
env = gym.wrappers.FlattenDictWrapper(
    env, dict_keys=['observation', 'desired_goal'])

# From now on, you can use the wrapped env as per usual:
ob = env.reset()
print(ob.shape)  # is now just an np.array
On newer gym versions, you need to change `FlattenDictWrapper(env, dict_keys=['observation', 'desired_goal'])`
to `FlattenObservation(FilterObservation(env, ["observation", "desired_goal"]))`.
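For anyone unsure what those two wrappers actually do, here is a rough sketch of their combined behavior: `FilterObservation` keeps only the requested keys of a dict observation, and `FlattenObservation` concatenates what remains into a single flat array. The sketch below uses plain numpy (no MuJoCo required), and the dict shapes are just an assumption modeled on what FetchReach returns.

```python
import numpy as np

# A fake goal-based observation, shaped like what FetchReach returns
# (shapes assumed for illustration).
obs = {
    'observation': np.zeros(10),
    'achieved_goal': np.zeros(3),
    'desired_goal': np.zeros(3),
}

# FilterObservation(env, ['observation', 'desired_goal']) keeps only these keys...
keys = ['observation', 'desired_goal']
filtered = {k: obs[k] for k in keys}

# ...and FlattenObservation concatenates the remaining entries into one array.
flat = np.concatenate([filtered[k] for k in keys])
print(flat.shape)  # (13,)
```

So after wrapping, `env.reset()` hands you a single `np.array` instead of a dict, which is what most off-the-shelf RL code expects.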
I tried to use this example. First, I noticed that the current version of the gym library uses 'FetchReach-v1' instead of 'v0'. I'm very new to RL and machine learning. Reading the gym documentation got me to the point where I understand how to use the classic control environments, but not the more complex ones like Robotics. I hoped your example would give me a simple idea of how the robotics environments work, but I ran from one error to the next. Maybe you're not connected to OpenAI, but some information is missing, such as what I should learn before starting with these environments.