@matthiasplappert
Last active October 14, 2020 01:43
import numpy as np
import gym

env = gym.make('FetchReach-v0')

# Simply wrap the goal-based environment using FlattenDictWrapper
# and specify the keys that you would like to use.
env = gym.wrappers.FlattenDictWrapper(
    env, dict_keys=['observation', 'desired_goal'])

# From now on, you can use the wrapped env as per usual:
ob = env.reset()
print(ob.shape)  # ob is now just a flat np.array
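
For completeness, a minimal usage sketch (not part of the original gist, and assuming the same gym version) showing that the wrapped env behaves like any flat-observation environment; it uses only the standard gym API:

# Random-action rollout: the wrapped env exposes a flat Box
# observation space, so the usual interaction loop applies.
ob = env.reset()
for _ in range(10):
    action = env.action_space.sample()
    ob, reward, done, info = env.step(action)
    if done:
        ob = env.reset()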
@astier commented Jul 6, 2018

You may want to remove the numpy import because it's never used.

@MaKaNu commented Jun 5, 2019

I tried to use this example. First I noticed that the current version of the gym lib uses 'FetchReach-v1' instead of 'v0'. I'm very new to RL and machine learning. Reading the gym documentation got me to the point where I understand how to use the classic control environments, but not the more complex ones like Robotics. I hoped your example would give me a simple idea of how the robotics environments work, but I ran from one error to another. Maybe you are not connected to OpenAI, but some information is missing, like what I should learn before starting with these environments.
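
For context, a short sketch (an editorial addition, assuming a gym version with the robotics extras installed) of what the unwrapped environment returns; goal-based environments hand back a dict rather than a flat array, which is exactly why a wrapper is needed:

import gym

# Newer gym versions register the robotics env as 'FetchReach-v1'.
env = gym.make('FetchReach-v1')
ob = env.reset()
# Goal-based environments return a dict observation:
print(ob.keys())  # dict_keys(['observation', 'achieved_goal', 'desired_goal'])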

@OhkuboSGMS commented Oct 14, 2020

You need to change FlattenDictWrapper(env, dict_keys=['observation', 'desired_goal']) to FlattenObservation(FilterObservation(env, ["observation", "desired_goal"]))

https://github.com/openai/baselines/pull/1034/files#diff-06f2d87316632bf8909f91507556c46a278bd62b6d60a5536cfb50eef2b0902bR130
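
Putting that together, a hedged sketch of the updated snippet (assuming a gym version new enough to ship FilterObservation and FlattenObservation, roughly 0.15+):

import gym
from gym.wrappers import FilterObservation, FlattenObservation

env = gym.make('FetchReach-v1')
# Keep only the requested dict keys, then flatten the remaining
# Dict space into a single Box space.
env = FlattenObservation(FilterObservation(env, ['observation', 'desired_goal']))
ob = env.reset()
print(ob.shape)  # a flat np.array again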
