Contributor: Yaswanth Gonna
Mentor: Steve Peters
Repositories: benchmark (Gazebo Classic), simulation_benchmark (New Gazebo)
A wide variety of robotics applications strongly rely on simulator performance. While many open-source simulators/physics engines are available, each with its benefits and drawbacks, there is a lack of simulator-agnostic benchmarks to help developers and users compare and choose the right simulator/physics engine for their specific use case.
This project aims to introduce a small number of simple benchmarking worlds to compare various physics engines on metrics such as computational time and numerical accuracy. It provides users with a set of features that make scenario generation, logging, and performance-metric calculation convenient.
This project extends previous benchmarking efforts to make them fully simulator/physics-engine independent.
Features | Bullet | DART | ODE | Mujoco | DRAKE |
---|---|---|---|---|---|
Contact | Rigid/Impulse | Rigid/Impulse | Rigid/Impulse | Soft | Rigid/Hydroelastic |
Coordinates | Maximal/Featherstone | Generalized | Maximal | Generalized | Generalized |
Integrator | Semi-implicit Euler | Semi-implicit Euler/RK4 | Explicit/Implicit Euler | Semi-implicit Euler/RK4 | Implicit & Explicit Euler/RK4 |
Friction model | Implicit friction/Pyramid | Approximated Coulomb friction cone | Pyramid/Cone | Pyramidal/Elliptic | Coulomb friction |
Note: These are only a few of the available open-source physics engines.
graph LR;
G(Gazebo Classic) -..-> P(Physics Engines);
P(Physics Engines) -..-> O(ODE - default);
P(Physics Engines) -..-> B(Bullet);
P(Physics Engines) -..-> D(DART);
P(Physics Engines) -..-> S(Simbody);
graph LR;
G(Gazebo Ionic) -..-> P(Physics Engines);
P(Physics Engines) -..-> D(DART - default);
P(Physics Engines) -..-> B(Bullet);
P(Physics Engines) -..-> S(Bullet-Featherstone);
Note: Bullet has not yet been benchmarked in Gazebo Ionic because, as of now, it doesn't support `SetWorldLinearVelocity` for `freeGroup`.
This project builds on the previous benchmarking infrastructure to make it simulator independent and to add new benchmarking tests.
Improvements:

- Dynamic world generation: This feature generates simulation worlds in SDF format at runtime and saves them in the respective test directory. This allows benchmarked worlds to be shared across different simulators, as most of them support the SDF format or conversion to it, and makes it easy to check/inspect the parameters used for a particular test case.
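The idea can be sketched as follows. The project itself renders ERB templates (e.g., `boxes.world.erb`), so the `generate_world` helper, the template strings, and the file layout below are illustrative only:

```python
from pathlib import Path

WORLD_TEMPLATE = """<?xml version="1.0"?>
<sdf version="1.9">
  <world name="{name}">
    <gravity>0 0 {gravity_z}</gravity>
    <physics name="default" type="ignored">
      <max_step_size>{dt}</max_step_size>
    </physics>
{models}
  </world>
</sdf>
"""

MODEL_TEMPLATE = """    <model name="box_{i}">
      <pose>0 {y} 10 0 0 0</pose>
      <link name="link">
        <collision name="col"><geometry><box><size>1 4 9</size></box></geometry></collision>
        <visual name="vis"><geometry><box><size>1 4 9</size></box></geometry></visual>
      </link>
    </model>"""

def generate_world(name, model_count, dt, gravity=True, out_dir="."):
    """Render an SDF world for one test case and save it in the test directory."""
    models = "\n".join(MODEL_TEMPLATE.format(i=i, y=5 * i) for i in range(model_count))
    sdf = WORLD_TEMPLATE.format(name=name, gravity_z=-9.8 if gravity else 0.0,
                                dt=dt, models=models)
    path = Path(out_dir) / f"{name}.world"
    path.write_text(sdf)  # saved SDF can be inspected or loaded by any simulator
    return path
```

Because the generated file is plain SDF, the exact time step, gravity, and model count of every test can be read straight out of the saved world.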
- MCAP/CSV logging: This optional feature logs model states (e.g., position, velocity, acceleration). MCAP keeps log files small, which helps when sharing test results, and `mcap_to_csv.py` converts MCAP to CSV, which makes the log files easier to read/inspect and makes errors easier to pinpoint. Users/developers can use a native simulator log format but must convert it to CSV for performance-metric calculation. The logging feature stores simulation/test parameters (e.g., time step size, model count, wall time) and raw simulation data (e.g., velocity, position) for each simulation time step of each test. Each test has separate `mcap` and `csv` files that are used for post-processing of the log data.
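As an illustration of the log layout (parameter rows on top, one state row per time step), here is a minimal sketch; the `write_test_log` helper and the column layout are illustrative, not the project's exact schema:

```python
import csv

def write_test_log(path, params, states):
    """Write one test's CSV log: the top two rows hold the test parameters
    (names, then values); every later row is the state at one time step."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(list(params.keys()))    # row 1: parameter names
        w.writerow(list(params.values()))  # row 2: parameter values
        for row in states:                 # rows 3+: per-step simulation data
            w.writerow(row)
```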
- Post-processing: The CSV log data for each test is used by the post-processing script to calculate performance metrics. The test parameters are stored in the top two rows of the `csv` file, and the remaining rows contain simulation data for each time step (possibly per model); refer to the `csv` log files in the repository for more details. The simulator/test parameters of each test act as input to the post-processing script and determine which benchmark scenario's analytical solution (e.g., with or without gravity) is generated. The simulation data (model states) act as intermediates: they are compared against the analytical solution, and the errors/metrics are calculated and stored along with the respective test parameters. The output is a `csv` file of performance metrics (e.g., max position error, max velocity error) for a particular test scenario (e.g., boxes_model_count or boxes_dt), with one row of metrics and test parameters per test.
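A minimal sketch of one such metric for the free-flight case, comparing logged positions against the closed-form solution p(t) = p0 + v0 t + 0.5 g t^2; the `(t, x, y, z)` row schema and function names are hypothetical:

```python
import math

def analytical_position(p0, v0, g, t):
    """Closed-form free-flight position: p(t) = p0 + v0*t + 0.5*g*t^2."""
    return [p0[i] + v0[i] * t + 0.5 * g[i] * t * t for i in range(3)]

def max_position_error(log_rows, p0, v0, gravity=True):
    """log_rows: (t, x, y, z) tuples read from a per-test CSV log.
    Returns the maximum Euclidean deviation from the analytical trajectory."""
    g = [0.0, 0.0, -9.8] if gravity else [0.0, 0.0, 0.0]
    err = 0.0
    for t, x, y, z in log_rows:
        ref = analytical_position(p0, v0, g, t)
        err = max(err, math.dist((x, y, z), ref))
    return err
```

The same pattern applies to the other metrics: generate the scenario's analytical solution from the test parameters, then reduce the per-step differences to a single number per test.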
- Set-model-state plugin and link velocity reset components for New Gazebo: Added a link velocity reset component to New Gazebo that resets the states (linear and angular velocity) of a link entity, and a model state plugin that allows setting initial model states through an `sdf` plugin tag.
- Migration to New Gazebo (gz-sim): The boxes benchmark has been migrated to New Gazebo (Ionic) and tested with DART and Bullet-Featherstone.
Pull requests:
- Added dynamic generation and loading sdf file into gazebo functionality
- Added mcap logging functionality
- Added post processing for boxes benchmark
- Boxes benchmark for gz sim
- Physics: set link velocity from *VelocityReset components
- Added world linear and angular velocity reset components for link to set_model_state plugin
- Added triball benchmark
Note: — indicates an open pull request.
- Boxes benchmark: free-floating rigid bodies
  - Varying parameters: time step size, number of models, and initial conditions
  - Comparison metrics: accuracy and computational speed
  - Log message: `box_msg.proto`
  - World: `boxes.world.erb`
- Triball benchmark: rigid bodies in contact
  - Varying parameters: centre-of-gravity height, initial conditions, and friction model
  - Comparison metrics: contact-force accuracy, energy conservation, and computational speed
  - Log message: `triball_msg.proto`
  - World: `triball_contact.world.erb`
- Model: box with dimensions 1 x 4 x 9
- Constant gravity field
- Initial condition: large velocity about the y axis of the body frame
- Expected behaviour: parabolic trajectory (with gravity), straight-line trajectory (without gravity), momentum conservation, and energy conservation
- Varying time step size
- Varying model count
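For the zero-gravity boxes case, the energy-conservation check can be sketched as below. The unit default mass and the standard solid-box inertia formula are assumptions here; the actual mass is a test parameter:

```python
def box_inertia(m, a, b, c):
    """Principal moments of inertia of a solid box with side lengths a, b, c."""
    return [m * (b * b + c * c) / 12.0,
            m * (a * a + c * c) / 12.0,
            m * (a * a + b * b) / 12.0]

def rotational_energy(m, dims, w):
    """Rotational kinetic energy 0.5 * sum(I_i * w_i^2),
    with w given in the body's principal frame."""
    I = box_inertia(m, *dims)
    return 0.5 * sum(I[i] * w[i] ** 2 for i in range(3))

def energy_drift(samples, m=1.0, dims=(1.0, 4.0, 9.0)):
    """Max |E(t) - E(0)| over logged body-frame angular velocities;
    for a torque-free rigid body this should stay near zero."""
    e0 = rotational_energy(m, dims, samples[0])
    return max(abs(rotational_energy(m, dims, w) - e0) for w in samples)
```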
- Model: ball radius 0.02 m, cylinder radius 0.25 m, and triangle face altitude 0.15 m
- Constant gravity field
- Three contact points
- Initial condition: linear velocity along the y axis and angular velocity along the z axis of the body frame
- Expected behaviour: friction should be dissipative, energy conservation, and agreement with the analytical solution for normal forces
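The analytical normal forces for three static contact points follow from force and moment balance: each contact carries a share of the weight m*g equal to the barycentric coordinate of the centre of gravity's ground projection within the contact triangle. A sketch, with an illustrative contact layout (the benchmark's actual geometry comes from its ERB world file):

```python
def contact_normal_forces(contacts, cog_xy, m, g=9.8):
    """Static normal forces at three contact points (x, y) on the ground plane,
    for a body of mass m whose COG projects to cog_xy. Solves the force balance
    N1+N2+N3 = m*g plus the two moment balances via barycentric coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = contacts
    cx, cy = cog_xy
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (cx - x3) + (x3 - x2) * (cy - y3)) / det
    l2 = ((y3 - y1) * (cx - x3) + (x1 - x3) * (cy - y3)) / det
    l3 = 1.0 - l1 - l2
    w = m * g
    return [w * l1, w * l2, w * l3]
```

When the COG projects onto the triangle's centroid, each contact carries exactly one third of the weight; moving the COG toward one contact shifts load onto it, which is what the benchmark's logged contact forces can be checked against.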
- Migration of `triball-benchmark` to New Gazebo.
- Post-processing script for `triball-benchmark`.
- Migration of available benchmarks to Mujoco and DRAKE.
- Addition of new simple benchmarking worlds.
I'd like to thank my mentor, Steve Peters, for his continuous support and guidance. Working on this project was a whole new experience for me, as I learned how Gazebo works and how to contribute to the open-source community. Attending the weekly meetings organized by the Gazebo team was an excellent experience. I'd also like to thank the Open Source Robotics Foundation and the GSoC team for giving me this opportunity. I'm confident that this experience has given me skills that will let me keep contributing to open source in the future.