Benchmarking Physics Engines

Contributor: Yaswanth Gonna

Mentor: Steve Peters


Why benchmark physics engines?

A wide variety of robotics applications strongly rely on simulator performance. While many open-source simulators/physics engines are available, each with its benefits and drawbacks, there is a lack of simulator-agnostic benchmarks to help developers and users compare and choose the right simulator/physics engine for their specific use case.

This project aims to introduce a small number of simple benchmarking worlds to compare various physics engines on metrics such as computational time and numerical accuracy. It provides users with a set of features that make scenario generation, logging, and performance-metric calculation convenient.

This project extends previous benchmarking efforts to make them fully simulator/physics-engine independent.


Overview of open-source physics engines

| Features | Bullet | DART | ODE | Mujoco | DRAKE |
| --- | --- | --- | --- | --- | --- |
| Contact | Rigid/Impulse | Rigid/Impulse | Rigid/Impulse | Soft | Rigid/Hydroelastic |
| Coordinates | Maximal/Featherstone | Generalized | Maximal | Generalized | Generalized |
| Integrator | Semi-implicit Euler | Semi-implicit Euler/RK4 | Explicit/Implicit Euler | Semi-implicit Euler/RK4 | Implicit & Explicit Euler/RK4 |
| Friction model | Implicit friction/Pyramid | Approximated Coulomb friction cone | Pyramid/Cone | Pyramidal/Elliptic | Coulomb Friction |

Note

These are only a few of the available open-source physics engines.


List of benchmarked simulators/physics engines

```mermaid
graph LR;
    G(Gazebo Classic) -..-> P(Physics Engines);
    P(Physics Engines) -..-> O(ODE - default);
    P(Physics Engines) -..-> B(Bullet);
    P(Physics Engines) -..-> D(DART);
    P(Physics Engines) -..-> S(Simbody);
```

```mermaid
graph LR;
    G(Gazebo Ionic) -..-> P(Physics Engines);
    P(Physics Engines) -..-> D(DART - default);
    P(Physics Engines) -..-> B(Bullet);
    P(Physics Engines) -..-> S(Bullet-Featherstone);
```

Note

Bullet has not yet been benchmarked in Gazebo Ionic because it does not currently support SetWorldLinearVelocity for freeGroup.


Improvements and contributions

This project builds on the previous benchmarking infrastructure to make it simulator independent and to add new benchmarking tests.

  • Improvements

    • Dynamic world generation: This feature generates simulation worlds in SDF format at runtime and saves them in the respective test directory. This allows benchmarked worlds to be shared across the different simulators, since most of them support the SDF format or conversion from it, and makes it easy to inspect the parameters used for a particular test case (see the world-generation sketch below).

    • MCAP/CSV logging: This optional feature logs the states of the model (e.g., position, velocity, acceleration). MCAP produces small log files, which are convenient for sharing test results, and mcap_to_csv.py converts MCAP to CSV format, which makes the logs easier to read and inspect and makes it easier to pinpoint errors (see the conversion sketch below). Users/developers can also use a simulator's native log format, but must convert it to CSV for performance-metric calculation.

      The logging feature stores simulation/test parameters (e.g., time step size, model count, wall time) and raw simulation data (e.g., velocity, position) for each simulation time step of each test. Each test has separate MCAP and CSV files that are used for post-processing of the log data.

    • Post-processing: The CSV log for each test is consumed by the post-processing script to calculate performance metrics. The test parameters are stored in the top two rows of the CSV file, and the remaining rows contain the simulation data for each time step (per model, where applicable). Refer to the CSV log files in the repository for more details.

      The simulator/test parameters of each test act as input to the post-processing script, which selects the benchmark scenario (e.g., with or without gravity) for which the analytical solution is generated. The logged model states are then compared against this analytical solution, and the errors/metrics are calculated and stored along with the respective test parameters. The output is a CSV file of performance metrics (e.g., max position error, max velocity error) for a particular test scenario (e.g., boxes_model_count or boxes_dt), with the performance metrics and test parameters of each test in each row (see the post-processing sketch below).

    • Set-model-state plugin and link velocity reset component for new Gazebo: Added a link velocity reset component to new Gazebo, which resets the states (linear and angular velocity) of a link entity, and a model state plugin, which allows initial model states to be set through an SDF plugin tag.

    • Migration to new Gazebo (gz-sim): The boxes benchmark has been migrated to new Gazebo (Ionic) and has been tested with DART and Bullet-Featherstone.

  • Pull requests:

Note

  • Indicates open pull request.
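
A minimal sketch of the dynamic world generation described above, assuming a plain Python string template rather than the repository's actual ERB pipeline (the real template is boxes.world.erb). The file-name scheme, box layout, and generate_world helper are illustrative:

```python
# Sketch: generate an SDF world with `model_count` boxes (the benchmark's
# 1 x 4 x 9 dimensions) at runtime and save it in the test directory.
# Illustrative only; the repository uses ERB templates instead.
from pathlib import Path

BOX_MODEL = """\
    <model name="box_{i}">
      <pose>0 {y} 4.5 0 0 0</pose>
      <link name="link">
        <collision name="collision">
          <geometry><box><size>1 4 9</size></box></geometry>
        </collision>
        <visual name="visual">
          <geometry><box><size>1 4 9</size></box></geometry>
        </visual>
      </link>
    </model>
"""

def generate_world(model_count: int, gravity: bool, out_dir: Path) -> Path:
    gravity_vec = "0 0 -9.8" if gravity else "0 0 0"
    models = "".join(BOX_MODEL.format(i=i, y=10 * i) for i in range(model_count))
    sdf = (
        '<?xml version="1.0"?>\n'
        '<sdf version="1.6">\n'
        '  <world name="boxes">\n'
        f'    <gravity>{gravity_vec}</gravity>\n'
        f'{models}'
        '  </world>\n'
        '</sdf>\n'
    )
    out_path = out_dir / f"boxes_model_count_{model_count}.world"
    out_path.write_text(sdf)
    return out_path
```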
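A minimal sketch of the MCAP-to-CSV conversion idea; the repository's mcap_to_csv.py is the authoritative version. This assumes the mcap and mcap-protobuf-support packages, and the topic name and message fields are hypothetical (they depend on the .proto definition, e.g. box_msg.proto):

```python
# Sketch: read protobuf-encoded messages from an MCAP log and write selected
# fields to CSV. Topic name and field layout are hypothetical.
import csv
from mcap.reader import make_reader
from mcap_protobuf.decoder import DecoderFactory

def mcap_to_csv(mcap_path: str, csv_path: str) -> None:
    with open(mcap_path, "rb") as src, open(csv_path, "w", newline="") as dst:
        reader = make_reader(src, decoder_factories=[DecoderFactory()])
        writer = csv.writer(dst)
        writer.writerow(["log_time", "pos_x", "pos_y", "pos_z"])
        for schema, channel, message, msg in reader.iter_decoded_messages(
            topics=["/model_state"]  # hypothetical topic name
        ):
            # Field access depends on the logged protobuf message definition.
            writer.writerow([message.log_time,
                             msg.position.x, msg.position.y, msg.position.z])
```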
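And a sketch of the post-processing step for the boxes benchmark: read the test parameters from the top two rows of a per-test CSV log, build the analytical free-flight trajectory for that scenario, and reduce the per-step errors to a summary metric. The parameter and column names are illustrative, not the repository's exact schema:

```python
# Sketch: compute the max position error of one test against the analytical
# solution p(t) = p0 + v0*t + 0.5*g*t^2. Parameter/column names are assumed.
import csv
import numpy as np

def max_position_error(log_csv: str) -> dict:
    with open(log_csv, newline="") as f:
        rows = list(csv.reader(f))
    params = dict(zip(rows[0], rows[1]))        # row 0: names, row 1: values
    dt = float(params["dt"])                    # assumed parameter name
    g = np.array([0.0, 0.0, -9.8]) if params.get("gravity") == "true" \
        else np.zeros(3)

    data = np.array(rows[3:], dtype=float)      # row 2: data header; rest: steps
    t, pos = data[:, 0], data[:, 1:4]           # assumed column layout
    p0, v0 = pos[0], (pos[1] - pos[0]) / dt     # crude initial-state estimate

    analytical = p0 + v0 * t[:, None] + 0.5 * g * t[:, None] ** 2
    errors = np.linalg.norm(pos - analytical, axis=1)
    return {"dt": dt, "max_position_error": float(errors.max())}
```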

Benchmark tests

  • Boxes benchmark: Free-floating rigid bodies
    • Varying parameters: Time step size, number of models and initial conditions
    • Comparison metrics: Accuracy and computational speed.
    • log msg: box_msg.proto
    • world: boxes.world.erb
  • Triball benchmark: Rigid bodies in contact
    • Varying parameters: Centre-of-gravity height, initial conditions, and friction model.
    • Comparison metrics: Contact-force accuracy, energy conservation, and computational speed.
    • log msg: triball_msg.proto
    • world: triball_contact.world.erb

Benchmark 1: Free-floating rigid bodies

  • Model: box with dimensions 1 x 4 x 9
  • Constant gravity field
  • Initial condition: Large velocity about the y-axis of the body frame.
  • Expected behaviour: Parabolic trajectory (with gravity), straight-line trajectory (without gravity), momentum conservation, and energy conservation (see the sketch below).
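
For a 1 x 4 x 9 box the y-axis is the intermediate axis of inertia, so a large spin about it produces tumbling that is sensitive to integration error. The sketch below computes the quantities that should stay constant during free flight; the mass value is illustrative and the inertia follows the standard solid-box formula:

```python
# Sketch: conservation checks for a free-floating 1 x 4 x 9 box. With
# gravity, total energy and the world-frame angular momentum R @ I @ w_body
# should stay constant, and the centre of mass follows the parabola
# p(t) = p0 + v0*t + 0.5*g_vec*t^2. The 10 kg mass is illustrative.
import numpy as np

def box_inertia(m: float, dims) -> np.ndarray:
    """Inertia tensor of a solid box about its centre, in the body frame."""
    x, y, z = dims
    return (m / 12.0) * np.diag([y**2 + z**2, x**2 + z**2, x**2 + y**2])

def total_energy(m, I, v_world, w_body, height, g=9.8):
    """Translational + rotational kinetic energy plus potential energy."""
    return 0.5 * m * v_world @ v_world + 0.5 * w_body @ I @ w_body + m * g * height

I = box_inertia(10.0, (1.0, 4.0, 9.0))
E0 = total_energy(10.0, I, np.zeros(3), np.array([0.0, 5.0, 0.0]), height=5.0)
```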


Gazebo Classic

  • Varying time step size



  • Varying model count


Gazebo Ionic

  • Varying time step size



  • Varying model count


Benchmark 2: Rigid bodies in contact

  • Model: ball radius 0.02 m, cylinder radius 0.25 m, and face altitude of the triangle 0.15 m.
  • Constant gravity field
  • Three contact points
  • Initial condition: Linear velocity along the y-axis of the body frame and angular velocity about the z-axis of the body frame.
  • Expected behaviour: Friction should be dissipative, energy should be conserved, and normal forces should match the analytical solution (see the sketch below).
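
For the static case (centre of gravity above the centroid of the contact triangle) the analytical normal forces follow from force balance and moment balance about the two horizontal axes. The sketch below solves that 3 x 3 linear system; the mass and the contact-circle radius are illustrative assumptions, not the benchmark's exact geometry:

```python
# Sketch: static normal forces at three contact points on flat ground.
# Equations: sum(N_i) = m*g, sum(N_i*x_i) = m*g*cx, sum(N_i*y_i) = m*g*cy.
import numpy as np

def normal_forces(contacts_xy: np.ndarray, cog_xy, m: float, g: float = 9.8):
    A = np.vstack([np.ones(3), contacts_xy[:, 0], contacts_xy[:, 1]])
    b = m * g * np.array([1.0, cog_xy[0], cog_xy[1]])
    return np.linalg.solve(A, b)

# Three contacts 120 degrees apart on a circle (radius value is illustrative).
angles = np.deg2rad([90.0, 210.0, 330.0])
contacts = 0.25 * np.stack([np.cos(angles), np.sin(angles)], axis=1)

# With the COG over the centroid, each contact carries m*g/3.
print(normal_forces(contacts, cog_xy=(0.0, 0.0), m=1.0))
```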


Future work:

  1. Migration of triball-benchmark to New Gazebo.
  2. Post-processing script for triball-benchmark.
  3. Migration of available benchmarks to Mujoco and DRAKE.
  4. Addition of new simple benchmarking worlds.

Acknowledgement:

I'd like to thank my mentor, Steve Peters, for his continuous support and guidance. Working on this project was a whole new experience for me, as I learned how Gazebo works and how to contribute to the open-source community. Attending the weekly meetings organized by the Gazebo team was an excellent experience. I'd also like to thank the Open Source Robotics Foundation and the GSoC team for giving me this opportunity. I'm confident that this experience has provided me with skills that will enable me to continue contributing to open source in the future.
