Final Report for Google Summer of Code 2022

Benchmarks (PyBaMM), NumFOCUS

Details

Name: Vaibhav Chopra
Organization: NumFOCUS
Sub-Organization: PyBaMM
Mentors: Robert Timms, Valentin Sulzer, Ferran Brosa Planella, Priyanshu Agarwal

Original project idea

Benchmarks

PyBaMM currently has a very basic benchmark framework. The aim of this project is to improve the benchmark suite, the website displaying the results and the analysis tools.

Expected outcomes

  • [Easy] Improve the benchmark website and populate it with new tests
  • [Hard/stretch goal] Develop tools to analyse the data stored in the repository

Abstract

The main aim of this project was to improve the existing benchmark framework of PyBaMM, so that contributors and developers of PyBaMM can compare the relative performance of code within the PyBaMM repository. This also allows them to identify inefficiencies introduced by a particular commit.

Work done

  • Benchmarking the parameter sets - I benchmarked battery models (Single Particle Model, Doyle-Fuller-Newman, and Single Particle Model with electrolyte) by building, simulating, and solving them with different parameter sets. Initially, I wrote each benchmark individually, which generated a lot of repetitive code. I simplified these benchmarks using the params feature provided by asv, which let me cycle through parameter sets without repeating code (see the first sketch after this list).
    Relevant blog posts - Community Bonding and week 1
    pybamm-team/PyBaMM#2086 - Benchmarks for model X parameter set combinations.

  • Creating unit benchmarks - The next thing that we decided to benchmark was a battery problem (a PyBaMM example) broken into its constituent stages: creating a PyBaMM expression, parameterising the model, discretising the model, and solving the model (see the second sketch after this list).
    Relevant blog posts - Community Bonding and week 1
    pybamm-team/PyBaMM#2092 - add unit benchmarks.

  • Benchmarking experiments - I benchmarked the CCCV and GITT experiments by simulating them with the Single Particle Model and the Doyle-Fuller-Newman model. Furthermore, the parameter sets provided to the model were varied between Marquis2019 and Chen2020 (see the third sketch after this list).
    Relevant blog posts - Community Bonding and week 1
    pybamm-team/PyBaMM#2106 - add experiment benchmarks.

  • Documentation for benchmarks - The benchmarks directory lacked documentation, so I documented how to run the existing benchmarks and how to add new ones using asv.
    Relevant blog posts - Week 2-3
    pybamm-team/PyBaMM#2120 - docs for asv.

  • Memory benchmarks - In PyBaMM, there were memory leaks when performing simulations repeatedly, so I added memory benchmarks using the memory benchmarks feature of asv. After writing them, I realized that asv's memory benchmarks track the size of an object rather than the RAM consumption of a function, which is not what my mentors had planned to benchmark, but we decided to add them anyway (see the fourth sketch after this list).
    Relevant blog posts - Week 2-3
    pybamm-team/PyBaMM#2121 - add memory benchmarks for creating expression, parameterising, discretising and solving a model.

  • Different model options benchmarks - PyBaMM has different model options which can be passed while initializing a model. My task was to build, simulate, and solve the Single Particle Model and the Doyle-Fuller-Newman model with different model options (such as loss of active material, lithium plating, SEI, particle, thermal, and surface form), their respective values, and compatible parameter sets. The main challenge here was the total number of possible configurations. To prevent code duplication, I used a function as a template and passed all the settings as arguments. Lastly, I used the params feature of asv to cycle through the arguments and benchmark all the possible configurations with minimal code (see the fifth sketch after this list).
    Relevant blog posts - Week 4-5
    pybamm-team/PyBaMM#2132 - add degradation models benchmarks.

  • Work precision sets and an automated workflow for the same - Work precision sets benchmark solver settings against solve time. This started with varying the absolute tolerance and later expanded to other settings: relative tolerance, dt_max, mesh size, and number of states. The workflow I added runs all the work precision sets on every release, saves the resulting plots, and displays them in a markdown file (see the final sketch after this list).
    Relevant blog posts - Week 2-3, Week 4-5, Week 6-7
    pybamm-team/PyBaMM#2157 - Add work precision sets and an automated workflow for the same.

  • Validation repository - I moved all the work precision sets to a different repository, as the validation benchmarks (COMSOL comparison, Ecker comparison, and discharge curve) that I had added to PyBaMM were not working in GitHub Actions. This happened because the COMSOL results directory is not installed with PyBaMM. Another reason to move the work precision sets to a separate repository was that the plot images were taking up a lot of space in PyBaMM's repository.
    pybamm-team/pybamm-validation#1 - Add validation benchmarks and work precision sets.

  • Workflows for validation repository - I added workflows in the validation repository that get triggered every time there is a push or a release in the main PyBaMM repository. On every push, they run all the work precision sets and update the readme file, which shows all the recent plots. On every release in the main PyBaMM repository, they create a release in the validation repository from the readme file, which contains the plots for the last commit before the release.
    pybamm-team/pybamm-validation#1 - Add workflows in the validation repository.
    pybamm-team/PyBaMM#2274 - Add a workflow in PyBaMM's repository which triggers the workflow in the validation repository.
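
The sketches below illustrate the patterns described above. They are minimal, illustrative versions rather than the exact merged code; class, method, and helper names are my own, and the PyBaMM calls assume a 2022-era release where parameter sets can be loaded by name. First, benchmarking one model over several parameter sets with asv's params feature:

```python
import pybamm


class TimeBuildAndSolve:
    """Illustrative asv benchmark: one timed method, many parameter sets."""

    # asv runs the benchmark once per value and reports each separately
    param_names = ["parameter_set"]
    params = [["Marquis2019", "Chen2020", "Ecker2015"]]

    def setup(self, parameter_set):
        self.model = pybamm.lithium_ion.SPM()
        self.parameter_values = pybamm.ParameterValues(parameter_set)

    def time_build_and_solve(self, parameter_set):
        sim = pybamm.Simulation(self.model, parameter_values=self.parameter_values)
        sim.solve([0, 3600])
```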
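
Second, a sketch of the unit benchmarks, with one asv method per stage of the standard PyBaMM pipeline (the stage boundaries here are approximate; for instance, the parameterisation method below also times model creation):

```python
import pybamm


class TimeSPMStages:
    """Illustrative unit benchmarks: one method per pipeline stage."""

    def setup(self):
        # Prepare a fully parameterised and discretised model for time_solve
        self.model = pybamm.lithium_ion.SPM()
        geometry = self.model.default_geometry
        params = self.model.default_parameter_values
        params.process_model(self.model)
        params.process_geometry(geometry)
        mesh = pybamm.Mesh(
            geometry, self.model.default_submesh_types, self.model.default_var_pts
        )
        disc = pybamm.Discretisation(mesh, self.model.default_spatial_methods)
        disc.process_model(self.model)

    def time_create_expression(self):
        pybamm.lithium_ion.SPM()

    def time_parameterise(self):
        model = pybamm.lithium_ion.SPM()
        model.default_parameter_values.process_model(model)

    def time_solve(self):
        self.model.default_solver.solve(self.model, [0, 3600])
```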
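
Third, an experiment benchmark. A CCCV protocol can be written as a pybamm.Experiment and passed to a simulation; the exact steps and cutoffs used in the merged benchmarks may differ:

```python
import pybamm

# One CCCV cycle, repeated three times (illustrative step strings)
cccv = pybamm.Experiment(
    [
        "Discharge at 1C until 3.3 V",
        "Rest for 1 hour",
        "Charge at 1C until 4.1 V",
        "Hold at 4.1 V until C/50",
    ]
    * 3
)

model = pybamm.lithium_ion.DFN()
parameter_values = pybamm.ParameterValues("Chen2020")
sim = pybamm.Simulation(model, parameter_values=parameter_values, experiment=cccv)
sim.solve()  # with an experiment, no explicit time interval is needed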
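
Fourth, the memory benchmarks. asv measures the size of the object returned by a mem_-prefixed method, which is what led to the realization above that these track object size rather than the RAM a function consumes:

```python
import pybamm


class MemSPM:
    """Illustrative asv memory benchmarks: asv reports the size of the
    returned object, not peak RAM usage during the call."""

    def mem_model(self):
        return pybamm.lithium_ion.SPM()

    def mem_solution(self):
        sim = pybamm.Simulation(pybamm.lithium_ion.SPM())
        return sim.solve([0, 3600])
```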
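
Fifth, the template-function pattern used for the model options benchmarks. The helper and the option/value pairs below are illustrative; the merged benchmarks cover far more combinations and pair each option with a compatible parameter set:

```python
import pybamm


def build_and_solve(option, value):
    """Hypothetical template: build, simulate, and solve an SPM with one
    model option set to the given value."""
    model = pybamm.lithium_ion.SPM({option: value})
    sim = pybamm.Simulation(model)
    sim.solve([0, 3600])


class TimeModelOptions:
    # A single params list of (option, value) pairs avoids asv's
    # cross-product behaviour, which would generate invalid combinations
    param_names = ["option_and_value"]
    params = [
        [
            ("thermal", "lumped"),
            ("surface form", "differential"),
            ("particle", "quadratic profile"),
        ]
    ]

    def time_spm_with_option(self, option_and_value):
        build_and_solve(*option_and_value)
```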
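
Finally, a work precision set reduced to its core idea: solve the same problem at several tolerances and record how solve time varies with the requested accuracy. The tolerance values and the choice of CasadiSolver here are illustrative:

```python
import time

import pybamm

abstols = [1e-4, 1e-6, 1e-8, 1e-10]

for atol in abstols:
    model = pybamm.lithium_ion.SPM()
    solver = pybamm.CasadiSolver(atol=atol, rtol=1e-6)
    sim = pybamm.Simulation(model, solver=solver)
    start = time.perf_counter()
    sim.solve([0, 3600])
    elapsed = time.perf_counter() - start
    print(f"atol={atol:.0e}: solved in {elapsed:.3f} s")
```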

What's next

Some things that I plan to continue or start after GSoC -

  • Maintaining the newly added benchmarking suite.
  • Adding more benchmarks for any new functionality added to PyBaMM.
  • Guiding new contributors to develop benchmarks while they add new functionality.
  • Guiding developers and users interested specifically in PyBaMM’s benchmarking suite.
  • Maybe implementing a similar benchmark suite for liionpack.

Further, some of the functionality added during GSoC has not been tested yet. I plan to fix the bugs or errors that surface when this functionality is tested. Last but not least, I am still working on a couple of final PRs, which should be merged by next week.

Final words

This summer was easily the most productive summer I have ever had. I am grateful for all the help, mentorship, and guidance that my mentors provided. The project was extra challenging because my college held classes throughout the summer, but my mentors were very understanding and supportive. I had a lot of fun working with the PyBaMM team, and I hope to continue contributing to PyBaMM in the future.
