@danieljfarrell
Last active December 22, 2015 13:59
Fleshing out an ODE solver interface for scipy.
# Instantiation.
# Make a solver object, created automatically with default options. The solver is instantiated
# only with the variables that are common to all of the solvers: 1) the r.h.s. function of the
# ODE to solve, 2) a start time, 3) initial conditions, and 4) a Jacobian function. Strictly
# speaking the Jacobian is optional here because only some solvers require it. As a
# compromise let's pass it as a keyword argument; this also avoids any awkward 'use_jac'
# option because the solver either has a Jacobian function at init time or it does not.
solver = odesolver("cvodes", ode_function, time_start, initial_conditions, jacobian_function=None)
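A minimal sketch of how the `odesolver` factory could dispatch on the solver name. Everything below (the backend class, the registry, the default options) is hypothetical and only illustrates the proposed call signature; it is not part of any real scipy API:

```python
# Hypothetical sketch only: class name, registry, and defaults are illustrative.
class CVodesSolver:
    def __init__(self, f, t0, y0, jacobian_function=None):
        self.f = f                                   # r.h.s. function of the ODE
        self.t = t0                                  # current time, starts at t0
        self.y = list(y0)                            # current solution, starts at y0
        self.jacobian_function = jacobian_function   # optional Jacobian
        # Sensible defaults, exposed as a plain dictionary (see below).
        self.options = {"method": "bdf", "method_order": 5,
                        "abstol": 1e-6, "reltol": 1e-6}

# Registry mapping solver names to backend classes.
_SOLVERS = {"cvodes": CVodesSolver}

def odesolver(name, f, t0, y0, jacobian_function=None):
    """Create a solver backend by name, initialised with default options."""
    try:
        cls = _SOLVERS[name]
    except KeyError:
        raise ValueError(f"unknown solver {name!r}")
    return cls(f, t0, y0, jacobian_function=jacobian_function)
```

The registry makes adding a new backend a one-line change, and keeps the factory signature limited to the arguments common to all solvers, as proposed above.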
# Solver options.
# I would like each solver to have a default set of sensible options contained in a
# simple Python dictionary. This is similar to the MATLAB approach with its 'odeset'
# integrator options. The nice thing about using a dictionary, rather than fixed keyword
# arguments as is currently done with the `set_integrator` method, is that it allows different
# options for different solvers. It still makes sense to have a unified set of options that
# are homogeneous across all solvers, but where specialisation is needed it can easily be
# achieved this way (with a dictionary).
options = solver.options
# Change solver options (optional).
# We can change the options simply by changing the values in the dictionary.
options["method"] = "bdf"                  # Pick a method for the solver ("bdf" or "adams").
options["method_order"] = 5                # Pick an order for the method (e.g. 5 or 12).
options["abstol"] = 1e-6                   # Exit conditions...
options["reltol"] = 1e-6                   #
options["max_step"] = (1, 2, 3)            # For the step options I am assuming that we
options["initial_step"] = (0.1, 0.2, 0.3)  # have 3 coupled ODEs that need solving.
# Solver specific options can be prefixed with the solver name.
options["cvodes_include_magic_juice"] = True
# Internal sanity checks can be performed when the new values are extracted from the dict.
options["method_order"] = 42  # raises 'method_order must be between 1 and 5'
options["twonk"] = "blah"     # raises '"twonk" is not a valid option for the cvodes solver'
# Get a clear overview of how the problem is being treated. After all, `options` _is_ just a
# dictionary.
print(options)
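One way to get those sanity checks is to subclass `dict` and validate on assignment. This is a sketch under assumptions (the gist leaves open whether validation happens on assignment or when the solver reads the options); the class name and validators are illustrative only:

```python
# Sketch only: a dict subclass that validates option names and values on
# assignment. The option names and validity tests are illustrative.
class SolverOptions(dict):
    # Allowed common options and a validity test for each.
    _VALIDATORS = {
        "method": lambda v: v in ("bdf", "adams"),
        "method_order": lambda v: isinstance(v, int) and 1 <= v <= 5,
        "abstol": lambda v: isinstance(v, float) and v > 0,
        "reltol": lambda v: isinstance(v, float) and v > 0,
        "max_step": lambda v: True,
        "initial_step": lambda v: True,
    }

    def __init__(self, solver_name, **defaults):
        super().__init__()
        self.solver_name = solver_name
        for key, value in defaults.items():
            self[key] = value  # route defaults through validation too

    def __setitem__(self, key, value):
        # Solver-specific options are prefixed with the solver name.
        if key.startswith(self.solver_name + "_"):
            super().__setitem__(key, value)
            return
        if key not in self._VALIDATORS:
            raise KeyError(f"{key!r} is not a valid option for the "
                           f"{self.solver_name} solver")
        if not self._VALIDATORS[key](value):
            raise ValueError(f"{value!r} is not a valid value for {key!r}")
        super().__setitem__(key, value)
```

Because it is still a `dict`, `print(options)` continues to give the clear overview described above.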
# Time step the solver.
# I prefer scipy's method of time stepping the solver over MATLAB's method of `events`. In the
# MATLAB method one must specify a list of intermediate time points of interest; the solution
# at these time points is then exported when the solver is run. I much prefer the iterative
# approach because it gives me more control over the solution iteration. There is even the
# possibility of updating the options during the iteration...
dt = 1
for i in range(10):
    success = solver.integrate(solver.t + dt)
    options["method_order"] = 2  # Maybe we make dynamic changes to the solver.
    if not success:
        report = solver.error_report()  # It would be nice to return a report that gives the
                                        # user some advice on where the solver bailed out.
                                        # This is more than just forwarding the FORTRAN error
                                        # message. If the solver is iterated again, an error
                                        # is raised.
# Export the solution.
y = solver.y
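For comparison, here is how the same stepping pattern looks with scipy's existing `scipy.integrate.ode` class (this part is the real, current API, not a proposal), solving the test problem dy/dt = -0.5*y:

```python
import math

from scipy.integrate import ode

def rhs(t, y):
    # Simple test problem: dy/dt = -0.5 * y, exact solution y0 * exp(-0.5 * t).
    return [-0.5 * y[0]]

solver = ode(rhs)
# Today's interface packs the options into set_integrator keyword arguments,
# which is exactly what the options dictionary above is meant to replace.
solver.set_integrator("vode", method="bdf", order=5, atol=1e-8, rtol=1e-8)
solver.set_initial_value([1.0], 0.0)

dt = 1.0
while solver.successful() and solver.t < 10.0:
    solver.integrate(solver.t + dt)

y_final = solver.y[0]  # close to math.exp(-5.0)
```

Note the same iterative `integrate(solver.t + dt)` pattern is already possible today; the proposal mainly changes how options are set and how errors are reported.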
@astrojuanlu

I took a look at this but I still haven't had time to think carefully about it. I'll let you know later this week (hopefully). Thank you very much for pressing on with this! :)

@moorepants

What are the speed implications of having the Python for loop call `.integrate`?

In many cases you would want the integration loop to run at a low level, and fast. It would also be great to somehow pass a low-level function (C or Fortran) to integrate, so that you wouldn't incur the overhead of callbacks to Python for the r.h.s. function at every time step in the solver.

@danieljfarrell
Author

@moorepants I don't think there are any speed implications of calling integrate inside a for loop, provided the C or FORTRAN underpinning the integrate call can operate in this manner. What we want to avoid is re-initialising the C solver just so the Python interface can be implemented, as this would introduce unnecessary overhead.

It's an interesting idea to pass a C function directly, and we should certainly look into it. But first we should probably focus on fixing the API and implementing some additional functionality.
