
@moble
Last active July 6, 2021 18:32
Show how to speed up scipy.integrate.odeint simply by decorating the right-hand side with numba's jit function
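
Since the notebook itself is not rendered here, the following is a minimal sketch of the idea the description refers to: decorate the right-hand-side function with numba's jit and pass it straight to odeint. The specific ODE below (a simple harmonic oscillator) is just an illustrative stand-in, not the system from the notebook.

import numpy as np
from numba import njit
from scipy.integrate import odeint

# Right-hand side compiled by numba; odeint calls it like any Python callable.
@njit
def rhs(y, t):
    dy = np.empty(2)
    dy[0] = y[1]    # example system: y'' = -y
    dy[1] = -y[0]
    return dy

t = np.linspace(0.0, 10.0, 1001)
y0 = np.array([1.0, 0.0])
sol = odeint(rhs, y0, t)
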
@iurisegtovich

This gist is old (2014) but still relevant, imo.

Currently I am not able to use ode with a jitted function; I get that same error: "TypeError: not enough arguments: expected 2, got 1".

Note there is a suggestion to bypass the argument error by using a plain Python wrapper function around the jitted function: https://stackoverflow.com/questions/32744658/using-numba-jit-with-scipy-integrate-ode
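
For reference, the workaround from that Stack Overflow question is roughly the following: keep the jitted function, but hand ode a plain Python function that just forwards to it. The RHS here is a made-up example, not the one from this gist.

import numpy as np
from numba import njit
from scipy.integrate import ode

@njit
def rhs_jit(t, y):
    dy = np.empty(2)
    dy[0] = y[1]
    dy[1] = -y[0]
    return dy

# Plain-Python wrapper: ode calls this, and it forwards to the compiled function,
# which avoids the "not enough arguments" error seen when passing rhs_jit directly.
def rhs(t, y):
    return rhs_jit(t, y)

solver = ode(rhs).set_integrator("lsoda")
solver.set_initial_value(np.array([1.0, 0.0]), 0.0)
while solver.successful() and solver.t < 10.0:
    solver.integrate(solver.t + 0.1)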

I was able to use jit with odeint, as mentioned here, but I had a problem with the following statement:

" Fortunately, the first argument to odeint is an array that gets thrown away anyway, so we can just replace the values in that array and return it."

In that case I got different (absurd) results when messing with the original y memory;
instead I made a scratch variable, preallocated outside the function, to hold the dy:

scratch_dN = np.zeros((5,))  # scratch memory allocated externally to dNi, reused between calls
...
sol = odeint(dNi, Ni0, t, args=(scratch_dN,))
...

where dNi had been defined as:

def dNi(N, t, scratch_dN):  # note the extra scratch_dN argument
    ...
    vector_dNi = scratch_dN  # rename to reuse the preallocated memory
    ...
    vector_dNi[0] = ...
    ...  # up to vector_dNi[4]
    ...
    return vector_dNi
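
Filling in that skeleton, a self-contained version of the same pattern might look like this (with a made-up two-component RHS standing in for the original five-component system):

import numpy as np
from numba import njit
from scipy.integrate import odeint

@njit
def dNi(N, t, scratch_dN):
    # Write the derivatives into the preallocated scratch array instead of
    # modifying N or allocating a new array on every call.
    scratch_dN[0] = N[1]
    scratch_dN[1] = -N[0]
    return scratch_dN

scratch_dN = np.zeros(2)          # scratch memory, reused between calls
Ni0 = np.array([1.0, 0.0])        # example initial conditions
t = np.linspace(0.0, 10.0, 1001)
sol = odeint(dNi, Ni0, t, args=(scratch_dN,))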


moble commented Mar 24, 2021

@iurisegtovich Yeah, I think something has changed internally with scipy. Also note that odeint and ode are actually considered the "Old API" now; the new one uses solve_ivp and friends, but these examples run about 1,000 times slower with the new API! (I imagine it's all in the overhead.)
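
For anyone comparing the two, the generic call signatures look like this (toy RHS; note that the argument order of the RHS function is swapped between the two APIs):

import numpy as np
from scipy.integrate import odeint, solve_ivp

def rhs_old(y, t):   # odeint convention: f(y, t)
    return [y[1], -y[0]]

def rhs_new(t, y):   # solve_ivp convention: f(t, y)
    return [y[1], -y[0]]

t = np.linspace(0.0, 10.0, 1001)
y0 = [1.0, 0.0]

sol_old = odeint(rhs_old, y0, t)
sol_new = solve_ivp(rhs_new, (t[0], t[-1]), y0, t_eval=t)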

Anyway, your method is a good way to go. Numba also supports jitclass now, so you could also pass a more complicated object as one of the args, with all sorts of fancy capabilities.
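
As a rough sketch of the jitclass idea (the parameter class and RHS here are invented for illustration, not taken from the notebook):

import numpy as np
from numba import njit, float64
from numba.experimental import jitclass
from scipy.integrate import odeint

# A compiled parameter container; the jitted RHS can read its attributes
# without dropping back into the interpreter.
@jitclass([("omega", float64), ("damping", float64)])
class Params:
    def __init__(self, omega, damping):
        self.omega = omega
        self.damping = damping

@njit
def rhs(y, t, p):
    dy = np.empty(2)
    dy[0] = y[1]
    dy[1] = -p.omega**2 * y[0] - p.damping * y[1]
    return dy

t = np.linspace(0.0, 10.0, 1001)
sol = odeint(rhs, np.array([1.0, 0.0]), t, args=(Params(2.0, 0.1),))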

But maybe more importantly, Python itself has sped up significantly, so that even using the naive approach in this notebook gives nearly the same speed as when using numba. Obviously, really complicated functions will still benefit from numba, but in this example numba is actually a little bit slower in my tests.


Nicholaswogan commented Jul 6, 2021

I wrote a wrapper around LSODA which has no overhead: https://github.com/Nicholaswogan/NumbaLSODA . During an ODE solve, the Python interpreter is never used, so it is fast even for small problems:

from NumbaLSODA import lsoda_sig, lsoda
import numba as nb
import numpy as np

@nb.cfunc(lsoda_sig)
def RHS_nb(t, y, dy, p):
    dy[0], dy[1] = t*y[1], y[0]

funcptr = RHS_nb.address

@nb.njit()
def test():
    sol, success = lsoda(funcptr, y0_, t)

y0_ = np.array(y0)  # y0 and t as defined earlier in the notebook
%timeit test()

result is

26.2 µs ± 342 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
