@iurisegtovich Yeah, I think something has changed internally in scipy. Also note that odeint and ode are now considered the "Old API"; the new one uses solve_ivp and friends, but these examples run about 1,000 times slower with the new API!!! (I can imagine it's all in the overhead.)
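For small problems like the one in this gist, a rough way to see the overhead difference yourself is to time one solve with each API (the RHS below is the same toy system used later in this thread; the exact slowdown you see will depend on your scipy version):

```python
import numpy as np
from scipy.integrate import odeint, solve_ivp
import timeit

def rhs(t, y):
    # toy system: dy0/dt = t*y1, dy1/dt = y0
    return [t * y[1], y[0]]

y0 = [1.0, 0.0]
t = np.linspace(0.0, 10.0, 100)

# old API: odeint expects f(y, t) by default; tfirst=True lets us reuse f(t, y)
t_old = timeit.timeit(lambda: odeint(rhs, y0, t, tfirst=True), number=100)

# new API: solve_ivp carries more per-call overhead for small problems
t_new = timeit.timeit(
    lambda: solve_ivp(rhs, (t[0], t[-1]), y0, t_eval=t), number=100
)

print(f"odeint:    {t_old / 100 * 1e6:.1f} µs per solve")
print(f"solve_ivp: {t_new / 100 * 1e6:.1f} µs per solve")
```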
Anyway, your method is a good way to go. Numba also supports jitclass
now, so you could also pass a more complicated object as one of the args, with all sorts of fancy capabilities.
But maybe more importantly, python itself has sped up significantly, so that even using the naive approach in this notebook gives nearly the same speed as when using numba. Obviously, really complicated functions will still benefit from numba, but in this example numba is actually a little bit slower in my tests.
I wrote a wrapper to LSODA which has no overhead: https://github.com/Nicholaswogan/NumbaLSODA . During an ODE solve, the python interpreter is never used, so everything is fast for small problems:
```python
import numpy as np
import numba as nb
from NumbaLSODA import lsoda_sig, lsoda

@nb.cfunc(lsoda_sig)
def RHS_nb(t, y, dy, p):
    dy[0], dy[1] = t * y[1], y[0]

funcptr = RHS_nb.address
y0_ = np.array(y0)  # y0 and t as defined earlier

@nb.njit()
def test():
    sol, success = lsoda(funcptr, y0_, t)

%timeit test()
```
The result is:

```
26.2 µs ± 342 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
This gist is old (2014) but still relevant, imo.
I am currently not able to use `ode`; I get that same error: "TypeError: not enough arguments: expected 2, got 1".
I was able to use jit with odeint, as mentioned here, but I had a problem with the following statement:
in that case I got different (absurd) results when writing into the original y memory;
instead I preallocated a scratch variable outside the function to hold the dy values,
where dNi had been defined as: