SciPy+Numba odeint vs Julia ODE vs NumbaLSODA: 50x performance difference on stiff ODE

SciPy+Numba odeint vs Julia DifferentialEquations.jl vs NumbaLSODA Summary

All are solved at reltol=1e-3, abstol=1e-6 using the fastest ODE solver of the respective package for the given problem.

Absolute Performance Numbers:

  • SciPy LSODA through odeint takes ~489μs
  • SciPy LSODA through odeint with Numba takes ~257μs
  • NumbaLSODA takes ~25μs
  • DifferentialEquations.jl Rosenbrock23 takes ~9.2μs

Relative Performance Numbers (vs. DifferentialEquations.jl Rosenbrock23):

  • SciPy LSODA through odeint takes 53x as long
  • SciPy LSODA through odeint with Numba takes 28x as long
  • NumbaLSODA takes 2.7x as long
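For reference, every implementation below solves the same Robertson stiff problem; written out from the right-hand sides in the code, the system is:

```latex
\begin{aligned}
\dot{y}_1 &= -k_1 y_1 + k_3 y_2 y_3 \\
\dot{y}_2 &= \phantom{-}k_1 y_1 - k_2 y_2^2 - k_3 y_2 y_3 \\
\dot{y}_3 &= \phantom{-}k_2 y_2^2
\end{aligned}
\qquad k_1 = 0.04,\; k_2 = 3\times 10^7,\; k_3 = 10^4,\; y(0) = (1, 0, 0),\; t \in [0, 10^5]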
from numbalsoda import lsoda_sig, lsoda
from numba import njit, cfunc
import numpy as np
import timeit

@cfunc(lsoda_sig)
def rhs(t, u, du, p):
    k1 = p[0]
    k2 = p[1]
    k3 = p[2]
    du[0] = -k1*u[0] + k3*u[1]*u[2]
    du[1] = k1*u[0] - k2*u[1]**2 - k3*u[1]*u[2]
    du[2] = k2*u[1]**2

u0 = np.r_[1.0, 0.0, 0.0]
t = np.r_[0.0, 1e5]
p = np.r_[0.04, 3e7, 1e4]
rhs_address = rhs.address

usol, success = lsoda(rhs_address, u0, t, data=p, rtol=1.0e-3, atol=1.0e-6)
assert success

@njit
def time_func():
    usol, success = lsoda(rhs_address, u0, t, data=p, rtol=1.0e-3, atol=1.0e-6)
    assert success

time_func()  # compile before timing
print('NumbaLSODA', timeit.Timer(time_func).timeit(number=1000)/1000, 'seconds')
print('NumbaLSODA', timeit.Timer(time_func).timeit(number=1000)/1000, 'seconds')
using OrdinaryDiffEq, StaticArrays, BenchmarkTools

function rober(u,p,t)
    y₁,y₂,y₃ = u
    k₁,k₂,k₃ = p
    du1 = -k₁*y₁+k₃*y₂*y₃
    du2 = k₁*y₁-k₂*y₂^2-k₃*y₂*y₃
    du3 = k₂*y₂^2
    SA[du1,du2,du3]
end

prob = ODEProblem{false}(rober,SA[1.0,0.0,0.0],(0.0,1e5),SA[0.04,3e7,1e4])
# Defaults to reltol=1e-3, abstol=1e-6
@btime sol = solve(prob,Rosenbrock23(),save_everystep=false) # 9.300 μs (26 allocations: 3.03 KiB)
from scipy.integrate import odeint
import numba
import timeit

def rober(u,t):
    k1 = 0.04
    k2 = 3e7
    k3 = 1e4
    y1, y2, y3 = u
    dy1 = -k1*y1+k3*y2*y3
    dy2 = k1*y1-k2*y2*y2-k3*y2*y3
    dy3 = k2*y2*y2
    return [dy1,dy2,dy3]

u0 = [1.0,0.0,0.0]
t = [0.0, 1e5]

numba_f = numba.jit(rober,nopython=True)
sol = odeint(numba_f,u0,t,rtol=1e-3,atol=1e-6)

def time_func():
    sol = odeint(numba_f,u0,t,rtol=1e-3,atol=1e-6)

timeit.Timer(time_func).timeit(number=100)/100 # 0.00025674299999081993 seconds
timeit.Timer(time_func).timeit(number=100)/100 # 0.0002520930000173394 seconds
from scipy.integrate import odeint
import timeit

def rober(u,t):
    k1 = 0.04
    k2 = 3e7
    k3 = 1e4
    y1, y2, y3 = u
    dy1 = -k1*y1+k3*y2*y3
    dy2 = k1*y1-k2*y2*y2-k3*y2*y3
    dy3 = k2*y2*y2
    return [dy1,dy2,dy3]

u0 = [1.0,0.0,0.0]
t = [0.0, 1e5]

sol = odeint(rober,u0,t,rtol=1e-3,atol=1e-6)

def time_func():
    sol = odeint(rober,u0,t,rtol=1e-3,atol=1e-6)

timeit.Timer(time_func).timeit(number=100)/100 # 0.00048923599999398 seconds
timeit.Timer(time_func).timeit(number=100)/100 # 0.00047277800000301796 seconds
maedoc commented Dec 6, 2021

I realise this is an older gist, but it should be pointed out that timeit.Timer(time_func).timeit(number=100) doesn't return the average time taken, but the total for all 100 iterations, so for your numbers here the Numba example is ~6x slower than Julia, not 500x. I reran that example and rewrote it for NumbaLSODA, and the latter is ~6x faster, so equivalent to Julia per the numbers here:

from NumbaLSODA import lsoda_sig, lsoda
from numba import njit, cfunc
import numpy as np
import timeit

@cfunc(lsoda_sig)
def rhs(t, u, du, p):
  k1 = 0.04
  k2 = 3e7
  k3 = 1e4
  y1 = u[0]
  y2 = u[1]
  y3 = u[2]
  du[0] = -k1*y1+k3*y2*y3
  du[1] =  k1*y1-k2*y2*y2-k3*y2*y3
  du[2] =  k2*y2*y2

u0 = np.r_[1.0, 0.0, 0.0]
t = np.r_[0.0, 1e5]
usol, success = lsoda(rhs.address, u0, t)
assert success

def time_func():
    usol, success = lsoda(rhs.address, u0, t)
    assert success

print('NumbaLSODA', timeit.Timer(time_func).timeit(number=1000))
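maedoc's point generalizes: timeit.Timer(...).timeit(number=N) returns the total elapsed time for all N calls, not a per-call average. A minimal standalone sketch (with a stand-in workload, not the ODE solve):

```python
import timeit

def work():
    # stand-in workload; any cheap deterministic function will do
    return sum(range(1000))

N = 100
total = timeit.Timer(work).timeit(number=N)  # total seconds for all N calls
per_call = total / N                         # divide to get the per-call average
print(f"total={total:.6f}s, per-call={per_call:.8f}s")
```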

@ChrisRackauckas

Thanks! Yeah, this benchmark wasn't in the SciMLBenchmarks so it hasn't been tracked or gotten much love. I fixed that issue and updated it, since the Julia version now takes about 9.4μs. I also added the NumbaLSODA code. I'll add this to the tracker now.

maedoc commented Dec 10, 2021

It could be worth adding a comment about why this apples-to-oranges comparison is meaningful, e.g. an analytic Jacobian inlined into Rosenbrock vs. the finite differencing done by lsoda. That is certainly interesting, but more interesting comparisons would be with lsoda in Julia, or pushing a SymPy-derived Jacobian into a Rosenbrock method defined in Numba.
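As a rough sketch of one of those directions (an illustration, not part of the benchmark above): SciPy's odeint accepts a user-supplied analytic Jacobian via its Dfun parameter, which removes lsoda's internal finite differencing for this problem:

```python
from scipy.integrate import odeint

k1, k2, k3 = 0.04, 3e7, 1e4

def rober(u, t):
    y1, y2, y3 = u
    return [-k1*y1 + k3*y2*y3,
            k1*y1 - k2*y2*y2 - k3*y2*y3,
            k2*y2*y2]

def jac(u, t):
    # Analytic Jacobian d(du)/d(u) of the Robertson right-hand side
    y1, y2, y3 = u
    return [[-k1,             k3*y3,  k3*y2],
            [ k1, -2*k2*y2 - k3*y3, -k3*y2],
            [0.0,           2*k2*y2,    0.0]]

sol = odeint(rober, [1.0, 0.0, 0.0], [0.0, 1e5], Dfun=jac, rtol=1e-3, atol=1e-6)
```

Whether this closes any of the performance gap would need to be measured; it only addresses the Jacobian half of the comparison.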

@ChrisRackauckas

Alright, updated. Note that symbolic Jacobians don't actually matter for static array problems, since at that size forward-mode autodiff (which is automatic in the backend) tends to be just as efficient when under the default chunk size (in fact you get slightly better SIMD). I was just being lazy in the updating.

but more interesting comparisons would be with lsoda in Julia

Why would that be interesting? I assume most users would use the fastest and most accurate method for the given problem, not be like "oh in DifferentialEquations.jl I should use a slow method because Python only has slow methods". Maybe I'm not the most standard user, but I would think most people looking at this kind of case (i.e. clinical pharmacology) care about picking the method that is most efficient 🤷

Nicholaswogan commented Feb 28, 2022

@ChrisRackauckas I updated the numbalsoda benchmark. Using @njit on the timing function improves speed by a lot. Also, I changed the name of NumbaLSODA to all lowercase; lowercase is the standard apparently.

https://gist.github.com/Nicholaswogan/443c7d1778385b5b71bbd35a74289423

@ChrisRackauckas

Thanks!
