This example is a 4-dimensional geometric Brownian motion. The code
for the torchsde version is pulled directly from the
torchsde README
so that the comparison is against the authors' own code.
The only change to that example is the addition of an explicit `dt`
choice so that
the simulation method and time step match between the two programs.
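For reference, below is a minimal sketch of the kind of torchsde setup being benchmarked. This is not the exact script from the torchsde README: the drift and diffusion coefficients, time grid, and `dt` value here are illustrative placeholders; only the overall pattern (a diagonal-noise geometric Brownian motion integrated with `torchsde.sdeint` at a fixed `dt` and method) reflects the benchmark setup.

```python
# Illustrative sketch (not the exact README script): a 4-dimensional geometric
# Brownian motion integrated with torchsde, with an explicit dt and method so
# the step size can be matched to the DifferentialEquations.jl run.
# Parameter values, the time grid, and dt below are placeholders.
import time

import torch
import torchsde


class GBM(torch.nn.Module):
    noise_type = "diagonal"  # each state has its own independent Brownian motion
    sde_type = "ito"

    def __init__(self, mu=0.5, sigma=1.0):
        super().__init__()
        self.mu, self.sigma = mu, sigma

    def f(self, t, y):  # drift: mu * y
        return self.mu * y

    def g(self, t, y):  # diffusion: sigma * y (diagonal noise, same shape as y)
        return self.sigma * y


sde = GBM()
y0 = torch.full((1, 4), 0.1)   # batch of 1, 4-dimensional state
ts = torch.linspace(0, 1, 20)  # save points

start = time.time()
for _ in range(100):           # solve the SDE 100 times, as in the benchmark
    ys = torchsde.sdeint(sde, y0, ts, method="euler", dt=1e-2)
print(f"torchsde: {time.time() - start:.3f} seconds")
```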
The SDE is solved 100 times. The summary of the results is as follows:
- torchsde: 1.87 seconds
- DifferentialEquations.jl: 0.00115 seconds
This demonstrates a roughly 1,600x performance difference in favor of Julia on the Python library's own README example. Further testing against torchsde could not be completed because of these performance issues.
We note that the performance difference in the context of neural SDEs is likely smaller, since much of the time there is spent in matrix multiplication kernels. However, given that full SDE training examples like the one demonstrated here generally take about a minute, we still expect a major performance difference, but we currently do not have the compute time to run a full demonstration.
Speaking of jitting, here's an example of how one would jit the SDE:
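The exact code used in the benchmark is not reproduced here; the following is a minimal sketch of one way to apply TorchScript to the SDE. The GBM definition, parameter values, and the choice to script only the drift and diffusion kernels (rather than the whole module) are assumptions made for illustration.

```python
# Minimal sketch of TorchScript-compiling the SDE's drift and diffusion.
# Only the numerical kernels are scripted here; the torchsde-facing wrapper
# stays in plain Python, since scripting the whole module is version-dependent.
import torch
import torchsde


@torch.jit.script
def gbm_drift(y: torch.Tensor, mu: float) -> torch.Tensor:
    return mu * y


@torch.jit.script
def gbm_diffusion(y: torch.Tensor, sigma: float) -> torch.Tensor:
    return sigma * y


class ScriptedGBM(torch.nn.Module):
    noise_type = "diagonal"
    sde_type = "ito"

    def __init__(self, mu=0.5, sigma=1.0):
        super().__init__()
        self.mu, self.sigma = mu, sigma

    def f(self, t, y):
        return gbm_drift(y, self.mu)

    def g(self, t, y):
        return gbm_diffusion(y, self.sigma)


y0 = torch.full((1, 4), 0.1)
ts = torch.linspace(0, 1, 20)
ys = torchsde.sdeint(ScriptedGBM(), y0, ts, method="euler", dt=1e-2)
```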
For now, scripting actually makes it slightly slower. Though in the ideal scenario, when there is some control flow or indexing in `f` and `g`, there might be an improvement.