Faster geometric brownian motion
```julia
function genS_jl(I)
    s0 = 600.0
    r = 0.02
    sigma = 2.0
    T = 1.0
    M = 100
    dt = T/M
    a = (r - 0.5*sigma^2)*dt
    b = sigma*sqrt(dt)
    paths = zeros(Float64, M, I)
    for i in 1:I
        paths[1, i] = st = s0
        for j in 2:M
            st *= exp(a + b*randn())
            paths[j, i] = st
        end
    end
    return paths
end

genS_jl(10)              # Warm up JIT
@elapsed genS_jl(100000) # Outputs 0.538962298
```
Thank you! I'm still new to Julia, so this helps a lot.
Also note that if you just want to calculate the option price, you don't need to allocate any arrays at all because the option price only depends on the final value in each trajectory, and you can compute the average incrementally. You could write blazing fast straight scalar code that probably keeps everything in registers throughout.
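A minimal sketch of that allocation-free approach: accumulate the discounted average payoff incrementally instead of storing whole paths. The strike `K` and the European-call payoff are my assumptions for illustration; they are not specified in the discussion above.

```julia
# Sketch: compute the Monte Carlo option price with no array allocation.
# K and the call payoff max(st - K, 0) are assumed for illustration.
function option_price_mc(I)
    s0 = 600.0
    r = 0.02
    sigma = 2.0
    T = 1.0
    M = 100
    K = 600.0                     # assumed strike
    dt = T/M
    a = (r - 0.5*sigma^2)*dt
    b = sigma*sqrt(dt)
    total = 0.0
    for i in 1:I
        st = s0
        for j in 2:M
            st *= exp(a + b*randn())
        end
        total += max(st - K, 0.0) # payoff depends only on the final value
    end
    return exp(-r*T) * total / I  # discounted running average
end
```

Because only scalars (`st`, `total`) survive across iterations, the compiler can keep the whole inner loop in registers, as the comment above suggests.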
This is an optimized version of the Julia code in the post Julia vs. Python: Monte Carlo Simulations of Bitcoin Options.
The original unvectorized Julia code runs in the same time for me as it does for the author, so this is a bit more than a 4x speedup, and puts this code well under the runtime of the vectorized code in either language.
In general, carefully written devectorized Julia will be faster than equivalent vectorized code because it can avoid allocating containers at intermediate stages of the computation. This is surprising for people coming from e.g. Matlab, R, or NumPy, where vectorized code is often faster. In those languages, direct scalar code is slow, and vectorization is about expressing the computation in a form that calls out to fast C operations, but even the C pays a price to allocate intermediates. In Julia, the JIT emits fast machine code for scalar operations, so you don't need to express your code in a form that calls out to C. There are JIT compilers for Matlab and Python too, but Julia's JIT is very high quality and is the default execution model rather than an opt-in.
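For contrast, here is a hypothetical vectorized version of the same simulation (not from the original post): each time step allocates temporary arrays for `randn(I)`, the `exp` result, and the elementwise product, which is exactly the cost the scalar loop above avoids.

```julia
# Vectorized sketch: operates on whole rows of I paths at once.
# Every step allocates fresh temporary arrays, unlike the scalar loop.
function genS_vec(I)
    s0 = 600.0
    r = 0.02
    sigma = 2.0
    T = 1.0
    M = 100
    dt = T/M
    a = (r - 0.5*sigma^2)*dt
    b = sigma*sqrt(dt)
    paths = zeros(Float64, M, I)
    paths[1, :] .= s0
    for j in 2:M
        # randn(I), exp.(...), and the product are all intermediate allocations
        paths[j, :] = paths[j-1, :] .* exp.(a .+ b .* randn(I))
    end
    return paths
end
```

The output is identical in distribution to `genS_jl`, but the per-step temporaries keep the garbage collector busy, which is why the devectorized loop wins.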
The main changes I made here were hoisting the computations of a and b out of the loop and preallocating a 2D Float64 array to hold the results; each was worth roughly a 2x speedup. I also chose the storage layout so that the inner loop accesses memory in column order, matching Julia's column-major arrays. Both optimizations have their own sections in the performance tips chapter of the Julia Manual.