@matthieubulte, created April 2, 2021
Using Symbolics.jl to compile sparse matrix operations
using LinearAlgebra, BenchmarkTools, SparseArrays, SymbolicUtils, Symbolics
## Problem setup
const z = zeros;   # shorthand alias (not used below)
wx = Float64[0 -1 0;
             1  0 0;
             0  0 0];
Ak_1 = [wx           -I      zeros(3, 12);
        zeros(3, 3)  0.01*I  zeros(3, 12);
        zeros(12, 18)];
Fk_1 = exp(Ak_1);       # dense matrix exponential
sFk_1 = sparse(Fk_1);   # sparse copy; its sparsity pattern drives the compilation
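Why go sparse at all? Because of the block structure of Ak_1, only a small fraction of Fk_1's entries end up nonzero, so a kernel specialized to that sparsity pattern can skip most of the 18×18 multiply-adds. A quick self-contained check (a sketch; it just rebuilds the matrix above and counts stored nonzeros):

```julia
using LinearAlgebra, SparseArrays

# Rebuild the gist's matrix to inspect its sparsity in isolation
wx = Float64[0 -1 0;
             1  0 0;
             0  0 0]
Ak_1 = [wx           -I      zeros(3, 12);
        zeros(3, 3)  0.01*I  zeros(3, 12);
        zeros(12, 18)]
Fk_1 = exp(Ak_1)

# Fraction of structurally nonzero entries a compiled kernel has to touch
nnz(sparse(Fk_1)) / length(Fk_1)
```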
# Compiling a Symbolics expression is just a couple of lines of code!
function compile(f, m, n)
    @variables M[1:m, 1:n]
    # build_function returns (out-of-place, in-place) code; eval both and keep
    # the in-place variant, which writes its result into its first argument
    eval.(build_function(f(M), M))[2]
end
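A small self-contained sanity check of this pattern (a sketch assuming the Symbolics.jl API used above; the `collect` call scalarizes the symbolic array into a `Matrix{Num}`, which newer Symbolics versions may require before multiplying):

```julia
using LinearAlgebra, SparseArrays, Symbolics

@variables M[1:2, 1:2]
Ms = collect(M)                       # scalarize into a Matrix{Num}
S = sparse([1.0 0.0; 2.0 3.0])

# build_function emits two code objects: out-of-place and in-place
oop, ip = eval.(build_function(S * Ms, Ms))

A = [1.0 2.0; 3.0 4.0]
out = zeros(2, 2)
ip(out, A)            # in-place variant fills `out` with S * A
out ≈ S * A
```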
aux1 = zeros(18, 18); aux2 = zeros(18, 18);
Pu = randn(18, 18);   # Pu was never defined in the gist; any dense 18×18 matrix works here
f = compile(M -> sFk_1 * M, 18, 18)
@btime mul!(aux1, Fk_1, Pu);
# 617.824 ns (0 allocations: 0 bytes)
@btime f(aux2, Pu);
# 217.575 ns (0 allocations: 0 bytes)
# Already a ~3x speed-up!
# And we see even more benefits on more complicated functions
g = compile(M -> sFk_1 * M * sFk_1', 18, 18)
@btime begin
    mul!(aux1, Fk_1, Pu)
    mul!(aux2, aux1, Fk_1', 1.0, 0.0)
end;
# 1.347 μs (2 allocations: 48 bytes)
@btime g(aux1, Pu);
# 227.832 ns (0 allocations: 0 bytes)
# Here, compiling the chained operation brings a ~6x speed-up!
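A final correctness check on the chained kernel (continuing the script above, so it reuses `g`, `Fk_1`, `aux1`, and `Pu`; `≈` allows for floating-point roundoff between the compiled and dense paths):

```julia
g(aux1, Pu)                # compiled, fused version of Fk_1 * Pu * Fk_1'
aux1 ≈ Fk_1 * Pu * Fk_1'   # should match the plain dense computation
```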