@davidssmith
Last active December 29, 2015 14:09
This is 10x slower in Julia than in Python/Anaconda. Why?
function tv_fista(X::Array{Complex{Float64},2}; mu=0.1, niter=10)
    m, n = size(X);
    P = Any[zeros(Complex{Float64}, (m-1,n)), zeros(Complex{Float64}, (m,n-1))];
    R = Any[zeros(Complex{Float64}, (m-1,n)), zeros(Complex{Float64}, (m,n-1))];
    tk = 1;
    tkp1 = 1;
    D = zeros(Complex{Float64}, (m,n));
    fval = Inf;
    obj = zeros(niter);
    for t = 1:niter
        println("$t");
        fold = fval;
        Pold = copy(P);  # copy, so reassigning P[1]/P[2] below does not also overwrite Pold
        # Computing the gradient of the objective function
        D = X - mu*Lforward(R);
        Q = Ltrans(D);
        # Taking a step towards minus of the gradient
        f = 0.125 / mu;
        P[2] = R[2] + f*Q[2];
        P[1] = R[1] + f*Q[1];
        # Performing the projection step
        P[1] = projP(P[1]);
        P[2] = projP(P[2]);
        # Updating R and t
        tk = tkp1;
        tkp1 = (1.0 + sqrt(1.0 + 4.0*tk*tk)) / 2.0;
        f = (tk - 1.0)/tkp1;
        R[1] = P[1]*(1 + f) - f*Pold[1];
        R[2] = P[2]*(1 + f) - f*Pold[2];
        C = X - mu*Lforward(P);
        obj[t] = norm(C)^2;
    end
    X_den = D;
    return X_den, obj;
end
# Project each entry of X onto the closed unit disk in the complex plane
function projP(X::Array{Complex{Float64},2})
    m, n = size(X);
    for j = 1:n, i = 1:m
        if abs(X[i,j]) > 1.0
            X[i,j] /= abs(X[i,j]);
        end
    end
    return X;
end
# Lforward: adjoint of Ltrans; assembles an m x n image from the two difference fields
function Lforward(P::Array{Any,1})
    m2, n2 = size(P[1]);
    m1, n1 = size(P[2]);
    if n2 != n1 + 1
        error("dimensions are not consistent");
    elseif m1 != m2 + 1
        error("dimensions are not consistent");
    end
    m = m2 + 1;
    n = n2;
    X = zeros(eltype(P[1]), (m, n));
    X[1:end-1,:] = P[1];
    X[:,1:end-1] = X[:,1:end-1] + P[2];
    X[2:end,:] = X[2:end,:] - P[1];
    X[:,2:end] = X[:,2:end] - P[2];
    return X;
end
# Ltrans: vertical and horizontal finite differences of X
Ltrans(X::Array{Complex{Float64},2}) =
    Any[X[1:end-1,:] - X[2:end,:], X[:,1:end-1] - X[:,2:end]];
# run a demo
n = 256;
X = complex(rand(n, n), rand(n, n));
@time Y, obj = tv_fista(X);
@ivarne commented Nov 27, 2013

I tried running the @time line at the bottom twice. From the first run to the second, the timing went from 2.31 to 0.80 seconds, because compilation only happens on the first call in a session. This may seem like cheating, but that ~1.5 seconds of overhead does not grow when you increase the number of iterations.
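
One way to keep compilation out of the measurement (a rough sketch, reusing only the tv_fista and X defined in the gist above) is to call the function once before timing it:

tv_fista(X, niter=1);        # warm-up call: compiles tv_fista and the helpers it uses
@time Y, obj = tv_fista(X);  # this run now measures the computation alone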

I don't understand the code, but I also see that you use slicing a lot. Currently Julia makes a copy when slicing, which may be different from what numpy does. One possibility would be to replace some of those slices with calls to sub().
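
For illustration, a minimal sketch of that copy-vs-view difference (assuming the sub() available in this Julia release; later versions expose the same idea as view()):

A = rand(1000, 1000);
B = A[2:end, :];                        # slicing copies: B is a new 999x1000 array
V = sub(A, 2:size(A,1), 1:size(A,2));   # sub() returns a SubArray view; no data is copied

Whether swapping the slices in Ltrans and Lforward for sub() views actually helps here would have to be measured.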

@davidssmith (Author)

Thanks for the advice. I will give that a try.

What is the difference between slice and sub?

@davidssmith (Author)

I don't appear to be able to assign to views of the array using slice and sub.

@ViralBShah

Currently, sub is the way to get around the copying of slices. This is one of the things planned for 0.3: slices will be views by default.
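
As a rough, hypothetical sketch (not the gist author's code), the vertical difference inside Ltrans could be built from sub() views so that the two slice copies are avoided and only the result is allocated; whether this actually speeds up the gist would need benchmarking:

X = complex(rand(256, 256), rand(256, 256));  # same shape as the demo input above
m, n = size(X);
top = sub(X, 1:m-1, 1:n);   # view of X[1:end-1,:], no copy made
bot = sub(X, 2:m, 1:n);     # view of X[2:end,:], no copy made
dv = top - bot;             # vertical differences; only dv is newly allocated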

@davidssmith Would you be able to submit a PR to add this example to the Julia perf benchmarks?
