
@moble
Created November 16, 2017 19:11
(The gist's notebook file could not be rendered for display.)
@hugohadfield

This is really interesting: the matrices are ludicrously large and very sparse. I had a quick look at scipy's sparse matrix library to see if there was an easy two- or three-line fix, but unfortunately it does not handle 3-D arrays!
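
A rough sketch of the kind of workaround I was picturing, assuming the product is stored as a dense (n, n, n) tensor gamma with (a*b)[i] = sum_jk gamma[i, j, k] a[j] b[k] (the names are illustrative, not clifford's actual API): since scipy.sparse only handles 2-D matrices, slice the tensor into one sparse matrix per output component.

    import numpy as np
    from scipy import sparse

    def split_product_tensor(gamma):
        # One 2-D CSR matrix per output component i, so that
        # (a * b)[i] == a @ G[i] @ b.
        return [sparse.csr_matrix(gamma[i]) for i in range(gamma.shape[0])]

    def gp(Gs, a, b):
        # Geometric product via the per-component sparse matrices.
        return np.array([a @ (G @ b) for G in Gs])

For small algebras the Python loop over components probably eats most of the savings, though.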

@chrisjldoran

Matrices are nearly always going to be the slowest way to implement the geometric product; they are a worst-case scenario. The only reason you might want to consider them is if there is some serious hardware optimisation you can take advantage of (such as 4x4 matrix multiplication on a GPU). There is also some overhead in converting to and from matrices and multivectors, which involves a bunch of further multiplies and traces. I would always go for a sparse representation in terms of explicit multivector components, and then figure out how to optimise the (highly parallel) multivector product.
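
For concreteness, a minimal sketch of that idea, using the same illustrative dense (n, n, n) product tensor gamma as above rather than any particular library's API: extract the nonzero structure constants once, and the product becomes a single sum over those triples.

    import numpy as np

    def nonzero_structure(gamma, tol=1e-12):
        # Precompute the nonzero structure constants (i, j, k, coeff) once.
        i, j, k = np.nonzero(np.abs(gamma) > tol)
        return i, j, k, gamma[i, j, k]

    def gp_sparse(structure, a, b):
        # (a * b)[i] = sum of coeff * a[j] * b[k] over nonzero entries only.
        i, j, k, c = structure
        out = np.zeros(len(a))
        np.add.at(out, i, c * a[j] * b[k])
        return out

Every term in the sum is independent, which is what makes the product so easy to parallelise.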

@moble

moble commented Nov 17, 2017

@chrisjldoran There's another reason to consider matrices: Someone else has already done the coding! :) But yes, the goal is certainly to get to a better implementation of the products.

@hugohadfield Have you tried these timings on your installation? In particular, I'm wondering if the @ operator really is faster for many configurations, or if it's something peculiar to mine.
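
For reference, the three contractions being timed are equivalent to something like the following; the tensor here is a random stand-in and the dimension is a placeholder, so only the relative timings mean anything.

    import timeit
    import numpy as np

    n = 32                               # placeholder dimension
    np.random.seed(0)
    gamma = np.random.random((n, n, n))  # stand-in for the real product tensor
    a, b = np.random.random(n), np.random.random(n)

    # Three equivalent ways to compute (a * b)[i] = gamma[i, j, k] a[j] b[k]:
    candidates = {
        'dot':    lambda: np.dot(np.dot(gamma, b), a),
        'einsum': lambda: np.einsum('ijk,j,k->i', gamma, a, b),
        '@':      lambda: (a @ gamma) @ b,
    }
    for name, f in candidates.items():
        t = timeit.timeit(f, number=10000)
        print(f'{name:>6}: {1e6 * t / 10000:.1f} µs per call')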


ghost commented Nov 20, 2017

@moble Hi guys, I'm also really keen to see better performance in the clifford library. Just chiming in to say that I executed the notebook and @ is the fastest for me too (dot: 56 µs ± 1.29 µs; einsum: 64.4 µs ± 804 ns; @: 43.1 µs ± 457 ns).
Output of show_config():

blas_mkl_info:
    NOT AVAILABLE
blis_info:
    NOT AVAILABLE
openblas_info:
    libraries = ['libopenblas_v0.2.20_mingwpy', 'libopenblas_v0.2.20_mingwpy']
    library_dirs = ['c:\\opt\\64\\lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['libopenblas_v0.2.20_mingwpy', 'libopenblas_v0.2.20_mingwpy']
    library_dirs = ['c:\\opt\\64\\lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
    NOT AVAILABLE
openblas_lapack_info:
    libraries = ['libopenblas_v0.2.20_mingwpy', 'libopenblas_v0.2.20_mingwpy']
    library_dirs = ['c:\\opt\\64\\lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['libopenblas_v0.2.20_mingwpy', 'libopenblas_v0.2.20_mingwpy']
    library_dirs = ['c:\\opt\\64\\lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
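
For anyone wanting to compare, that block is the verbatim output of numpy's built-in configuration report, which shows which BLAS an install is linked against:

    import numpy as np
    np.show_config()  # prints the BLAS/LAPACK build configuration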
