Pre-print draft: example SNN.

Title: Spike Time

Subtitle: Exploiting Modern Language Features for High-Throughput Spiking Network Simulations with Lower Technical Debt

Tags: Simulation of Spiking Neural Networks, Computational Neuroscience, Large Scale Modelling and Simulation

authors:

* Author order undecided, or all authors of equal order; I am flexible about this, pending agreement from the other authors.
name: Russell Jarvis, affiliation: International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University
name: Yeshwanth Bethi, affiliation: International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University

date: June 2023

Bibliography: paper.bib

Summary

Some gains in biologically faithful neuronal network simulation can be achieved by applying recent computer-language features. For example, the Julia language supports compressed sparse arrays and static arrays, provides very extensive support for CUDA GPUs, and offers a plethora of reduced-precision numeric types. Julia also provides a high-level syntax that facilitates code reuse while simplifying plotting and data analysis. These features lend themselves to high-performance, large-scale spiking neural network (SNN) simulation. We are therefore using Julia to develop an open-source software package that enables the simulation of networks with millions to billions of synapses on a computer with a minimum of 64 GB of memory and an NVIDIA GPU.
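
As a minimal sketch of how these features compose (sizes, density, and variable names here are illustrative only, not the simulator's actual API), a reduced-precision sparse weight matrix can propagate a spike vector in one operation:

```julia
# A minimal sketch: sparse, reduced-precision synaptic weights driving a
# one-step membrane update. Sizes and connectivity are illustrative only.
using SparseArrays

N = 10_000                          # number of neurons (illustrative)
W = sprand(Float16, N, N, 0.01)     # ~1% connectivity, half-precision weights

v = zeros(Float32, N)               # membrane potentials
spikes = rand(Bool, N)              # spike vector from the previous step

# Propagate spikes through the sparse weight matrix.
v .+= W * Float16.(spikes)

# On an NVIDIA GPU the same structure maps onto CUDA.jl, e.g.
# using CUDA
# Wd = CUDA.CUSPARSE.CuSparseMatrixCSR(W); vd = CuArray(v)
```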

Other important advantages of implementing SNN simulations in the Julia language are reduced technical debt and the ability to minimise the total energy consumption of simulations. The simulation code we are developing at ICNS is both faster and less complicated to read than some other simulation frameworks. The simplicity of the code base extends to a simple installation process. Ease of installation is an important aspect of neuronal simulators that is often overlooked when evaluating merit; GPU simulation environments are notoriously difficult to install. Julia eases installation by solving the “two-language problem” of scientific computing: the simulator lives in a single-language environment with a reliable, versatile, and monolithic package manager, and its installation involves no external compilation tools or steps.
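
For instance, installation reduces to package-manager commands; the package name and URL below are placeholders rather than a published registration:

```julia
# Hypothetical one-step install; "SpikeTime.jl" and its URL are placeholders,
# not a published package. No external compilers or build steps are needed.
using Pkg
Pkg.add(url = "https://github.com/russelljjarvis/SpikeTime.jl")  # placeholder URL
Pkg.add("CUDA")  # GPU support is itself a Julia package; binary artifacts download automatically
```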

To demonstrate the validity and performance of this new simulation approach, we implement the Brunel model and the Potjans and Diesmann cortical microcircuit model, as also implemented in the NEST and GeNN simulators. In a pending analysis, we compare simulation execution speeds and spike-train raster plots against NEST and GeNN using these models as benchmarks.
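
To give a flavour of what the benchmark entails, the sketch below implements a scaled-down, Brunel-style balanced excitatory/inhibitory LIF network in plain Julia. All parameters are illustrative and untuned; this is not the benchmark's actual code.

```julia
# A minimal, scaled-down Brunel-style balanced network sketch.
# Parameters are illustrative, not the published benchmark values.
using SparseArrays, Random

Random.seed!(1)
Ne, Ni = 800, 200                    # excitatory / inhibitory neuron counts
N = Ne + Ni
dt, nsteps = 0.1f0, 10_000           # time step (ms) and number of steps
τ, vth, vreset = 20f0, 20f0, 0f0     # time constant (ms), threshold, reset (mV)

J, g = 0.5f0, 5f0                    # excitatory weight (mV), relative inhibition
w = [fill(J, Ne); fill(-g * J, Ni)]  # signed weight per presynaptic neuron
C = sprand(N, N, 0.1) .> 0           # sparse 10% random connectivity (Bool)

v = zeros(Float32, N)
spiked = falses(N)
raster = Tuple{Int,Int}[]            # (step, neuron) spike events

for t in 1:nsteps
    # Recurrent drive from last step's spikes, plus constant + noisy input.
    I = 25f0 .+ C' * (w .* spiked) .+ randn(Float32, N)
    @. v += (dt / τ) * (I - v)       # leaky integration
    spiked .= v .>= vth
    v[spiked] .= vreset
    append!(raster, [(t, i) for i in findall(spiked)])
end
```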

References

Illing, B., Gerstner, W., & Brea, J. (2019). Biologically plausible deep learning - but how far can we go with shallow networks? Neural Networks, 118, 90-101.

Similar Simulators With Different Goals

https://github.com/FabulousFabs/Spike.jl (clearly inspired by Brian2, but in Julia; nice use of code generation)
https://github.com/SpikingNetwork/TrainSpikingNet.jl (nice use of CUDA and reduced-precision types)
https://github.com/leaflabs/WaspNet.jl (nice use of block arrays)
https://github.com/darsnack/SpikingNN.jl (nice use of multiple dispatch on abstract types with type restriction; see the sketch after this list)
https://github.com/wsphillips/Conductor.jl (nice use of DiffEq.jl and code generation)
https://github.com/FabulousFabs/AdEx (interesting and different)
https://github.com/ominux/pub-illing2019-nnetworks (research-oriented code for Illing et al., 2019)
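
To make the dispatch pattern concrete (all types, fields, and parameters below are illustrative, not any listed package's real API), a minimal sketch:

```julia
# A minimal sketch of the multiple-dispatch pattern noted above: abstract
# neuron types with model-specific `step!` methods.
abstract type AbstractNeuron end

mutable struct LIF <: AbstractNeuron
    v::Float32       # membrane potential (mV)
    τ::Float32       # membrane time constant (ms)
    vth::Float32     # spike threshold (mV)
    vreset::Float32  # reset potential (mV)
end

mutable struct Izhikevich <: AbstractNeuron
    v::Float32       # membrane potential (mV)
    u::Float32       # recovery variable
end

# Each model supplies its own method; `true` means the neuron spiked.
function step!(n::LIF, I::Real, dt::Real)
    n.v += (dt / n.τ) * (I - n.v)
    n.v >= n.vth || return false
    n.v = n.vreset
    return true
end

function step!(n::Izhikevich, I::Real, dt::Real)
    n.v += dt * (0.04f0 * n.v^2 + 5f0 * n.v + 140f0 - n.u + I)
    n.u += dt * 0.02f0 * (0.2f0 * n.v - n.u)
    n.v >= 30f0 || return false
    n.v = -65f0
    n.u += 8f0
    return true
end

# The network loop stays generic: type restriction keeps populations
# homogeneous while dispatch picks the right dynamics per model.
step_all!(pop::Vector{<:AbstractNeuron}, I, dt) =
    [step!(n, i, dt) for (n, i) in zip(pop, I)]

pop = [LIF(0f0, 20f0, 20f0, 0f0) for _ in 1:10]
spikes = step_all!(pop, fill(25f0, 10), 0.1f0)
```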