Russell Jarvis russelljjarvis

using JLD2
using ProgressMeter
using PyCall
using Random
function build_data_set_native(events, cnt, input_shape) #,l_change_cnt,l_old)
    xx = Vector{Int32}([])
    yy = Vector{Int32}([])
using CUDA
using Adapt
# Check if CUDA is available
if !CUDA.has_cuda()
    error("CUDA is not available on this system.")
else
    CUDA.allowscalar(false) # Disallow scalar indexing for performance
end
@russelljjarvis
russelljjarvis / LIF_Neuron_As_CUDA_Kernel.jl
Last active May 10, 2024 08:44
LIF_Neuron_As_CUDA_Kernel
"""
This is an update rule for the Leaky Integrate and Fire Neuronal Model. It is an integration step for a forward Euler solver. The update step is implemented as a CUDA kernel and it is written in Julia. The cuda kernel is designed to update a whole population of LIF neuron models in an embarrasingly parallel manner.
Further down in the code there is also a mutable struct container called IFNF which has a constructor method. The constructor method uses multi dispatch, so it is expressed as a series of related functions, and each function is distinguishable because it has different types.
In fact the constructor itself utilizes parametric types (ie the types used to make the struct are parameters).
Briefly describe what you learned when you created this code sample.
I learned how to convert conventional code meant for CPU execution, to Cuda/GPU code in the Julia language. I also learned how to make a constructor for my own type definition. The constructor in question utilizes multiple dispatch and param
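A minimal sketch of the idea described above, assuming hypothetical names (IFNFSketch, lif_kernel!) and made-up parameter values; it is not the gist's actual kernel, only an illustration of a forward-Euler LIF update run over a whole population as a CUDA.jl kernel:

using CUDA

# Hypothetical parametric container: the array types are parameters, so the same
# struct can be backed by CuArrays (GPU) or plain Arrays (CPU).
mutable struct IFNFSketch{VF<:AbstractVector{Float32}, VB<:AbstractVector{Bool}}
    v::VF     # membrane potentials
    ge::VF    # excitatory conductances
    fire::VB  # spike flags for the current step
end

# One forward-Euler step for the whole population; each GPU thread owns one neuron.
function lif_kernel!(v, ge, fire, dt, v_rest, v_thresh, v_reset, tau)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(v)
        @inbounds begin
            v[i] += dt * ((v_rest - v[i]) / tau + ge[i])  # leaky integration
            fire[i] = v[i] > v_thresh                     # threshold crossing
            v[i] = fire[i] ? v_reset : v[i]               # reset on spike
        end
    end
    return nothing
end

N = 1024
pop = IFNFSketch(CUDA.fill(-65.0f0, N), CUDA.rand(Float32, N), CUDA.fill(false, N))
@cuda threads=256 blocks=cld(N, 256) lif_kernel!(pop.v, pop.ge, pop.fire,
                                                 0.1f0, -65.0f0, -50.0f0, -65.0f0, 20.0f0)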
@russelljjarvis
russelljjarvis / reading_long_term.jl
Created October 18, 2023 01:32
Reading "Long-term stability of cortical ensembles"
using Plots
using MAT
using StatsBase
using JLD2
using OnlineStats
using SparseArrays
using DelimitedFiles
using DataFrames
using Revise
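The preview above only shows imports; as a hedged sketch of how such a recording might be opened with MAT.jl (the file name and any variable names are assumptions, not the gist's actual data):

using MAT

# Hypothetical file name; matread returns a Dict of MATLAB variable name => Julia array.
vars = matread("cortical_ensembles.mat")
@show keys(vars)  # inspect which variables the recording actually contains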
@russelljjarvis
russelljjarvis / JuliaNativeNMNISTrepresentation.jl.jl
Last active September 11, 2023 04:47
Julia-native representation of NMNIST (no PyCall), with notes on making it more memory efficient; see the comments below.
# Hi @yeshwanthravitheja this is where I made the Julia-native representation of NMNIST (no PyCall).
# If you wanted to make it really efficient, I think polarity only needs 8 bits, or just a Bool type. Int32 could be unsigned (UInt32).
# Another thing that could help is the related concepts of streaming/circular buffers and lazy evaluation.
# Tables.jl has a syntax for lazily loading big data sets (don't keep all of NMNIST in memory, only the currently accessed samples).
# Similar magic is implemented by the CircBuff type defined in OnlineStatsBase.jl here https://github.com/joshday/OnlineStatsBase.jl/blob/master/src/stats.jl#L17-L60
# https://github.com/russelljjarvis/SpikeTime.jl/blob/restructure/examples2Vec/train_nmnist_performance_bm.jl
# https://github.com/russelljjarvis/SpikeTime.jl/blob/restructure/examples2Vec/train_nmnist.jl
## The entry point to call it, and to iteratively save as it is made, is here:
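A small sketch of those two suggestions, assuming a hypothetical CompactEvent record (the field layout is an illustration, not the gist's actual representation); CircBuff comes from OnlineStats/OnlineStatsBase as linked above:

using OnlineStats

# Hypothetical compact event record: polarity fits in a Bool, coordinates and
# timestamps in unsigned integers.
struct CompactEvent
    x::UInt16
    y::UInt16
    t::UInt32        # microsecond timestamp
    polarity::Bool   # ON/OFF event
end

# A fixed-size circular buffer keeps only the most recent events in memory.
buf = CircBuff(CompactEvent, 10_000)
fit!(buf, CompactEvent(12, 34, 1_000, true))
recent = value(buf)  # Vector of the events currently held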
@russelljjarvis
russelljjarvis / spike2VecSTDP.jl
Last active August 15, 2023 03:58
spike2VecSTDP.jl
using CSV
using DataFrames
using Plots
using OnlineStats
df2 = CSV.read("output_spikes.csv", DataFrame)
# Make the node ids of type Float32.
nodes = df2.id
times = df2.time_ms
nodes = Vector{Float32}(nodes)
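Continuing the preview above, a minimal sketch of how the loaded spikes could be drawn as a raster with Plots.jl (the column names id/time_ms are taken from the preview; the plot options are assumptions):

# Scatter the spike times against node ids to get a raster plot.
scatter(times, nodes;
        markersize = 1, legend = false,
        xlabel = "time (ms)", ylabel = "neuron id",
        title = "output_spikes.csv raster")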
@russelljjarvis
russelljjarvis / changed_main.jl_sim!_function.jl
Created May 17, 2023 04:55
changed_main.jl_sim!_function.jl
function sim!(P, dt, verbose=true; current_stim=0.0)
    for (ind, p) in enumerate(P.post_synaptic_targets)
        # Clear spike flags, then advance every neuron in this population by one forward-Euler step.
        p.fire = Vector{Bool}([false for i in 1:length(p.fire)])
        integrate_neuron!(p.N, p.v, dt, p.ge, p.gi, p.fire, p.u, p.tr)
        record!(p)
        pre_synaptic_cell_fire_map = copy(p.fire)
        g = zeros(length(pre_synaptic_cell_fire_map))
        forwards_euler_weights!(p, W, pre_synaptic_cell_fire_map, g)
        pre_synaptic_cell_fire_map = Vector{Bool}([false for i in 1:length(pre_synaptic_cell_fire_map)])
@russelljjarvis
russelljjarvis / NMNIST_DATA_INTO_SNN.jl
Last active May 17, 2023 04:47
NMNIST_DATA_INTO_SNN.jl
using PyCall
using Revise
using Odesa
using Random
using ProgressMeter
using JLD
using NumPyArrays
using LoopVectorization
using Plots
@russelljjarvis
russelljjarvis / GPUComplianceModifications.jl
Last active May 17, 2023 04:55
GPUComplianceModifications.jl
using Revise
using CUDA
"""
# * Changes to the cells struct as follows:
# * Make the typing dynamic and parametric, i.e. types can be either CuArray or regular Array, depending on the input
#   arguments that are passed to the constructor.
# TODO: I have heard good things about KernelAbstractions.jl; it is possible that kernel abstractions will remove my reliance on method dispatch to choose the CPU/GPU backend.
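A hedged sketch of that design, assuming a hypothetical Cells struct (the field names and backend-flag constructors are illustrations, not the gist's actual definitions):

using CUDA

# The field types are parameters, so the same struct can hold CuArrays or plain Arrays.
struct Cells{VF<:AbstractVector{Float32}, VB<:AbstractVector{Bool}}
    v::VF      # membrane potentials
    fire::VB   # spike flags
end

# Constructor methods selected by multiple dispatch on a backend type flag.
Cells(N::Integer, ::Type{Array})   = Cells(zeros(Float32, N), falses(N))
Cells(N::Integer, ::Type{CuArray}) = Cells(CUDA.zeros(Float32, N), CUDA.fill(false, N))

cpu_cells = Cells(128, Array)
gpu_cells = CUDA.functional() ? Cells(128, CuArray) : cpu_cells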
@russelljjarvis
russelljjarvis / Pre-print_draft_Example_SNN.md
Last active May 17, 2023 02:15
Pre-print draft Example SNN.

Spike Time

Subtitle: Exploiting Modern Language Features for High-Throughput Spiking Network Simulations with Lower Technical Debt

Tags: Simulation of Spiking Neural Networks, Computational Neuroscience, Large Scale Modelling and Simulation

authors:

* Author order undecided, or all equal order; I am flexible about this and will use whatever order the other authors agree with.

name: Russell Jarvis
affiliation: International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University

name: Yeshwanth Bethi
affiliation: International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University