### A Pluto.jl notebook ###
# v0.14.5
using Markdown
using InteractiveUtils
# This Pluto notebook uses @bind for interactivity. When running this notebook outside of Pluto, the following 'mock version' of @bind gives bound variables a default value (instead of an error).
macro bind(def, element)
    quote
        local el = $(esc(element))
        global $(esc(def)) = Core.applicable(Base.get, el) ? Base.get(el) : missing
        el
    end
end
# ╔═╡ 4258729c-bde7-11eb-26b2-6be481b178a1
begin
    import Pkg
    Pkg.activate(mktempdir())
    pkgs = ["StatsPlots", "Statistics", "StatsBase", "PlutoUI",
            "Distributions", "Plots"]
    Pkg.add(pkgs)
end; md"""#### Add Packages
"""
# ╔═╡ 49c31004-514a-418a-8efd-35e6c2044589
begin
    using StatsPlots, StatsBase, Statistics, PlutoUI, Distributions, Plots.PlotMeasures, Random
    md"""##### Import Packages
StatsPlots, StatsBase, Statistics, PlutoUI, Distributions, Plots.PlotMeasures, Random
"""
end
# ╔═╡ 2e1c7860-1421-466d-b1ef-f00768e7fcfc
md"""
# Understanding representational similarity analysis in visually guided orienting
Representational Similarity Analysis (RSA) is a [technique](https://pillowlab.wordpress.com/tag/representational-similarity-analysis/) for comparing how similar the activity in a neural network (artificial or real) is across different stimuli or inputs. The main advantage of RSA seems to be that it allows one to compare the "internals" of two (or more) networks with very different architectures, _as long as the same stimulus set can be presented to both networks_.
In our lab, we have been using RSA to compare how artificial neural networks trained with different inputs/labels differentially represent positional and directional information.
This project has been led by Xuan Wen, with Liujunli Li (who collected and analyzed the neurophysiology data that inspired the project).
In our task, a sequence of images is presented to a [convLSTM network](https://arxiv.org/abs/1506.04214). The images contain some pixels of different intensities (which can be ignored), but in one frame a `Start Port` is lit and in a later frame a `Target Port` is lit. Depending on the task, the network needs to report either `target(x,y)` or `direction(x,y) = target(x,y) - start(x,y)`.
The `target` networks can ignore the start position, so we expected them to show no structure in an RSA based on start position, and indeed that was the case. We expected the `direction` networks to represent both the start and target positions with diagonal structure, but instead the RSA looked blocked.
The goal of this notebook was to:
+ See if the blocked structure was due to a bug (it wasn't)
+ Try to understand where the blocked structure came from
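As a rough sketch of the RSA computation used throughout this notebook (the variable names below are illustrative only, not the ones defined later): collect a stimulus × unit activity matrix, correlate stimuli with each other, and average those correlations within bins of some stimulus label. The notebook's own `RSA()` function additionally applies a Fisher z-transform before averaging.

```julia
# Minimal RSA sketch with fake data (illustrative names only).
using Statistics

act = randn(100, 16)                 # 100 stimuli × 16 units of fake activity
label = rand(1:5, 100)               # a bin label for each stimulus
C = cor(act, dims=2)                 # stimulus × stimulus correlation matrix
rsa = [mean(C[label .== i, label .== j]) for i in 1:5, j in 1:5]
```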
"""
# ╔═╡ e1228d82-e8dd-4145-949b-83d392d07130
begin
md"""
### Setup some parameters of the simulation.
Define `meta` and `dir_meta`
+ number of units = $(@bind n_units Slider(4:4:64, show_value=true))
+ number of stim = $(@bind n_stim Slider(500:500:10000, show_value=true))
+ pixel range = $(@bind range Slider(16:8:64, show_value=true))
"""
end
# ╔═╡ ac004c8e-c64a-484b-81ca-6d36d066c279
md"""
AMP = $( @bind AMP Slider(1:20))
tuning = $( @bind tuning Select(["bump","sigmoid"]))
1/noise = $( @bind noise Slider(1:100, show_value=true))
σ = $( @bind σ_range Slider(1:50, show_value=true) )
1/slope = $( @bind slope_range Slider(0.1:0.1:20, show_value=true))
seed = $( @bind seed Slider(1:20, show_value=true))
"""
# ╔═╡ d29ba28b-88f7-425f-8e63-c7ae6adb1769
md"""
### Functional Appendix
Cells with useful code are below.
"""
# ╔═╡ eebda37e-81b8-4b2c-a443-1c896eedfd8b
begin
    # Random stimulus metadata: columns are [start x, start y, target x, target y],
    # each drawn uniformly from -range:range.
    meta = rand(-range:range, (n_stim, 4))
    # Direction from start to target, e.g. get_dir([0, 0, 1, 1]) ≈ π/4.
    get_dir(r) = atan(r[4] - r[2], r[3] - r[1])
    dir_meta = map(get_dir, eachrow(meta))
    md"Setup `meta` and `dir_meta`"
end
# ╔═╡ f12aee54-2b76-40f4-9eef-189f0f077f1b
begin
    # Bin the stimulus metadata so correlations can be averaged within bins.
    bin_range = -range:4:range
    start_x_H = fit(Histogram, meta[:,1], bin_range)
    start_x_map = StatsBase.binindex.(Ref(start_x_H), meta[:,1])
    target_x_H = fit(Histogram, meta[:,3], bin_range)
    target_x_map = StatsBase.binindex.(Ref(target_x_H), meta[:,3])
    dir_H = fit(Histogram, dir_meta, -π:0.1:π)
    dir_map = StatsBase.binindex.(Ref(dir_H), dir_meta)
end; md"Map stimuli to bins"
# ╔═╡ 17983dc3-3eb3-4561-9a75-15b121210a94
begin
    """
        activity(a, b, c) = (x) -> a * (1/(1 + exp(-b * (x + c))) + randn()/noise)

    Returns a sigmoidal tuning function with amplitude `a`, slope `b`, and horizontal
    offset `c`. The additive noise is scaled by the global `noise` slider.
    """
    activity(a,b,c) = (x)->a * (1/(1 + exp(-b * (x + c))) + randn()/noise);
    md"##### Define `activity`
a function which takes 3 inputs and returns a function representing a _neuron_ with sigmoidal tuning."
end
# ╔═╡ 3e5ddac3-b036-4241-90b6-c481a1cb4d82
begin
    """
        bump(a, μ, σ) = (x) -> a * (pdf(Normal(μ, σ), x) + randn()/(10 * noise))

    Returns a tuning function with a Gaussian bump of amplitude `a`, centered at `μ`
    with standard deviation `σ`. The additive noise is scaled by the global `noise` slider.
    """
    bump(a,μ,σ) = (x)->a * (pdf(Normal(μ,σ),x) + randn()/(10*noise))
    md"##### Define bump tuning function `bump`"
end
# ╔═╡ 305cc925-d2e0-475b-9332-2b24063d858b
A = let A = zeros(n_stim, n_units)
    # Synthetic population: every unit is tuned only to start x (meta[:,1]),
    # with either bump or sigmoid tuning and randomly drawn parameters.
    Random.seed!(seed)
    μ = rand(-range:range, n_units)
    σ = rand(n_units) * σ_range
    amp = randn(n_units) * AMP
    intp = rand(-range:range, n_units)
    slope = randn(n_units) ./ slope_range
    for i = 1:n_units
        if tuning == "bump"
            this_activity = bump(amp[i], μ[i], σ[i])
        else
            this_activity = activity(amp[i], slope[i], intp[i])
        end
        A[:,i] .= this_activity.(meta[:,1])
    end
    A
end; md"#### Generate $(tuning) synthetic data"
# ╔═╡ 4d21ce29-09be-41c1-aa60-1ead7a15bb41
begin
    # z-score each unit's activity across stimuli.
    zA = copy(A)
    for c = 1:size(A, 2)
        #zA[:,c] = A[:,c] ./ maximum(abs.(A[:,c]))
        zA[:,c] = zscore(A[:,c])
    end
end; md"zScore Activity"
# ╔═╡ 34dd9dc8-d619-40c6-a2ec-1ec68a7064ad
ha=heatmap(zA[sortperm(meta[:,1])[1:8:end],:],
ylabel="Subsampled Stimuli",
xlabel="Units"); md"""
### Plot the activity of the units sorted by `start x`
$(ha)
This plot clearly shows the tuning of the units. With `bump` tuning there is a peak (or valley) of activity at some value of `start x`. With `sigmoid` tuning the activity rises or falls monotonically from small to large values of `x`.
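To look at a single tuning curve in isolation, one could evaluate one of the tuning functions defined in the Functional Appendix over the pixel range (the parameter values below are arbitrary examples, not the ones drawn in the simulation):

```julia
# Arbitrary example parameters, just to visualize the two tuning shapes.
xs = -range:range
plot(xs, bump(1.0, 0.0, 10.0).(xs), label="bump")
plot!(xs, activity(1.0, 0.5, 0.0).(xs), label="sigmoid")
```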
"""
# ╔═╡ 96b29277-0686-4dce-bc7d-e4eec7715031
# With dims=2 each row (stimulus) is treated as a variable, so C is an
# n_stim × n_stim correlation matrix between population activity patterns.
C = cor(zA, dims=2); md"Compute Correlation"
# ╔═╡ cc792223-0839-4388-834c-3ef24e76deed
RSA(C, binmap) = begin
    # Average the pairwise stimulus correlations within each pair of bins,
    # Fisher z-transforming first (atanh); |r| == 1 is clamped to ±3 to avoid Inf.
    bins = unique(binmap)
    M = fill(0.0, (length(bins), length(bins)))
    for i in bins
        for j in bins
            try
                M[i,j] = mean(
                    map(x -> abs(x) == 1 ? sign(x)*3 : atanh(x), C[binmap .== i, binmap .== j])
                )
            catch
                # leave M[i,j] at 0 for empty bins or out-of-range bin indices
            end
        end
    end
    M
end; md"##### RSA function
averages within-bin correlations using Fisher's z-transform"
# ╔═╡ edda11fa-5569-42e0-b808-61d7a780f13b
plotmapdir() = begin
    heatmap(dir_H.edges ./ π, dir_H.edges ./ π, RSA(C, dir_map),
        clim=(-1,1), yflip=true,
        xlabel="Stimulus 1 (Radians/π)",
        title="Direction RSA", size=(320,300),
        rightmargin=20pt,
        c=cgrad(:vik, rev=true))
end; md"""
Since we made all of the units depend only on `start x`, the RSA for direction and target should be at chance. The structure seen here suggests that start position and direction are correlated in the stimulus set.
$(plotmapdir())
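One quick, illustrative way to check that correlation with the variables already defined in this notebook would be to correlate `start x` with the direction components:

```julia
# Because positions are bounded, a start near the right edge biases the
# direction leftward, so start x and cos(direction) should correlate negatively
# (and likewise start y with sin(direction)).
cor(meta[:,1], cos.(dir_meta)), cor(meta[:,2], sin.(dir_meta))
```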
"""
# ╔═╡ 6b0d570a-316e-4e64-b2f7-44c21bff599b
plotmap(x, h, tit) = begin
    heatmap(h.edges, h.edges, tanh.(RSA(C, x)),
        clim=(-1,1),
        yflip=true,
        xlabel="Stimulus 1 (pixels)",
        title=tit, size=(320,300),
        rightmargin=20pt, c=cgrad(:vik, rev=true))
end; md"##### plotmap"
# ╔═╡ 2f399035-b2cf-4978-91bb-aa30ae785298
sx=plotmap(start_x_map, start_x_H, ""); md"""
#### `Start x` RSA based on $(tuning) tuning
$(sx)
When the tuning is `bump`, diagonal structure emerges in the RSA.
When the tuning is `sigmoid`, block structure emerges instead.
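A hypothetical way to see why sigmoid tuning tends toward blocks: a saturating unit gives almost the same response to any two stimuli on the same side of its midpoint, so population patterns group into two coarse clusters instead of varying smoothly with `start x` (the parameter values below are arbitrary examples):

```julia
# Arbitrary example unit: amplitude 1, slope 1, offset 0.
f = activity(1.0, 1.0, 0.0)
f(20.0), f(40.0)     # both near 1 (plus noise): same "right of midpoint" group
f(-20.0), f(-40.0)   # both near 0 (plus noise): same "left of midpoint" group
```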
"""
# ╔═╡ 66e23f63-94c6-4cc3-9561-5c49cd9485c4
plotmap(target_x_map, target_x_H, "Target")
# ╔═╡ 1025c462-ac1b-40b1-a6d5-efb4f64207ef
PlutoUI.TableOfContents(depth=6)
# ╔═╡ 71c3dad3-4a0f-442d-b3da-238f42bdb76f
# ╔═╡ Cell order:
# ╟─2e1c7860-1421-466d-b1ef-f00768e7fcfc
# ╟─e1228d82-e8dd-4145-949b-83d392d07130
# ╟─305cc925-d2e0-475b-9332-2b24063d858b
# ╟─ac004c8e-c64a-484b-81ca-6d36d066c279
# ╟─f12aee54-2b76-40f4-9eef-189f0f077f1b
# ╟─2f399035-b2cf-4978-91bb-aa30ae785298
# ╠═34dd9dc8-d619-40c6-a2ec-1ec68a7064ad
# ╟─edda11fa-5569-42e0-b808-61d7a780f13b
# ╠═66e23f63-94c6-4cc3-9561-5c49cd9485c4
# ╟─d29ba28b-88f7-425f-8e63-c7ae6adb1769
# ╟─4258729c-bde7-11eb-26b2-6be481b178a1
# ╟─49c31004-514a-418a-8efd-35e6c2044589
# ╟─eebda37e-81b8-4b2c-a443-1c896eedfd8b
# ╠═17983dc3-3eb3-4561-9a75-15b121210a94
# ╠═3e5ddac3-b036-4241-90b6-c481a1cb4d82
# ╟─4d21ce29-09be-41c1-aa60-1ead7a15bb41
# ╟─96b29277-0686-4dce-bc7d-e4eec7715031
# ╟─cc792223-0839-4388-834c-3ef24e76deed
# ╟─6b0d570a-316e-4e64-b2f7-44c21bff599b
# ╠═1025c462-ac1b-40b1-a6d5-efb4f64207ef
# ╠═71c3dad3-4a0f-442d-b3da-238f42bdb76f