#
# I was learning about Hebbian learning and figured, why not make an artificial neural network
# based on it? Actually, I should have been preparing a presentation on synaptic plasticity, and
# this seemed more interesting. At any rate, here it is.
#
# It doesn't work because of a limitation of Hebb's model: the plain rule puts no bound on
# weight growth. At some point the network "discovers" something that sort of works, and from
# then on it keeps reinforcing it; the synaptic weights enter a positive feedback loop and
# cascade to infinity.
#
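# Below is a minimal, self-contained sketch (not the network in this gist) of the instability
# described above: with a plain Hebbian update dw = eta * x * y and no decay or normalization,
# any correlated input drives the weights into a positive feedback loop and the weight norm
# grows without bound. The learning rate, input statistics, and sizes here are illustrative
# assumptions, not values from the original code.

import numpy as np

rng = np.random.default_rng(0)
eta = 0.1                             # learning rate (illustrative)
w = rng.normal(scale=0.1, size=3)     # small random synaptic weights for 3 inputs

for step in range(50):
    x = rng.normal(size=3) + 1.0      # inputs with a shared positive mean -> correlated
    y = w @ x                         # linear "neuron" output
    w += eta * y * x                  # plain Hebbian update: dw = eta * x * y
    if step % 10 == 0:
        # the norm climbs roughly exponentially -- the cascade the comment above describes
        print(step, np.linalg.norm(w))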