@DataWraith
Created July 1, 2010 19:14
Example of a _very_ simple feed-forward neural network.
# A directed, weighted connection from one neuron to another.
class Link
  def initialize(to, weight)
    @to     = to
    @weight = weight
  end

  # Add the weighted activation onto the target neuron's accumulator.
  def propagate(activation)
    puts " propagating #{activation} * #{@weight} = #{@weight * activation} to #{@to.name}"
    puts " old activation: #{@to.activation}"
    @to.activation += @weight * activation
    puts " new activation: #{@to.activation}"
  end
end
# A neuron with a name, an activation value, and outgoing links.
class Neuron
  attr_accessor :activation
  attr_reader :name

  def initialize(name)
    @name       = name
    @links      = []
    @activation = 0.0
    puts "New Node '#{name}'"
  end

  def add_link_to(neuron, weight)
    puts "Adding Link from #{@name} to #{neuron.name}, weight #{weight}."
    @links << Link.new(neuron, weight)
  end

  # Apply the activation function, then propagate the result downstream.
  def activate
    print "Activating Neuron #{@name}: #{@activation} -> "

    # Simple threshold function: fire (1.0) only if the accumulated
    # input exceeds 0.5, otherwise stay silent (0.0).
    @activation = (@activation > 0.5 ? 1.0 : 0.0)
    puts @activation

    @links.each do |link|
      link.propagate(@activation)
    end
  end
end
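
# Worked example of the threshold above: with both input weights at 0.26,
# a single active input leaves Result at only 0.26 (below the 0.5
# threshold), while two active inputs accumulate to 0.26 + 0.26 = 0.52
# (above it). That is exactly the AND function computed below.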
if $0 == __FILE__
  # Boring, simple network that computes AND
  nodes = [Neuron.new("A"), Neuron.new("B"), # Input Neurons
           # No Hidden Neurons
           Neuron.new("Result")]             # Output Neuron

  nodes[0].add_link_to(nodes[2], 0.26) # A -> Result
  nodes[1].add_link_to(nodes[2], 0.26) # B -> Result

  [0.0, 1.0].each do |a|
    [0.0, 1.0].each do |b|
      # Reset the network
      nodes.each { |node| node.activation = 0.0 }

      # Feed the input in
      nodes[0].activation = a
      nodes[1].activation = b

      puts "\nComputing #{a} AND #{b}...\n"

      # Evaluate the network by activating each node. The array is
      # already in topological order (inputs before the output), so a
      # single pass suffices.
      nodes.each { |n| n.activate }

      # Extract output from the activation of the output node
      output = nodes[2].activation
      puts "\n#{a} AND #{b} = #{output}"
    end
  end
end
# Output:
#
# New Node 'A'
# New Node 'B'
# New Node 'Result'
# Adding Link from A to Result, weight 0.26.
# Adding Link from B to Result, weight 0.26.
#
# Computing 0.0 AND 0.0...
# Activating Neuron A: 0.0 -> 0.0
# propagating 0.0 * 0.26 = 0.0 to Result
# old activation: 0.0
# new activation: 0.0
# Activating Neuron B: 0.0 -> 0.0
# propagating 0.0 * 0.26 = 0.0 to Result
# old activation: 0.0
# new activation: 0.0
# Activating Neuron Result: 0.0 -> 0.0
#
# 0.0 AND 0.0 = 0.0
#
# Computing 0.0 AND 1.0...
# Activating Neuron A: 0.0 -> 0.0
# propagating 0.0 * 0.26 = 0.0 to Result
# old activation: 0.0
# new activation: 0.0
# Activating Neuron B: 1.0 -> 1.0
# propagating 1.0 * 0.26 = 0.26 to Result
# old activation: 0.0
# new activation: 0.26
# Activating Neuron Result: 0.26 -> 0.0
#
# 0.0 AND 1.0 = 0.0
#
# Computing 1.0 AND 0.0...
# Activating Neuron A: 1.0 -> 1.0
# propagating 1.0 * 0.26 = 0.26 to Result
# old activation: 0.0
# new activation: 0.26
# Activating Neuron B: 0.0 -> 0.0
# propagating 0.0 * 0.26 = 0.0 to Result
# old activation: 0.26
# new activation: 0.26
# Activating Neuron Result: 0.26 -> 0.0
#
# 1.0 AND 0.0 = 0.0
#
# Computing 1.0 AND 1.0...
# Activating Neuron A: 1.0 -> 1.0
# propagating 1.0 * 0.26 = 0.26 to Result
# old activation: 0.0
# new activation: 0.26
# Activating Neuron B: 1.0 -> 1.0
# propagating 1.0 * 0.26 = 0.26 to Result
# old activation: 0.26
# new activation: 0.52
# Activating Neuron Result: 0.52 -> 1.0
#
# 1.0 AND 1.0 = 1.0
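
The choice of weights is what makes this network an AND gate: each link contributes only 0.26, so both inputs together are needed to clear the 0.5 threshold. Raising each weight above the threshold turns the same wiring into an OR gate. A minimal sketch reusing the classes above (the 0.6 weight is an illustrative choice, not from the original gist):

# OR gate: any single active input already exceeds the 0.5 threshold.
a      = Neuron.new("A")
b      = Neuron.new("B")
result = Neuron.new("Result")

a.add_link_to(result, 0.6) # 0.6 > 0.5, so one active input suffices
b.add_link_to(result, 0.6)

a.activation = 1.0
b.activation = 0.0
[a, b, result].each(&:activate)
puts "1.0 OR 0.0 = #{result.activation}" # => 1.0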