Port a Python neural network to J
NB. a J port of the Python XOR network from https://towardsdatascience.com/implementing-the-xor-gate-using-backpropagation-in-neural-networks-c1f255b4f20d
NB. input data: the four xor input pairs, one per row
X =: 4 2 $ 0 0 0 1 1 0 1 1
NB. target data, ~: is 'not-eq' aka xor
Y =: , (i.2) ~:/ (i.2)
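NB. illustrative sanity check: the raveled xor table is 0 1 1 0
assert Y -: 0 1 1 0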
scale =: (-&1)@:(*&2)
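NB. scale computes 2 * y - 1, mapping [0,1) onto [_1,1), e.g.:
assert _1 0 1 -: scale 0 0.5 1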
NB. initialize weights between _1 and 1
NB. see https://code.jsoftware.com/wiki/Vocabulary/dollar#dyadic
init_weights =: 3 : 'scale"0 y ?@$ 0'
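NB. illustrative shape check: y ?@$ 0 draws a shape-y array of uniform
NB. numbers between 0 and 1, which scale then spreads over [_1,1)
assert 2 3 -: $ init_weights 2 3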
w_hidden =: init_weights 2 2
w_output =: init_weights 2
b_hidden =: init_weights 2
b_output =: scale ? 0
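NB. parameter shapes (illustrative): a 2x2 hidden weight matrix, length-2
NB. output weights and hidden biases, and a scalar output bias
assert 2 2 -: $ w_hidden
assert 2 -: # w_output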
NB. matrix product
mp =: +/ . *
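NB. mp is the usual matrix product, e.g. on a 2x2 matrix and a 2-vector:
assert 17 39 -: (2 2 $ 1 2 3 4) mp 5 6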
sigmoid =: monad define
% 1 + ^ - y
)
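NB. sigmoid is centered at 0.5 and saturates towards 0 and 1:
assert 0.5 = sigmoid 0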
sigmoid_ddx =: 3 : 'y * (1-y)'
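NB. since s'(x) = s(x) * (1 - s(x)), sigmoid_ddx expects sigmoid's OUTPUT:
assert 0.25 = sigmoid_ddx sigmoid 0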
NB. forward propagation: x is the boxed weights, y is the input matrix
forward =: dyad define
'WH WO BH BO' =. x
hidden_layer_output =. sigmoid (BH +"1 y (mp"1 2) WH)
prediction =. sigmoid (BO + WO mp"1 hidden_layer_output)
(hidden_layer_output;prediction)
)
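NB. an untrained forward pass (illustrative; h0 and p0 are scratch names):
'h0 p0' =. (w_hidden;w_output;b_hidden;b_output) forward X
assert 4 2 -: $ h0   NB. one pair of hidden activations per input row
assert 4 -: # p0     NB. one prediction per input row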
train =: dyad define
'X Y' =. x
'WH WO BH BO' =. y
'hidden_layer_output prediction' =. y forward X
l1_err =. Y - prediction                         NB. output-layer error
l1_delta =. l1_err * sigmoid_ddx prediction      NB. output delta
hidden_err =. l1_delta */ WO                     NB. outer product backpropagates each delta through WO
hidden_delta =. hidden_err * sigmoid_ddx hidden_layer_output
WH_adj =. WH + (|: X) mp hidden_delta            NB. delta rule, learning rate 1
WO_adj =. WO + (|: hidden_layer_output) mp l1_delta
BH_adj =. +/ BH,hidden_delta                     NB. biases: add the deltas summed over the batch
BO_adj =. +/ BO,l1_delta
(WH_adj;WO_adj;BH_adj;BO_adj)
)
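NB. one training step (illustrative; w1 is a scratch name): the boxed
NB. weights come back with their shapes preserved
w1 =. (X;Y) train (w_hidden;w_output;b_hidden;b_output)
assert ($&.> w1) -: $&.> (w_hidden;w_output;b_hidden;b_output)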
w_trained =: (((X;Y) & train) ^: 10000) (w_hidden;w_output;b_hidden;b_output)  NB. ^: iterates the training step 10000 times
guess =: >1 { w_trained forward X  NB. unbox the prediction from the trained net
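NB. rounding the predictions usually recovers the xor targets; with only
NB. 2 hidden units training can occasionally stall, so this is not guaranteed:
echo <. 0.5 + guess   NB. typically 0 1 1 0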