@veb-101
Last active November 11, 2022 10:31
Radial basis function network for XOR

veb-101 commented May 2, 2020

You can change the points to use it for different logic gates (a concrete XOR example is sketched after these snippets):

```python
# points
x1 = np.array([....])
x2 = np.array([....])
ys = np.array([....])
```

and the centers by changing:

```python
# centers
mu1 = np.array([..])
mu2 = np.array([..])
```
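For instance, a minimal XOR setup might look like the sketch below. The center values here are an assumption chosen for illustration; the gist may use different ones.

```python
import numpy as np

# XOR truth table: the four input points and their labels
x1 = np.array([0, 0, 1, 1])
x2 = np.array([0, 1, 0, 1])
ys = np.array([0, 1, 1, 0])

# One possible choice of RBF centers (peaks) for XOR
# (assumed values, not necessarily the ones used in this gist)
mu1 = np.array([0, 1])
mu2 = np.array([1, 0])
```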


veb-101 commented May 2, 2020

In the end_to_end function, I first calculated the similarity between the inputs and the peaks.
Then, to find w, I solved the equation Aw = Y in matrix form.
Each row of A (shape: (4, 3), including the bias column) consists of:

  • index[0]: similarity of the point with peak1
  • index[1]: similarity of the point with peak2
  • index[2]: bias input (1)

Y: the output associated with each input (shape: (4,))
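As a rough, self-contained sketch of how A and Y could be assembled, using a Gaussian similarity (the kernel and the center values are assumptions for illustration):

```python
import numpy as np

# XOR points and assumed centers (as in the earlier sketch)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
ys = np.array([0, 1, 1, 0])
mu1, mu2 = np.array([0, 1]), np.array([1, 0])

def rbf(x, mu):
    # Gaussian similarity between a point and a peak (assumed kernel)
    return np.exp(-np.sum((x - mu) ** 2))

# Each row: [similarity to peak1, similarity to peak2, bias input 1]
A = np.array([[rbf(x, mu1), rbf(x, mu2), 1.0] for x in X])  # shape (4, 3)
Y = ys                                                       # shape (4,)
```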

W is calculated with the same closed-form solution used for linear regression (the normal equation): w = (AᵀA)⁻¹AᵀY.
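Continuing the sketch above, that closed-form step in NumPy could be:

```python
# Normal equation: w = (A^T A)^(-1) A^T Y
w = np.linalg.inv(A.T @ A) @ A.T @ Y

# Equivalently, a least-squares solver does the same job
w, *_ = np.linalg.lstsq(A, Y, rcond=None)
```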

This part is the same as using a 2-2-1 neural network architecture:

  • 2 input nodes (x1, x2) (input layer)
  • 2 nodes, one for each peak (hidden layer)
  • 1 output node (output layer)

The task is to find the weights on the edges into the single output unit (a sketch of the resulting forward pass follows this list). The weights are:

  • the edge joining the 1st hidden node (peak1 output) to the output node
  • the edge joining the 2nd hidden node (peak2 output) to the output node
  • the bias edge
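Putting it together, the forward pass of that 2-2-1 network with the solved weights might look like this (continuing the same sketch and assumed centers):

```python
def predict(x, mu1, mu2, w):
    # hidden layer: one Gaussian unit per peak, plus the bias input
    h = np.array([rbf(x, mu1), rbf(x, mu2), 1.0])
    # output layer: a single linear unit
    return h @ w

preds = np.array([predict(x, mu1, mu2, w) for x in X])
print(np.round(preds))  # should approximate [0, 1, 1, 0] for XOR
```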
