import numpy as np

# define the ReLU activation function
def relu(x):
    return np.where(x <= 0, 0, x)
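# e.g. relu(np.array([-3, 0, 4])) returns array([0, 0, 4]): negative inputs are clipped to zero
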
# input data and weights
input_layer = np.array([[5], [2]])         # shape (2, 1)
weights_1 = np.array([[2, -2], [3, 1]])    # shape (2, 2), input -> hidden layer
weights_2 = np.array([[1], [2]])           # shape (2, 1), hidden layer -> output
# computation of the hidden layer
node_0_output = np.dot(weights_1.T, input_layer)   # pre-activation values
node_1_input = relu(node_0_output)                  # apply ReLU activation
# computation of the output layer
node_1_output = np.dot(weights_2.T, node_1_input)   # pre-activation value
output_layer = relu(node_1_output)                   # apply ReLU activation
# print the final output
print(output_layer)
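
# A minimal sanity check of the forward pass above, worked out by hand from the
# weights defined in this gist:
#   hidden pre-activation: weights_1.T @ input_layer = [[2*5 + 3*2], [-2*5 + 1*2]] = [[16], [-8]]
#   after ReLU:            [[16], [0]]
#   output pre-activation: weights_2.T @ hidden      = [[1*16 + 2*0]]              = [[16]]
assert np.array_equal(output_layer, np.array([[16]]))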