@vihar
Created February 10, 2018 13:00
import numpy as np
print("Enter the two values for input layers")
print('a = ')
a = int(input())
# 2
print('b = ')
b = int(input())
# Weights for the two hidden nodes and the single output node.
# (node_1 was a 2-D array in the original; made 1-D to match the others.)
weights = {
    'node_0': np.array([2, 4]),
    'node_1': np.array([4, -5]),
    'output_node': np.array([2, 7])
}

input_data = np.array([a, b])
def relu(x):
    # Rectified Linear Activation: pass positive values through, clamp negatives to 0.
    # (Parameter renamed from `input` so it no longer shadows the built-in input().)
    return max(x, 0)
# Forward pass: weighted sum into each hidden node, then ReLU.
node_0_input = (input_data * weights['node_0']).sum()
node_0_output = relu(node_0_input)

node_1_input = (input_data * weights['node_1']).sum()
node_1_output = relu(node_1_input)

# Combine the hidden-layer outputs into the output node.
hidden_layer_outputs = np.array([node_0_output, node_1_output])
model_output = (hidden_layer_outputs * weights['output_node']).sum()
print(model_output)
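
For reference, the same forward pass can be written with a single weight matrix and vectorized NumPy operations. This is a minimal sketch assuming the same weights as above; the sample inputs a = 2 and b = 3 are illustrative values, not part of the original gist.

import numpy as np

# Minimal vectorized sketch of the same two-node forward pass.
# W_hidden stacks the two hidden-node weight vectors from the gist above.
W_hidden = np.array([[2, 4],
                     [4, -5]])
w_output = np.array([2, 7])

x = np.array([2, 3])                  # illustrative inputs: a = 2, b = 3
hidden = np.maximum(W_hidden @ x, 0)  # matrix product, then element-wise ReLU
model_output = w_output @ hidden      # weighted sum at the output node
print(model_output)                   # 32 for these sample inputs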