@Muhammad-Yunus
Created April 10, 2020 06:51
Single Layer Perceptron - logic AND (train)
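The training loop below assumes the AND truth table and a few constants are already defined. A minimal setup sketch to make it self-contained (the iteration count, learning rate, and zero initialization are assumed example values, not part of the original gist):

import numpy as np

# AND gate inputs and targets
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

# assumed training constants and zero-initialized parameters
NUM_ITER = 100
learning_rate = 0.1
W = np.zeros(2)
b = 0.0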
for i in range(NUM_ITER):
    # forward pass: weighted sum plus bias
    y_pred = np.dot(x, W) + b
    # apply step activation
    y_pred[y_pred > 0] = 1
    y_pred[y_pred <= 0] = 0
    # calculate error
    err = y - y_pred
    # stop once every sample is classified correctly
    # (np.all avoids stopping early when +1 and -1 errors cancel in a sum)
    if np.all(err == 0):
        break
    # update W & b with the perceptron learning rule
    delta_W = learning_rate * np.dot(np.transpose(x), err)
    delta_b = learning_rate * np.sum(err)
    W = W + delta_W
    b = b + delta_b
    print("Iteration " + str(i), err, W, b)