@giuseppebonaccorso
Oja's rule (Hebbian Learning)
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
# Set random seed for reproducibility
np.random.seed(1000)
# Create and scale dataset
X, _ = make_blobs(n_samples=500, centers=2, cluster_std=5.0, random_state=1000)
scaler = StandardScaler(with_std=False)
Xs = scaler.fit_transform(X)
# Compute eigenvalues and eigenvectors
Q = np.cov(Xs.T)
eigu, eigv = np.linalg.eig(Q)
# Apply Oja's rule
W_oja = np.random.normal(scale=0.25, size=(2, 1))
prev_W_oja = np.ones((2, 1))
learning_rate = 0.0001
tolerance = 1e-8
while np.linalg.norm(prev_W_oja - W_oja) > tolerance:
    prev_W_oja = W_oja.copy()
    Ys = np.dot(Xs, W_oja)
    W_oja += learning_rate * np.sum(Ys*Xs - np.square(Ys)*W_oja.T, axis=0).reshape((2, 1))
# Eigenvalues
print(eigu)
[ 0.67152209 1.33248593]
# Eigenvectors
print(eigv)
[[-0.70710678 -0.70710678]
[ 0.70710678 -0.70710678]]
# W_oja at the end of the training process
print(W_oja)
[[-0.70710658]
[-0.70710699]]
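As a quick sanity check (not part of the original gist), the learned weights can be normalized and compared with the eigenvector associated with the largest eigenvalue; up to sign, the two should coincide:
# Sanity check (added sketch): W_oja should align with the principal eigenvector
w = (W_oja / np.linalg.norm(W_oja)).ravel()
v = eigv[:, np.argmax(eigu)]
print(np.abs(np.dot(w, v)))  # close to 1.0 if W_oja converged to the principal direction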
@biggzlar
Doesn't work due to overflow. Where does the np.square(Ys) come from?
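For reference: Oja's rule is w += eta * y * (x - y * w), i.e. eta * (y*x - y**2 * w), so the np.square(Ys) factor is the normalizing decay term that keeps ||w|| close to 1. An overflow usually means the batch-summed update above diverges for the chosen learning rate on a given dataset; a per-sample (online) update is typically more stable. The sketch below illustrates that variant; the step size and epoch count are assumptions, not values from the gist:

# Online (per-sample) Oja update -- a sketch, not the original gist's code
W = np.random.normal(scale=0.25, size=(2, 1))
eta = 0.001      # assumed step size
for epoch in range(100):  # assumed number of epochs
    for x in Xs:
        x = x.reshape((2, 1))
        y = (x.T @ W).item()        # neuron output
        W += eta * y * (x - y * W)  # Hebbian term y*x minus y**2 * W decay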
