@gurbain
Last active October 29, 2020 18:01
pyESN + keras readout divergence: Mackey-Glass (t=17) prediction with an ESN whose readout layer is a small Keras MLP.
# Scientific Python libraries
import numpy as np
import os
from matplotlib import pyplot as plt
plt.style.use('fivethirtyeight')
plt.rc('lines', linewidth= 1)
plt.rc('text', usetex=False)
plt.rc('axes', facecolor='white')
plt.rc('savefig', facecolor='white')
plt.rc('figure', autolayout=True)
# pyESN libraries
from pyESN import ESN
# Keras libraries (silence TensorFlow logging before keras/tensorflow is imported)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
from keras.optimizers import Adam
from keras.models import Sequential
from keras.layers import Dense
# Load the data
data = np.load('mackey_glass_t17.npy')
# Declare main model and data parameters
n_res = 300
n_out = 1
n_in = 1
trainlen = 2000
future = 2000
# Create the keras network for readout layer
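# The readout takes the concatenated reservoir state and input (n_res + n_in features)
# and maps it through one hidden ReLU layer to a single linear output.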
nn = Sequential()
nn.add(Dense(100, input_dim=n_res+n_in, activation='relu'))
nn.add(Dense(1, activation='linear'))
nn.compile(loss='mse', optimizer='adam', metrics=['mse'])
nn.summary()
# Create the reservoir
esn = ESN(n_inputs=n_in,
          n_outputs=n_out,
          n_reservoir=n_res,
          noise=0.01,
          spectral_radius=1.4,
          teacher_forcing=True,
          random_state=42,
          keras_model=nn)  # keras_model is an extension; the stock cknd/pyESN ESN does not accept it
# Train
pred_training = esn.fit(np.ones(trainlen), data[:trainlen],
                        epochs=20, verbose=2)
print("Train error: \n" +
      str(np.sqrt(np.mean((pred_training.flatten() - data[:trainlen])**2))))
# Test
prediction = esn.predict(np.ones(future))
print("Test error: \n" + \
str(np.sqrt(np.mean((prediction.flatten() - data[trainlen:trainlen +future])**2))))
# Plot results
plt.figure(figsize=(11, 1.5))
plt.plot(range(0, trainlen), pred_training.flatten(),
         linestyle="--", linewidth=2, label="Train Prediction")
plt.plot(range(0, trainlen + future), data[0:trainlen + future],
         label="Target System")
plt.plot(range(trainlen, trainlen + future), prediction,
         label="Test Prediction")
lo, hi = plt.ylim()
plt.plot([trainlen, trainlen], [lo + np.spacing(1), hi - np.spacing(1)], 'k:')
plt.legend(fontsize='x-small')
plt.show()
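# --- Optional baseline for comparison: a minimal sketch, assuming the stock cknd/pyESN
# API (ESN.fit trains a linear pseudo-inverse readout and takes no keras_model argument).
# Running it on the same split helps tell whether the divergence comes from the Keras
# readout or from the reservoir settings themselves.
esn_lin = ESN(n_inputs=n_in,
              n_outputs=n_out,
              n_reservoir=n_res,
              noise=0.01,
              spectral_radius=1.4,
              teacher_forcing=True,
              random_state=42)
pred_train_lin = esn_lin.fit(np.ones(trainlen), data[:trainlen])
pred_test_lin = esn_lin.predict(np.ones(future))
print("Linear readout test error: \n" +
      str(np.sqrt(np.mean((pred_test_lin.flatten() - data[trainlen:trainlen + future])**2))))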
@gurbain (Author) commented Oct 29, 2020

Hi Francesca,

I honestly can't remember! :/ I completely forgot about this gist. It dates from August 2018, so is it not the one from here: https://github.com/cknd/pyESN? I think it was last updated slightly before I wrote this piece of code.

@FrancescaAlf commented

Hi Gabriel,
I removed the comment because I had found it and did not want to annoy you, but thank you!

It is here in case you need it in the future: cknd/pyESN@55cb273
