Nick Ellinas (njellinas)

View GitHub Profile
# Help for Step 8
# 1) Use the MSE loss function.
# 2) Inside the train loop, if you shaped your input features into a 2D array,
# add one dimension before feeding a batch of them into the LSTM, because the batch must be a 3D array, not 2D.
# The command for doing this is: your_batch.unsqueeze_(-1)
# It is an in-place operation, so you don't have to assign the result to a new variable.
# In the same way, you must .squeeze_() the outputs of the LSTM to reshape them back into a 2D array.
# (A short runnable sketch of points 1 and 2 follows this list.)
# 3) To apply a neural network layer to a sequence, you must use the given function: apply_layer_to_timesteps
# 4) The input sequences in the main part of the exercise will not be of the same length. For this reason, we use
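A minimal sketch of points 1 and 2, assuming a toy single-feature sequence-regression setup; the model, shapes, and hyperparameters below are illustrative placeholders, not the exercise's actual code.

import torch
import torch.nn as nn

# Hypothetical setup: 2D feature batches of shape (batch, seq_len), one target per timestep.
model = nn.LSTM(input_size=1, hidden_size=1, batch_first=True)
criterion = nn.MSELoss()                     # point 1: MSE loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 50)                      # a 2D input batch: (batch, seq_len)
y = torch.randn(32, 50)                      # targets of the same shape

x.unsqueeze_(-1)                             # point 2: in-place, 2D -> 3D (batch, seq_len, 1)
out, _ = model(x)                            # out: (batch, seq_len, 1)
out = out.squeeze(-1)                        # back to 2D; out-of-place here to stay autograd-safe,
                                             # where the handout uses the in-place .squeeze_()

loss = criterion(out, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()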
njellinas / autoencoder_extra.py
Created October 26, 2017 14:46
Two Keras Layer-Class definitions for implementing Weight-Tying and for loading pretrained weights in Deep Autoencoders
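Before the definitions, a hedged usage sketch of how a tied decoder layer like this is typically wired into an autoencoder: the decoder reuses the encoder Dense layer's kernel, transposed. The tied_to keyword and the rest of this snippet are assumptions for illustration, not code taken from the gist body.

# Hypothetical usage sketch: tie the decoder layer to the encoder's weights.
from keras.layers import Input, Dense
from keras.models import Model

inp = Input(shape=(784,))
encoder = Dense(64, activation='relu')
h = encoder(inp)
# DenseTransposeTied is assumed to take the encoder layer via a `tied_to` argument
# and reuse its kernel, transposed, so the decoder adds only its own bias parameters.
decoded = DenseTransposeTied(784, tied_to=encoder, activation='sigmoid')(h)
autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer='adam', loss='mse')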
import keras.backend as K
from keras.layers import Layer
from keras.legacy import interfaces
from keras.engine import InputSpec
from keras import activations, initializers, regularizers, constraints
# Dense layer whose kernel is tied to (the transpose of) another Dense layer's kernel.
class DenseTransposeTied(Layer):

    @interfaces.legacy_dense_support