@ceshine
Created September 30, 2017 02:10
Key Code Blocks of Keras LSTM Dropout Implementation
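The snippet below comes from the Keras LSTM layer bundled with TensorFlow 1.3 (tf.contrib.keras). The parts of interest are the two sets of dropout masks: dp_mask is applied to the layer inputs and rec_dp_mask to the previous hidden state h_tm1, with a separate mask for each of the four gates (i, f, c, o). Both sets are unpacked from `states`, so the same masks are reused at every timestep of a forward pass rather than being resampled per step.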
# https://github.com/tensorflow/tensorflow/blob/v1.3.0/tensorflow/contrib/keras/python/keras/layers/recurrent.py#L1197
class LSTM(Recurrent):
    # ...

    def step(self, inputs, states):
        # ... (h_tm1, c_tm1, dp_mask and rec_dp_mask are unpacked from
        # `states`; the masks are generated once per forward pass, so they
        # stay fixed across timesteps)
        if self.implementation == 2:
            # ...
        else:
            if self.implementation == 0:
                # The input projection (with input dropout already applied)
                # was precomputed for all timesteps; here we only slice out
                # the pre-activations of the four gates.
                x_i = inputs[:, :self.units]
                x_f = inputs[:, self.units:2 * self.units]
                x_c = inputs[:, 2 * self.units:3 * self.units]
                x_o = inputs[:, 3 * self.units:]
            elif self.implementation == 1:
                # Input dropout: one mask per gate, applied to `inputs`
                # before the input-to-hidden projection.
                x_i = K.dot(inputs * dp_mask[0], self.kernel_i) + self.bias_i
                x_f = K.dot(inputs * dp_mask[1], self.kernel_f) + self.bias_f
                x_c = K.dot(inputs * dp_mask[2], self.kernel_c) + self.bias_c
                x_o = K.dot(inputs * dp_mask[3], self.kernel_o) + self.bias_o
            # Recurrent dropout: one mask per gate, applied to the previous
            # hidden state h_tm1 before the hidden-to-hidden projection.
            i = self.recurrent_activation(x_i + K.dot(h_tm1 * rec_dp_mask[0],
                                                      self.recurrent_kernel_i))
            f = self.recurrent_activation(x_f + K.dot(h_tm1 * rec_dp_mask[1],
                                                      self.recurrent_kernel_f))
            c = f * c_tm1 + i * self.activation(
                x_c + K.dot(h_tm1 * rec_dp_mask[2], self.recurrent_kernel_c))
            o = self.recurrent_activation(x_o + K.dot(h_tm1 * rec_dp_mask[3],
                                                      self.recurrent_kernel_o))
        # ...
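For context, here is a minimal usage sketch. It assumes the standalone Keras 2.0.x API, which tf.contrib.keras in TF 1.3 mirrors; the constructor arguments `dropout` and `recurrent_dropout` are what populate dp_mask and rec_dp_mask above, and `implementation=1` selects the branch shown in the snippet.

# Minimal usage sketch (assumption: standalone Keras 2.0.x, which
# tf.contrib.keras in TF 1.3 mirrors; hyperparameters are illustrative).
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(32,
               input_shape=(10, 8),      # 10 timesteps, 8 features
               dropout=0.25,             # -> dp_mask, applied to gate inputs
               recurrent_dropout=0.25,   # -> rec_dp_mask, applied to h_tm1
               implementation=1))        # the branch shown above
model.compile(optimizer='adam', loss='mse')

x = np.random.random((4, 10, 8)).astype('float32')
y = np.random.random((4, 32)).astype('float32')
model.fit(x, y, epochs=1, verbose=0)  # masks are only active in training

Setting `implementation=0` instead precomputes the (dropout-masked) input projections for all timesteps at once, which is what the slicing branch at the top of `step` consumes.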