@frenzy2106
Created March 18, 2020 09:31
# Building & compiling the model
from tensorflow import keras

vocab_size = len(tokenizer.word_index) + 1  # +1 for the reserved padding index 0
max_length = 25

model = keras.Sequential()
model.add(keras.layers.Embedding(input_dim=vocab_size, output_dim=50, input_length=max_length))
model.add(keras.layers.LSTM(units=50, dropout=0.2, recurrent_dropout=0.2))
model.add(keras.layers.Dense(units=1, activation='sigmoid'))

# Compile for binary classification
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])

# Summarize the model (model.summary() prints directly; no print() needed)
model.summary()
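The model above assumes a `tokenizer` has already been fitted, so that `tokenizer.word_index` maps each word to a 1-based integer and inputs arrive as fixed-length sequences of length `max_length`. The sketch below illustrates that preprocessing contract with a simplified, hypothetical whitespace tokenizer standing in for Keras's `Tokenizer` and `pad_sequences`; the function names here are illustrative, not from the gist.

```python
# Hypothetical simplified stand-in for Keras's Tokenizer + pad_sequences,
# showing how vocab_size and max_length relate to the model's input.

def build_word_index(texts):
    """Assign each unique word a 1-based index (0 is reserved for padding)."""
    index = {}
    for text in texts:
        for word in text.lower().split():
            if word not in index:
                index[word] = len(index) + 1
    return index

def texts_to_padded_sequences(texts, word_index, max_length):
    """Convert texts to fixed-length integer sequences, padding with 0."""
    sequences = []
    for text in texts:
        seq = [word_index.get(w, 0) for w in text.lower().split()][:max_length]
        seq = seq + [0] * (max_length - len(seq))
        sequences.append(seq)
    return sequences

texts = ["great movie", "terrible plot and acting"]
word_index = build_word_index(texts)
vocab_size = len(word_index) + 1  # +1 for padding index 0, as in the model above
padded = texts_to_padded_sequences(texts, word_index, max_length=25)
```

Each row of `padded` is a length-25 integer vector whose values lie in `[0, vocab_size)`, which is exactly what the `Embedding(input_dim=vocab_size, input_length=max_length)` layer expects.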