Created
July 26, 2019 15:08
max_word_len = df.yb.str.len().max()
max_word_len_utf8 = df.yb_utf8.str.len().max()
nb_labels = len(df.word_type.unique())
nb_words = df.shape[0]
print("Number of words: ", nb_words)
print("Number of labels: ", nb_labels)
print("Max word length: {} characters and {} bytes".format(max_word_len, max_word_len_utf8))
Further, may I ask how we usually handle the situation where the training data is missing some label categories that could still appear in production?
@pancodia thanks for getting back to me, and sorry for the late reply; I was traveling. Did you find a fix? If yes, can you share it, or do you still need help?
I am following this article. When I execute
model_lstm.fit
to train the LSTM model, an error occurred. When I debugged, I found that the training labels are created by
Y = to_categorical(Y)
which converts the labels to one-hot encoding. Because the maximum label index in the input dataset is 11, the one-hot encoding has dimension 12. However, the dataset contains only 10 unique label indices, so nb_labels
is 10. To resolve the issue, should we instead calculate
nb_labels
as follows?
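A minimal sketch of the mismatch described above, using hypothetical label indices with gaps (the actual data from the article is not shown here). Keras's to_categorical sizes the one-hot vector from the largest index, not from the count of unique labels, so nb_labels should be max index + 1:

```python
import numpy as np

# Hypothetical label indices: 10 unique values, but the largest index is 11,
# so to_categorical(Y) would produce one-hot vectors of width 12, not 10.
Y = np.array([0, 1, 2, 3, 4, 5, 6, 7, 10, 11])

nb_unique = len(np.unique(Y))   # 10 -- what len(df.word_type.unique()) computes
nb_labels = int(Y.max()) + 1    # 12 -- matches to_categorical's output width

# to_categorical(Y) is equivalent to indexing an identity matrix:
Y_onehot = np.eye(nb_labels)[Y]
print(Y_onehot.shape)  # (10, 12)
```

Equivalently, computing nb_labels after the conversion as Y.shape[1] avoids the mismatch regardless of gaps in the index range.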