- Accuracy: #sr
- Formula:
	- total number of correctly classified points / total number of points
	- = (TP + TN) / (TP + FP + TN + FN)
- When not to use accuracy:
	- On a heavily imbalanced dataset (e.g. 90% of points in one class), a dumb model that always returns the majority class still gets 90% accuracy. The metric looks good while the model is useless.
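A quick sketch of the imbalance problem, using only the formula above (the labels are made up for illustration):

```python
# 90%-imbalanced labels and a "dumb" model that always predicts
# the majority class -- it still scores 90% accuracy.
y_true = [0] * 90 + [1] * 10   # 90 negatives, 10 positives
y_pred = [0] * 100             # dumb model: always predicts 0

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = (tp + tn) / (tp + fp + tn + fn)
print(accuracy)  # 0.9, despite the model never finding a single positive
```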
- AUC #sr
	- What does AUC actually mean? #sr
		- AUC measures how well the classifier separates the classes: it equals the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one.
	- What does a high value of AUC mean? #sr
		- An AUC close to 1 means the classifier ranks almost every positive above every negative; an AUC near 0.5 means it separates the classes no better than random guessing.
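The ranking reading of AUC can be checked by hand: it is the fraction of (positive, negative) pairs in which the positive gets the higher score, with ties counting 0.5. A small sketch with made-up scores:

```python
# AUC as a pairwise ranking probability: for every (positive, negative)
# pair, count a win if the positive is scored higher (0.5 for a tie).
y_true   = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

pos = [s for s, t in zip(y_scores, y_true) if t == 1]
neg = [s for s, t in zip(y_scores, y_true) if t == 0]

wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
           for p in pos for n in neg)
auc = wins / (len(pos) * len(neg))
print(auc)  # 0.75 -- 3 of the 4 pos/neg pairs are ranked correctly
```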
```python
from keras.models import Sequential
from keras import optimizers
from keras.layers import LSTM
from keras.layers import Dense, Dropout, BatchNormalization, TimeDistributed

n_features = 4
# choose a number of time steps
n_steps = None

model = Sequential()
# the layers were cut off in the snippet; a plausible completion:
model.add(LSTM(100, input_shape=(None, n_features)))
model.add(Dense(n_features))
model.compile(optimizer='adam', loss='mse')
```
```python
x_input = dataXScaler[2:30]
x_input = x_input.reshape((1, len(x_input), 4))
print(scaler.inverse_transform(dataXScaler)[2:30])
yhat = model.predict(x_input, verbose=0)
print(scaler.inverse_transform(yhat))
print('expected: ', scaler.inverse_transform(dataYScaler)[30])
```

```
>>>
[[72.33333333 70.22222222 67.         58.23529412]
 [72.33333333 70.22222222 67.         58.23529412]
 [73.66666667 70.22222222 60.         55.58823529]
```
```python
x_input = dataXScaler[2:30]
x_input = x_input.reshape((1, len(x_input), 4))
print(scaler.inverse_transform(dataXScaler)[2:30])
yhat = model.predict(x_input, verbose=0)
print('\nPredicted output: \n', scaler.inverse_transform(yhat))
print('\nExpected: \n', scaler.inverse_transform(dataYScaler)[30])
```

```
>>>
[[72.33333333 70.22222222 67.         58.23529412]
 [72.33333333 70.22222222 67.         58.23529412]
 [73.66666667 70.22222222 60.         55.58823529]
```
```python
n_steps = None
# convert into input/output
i = 1
n = 1
x_input = np.array([dataXScaler[0]])
x_input = x_input.reshape((1, len(x_input), n_features))
while i < len(dataXScaler):
    # demonstrate prediction
    print('Input: ')
```
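The loop above is cut off in the snippet; below is a minimal runnable sketch of the same walk-forward idea, with a stub standing in for `model.predict` (the stub and the synthetic data are assumptions, not from the notes):

```python
import numpy as np

n_features = 4

def fake_predict(x_input):
    # stand-in for model.predict(x_input): echo the last time step
    return x_input[:, -1, :]

# synthetic scaled data, 5 rows of 4 features
dataXScaler = np.linspace(0.0, 1.0, 5 * n_features).reshape(5, n_features)

# start from the first row, predict the next row, append the prediction
# to the input window, and repeat -- walk-forward prediction
x_input = dataXScaler[0].reshape((1, 1, n_features))
preds = []
i = 1
while i < len(dataXScaler):
    yhat = fake_predict(x_input)                 # shape (1, n_features)
    preds.append(yhat[0])
    # slide the window: the prediction becomes the next time step
    x_input = np.concatenate(
        [x_input, yhat.reshape(1, 1, n_features)], axis=1)
    i += 1

print(len(preds))  # 4 predictions for 5 rows of data
```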
```python
from keras.layers import Dense, SimpleRNN

model_rnn = Sequential()
model_rnn.add(TimeDistributed(Dense(128), input_shape=(None, n_features)))
model_rnn.add(SimpleRNN(100, input_shape=[None, n_features], return_sequences=True))
model_rnn.add(SimpleRNN(100))
model_rnn.add(Dense(n_features, activation='softmax'))
model_rnn.compile(optimizer='adam', loss='categorical_crossentropy')
model_rnn.fit_generator(generator, epochs=500, validation_data=validation_generator)
```
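The `generator` passed to `fit_generator` is not defined in the snippet. One plausible construction is sliding windows of `n_steps` rows as X and the following row as y; the window length and synthetic data here are assumptions, shown in plain NumPy:

```python
import numpy as np

n_features = 4
n_steps = 3  # assumed window length; the notes leave n_steps unset

# synthetic data: 10 time steps of 4 features
data = np.arange(10 * n_features, dtype=float).reshape(10, n_features)

def make_windows(data, n_steps):
    """Slice (X, y) pairs: n_steps rows in, the next row as target."""
    X, y = [], []
    for start in range(len(data) - n_steps):
        X.append(data[start:start + n_steps])  # input window
        y.append(data[start + n_steps])        # row to predict
    return np.array(X), np.array(y)

X, y = make_windows(data, n_steps)
print(X.shape, y.shape)  # (7, 3, 4) (7, 4)
```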
```python
from keras.models import Sequential
from keras.layers import GRU, Dense, TimeDistributed, BatchNormalization

n_features = 4
model_gru = Sequential()
model_gru.add(TimeDistributed(Dense(128), input_shape=(None, n_features)))
model_gru.add(GRU(64, activation='tanh', input_shape=(None, n_features), return_sequences=True))
model_gru.add(BatchNormalization())
model_gru.add(GRU(32, activation='tanh'))
# the snippet ends here; a plausible completion:
model_gru.add(Dense(n_features))
model_gru.compile(optimizer='adam', loss='mse')
```
```python
# Evaluate the model on the test data using `evaluate`
print('\n# Evaluate on test data')
results = model.evaluate(x_test, y_test, batch_size=128)
print('test loss, test acc:', results)
```
| # | Listing | Email | Address | IP |
|---|---------|-------|---------|------|
| 1 | listing1 | email1 | addr1 | ip1a |
| 2 | listing1 | email1 | addr1 | ip1a |
| 3 | listing1 | email1 | addr1 | ip1b |
| 4 | listing2 | email1 | addr2 | ip2 |
| 5 | listing3 | email3 | addr3 | ip1b |
| 6 | listing4 | email4 | addr3 | ip4 |
| 7 | listing5 | email5 | addr5 | ip5 |
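The rows above link listings through shared emails, addresses, and IPs. One way to cluster such records is union-find over listing/attribute pairs; this is a sketch of that idea, not necessarily the method the notes had in mind:

```python
# Union-find over listing/attribute pairs: any two listings sharing an
# email, address, or IP (directly or transitively) end up in one cluster.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

rows = [
    ("listing1", "email1", "addr1", "ip1a"),
    ("listing1", "email1", "addr1", "ip1a"),
    ("listing1", "email1", "addr1", "ip1b"),
    ("listing2", "email1", "addr2", "ip2"),
    ("listing3", "email3", "addr3", "ip1b"),
    ("listing4", "email4", "addr3", "ip4"),
    ("listing5", "email5", "addr5", "ip5"),
]

for listing, email, addr, ip in rows:
    union(listing, email)
    union(listing, addr)
    union(listing, ip)

listings = {"listing1", "listing2", "listing3", "listing4", "listing5"}
clusters = {}
for l in sorted(listings):
    clusters.setdefault(find(l), []).append(l)
print(sorted(clusters.values()))
# -> [['listing1', 'listing2', 'listing3', 'listing4'], ['listing5']]
```

listings 1–4 collapse into one cluster (shared email1, then shared ip1b and addr3 chain in listing3 and listing4); listing5 shares nothing and stays alone.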
| Label | Num requests | Avg (ms) | tp90 (ms) | tp99 (ms) | Min (ms) | Max (ms) | Throughput |
|--------------|----|-------|-------|--------|-------|--------|-------|
| HTTP Request | 60 | 77.25 | 96.75 | 110.67 | 68.25 | 126.75 | 12.71 |
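tp90 and tp99 are the 90th and 99th percentile latencies: 90% (resp. 99%) of requests completed at or below that time. A sketch with synthetic latencies, since the 60 real samples behind the table are not available:

```python
import numpy as np

# made-up latency samples, in milliseconds
latencies_ms = np.array(
    [70, 75, 80, 85, 90, 95, 100, 110, 120, 125], dtype=float)

avg = latencies_ms.mean()
tp90 = np.percentile(latencies_ms, 90)  # 90% of requests were at or below this
tp99 = np.percentile(latencies_ms, 99)  # tail latency: slowest 1% boundary
print(avg, tp90, tp99)
```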