- Accuracy: #sr
- Formula:
- accuracy = (number of correctly classified points) / (total number of points)
- = (TP + TN) / (TP + FP + TN + FN)
- When not to use accuracy:
- On an imbalanced dataset (e.g. 90% of points in one class): a dumb model that always predicts the majority class still gets 90% accuracy, so accuracy is misleading here.
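The imbalanced-dataset trap above can be checked directly. A minimal sketch (the 90/10 split and the "dumb" majority-class model are illustrative assumptions):

```python
# Sketch: accuracy of a dumb majority-class model on a 90/10 imbalanced set.
y_true = [0] * 90 + [1] * 10   # 90% negatives, 10% positives
y_pred = [0] * 100             # model always predicts the majority class

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
accuracy = (tp + tn) / len(y_true)
print(accuracy)  # 0.9 -- despite never detecting a single positive
```

The model has zero true positives, yet accuracy still reads 90%.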
- AUC #sr
- What does AUC actually mean? #sr
- AUC is the area under the ROC curve. It measures how well the classifier separates the classes; equivalently, it is the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one.
- What does a high value of AUC mean? #sr
- A higher AUC means the classifier separates the classes better (1.0 = perfect separation, 0.5 = no better than random guessing).
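The rank interpretation of AUC can be computed by hand: it is the fraction of positive-negative pairs in which the positive example gets the higher score. A small sketch with hand-picked (assumed) scores:

```python
# Sketch: AUC as the fraction of positive-negative pairs ranked correctly.
pos_scores = [0.9, 0.8, 0.4]   # classifier scores for positive examples
neg_scores = [0.6, 0.3, 0.2]   # classifier scores for negative examples

pairs = [(p, n) for p in pos_scores for n in neg_scores]
auc = sum(p > n for p, n in pairs) / len(pairs)
print(auc)  # 8 of 9 pairs correctly ordered -> ~0.889
```

Only the pair (0.4, 0.6) is mis-ordered, so AUC = 8/9; perfectly separated scores would give 1.0.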
```python
import torch as th

class NLL_OHEM(th.nn.NLLLoss):
    """Online hard example mining.
    Needs input from nn.LogSoftmax()."""

    def __init__(self, ratio):
        # ratio: fraction of hardest examples in the batch to keep
        super(NLL_OHEM, self).__init__(None, True)
        self.ratio = ratio
```
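The class above defines only the constructor; the gist is cut off before the selection logic. A hedged sketch of what the hard-example mining step might look like (the `ohem_nll` function and its top-k selection are my assumption, not part of the original gist):

```python
import torch as th
import torch.nn.functional as F

def ohem_nll(log_probs, targets, ratio=0.5):
    """Sketch of hard-example mining: average the NLL over only
    the highest-loss fraction of the batch (assumed logic)."""
    losses = F.nll_loss(log_probs, targets, reduction='none')  # per-example loss
    k = max(1, int(ratio * losses.numel()))
    hard, _ = th.topk(losses, k)   # the k hardest examples
    return hard.mean()

logits = th.randn(8, 5)
log_probs = F.log_softmax(logits, dim=1)
targets = th.randint(0, 5, (8,))
loss = ohem_nll(log_probs, targets, ratio=0.25)
```

Because only the hardest examples are averaged, the OHEM loss is always at least as large as the plain batch-mean NLL.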
| Model           | Size   | Latency tp99 (99th percentile) | Accuracy (%)       |
|-----------------|--------|--------------------------------|--------------------|
| BERT            | 712 MB | >500 ms                        | 91.14              |
| DistilBERT      | 528 MB | 164 ms                         | 90.48              |
| DistilBERT+ONNX | 516 MB | 116 ms                         | same as DistilBERT |
| Label        | Num requests | Avg (ms) | tp90 (ms) | tp99 (ms) | Min (ms) | Max (ms) | Throughput |
|--------------|--------------|----------|-----------|-----------|----------|----------|------------|
| HTTP Request | 60           | 77.25    | 96.75     | 110.67    | 68.25    | 126.75   | 12.71      |
| #   | Listing  | Email  | Address | IP   |
|-----|----------|--------|---------|------|
| 1   | listing1 | email1 | addr1   | ip1a |
| 2   | listing1 | email1 | addr1   | ip1a |
| 3   | listing1 | email1 | addr1   | ip1b |
| 4   | listing2 | email1 | addr2   | ip2  |
| 5   | listing3 | email3 | addr3   | ip1b |
| 6   | listing4 | email4 | addr3   | ip4  |
| 7   | listing5 | email5 | addr5   | ip5  |
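The table above reads like an entity-resolution example: rows are linked when they share an email, address, or IP. A minimal union-find sketch (the clustering approach is my assumption about how such rows would be grouped, not something stated in the notes):

```python
# Sketch: cluster the table's rows by shared email / address / IP
# using union-find -- rows sharing any attribute value get linked.
rows = [
    (1, "listing1", "email1", "addr1", "ip1a"),
    (2, "listing1", "email1", "addr1", "ip1a"),
    (3, "listing1", "email1", "addr1", "ip1b"),
    (4, "listing2", "email1", "addr2", "ip2"),
    (5, "listing3", "email3", "addr3", "ip1b"),
    (6, "listing4", "email4", "addr3", "ip4"),
    (7, "listing5", "email5", "addr5", "ip5"),
]

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# link each row to every attribute value it contains
for rid, _, email, addr, ip in rows:
    for value in (email, addr, ip):
        union(("row", rid), ("val", value))

clusters = {}
for rid, *_ in rows:
    clusters.setdefault(find(("row", rid)), []).append(rid)
print(sorted(clusters.values()))  # [[1, 2, 3, 4, 5, 6], [7]]
```

Rows 1-4 are joined through email1, row 5 through ip1b, row 6 through addr3; row 7 shares nothing and stays alone.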
```python
# Evaluate the model on the test data using `evaluate`
print('\n# Evaluate on test data')
results = model.evaluate(x_test, y_test, batch_size=128)
print('test loss, test acc:', results)
```
```python
from keras.models import Sequential
from keras.layers import GRU, Dense, TimeDistributed, BatchNormalization

n_features = 4

model_gru = Sequential()
model_gru.add(TimeDistributed(Dense(128), input_shape=(None, n_features)))
model_gru.add(GRU(64, activation='tanh', return_sequences=True))
model_gru.add(BatchNormalization())
model_gru.add(GRU(32, activation='tanh'))
```
```python
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN, TimeDistributed

model_rnn = Sequential()
model_rnn.add(TimeDistributed(Dense(128), input_shape=(None, n_features)))
model_rnn.add(SimpleRNN(100, return_sequences=True))
model_rnn.add(SimpleRNN(100))
model_rnn.add(Dense(n_features, activation='softmax'))
model_rnn.compile(optimizer='adam', loss='categorical_crossentropy')
# fit_generator is deprecated in recent Keras; fit() accepts generators directly
model_rnn.fit(generator, epochs=500, validation_data=validation_generator)
```
```python
import numpy as np

# Walk through the scaled series one step at a time and predict the next step.
# (The original gist is truncated after the first print; the prediction calls
# below are a sketch of the likely intent, using the model defined earlier.)
i = 1
x_input = np.array([dataXScaler[0]])
x_input = x_input.reshape((1, len(x_input), n_features))
while i < len(dataXScaler):
    # demonstrate prediction
    print('Input: ', x_input)
    yhat = model_gru.predict(x_input)
    print('Predicted: ', yhat)
    # grow the input window with the next observed step
    x_input = np.array(dataXScaler[:i + 1]).reshape((1, i + 1, n_features))
    i += 1
```