
kumar-abhishek / NLL_OHEM.py
Created January 29, 2023 21:02 — forked from erogol/NLL_OHEM.py
Online hard example mining PyTorch
import torch as th

class NLL_OHEM(th.nn.NLLLoss):
    """ Online hard example mining.
    Needs input from nn.LogSoftmax() """

    def __init__(self, ratio):
        super(NLL_OHEM, self).__init__(None, True)
        self.ratio = ratio
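The gist as captured stops after __init__. A minimal sketch of how the missing forward pass could look, assuming the input holds log-probabilities of shape (batch, classes); the class and variable names below are illustrative, not from the original gist.

import torch as th

class NLL_OHEM_Sketch(th.nn.NLLLoss):
    """Illustrative OHEM loss: keeps only the hardest `ratio` fraction of examples."""
    def __init__(self, ratio):
        super().__init__()
        self.ratio = ratio

    def forward(self, x, y):
        # x: log-probabilities (batch, classes), e.g. from nn.LogSoftmax; y: class indices
        num_inst = x.size(0)
        num_hard = max(1, int(self.ratio * num_inst))
        # Per-example NLL is the negative log-probability of the true class
        inst_losses = -x.gather(1, y.unsqueeze(1)).squeeze(1)
        # Keep only the examples with the largest loss (the "hard" ones)
        _, idxs = inst_losses.topk(num_hard)
        return th.nn.functional.nll_loss(x.index_select(0, idxs), y.index_select(0, idxs))

Such a criterion would then be used in place of a plain NLLLoss on log_softmax outputs, e.g. criterion = NLL_OHEM_Sketch(ratio=0.7).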
kumar-abhishek / amqdn-crl.ipynb
Created July 26, 2022 14:17 — forked from amqdn/amqdn-crl.ipynb
Implementing Class Rectification Loss in fast.ai
| Model | Size | Latency tp99 (99th percentile) | Accuracy (%) |
|-----------------|--------|-----------------------------|--------------------|
| BERT | 712 MB | >500 ms | 91.14 |
| DistilBERT | 528 MB | 164 ms | 90.48 |
| DistilBERT+ONNX | 516 MB | 116 ms | same as DistilBERT |
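The DistilBERT+ONNX row presumably comes from exporting the fine-tuned model to ONNX and serving it with ONNX Runtime. A minimal sketch of such an export; the checkpoint name, file path, and opset below are illustrative assumptions, not taken from the measurements above.

import torch
import onnxruntime as ort
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative checkpoint; the actual fine-tuned model is not named in these notes
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Dummy input used only to trace the graph during export
dummy = tokenizer("an example sentence", return_tensors="pt")

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "distilbert.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "attention_mask": {0: "batch", 1: "seq"},
                  "logits": {0: "batch"}},
    opset_version=14,
)

# Serve the exported graph with ONNX Runtime
session = ort.InferenceSession("distilbert.onnx")
logits = session.run(None, {k: v.numpy() for k, v in dummy.items()})[0]
print(logits.shape)  # (1, num_labels)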
| Label | Num requests | Avg (ms) | tp90 (ms) | tp99 (ms) | Min (ms) | Max (ms) | Throughput (req/s) |
|--------------|--------------|-------|-----------|--------|-------|--------|------------|
| HTTP Request | 60 | 77.25 | 96.75 | 110.67 | 68.25 | 126.75 | 12.71 |
  • Accuracy: #sr
    • Formula:
      • total number of correctly classified points / total number of points
      • = (TP + TN) / (TP + FP + TN + FN)
    • When not to use accuracy:
      • On a heavily imbalanced dataset (e.g. 90% of points in one class), a dumb model that always predicts the majority class still gets 90% accuracy, so accuracy is misleading here (see the sketch after this list).
  • AUC #sr
    • What does AUC actually mean? #sr
      • AUC measures how well the classifier separates the classes; a higher AUC means the classifier is better at separating them.
  • What does a high value of AUC mean? #sr
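A small sketch of the point above: on a 90/10 imbalanced set, a constant majority-class predictor scores 90% accuracy, while AUC stays at 0.5 because there is no separation at all (so a high AUC would indicate genuinely good separation). The data below is synthetic and purely illustrative.

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# 90% negatives, 10% positives: a heavily imbalanced label vector
y_true = np.array([0] * 90 + [1] * 10)

# "Dumb" model: always predicts the majority class, with a constant score
y_pred = np.zeros(100, dtype=int)
y_score = np.zeros(100)

print(accuracy_score(y_true, y_pred))   # 0.9, despite the model learning nothing
print(roc_auc_score(y_true, y_score))   # 0.5, i.e. no separation between the classes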
| # | listing  | email  | addr  | ip   |
|---|----------|--------|-------|------|
| 1 | listing1 | email1 | addr1 | ip1a |
| 2 | listing1 | email1 | addr1 | ip1a |
| 3 | listing1 | email1 | addr1 | ip1b |
| 4 | listing2 | email1 | addr2 | ip2  |
| 5 | listing3 | email3 | addr3 | ip1b |
| 6 | listing4 | email4 | addr3 | ip4  |
| 7 | listing5 | email5 | addr5 | ip5  |
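These rows read like an entity-resolution example in which listings are linked whenever they share an email, address, or IP. Assuming that is the intent (the notes do not say), a minimal union-find sketch for grouping them; only the row data is taken from the table above, everything else is illustrative.

from collections import defaultdict

rows = [
    ("listing1", "email1", "addr1", "ip1a"),
    ("listing1", "email1", "addr1", "ip1a"),
    ("listing1", "email1", "addr1", "ip1b"),
    ("listing2", "email1", "addr2", "ip2"),
    ("listing3", "email3", "addr3", "ip1b"),
    ("listing4", "email4", "addr3", "ip4"),
    ("listing5", "email5", "addr5", "ip5"),
]

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps the trees shallow
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Link every listing to each attribute value it shares (email, addr, ip)
for listing, email, addr, ip in rows:
    for attr in (email, addr, ip):
        union(listing, attr)

# Group listings by the root of their connected component
groups = defaultdict(set)
for listing, *_ in rows:
    groups[find(listing)].add(listing)
print(list(groups.values()))  # listings 1-4 end up in one group, listing5 alone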
# Evaluate the model on the test data using `evaluate`
print('\n# Evaluate on test data')
results = model.evaluate(x_test, y_test, batch_size=128)
print('test loss, test acc:', results)
from keras.models import Sequential
from keras.layers import GRU, Dense, TimeDistributed, BatchNormalization

n_features = 4

model_gru = Sequential()
# Per-timestep dense projection of the raw features
model_gru.add(TimeDistributed(Dense(128), input_shape=(None, n_features)))
model_gru.add(GRU(64, activation='tanh', return_sequences=True))
model_gru.add(BatchNormalization())
model_gru.add(GRU(32, activation='tanh'))
kumar-abhishek / simple_rnn.py
Last active January 12, 2020 05:27
SimpleRNN
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN, TimeDistributed

# n_features, generator and validation_generator come from the surrounding snippets
model_rnn = Sequential()
model_rnn.add(TimeDistributed(Dense(128), input_shape=(None, n_features)))
model_rnn.add(SimpleRNN(100, return_sequences=True))
model_rnn.add(SimpleRNN(100))
model_rnn.add(Dense(n_features, activation='softmax'))
model_rnn.compile(optimizer='adam', loss='categorical_crossentropy')
model_rnn.fit_generator(generator, epochs=500, validation_data=validation_generator)
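The generator and validation_generator objects are not shown in the gist. One way they could be built is with Keras' TimeseriesGenerator; the series, window length, and split below are illustrative assumptions.

import numpy as np
from keras.preprocessing.sequence import TimeseriesGenerator

# Illustrative multivariate series with n_features = 4 columns
series = np.random.rand(1000, 4)

# Each sample is a window of 10 timesteps; the target is the row that follows it
generator = TimeseriesGenerator(series[:800], series[:800], length=10, batch_size=32)
validation_generator = TimeseriesGenerator(series[800:], series[800:], length=10, batch_size=32)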
kumar-abhishek / part-2.py
Last active January 12, 2020 05:22
Part-2
import numpy as np

n_steps = None  # sequences of variable length
# convert into input/output
i = 1
n = 1
x_input = np.array([dataXScaler[0]])  # dataXScaler and n_features come from part 1
x_input = x_input.reshape((1, len(x_input), n_features))
while i < len(dataXScaler):
    # demonstrate prediction
    print('Input: ')
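The gist is cut off inside the loop. A sketch of how such a step-by-step prediction loop typically continues, assuming model_rnn and dataXScaler from the earlier snippets; the loop body below is illustrative, not the original code.

while i < len(dataXScaler):
    # demonstrate prediction on the window accumulated so far
    print('Input: ', x_input)
    yhat = model_rnn.predict(x_input, verbose=0)
    print('Predicted: ', yhat)
    # grow the window with the next observed row and move on
    x_input = np.append(x_input, [[dataXScaler[i]]], axis=1)
    i += 1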