
@dgrahn
Created October 25, 2018 11:24
Metrics removed from Keras in 2.0.
"""Keras 1.0 metrics.

This file contains the precision, recall, and f1_score metrics which were
removed from Keras by commit a56b1a55182acf061b1eb2e2c86b48193a0e88f7.
"""
from keras import backend as K


def precision(y_true, y_pred):
    """Precision metric.

    Only computes a batch-wise average of precision. Computes the precision, a
    metric for multi-label classification of how many selected items are
    relevant.
    """
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())


def recall(y_true, y_pred):
    """Recall metric.

    Only computes a batch-wise average of recall. Computes the recall, a metric
    for multi-label classification of how many relevant items are selected.
    """
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives / (possible_positives + K.epsilon())


def f1_score(y_true, y_pred):
    """F1 score metric.

    Only computes a batch-wise average of the F1 score, the harmonic mean of
    precision and recall.
    """
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return (2 * p * r) / (p + r + K.epsilon())
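For reference, here is the same batch-wise arithmetic as a plain-numpy sanity check (an illustration of the math, not part of the gist; `batch_prf1` is a hypothetical helper name):

```python
import numpy as np

def batch_prf1(y_true, y_pred, eps=1e-7):
    """Batch-wise precision, recall, and F1 using the same round/clip
    scheme as the Keras metrics above, on numpy arrays."""
    y_pred = np.round(np.clip(y_pred, 0, 1))
    tp = np.sum(y_true * y_pred)            # true positives
    p = tp / (np.sum(y_pred) + eps)         # of predicted positives, how many are right
    r = tp / (np.sum(y_true) + eps)         # of actual positives, how many were found
    f1 = 2 * p * r / (p + r + eps)          # harmonic mean
    return p, r, f1

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.8, 0.2, 0.7])  # rounds to [1, 1, 0, 1]
p, r, f1 = batch_prf1(y_true, y_pred)
# tp = 2, predicted positives = 3, actual positives = 3, so p = r = f1 ≈ 0.667
```

Note these are per-batch averages, which is exactly why the metrics were removed from Keras 2.0: averaging batch-wise precision/recall over an epoch does not equal the epoch-level metric.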

dgrahn commented Feb 20, 2020

@FrancescaAlf Please post code using code tags, instead of screenshots. I don't know where your precision_recall_curve or auc functions are coming from. Are they numpy functions?

@FrancescaAlf

```python
from sklearn.metrics import auc, precision_recall_curve
import keras.backend as K

def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true[:,:,1] * y_pred[:,:,1], 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred[:,:,1], 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def auc_pcr_1(y_true, y_pred):
    precision, recall, _ = precision_recall_curve(y_true[:,:,1], y_pred[:,:,1])
    area_under_curve_p_r = auc(recall, precision)
    return area_under_curve_p_r
```

@dgrahn no, they are sklearn functions


dgrahn commented Feb 20, 2020

@FrancescaAlf Ah, that's what I meant! Those methods accept numpy arrays, not tensors, so they can't run inside a compiled Keras metric. If you are using TensorFlow as the backend, you could use tf.keras.metrics.AUC and tf.keras.metrics.PrecisionAtRecall. If not, you might have to implement those functions with tensors.
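To make the distinction concrete, here is a minimal sketch (assuming scikit-learn is installed) of the numpy path: compute AUC-PR after prediction, on plain arrays, outside the compiled metric pipeline. The input arrays here are made-up example data.

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

y_true = np.array([0, 0, 1, 1])                # ground-truth labels
y_score = np.array([0.1, 0.4, 0.35, 0.8])      # e.g. output of model.predict(...)

# precision_recall_curve expects numpy arrays, not symbolic tensors
precision_vals, recall_vals, _ = precision_recall_curve(y_true, y_score)

# auc takes x (recall) and y (precision) and integrates with the trapezoid rule
auc_pr = auc(recall_vals, precision_vals)
```

This runs fine in an evaluation script or a callback, but would fail if passed to `model.compile(metrics=[...])`, since the functions there receive backend tensors during graph construction.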

@FrancescaAlf

@dgrahn Oh, ok. Thanks for your help.
