
@dgrahn
Created October 25, 2018 11:24
Metrics removed from Keras in 2.0.
"""Keras 1.0 metrics.
This file contains the precision, recall, and f1_score metrics which were
removed from Keras by commit: a56b1a55182acf061b1eb2e2c86b48193a0e88f7
"""
from keras import backend as K
def precision(y_true, y_pred):
"""Precision metric.
Only computes a batch-wise average of precision. Computes the precision, a
metric for multi-label classification of how many selected items are
relevant.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def recall(y_true, y_pred):
"""Recall metric.
Only computes a batch-wise average of recall. Computes the recall, a metric
for multi-label classification of how many relevant items are selected.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def f1_score(y_true, y_pred):
"""Computes the F1 Score
Only computes a batch-wise average of recall. Computes the recall, a metric
for multi-label classification of how many relevant items are selected.
"""
p = precision(y_true, y_pred)
r = recall(y_true, y_pred)
return (2 * p * r) / (p + r + K.epsilon())
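
Since each metric takes `(y_true, y_pred)` tensors and returns a scalar tensor, the functions can be passed straight to `model.compile`. A minimal usage sketch (the model below is hypothetical, just to show the wiring):

```python
from keras.models import Sequential
from keras.layers import Dense

# Hypothetical binary classifier, for illustration only.
model = Sequential([Dense(1, activation='sigmoid', input_shape=(10,))])

# Keras calls each metric with (y_true, y_pred) on every batch and
# reports the running batch-wise average during training.
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[precision, recall, f1_score])
```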
@thusinh1969

F1 returns 1.9?! What could be going wrong?

Thanks,
Steve

@dgrahn commented Sep 17, 2019

@thusinh1969 I'd have to see your data. I can't debug it otherwise.

@Mariyamimtiaz

Can we use the same code for a multi-class classification problem?
Thanks in anticipation.

@dgrahn commented Nov 8, 2019

@Mariyamimtiaz If you're using TensorFlow as a backend, I would recommend using tf.metrics.
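
For example, a rough sketch using the built-in tf.keras metrics (this assumes a TensorFlow version that ships `tf.keras.metrics.Precision` and `tf.keras.metrics.Recall`, and an already-built model named `model`; `class_id` is how you point them at one class of a multi-class output):

```python
import tensorflow as tf

# Built-in streaming metrics. class_id restricts each metric to a single
# class of a one-hot multi-class output; class 1 is just an example.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=[tf.keras.metrics.Precision(class_id=1),
                       tf.keras.metrics.Recall(class_id=1)])
```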

@Mariyamimtiaz

@dgrahn No, I am using Keras.

@dgrahn commented Nov 11, 2019

@Mariyamimtiaz Keras is a frontend. What's your backend?

@Mariyamimtiaz

I am using the same code for a multi-class problem, but the recall is always 1 and I don't know why. Please find the attached screenshot below.
[screenshot of the training log]

@dgrahn commented Nov 11, 2019

I don't know. Debug it? Or use something built into your backend.

@FrancescaAlf

@dgrahn Hi, I am using the same code for a multi-class classification problem, with a small modification because I want to pay more attention to class 1.
Here is the code:
[screenshot of the modified metric code]

I would now like to create a new custom metric to monitor the AUC of the precision-recall curve for the same class. Using the following code, I get an error: Cannot convert a symbolic Tensor (metrics_19/auc_pcr_1/strided_slice:0) to a numpy array.
[screenshot of the code and error]
Could you help me find a solution?

@dgrahn commented Feb 20, 2020

@FrancescaAlf Please post code using code tags, instead of screenshots. I don't know where your precision_recall_curve or auc functions are coming from. Are they numpy functions?

@FrancescaAlf

```python
from sklearn.metrics import auc, precision_recall_curve
import keras.backend as K

def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true[:,:,1] * y_pred[:,:,1], 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred[:,:,1], 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def auc_pcr_1(y_true, y_pred):
    precision, recall, _ = precision_recall_curve(y_true[:,:,1], y_pred[:,:,1])
    area_under_curve_p_r = auc(recall, precision)
    return area_under_curve_p_r
```

@dgrahn No, they are sklearn functions.

@dgrahn commented Feb 20, 2020

@FrancescaAlf Ah, that's what I meant! Those methods accept NumPy arrays, not tensors. If you are using TensorFlow as the backend, you could use tf.keras.metrics.AUC and tf.keras.metrics.PrecisionAtRecall. If not, you might have to implement those functions with tensors.
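
A rough sketch of that suggestion, assuming a TensorFlow backend recent enough to ship `tf.keras.metrics.AUC` (passing `curve='PR'` selects a streaming approximation of the precision-recall AUC, computed entirely with tensors, which avoids the symbolic-tensor error):

```python
import tensorflow as tf

# Streaming approximation of the area under the precision-recall curve,
# computed in the graph, so it can be used as an ordinary Keras metric.
pr_auc = tf.keras.metrics.AUC(curve='PR', name='pr_auc')

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=[pr_auc])
```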

@FrancescaAlf

@dgrahn Oh, OK. Thanks for your help.
