
@christinebuckler
Created November 11, 2018 21:42
Multi-label metric functions for a single sample: precision, recall, and F1 score.
def precision(y_true, y_pred):
    """Fraction of predicted labels that are correct."""
    i = set(y_true).intersection(y_pred)
    return len(i) / len(y_pred) if y_pred else 0


def recall(y_true, y_pred):
    """Fraction of true labels that were predicted."""
    i = set(y_true).intersection(y_pred)
    return len(i) / len(y_true) if y_true else 0


def f1(y_true, y_pred):
    """Harmonic mean of precision and recall."""
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return 2 * p * r / (p + r) if p + r else 0


print(f1(['A', 'B', 'C'], ['A', 'B']))
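These functions score one sample at a time. For a whole dataset, the per-sample scores are typically averaged (what scikit-learn calls `average='samples'`). A minimal self-contained sketch of that averaging, where each element of the input lists holds the label set for one sample (function names here are illustrative, not from the gist):

```python
def sample_f1(y_true, y_pred):
    # Per-sample F1, same definition as above.
    i = len(set(y_true) & set(y_pred))
    p = i / len(y_pred) if y_pred else 0
    r = i / len(y_true) if y_true else 0
    return 2 * p * r / (p + r) if p + r else 0


def samples_averaged_f1(true_lists, pred_lists):
    # Mean of per-sample F1 scores across the dataset.
    scores = [sample_f1(t, p) for t, p in zip(true_lists, pred_lists)]
    return sum(scores) / len(scores)


print(samples_averaged_f1([['A', 'B', 'C'], ['A']],
                          [['A', 'B'], ['A']]))
```

First sample scores 0.8 (precision 1.0, recall 2/3), the second 1.0, so the average is 0.9.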