@abhishek-shrm
Created October 4, 2020 17:30
def evaluate_micro_average(actual_keys, predicted_keys):
    # Combining actual keywords from all documents into one list
    ground_truth = []
    for i in actual_keys:
        ground_truth.extend(i)
    # Combining extracted keywords from all documents into one list
    extracted_keywords = []
    for i in predicted_keys:
        extracted_keywords.extend(i)
    # Number of extracted keywords
    num_extract = len(extracted_keywords)
    # Number of keywords in ground truth
    num_actual = len(ground_truth)
    # Number of correctly extracted keywords (per-document intersection)
    num_correct = 0
    for i, j in zip(actual_keys, predicted_keys):
        num_correct += len(set(i).intersection(set(j)))
    # If no correct keywords were extracted, all metrics are zero
    if num_correct == 0:
        return 0, 0, 0
    # Precision: fraction of extracted keywords that are correct
    precision = num_correct / num_extract
    # Recall: fraction of ground-truth keywords that were extracted
    recall = num_correct / num_actual
    # F-measure: harmonic mean of precision and recall
    f_measure = (2 * precision * recall) / (precision + recall)
    return precision * 100, recall * 100, f_measure * 100
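
A quick usage sketch (the keyword lists below are hypothetical; the function expects one list of keyword strings per document, in the same document order for both arguments):

# Hypothetical ground-truth and predicted keywords for two documents
actual = [['machine learning', 'nlp'], ['keyword extraction', 'tf-idf']]
predicted = [['machine learning', 'deep learning'], ['tf-idf']]

precision, recall, f_measure = evaluate_micro_average(actual, predicted)
print(f'Precision: {precision:.2f}%, Recall: {recall:.2f}%, F-measure: {f_measure:.2f}%')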