@alinazhanguwo
Created April 24, 2019 20:14
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import average_precision_score
from sklearn.metrics import recall_score

def evaluation_scores(y_val, predicted):
    # Report overall accuracy plus macro-, micro-, and weighted-averaged F1
    # for the validation labels versus the predicted labels.
    print("Accuracy={}".format(accuracy_score(y_val, predicted)))
    print("F1_macro={}".format(f1_score(y_val, predicted, average='macro')))
    print("F1_micro={}".format(f1_score(y_val, predicted, average='micro')))
    print("F1_weighted={}".format(f1_score(y_val, predicted, average='weighted')))
print('Tfidf')
evaluation_scores(y_val, y_val_predicted_labels_tfidf)