@olinguyen
Last active June 15, 2017 14:39
from modshogun import *

# y_pred: BinaryLabels returned by a trained binary classifier
#         (carries real-valued confidence scores alongside the +/-1 labels)
# y_test: BinaryLabels holding the ground-truth labels

# Evaluating the prediction object directly uses its real-valued scores
roc = ROCEvaluation()
roc.evaluate(y_pred, y_test)
auc = roc.get_auROC()
print(auc) # 0.845606993532

# Rebuilding the labels via get_labels() keeps only the thresholded
# +/-1 classes, so the ROC is computed on binary values instead of scores
roc = ROCEvaluation()
roc.evaluate(BinaryLabels(y_pred.get_labels()), BinaryLabels(y_test.get_labels()))
auc = roc.get_auROC()
print(auc) # 0.563780270879
@karlnapf

Try get_values rather than get_labels

@karlnapf

The problem here is that the API doesn't nicely distinguish between the binary-valued class labels and the scores associated with them (the scores are what the ROC computation uses). If you create a new labels object from the binary version via get_labels(), the AUC differs from the one you get when you pass the scores themselves.
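
A minimal sketch of that distinction, assuming y_pred is the BinaryLabels object from the snippet above:

labels = y_pred.get_labels() # thresholded class decisions, only -1 and +1
scores = y_pred.get_values() # the real-valued scores the ROC curve needs
print(labels) # two distinct values; the ranking information is lost
print(scores) # continuous outputs; this is what evaluate() should see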

@karlnapf

Actually, this is another candidate for Shogun's API issues :D

@olinguyen
Author

Using get_values instead of get_labels fixed the issue -- both AUC values print 0.845.
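
For reference, a sketch of the corrected call under the same setup (y_pred and y_test as above), applying get_values on the prediction side:

roc = ROCEvaluation()
roc.evaluate(BinaryLabels(y_pred.get_values()), BinaryLabels(y_test.get_labels()))
print(roc.get_auROC()) # 0.845606993532, matching the direct call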
