# Evaluate Model M0 (popularity-based recommendations)
actual1 <- data_test     # observed labels from the test set
pred1 <- popularity_rec  # labels predicted by model M0

# create the confusion matrix of actual vs. predicted labels
cm <- as.matrix(table(Actual = actual1, Predicted = pred1))
cm

n <- sum(cm)         # total number of instances
nc <- nrow(cm)       # number of classes
diag_cm <- diag(cm)  # correctly classified instances per class (diagonal of cm)
rowsums <- apply(cm, 1, sum)  # number of instances per actual class
colsums <- apply(cm, 2, sum)  # number of predictions per class
p <- rowsums / n  # distribution of instances over the actual classes
q <- colsums / n  # distribution of instances over the predicted classes

precision <- diag_cm / colsums  # per-class precision: TP / (TP + FP)
recall <- diag_cm / rowsums     # per-class recall: TP / (TP + FN)
df <- data.frame(precision, recall)
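
# From the same confusion matrix, overall accuracy and macro-averaged
# scores follow directly. A minimal sketch using the objects defined
# above (the f1 and macro* variables are illustrative, not in the original):
accuracy <- sum(diag_cm) / n                         # fraction classified correctly
f1 <- 2 * precision * recall / (precision + recall)  # per-class F1 score
macroPrecision <- mean(precision)                    # unweighted mean over classes
macroRecall <- mean(recall)
macroF1 <- mean(f1)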
# Evaluate Model M1
actual2 <- data_train  # observed labels (taken from the training set)
pred2 <- reccom        # labels predicted by model M1

# create the confusion matrix of actual vs. predicted labels
cm1 <- as.matrix(table(Actual = actual2, Predicted = pred2))
cm1

n1 <- sum(cm1)         # total number of instances
nc1 <- nrow(cm1)       # number of classes
diag_cm1 <- diag(cm1)  # correctly classified instances per class
rowsums1 <- apply(cm1, 1, sum)  # number of instances per actual class
colsums1 <- apply(cm1, 2, sum)  # number of predictions per class
p1 <- rowsums1 / n1  # distribution of instances over the actual classes
q1 <- colsums1 / n1  # distribution of instances over the predicted classes

precision1 <- diag_cm1 / colsums1  # per-class precision
recall1 <- diag_cm1 / rowsums1     # per-class recall
df1 <- data.frame(precision = precision1, recall = recall1)
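
# The two evaluation blocks above repeat the same computation, so a small
# helper keeps them consistent. A sketch only, assuming actual and predicted
# are label vectors of equal length (eval_model is not part of the original):
eval_model <- function(actual, predicted) {
  cm <- as.matrix(table(Actual = actual, Predicted = predicted))
  diag_cm <- diag(cm)
  data.frame(precision = diag_cm / colSums(cm),
             recall = diag_cm / rowSums(cm))
}
# Equivalent to the blocks above:
# df <- eval_model(data_test, popularity_rec)
# df1 <- eval_model(data_train, reccom)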