How do I calculate precision, recall, specificity, and sensitivity manually?
I have actual class labels and predicted class labels (one `actual, predicted` pair per line):
2 , 1
0 , 0
2 , 1
2 , 1
2 , 1
1 , 1
2 , 1
1 , 1
0 , 0
2 , 1
2 , 1
2 , 1
2 , 1
3 , 1
2 , 1
2 , 1
2 , 1
2 , 1
2 , 1
1 , 1
2 , 1
2 , 1
1 , 1
1 , 1
0 , 0
1 , 1
1 , 1
2 , 1
2 , 1
1 , 1
2 , 1
1 , 1
0 , 0
2 , 1
1 , 1
0 , 0
2 , 1
2 , 1
0 , 0
2 , 1
2 , 1
2 , 1
I am using scikit-learn's classification_report function to get the per-class metrics and the accuracy:
from sklearn.metrics import classification_report
print(classification_report(actual_label, pred_res))
Which yields:
              precision    recall  f1-score   support

           0       1.00      1.00      1.00         6
           1       1.00      0.28      0.43        36
           2       0.00      0.00      0.00         0
           3       0.00      0.00      0.00         0

    accuracy                           0.38        42
   macro avg       0.50      0.32      0.36        42
weighted avg       1.00      0.38      0.52        42
Is there any other way to calculate precision, recall, sensitivity, and specificity without using this function?
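For reference, each metric can be computed by hand from one-vs-rest counts per class: precision = TP/(TP+FP), recall (which is the same as sensitivity) = TP/(TP+FN), and specificity = TN/(TN+FP). A minimal sketch in plain Python, using the pairs listed above (the function and variable names are mine, not from any library):

```python
# Actual and predicted labels, transcribed from the pairs in the question.
actual = [2,0,2,2,2,1,2,1,0,2,2,2,2,3,2,2,2,2,2,1,2,
          2,1,1,0,1,1,2,2,1,2,1,0,2,1,0,2,2,0,2,2,2]
pred   = [1,0,1,1,1,1,1,1,0,1,1,1,1,1,1,1,1,1,1,1,1,
          1,1,1,0,1,1,1,1,1,1,1,0,1,1,0,1,1,0,1,1,1]

def per_class_metrics(actual, pred, cls):
    """One-vs-rest precision, recall (sensitivity), and specificity for one class."""
    tp = sum(1 for a, p in zip(actual, pred) if a == cls and p == cls)
    fp = sum(1 for a, p in zip(actual, pred) if a != cls and p == cls)
    fn = sum(1 for a, p in zip(actual, pred) if a == cls and p != cls)
    tn = sum(1 for a, p in zip(actual, pred) if a != cls and p != cls)
    precision   = tp / (tp + fp) if (tp + fp) else 0.0
    recall      = tp / (tp + fn) if (tp + fn) else 0.0  # recall == sensitivity
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return precision, recall, specificity

for cls in sorted(set(actual) | set(pred)):
    p, r, s = per_class_metrics(actual, pred, cls)
    print(f"class {cls}: precision={p:.2f} recall/sensitivity={r:.2f} specificity={s:.2f}")
```

Note that scikit-learn's report treats each class one-vs-rest in the same way, which is why classes that never appear (support 0) get 0.00 across the board.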
Topic confusion-matrix scikit-learn classification machine-learning
Category Data Science