Confusion matrix: when mistakes below the diagonal are better than mistakes above it
I have a classification problem and I am producing a confusion matrix. Ideally one wants all results on the diagonal, and I do get quite a few points near the diagonal for different algorithms. Still, for my use case I want to favor algorithms that underpredict the class (my data are ordinal) rather than overpredict it.
Is there a metric that measures under- and overprediction separately and weights those two kinds of errors differently? The usual accuracy and precision metrics assume that all mistakes are equally costly.
Of course I could implement my own metric, but I am quite sure I am not the first to run into this issue.
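To make the idea concrete, here is a minimal sketch of what such a custom metric could look like, using Python with scikit-learn. The function name `asymmetric_cost` and the `over_weight`/`under_weight` parameters are hypothetical choices of mine, not an established metric: it builds a cost matrix that charges more per class step for predictions above the diagonal (overprediction) than below it (underprediction).

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def asymmetric_cost(y_true, y_pred, labels, over_weight=2.0, under_weight=1.0):
    """Average misclassification cost (hypothetical metric).

    Overpredictions (pred > true) cost `over_weight` per class step,
    underpredictions (pred < true) cost `under_weight` per class step.
    Lower is better; 0 means a perfect diagonal.
    """
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    n = len(labels)
    cost = np.zeros((n, n))
    for i in range(n):          # row index: true class
        for j in range(n):      # column index: predicted class
            if j > i:           # above the diagonal: overprediction
                cost[i, j] = over_weight * (j - i)
            elif j < i:         # below the diagonal: underprediction
                cost[i, j] = under_weight * (i - j)
    return (cm * cost).sum() / cm.sum()

# Example: overpredictions penalized twice as hard as underpredictions.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 2, 1, 2, 1]
print(asymmetric_cost(y_true, y_pred, labels=[0, 1, 2]))  # 0.6
```

This is essentially a weighted Cohen's kappa idea with an asymmetric weight matrix, so a ready-made metric along these lines may already exist somewhere.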
Is there an existing metric you already know of? Thanks, Alex
Topic confusion-matrix classification
Category Data Science