Tuning a classifier for high precision, with no regard for recall
I understand this falls under the decision-making aspect of classification rather than the probabilistic one, but for the work I am doing I need the classifier to have very high precision, because I cannot afford a false positive. I do not care about false negatives, and consequently do not care about recall.

Since the model is currently a binary classifier, the obvious suggestion is to raise the decision probability threshold from its current value of 0.5. However, I will eventually need to add a third class, and will therefore switch to three outputs with a softmax. I am not aware of established methods for shifting my pipeline towards a high-precision outcome, and I am looking for ways to achieve this.
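To make concrete what I mean by thresholding, here is a minimal sketch (not my actual pipeline; the threshold values and the fallback class are illustrative assumptions) of raising the decision threshold for a binary classifier and of one way the same idea might extend to a 3-class softmax head with per-class thresholds:

```python
import numpy as np

def predict_binary(p_positive, threshold=0.9):
    """Predict the positive class only when its probability clears a high bar."""
    return (p_positive >= threshold).astype(int)

def predict_multiclass(softmax_probs, thresholds, fallback=0):
    """Per-class thresholds on a softmax output (illustrative sketch).

    softmax_probs: (n_samples, n_classes) array of class probabilities.
    thresholds:    (n_classes,) array; a class is only predicted if its
                   probability exceeds its own threshold.
    fallback:      class returned when no class clears its threshold
                   (e.g. the "safe" / negative class).
    """
    candidates = np.argmax(softmax_probs, axis=1)
    confident = softmax_probs[np.arange(len(candidates)), candidates] >= thresholds[candidates]
    return np.where(confident, candidates, fallback)

# Made-up probabilities for illustration only.
probs = np.array([[0.05, 0.90, 0.05],
                  [0.40, 0.35, 0.25]])
print(predict_multiclass(probs, thresholds=np.array([0.5, 0.85, 0.85])))
# -> [1 0]: only the first sample is confident enough to leave the fallback class.
```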
If it helps, the problem is classification of 256x256 grayscale images in a domain that, according to current papers in the computer vision literature, is very difficult to classify.
Topic finetuning multiclass-classification image-classification
Category Data Science