Average Precision if Target Class is Not in Evaluation

Suppose I have 5 classes, denoted by 1, 2, 3, 4, and 5, and this is used in object detection.

When evaluating an object detection performance, suppose I have classes 1, 2, and 3 present, but classes 4 and 5 are not present in the targeted values.

Will classes 4 and 5 each have an average precision of 0 (since their precision is zero, as no true positives can be identified)? Or are there other considerations to take into account in this case?

Tags: object-detection, metric


I'm not sure there is a standard for this kind of case. I think it depends on whether the system predicts any of these classes, i.e. whether there are any false positive cases:

  • if yes, then the precision must indeed be zero: of all the instances predicted positive, none was a true positive.
  • if no, I think it makes more sense to consider the precision undefined (NaN), since no instance is predicted positive at all (the calculation of precision involves a division by zero).
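To make the two cases concrete, here is a minimal sketch (the helper names are hypothetical, and it assumes detection matching has already been done, so only TP/FP counts per class remain):

```python
import math

def class_precision(tp, fp):
    """Precision = TP / (TP + FP); NaN when the class is never predicted."""
    if tp + fp == 0:
        return math.nan  # division by zero: precision is undefined
    return tp / (tp + fp)

def mean_over_defined(values):
    """Average over classes, skipping undefined (NaN) entries --
    one possible convention for aggregating per-class scores."""
    valid = [v for v in values if not math.isnan(v)]
    return sum(valid) / len(valid)

# Class 4: absent from the targets but predicted 3 times -> all false positives
print(class_precision(tp=0, fp=3))   # 0.0
# Class 5: absent from the targets and never predicted -> undefined
print(class_precision(tp=0, fp=0))   # nan
```

Whether NaN classes are skipped or counted as 0 when averaging into a final score changes the result, so it is worth checking what the evaluation library you use actually does.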

This is a very borderline case, though: it would be quite questionable to use a test set that doesn't contain all the classes present in the training set, since such a test set is not fit for the purpose of evaluation.
