Is active learning able to detect challenging cases?
Let's say we have a set of data points that need to be labelled for a classification task. In pool-based active learning with an uncertainty-based query strategy, is the AL approach able to detect challenging cases? By challenging cases I mean samples for which the model assigns a high prediction score to $\hat{y}$ (e.g. 90%), yet $\neg\hat{y}$ is most probably the correct label.
The rationale behind my question is: does adding more samples to the training set always improve the performance of a classifier?
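To make the setting concrete, here is a minimal sketch of pool-based uncertainty sampling (least-confidence criterion) using scikit-learn. The data and model choice are illustrative assumptions, not part of the question; the point is that the criterion selects the sample the model is *least sure* about, so a "challenging case" the model is confidently wrong about (max probability around 0.9) would score low and not be queried:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Small labelled seed set and an unlabelled pool (synthetic, for illustration).
X_train = rng.normal(size=(40, 2)) + np.repeat([[0.0, 0.0], [3.0, 3.0]], 20, axis=0)
y_train = np.repeat([0, 1], 20)
X_pool = rng.normal(size=(200, 2)) * 2 + [1.5, 1.5]

clf = LogisticRegression().fit(X_train, y_train)

# Least-confidence uncertainty: 1 - max class probability.
# High for samples near the decision boundary (max prob near 0.5),
# low for confident predictions -- including confidently WRONG ones.
proba = clf.predict_proba(X_pool)
uncertainty = 1.0 - proba.max(axis=1)

# The query picks the most uncertain pool sample, typically one whose
# top class probability is close to 0.5, not one scored around 0.9.
query_idx = int(np.argmax(uncertainty))
print(proba[query_idx].max())
```

Under this criterion, a sample predicted with 90% confidence has uncertainty 0.1 and sits far down the ranking, which is exactly what the question is probing.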
Topic active-learning classification
Category Data Science