Can you get a very good AUC-ROC score despite predicting all rows to have the same probability?

On the test set of a binary classification problem, the 25th, 50th and 75th percentiles of the predicted probabilities are all very close to each other (e.g. all roughly 0.123).

Is it possible for my model to achieve a high AUC-ROC (e.g. 0.85) despite giving essentially the same probability prediction for almost all of the rows?

The data is imbalanced.
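As a concrete illustration of the scenario being asked about: AUC-ROC is rank-based, so only the ordering of the scores matters, not their absolute values or spread. Scores can cluster so tightly around 0.123 that the quartiles are indistinguishable, yet the tiny differences among them can still rank positives above negatives often enough to give a high AUC. The sketch below uses entirely synthetic, hypothetical numbers (5% positives, noise scale `1e-4`, positive shift `1.5e-4`) chosen only to reproduce this effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced labels: roughly 5% positives (hypothetical data).
y = (rng.random(10_000) < 0.05).astype(int)

# Scores clustered tightly around 0.123; positives get a slightly
# higher score on average. The spread is far below three-decimal
# resolution, so p25, p50 and p75 all round to 0.123.
scores = 0.123 + 1e-4 * rng.standard_normal(y.size) + 1.5e-4 * y

print(np.percentile(scores, [25, 50, 75]).round(3))

def auc(y, s):
    """AUC = probability that a random positive outranks a random
    negative (Mann-Whitney U / (n_pos * n_neg)); uses ranks only."""
    pos = s[y == 1]
    neg = s[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

print(auc(y, scores))  # well above 0.5 despite near-constant scores
```

So a quantile summary of the predictions says nothing by itself about AUC; it only tells you the probabilities are poorly calibrated or compressed, which is common on imbalanced data.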

Topic: roc, class-imbalance, classification

Category: Data Science
