Why is the calibration of a few of my binary classifiers not perfect?

I am building binary classifiers with LightGBM. I compare the classifiers' results as I vary the costs of false positives and false negatives in a customized objective function, always using the same training and validation datasets. Since I want the models to output probabilities, I use isotonic regression as the final step of the pipeline.

Applying exactly the same methodology and code, and changing only the cost variables of the customized objective function, I find that most of the classifiers are perfectly calibrated, but a few are not: all of the miscalibrated ones fall below the 'perfectly calibrated' line on the reliability diagram.
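For concreteness, this is how such a curve is typically computed; the labels and probabilities here are synthetic, constructed so that the true event rate is only ~0.8 of the predicted probability, i.e. an over-confident model whose curve sits below the diagonal.

```python
# Reliability diagram: compare per-bin observed frequency (frac_pos)
# against per-bin mean predicted probability (mean_pred).
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
proba = rng.uniform(0.0, 1.0, 5000)  # synthetic predicted probabilities
# Synthetic labels: true positive rate is only 80% of the prediction,
# so the model systematically over-predicts.
y_val = (rng.uniform(0.0, 1.0, 5000) < 0.8 * proba).astype(int)

frac_pos, mean_pred = calibration_curve(y_val, proba, n_bins=10)

# Bins where frac_pos < mean_pred plot below the 'perfectly calibrated'
# line: the model predicts more positives than actually occur there.
below = frac_pos < mean_pred
```

A curve entirely below the diagonal therefore means the predicted probabilities are systematically higher than the observed event frequencies on the validation data.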

Why does this happen? Why can't the calibration be perfect, both in general and in this particular scenario?

Topic lightgbm objective-function probability-calibration scikit-learn classification

Category Data Science
