How to interpret calibration curves for prediction models?

I am working on a binary classification problem using a random forest with 977 records (the class ratio is 77:23).

After building the model and getting an AUC of 0.81, I decided to plot a calibration curve and compute the Brier score. My graph looks like the one below (without any calibration applied). I don't know why my AUC drops here (when I use the code below).
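The original code and plot are not included in the question, so here is a minimal sketch of how such a reliability diagram and Brier score are typically produced with scikit-learn. The synthetic `X`, `y` (generated with `make_classification` to roughly match the 977 records and 77:23 ratio) and all hyperparameter values are stand-ins, not the original setup:

```python
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in data matching the question's 977 records and ~77:23 class ratio.
X, y = make_classification(
    n_samples=977, n_features=20, weights=[0.77, 0.23], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

rf = RandomForestClassifier(n_estimators=500, random_state=42)
rf.fit(X_train, y_train)

# Predicted probability of the positive class on held-out data.
prob_pos = rf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, prob_pos))
print("Brier score:", brier_score_loss(y_test, prob_pos))

# Reliability diagram: observed fraction of positives vs. mean predicted
# probability per bin; a perfectly calibrated model lies on the diagonal.
frac_pos, mean_pred = calibration_curve(y_test, prob_pos, n_bins=10)
plt.plot(mean_pred, frac_pos, "s-", label="Random forest")
plt.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
plt.xlabel("Mean predicted probability")
plt.ylabel("Fraction of positives")
plt.legend()
plt.show()
```

Note the `stratify=y` in the split, which keeps the 77:23 ratio in both the training and test sets.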

Later, when I build the calibration model, I see the output below. You can see that the calibrated model's Brier score is larger than the earlier one, and the AUC has also dropped.
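The calibration step itself is also not shown; a plausible sketch using scikit-learn's `CalibratedClassifierCV`, reusing the names from the snippet above (the `method` and `cv` values here are assumptions):

```python
from sklearn.calibration import CalibratedClassifierCV

# Wrap the forest in a calibrator; cv=5 fits clones of rf on training folds
# and learns the calibration map on the held-out folds. With fewer than
# ~1000 samples, sigmoid (Platt) scaling is usually safer than isotonic
# regression, which can overfit a small calibration set.
calibrated_rf = CalibratedClassifierCV(rf, method="sigmoid", cv=5)
calibrated_rf.fit(X_train, y_train)

prob_pos_cal = calibrated_rf.predict_proba(X_test)[:, 1]
print("Calibrated AUC:", roc_auc_score(y_test, prob_pos_cal))
print("Calibrated Brier score:", brier_score_loss(y_test, prob_pos_cal))
```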

How should we interpret these calibration curves?

Tags: probability-calibration, random-forest, classification, data-mining, machine-learning
