How to interpret calibration curves for prediction models?
I am working on a binary classification problem using a random forest, with 977 records (77:23 class ratio).
After building the model and getting an AUC of 0.81, I plotted a calibration curve and computed the Brier score. My graph looks like the one below (without any calibration model applied). I don't understand why the AUC drops here (when I use the code below).
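Since my code isn't shown above, here is a minimal sketch of how I produce the uncalibrated curve and scores, assuming scikit-learn's `calibration_curve` and `brier_score_loss`; the `make_classification` dataset stands in for my own 977-record data:

```python
# Sketch: reliability curve + Brier score for an UNCALIBRATED random forest.
# The synthetic dataset below is a stand-in for my real data (977 rows, 77:23).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss, roc_auc_score

X, y = make_classification(n_samples=977, weights=[0.77, 0.23], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

rf = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)[:, 1]  # predicted probability of the positive class

# Fraction of actual positives vs. mean predicted probability per bin;
# a well-calibrated model tracks the diagonal y = x.
frac_pos, mean_pred = calibration_curve(y_te, proba, n_bins=10)

print("Brier score:", brier_score_loss(y_te, proba))
print("AUC:", roc_auc_score(y_te, proba))
```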
Later, when I build the calibration model, I see the output below. You can see that the calibrated model's Brier score is larger than the earlier (pre-calibration) one, and the AUC has dropped as well.
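For the calibrated model, I do roughly the following; this is a sketch assuming `CalibratedClassifierCV` with Platt scaling (`method="sigmoid"`; `method="isotonic"` is the usual alternative), again on synthetic stand-in data:

```python
# Sketch: compare Brier score and AUC before vs. after calibration,
# using CalibratedClassifierCV (sigmoid / Platt scaling) with 5-fold CV.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import brier_score_loss, roc_auc_score

X, y = make_classification(n_samples=977, weights=[0.77, 0.23], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

raw = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
cal = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                             method="sigmoid", cv=5).fit(X_tr, y_tr)

for name, model in [("uncalibrated", raw), ("calibrated", cal)]:
    p = model.predict_proba(X_te)[:, 1]
    print(name, "Brier:", brier_score_loss(y_te, p), "AUC:", roc_auc_score(y_te, p))
```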
How should calibration curves like these be interpreted, and is it expected for the Brier score and AUC to worsen after calibration?