Why is the loss different after resuming from a checkpoint?

I am training a Mask R-CNN model in Keras. I use checkpoints to save the weights so that I can resume training from the last optimized values.

However, the loss when I resume training does not match the loss at the time the checkpoint was saved: it was ~1.66 when the checkpoint was written, but ~2.62 right after resuming. I assumed that, since I had saved the weights, the loss would continue to drop from the point where training stopped.

Could anyone explain this?

Topic: weight-initialization, keras, loss-function, deep-learning

Category: Data Science
