Keras EarlyStopping callback: Why would I ever set restore_best_weights=False?

The point of EarlyStopping is to stop training once the validation loss (or whatever metric is being monitored) stops improving.

If I set EarlyStopping(patience=10, restore_best_weights=False), Keras halts training 10 epochs after val_loss last improved, but hands back the weights from that final epoch rather than from the best one. Why would I ever want this? Hasn't this model just trained for 10 unnecessary epochs? Wouldn't it make more sense to give me back the model with the lowest validation loss, i.e., with restore_best_weights=True?
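For reference, here is a minimal, runnable sketch of the configuration in question (the tiny model and random data are placeholders just to make the snippet self-contained, not part of any real setup):

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 256 samples, 8 features, scalar regression target.
x = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

# Placeholder model, just enough to exercise the callback.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Training stops 10 epochs after val_loss last improved. With
# restore_best_weights=False, the weights from the *final* epoch are kept,
# not the weights from the best-val_loss epoch.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=10,
    restore_best_weights=False,  # the setting the question asks about
)

model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```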

I'd love to hear about situations where keeping the weights from those extra 10 epochs of training is better than restoring the best ones.

Tags: early-stopping, keras, tensorflow, deep-learning
