Keras: How to restore initial weights when using EarlyStopping

Using Keras, I set up EarlyStopping like this:

EarlyStopping(monitor='val_loss', min_delta=0, patience=100, verbose=0, mode='min', restore_best_weights=True)

When I train, it behaves almost as advertised. However, before training I initialize my model with weights that I know are a good baseline.

The problem is that when I train, although EarlyStopping kicks in, it ignores my initial model and picks the best model seen since training started (excluding the initial model). That model is often worse than my initial one.

Is there a way to force it to consider the initial model?
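
For context, here is a minimal sketch of the setup described above; the model, data, and baseline-weights path are hypothetical stand-ins, not the original code:

import numpy as np
import tensorflow as tf

# Dummy data standing in for the real dataset (all names here are illustrative).
x_train, y_train = np.random.rand(1000, 10), np.random.rand(1000, 1)
x_val, y_val = np.random.rand(200, 10), np.random.rand(200, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# Start from weights known to be a good baseline; in practice something like:
# model.load_weights('baseline_weights.h5')   # hypothetical path

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0, patience=100,
    verbose=0, mode='min', restore_best_weights=True)

# Even if every epoch ends up worse than the baseline, EarlyStopping restores
# the best weights seen *during* training, never the baseline itself.
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=200, callbacks=[early_stop], verbose=0)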

Topic: early-stopping, training, keras, tensorflow, neural-network

Category: Data Science


Based on how the EarlyStopping callback is implemented, there doesn't seem to be a way to accomplish this. At the end of each epoch (in your case, specifically the end of the first epoch) it checks whether the monitored value is an improvement over the current best value (see this function), where the current best is stored in self.best. When training starts, this variable is initialized to numpy.Inf or -numpy.Inf, depending on the mode used (see this function). This means that the value at the end of the first epoch always counts as an improvement over the value at the start of training, so the callback can only ever restore weights as far back as the first epoch.
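
Since the stock callback offers no hook for this, one possible workaround is to subclass EarlyStopping and seed that internal state with the initial model's score before training starts. The following is only a sketch: it assumes you monitor val_loss and it relies on the internal attributes self.best and self.best_weights, which are implementation details of Keras and may change between versions.

import numpy as np
import tensorflow as tf


class EarlyStoppingFromInitial(tf.keras.callbacks.EarlyStopping):
    """EarlyStopping that treats the pre-training weights as the score to beat.

    Relies on the internal attributes `self.best` and `self.best_weights`,
    which may change between Keras versions.
    """

    def __init__(self, validation_data, **kwargs):
        super().__init__(**kwargs)
        self._validation_data = validation_data  # (x_val, y_val)

    def on_train_begin(self, logs=None):
        super().on_train_begin(logs)  # initializes self.best to +/-inf
        x_val, y_val = self._validation_data
        # Score the already-initialized (baseline) model on the validation set
        # so every epoch has to improve on it rather than on +/-inf.
        results = self.model.evaluate(x_val, y_val, verbose=0)
        initial_loss = results[0] if isinstance(results, list) else results
        self.best = initial_loss
        self.best_weights = self.model.get_weights()


# Hypothetical usage with dummy data (names are illustrative only).
x_train, y_train = np.random.rand(1000, 10), np.random.rand(1000, 1)
x_val, y_val = np.random.rand(200, 10), np.random.rand(200, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
# model.load_weights('baseline_weights.h5')  # hypothetical baseline path

early_stop = EarlyStoppingFromInitial(
    validation_data=(x_val, y_val),
    monitor='val_loss', min_delta=0, patience=100,
    verbose=0, mode='min', restore_best_weights=True)

model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=200, callbacks=[early_stop], verbose=0)

With this, if no epoch ever beats the baseline's validation loss, on_train_end restores the weights stored at on_train_begin, i.e. your initial model.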
