EarlyStopping based on the loss
When training my CNN model, the prediction results depend on the random initialization of the weights. In other words, with the same training and test data I get different results every time I run the code. By tracking the loss, I can tell whether the result will be acceptable or not. Based on this, I want to know if there is a way to stop training when the loss starts above a desired threshold, so that I can re-run it with a new initialization. The min_delta
argument of EarlyStopping
does not handle this case.
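To illustrate the behaviour I am after, here is a rough sketch using a custom Keras callback (the class name and the threshold value 2.0 are just placeholders, not an existing API):

```python
import tensorflow as tf

class LossThresholdStopping(tf.keras.callbacks.Callback):
    """Abort training if the loss after the first epoch exceeds a threshold.

    The idea: a bad random initialization shows up as a high initial loss,
    so the run can be stopped early and re-launched instead of wasting the
    remaining epochs.
    """
    def __init__(self, threshold=2.0):
        super().__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        loss = logs.get("loss")
        # Only check the very first epoch: if the starting loss is already
        # above the threshold, stop this run.
        if epoch == 0 and loss is not None and loss > self.threshold:
            self.model.stop_training = True
            print(f"Stopping: first-epoch loss {loss:.4f} > {self.threshold}")

# model.fit(x_train, y_train, epochs=50,
#           callbacks=[LossThresholdStopping(threshold=2.0)])
```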
Thanks in advance
Topic early-stopping cnn deep-learning
Category Data Science