Does `ReduceLROnPlateau()` have a way to know the metric of the previous epoch when training has to be restarted at, say, epoch 10 using the epoch-9 .h5 model?

I use a shared GPU cluster for my NN training. There is a cap of 8 hours per training run. After that I have to restart training from the saved model of the epoch it stopped at. I am using Keras' `ReduceLROnPlateau()` to change the learning rate. My question is whether `ReduceLROnPlateau()` has a way to know the metric of the last epoch before training stopped, or does the patience counter reset when I restart training? Is there a way to keep patience from resetting on each restart?
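For anyone hitting the same issue: the callback's plateau-tracking state (`best`, `wait`, `cooldown_counter`) lives only in the Python callback object, not in the saved .h5 file, so by default it is lost on restart (the reduced learning rate itself is part of the optimizer state and does survive if you save/load the full model). On top of that, `ReduceLROnPlateau.on_train_begin()` resets this state at the start of every `fit()` call, so setting the attributes before resuming is not enough. A minimal sketch, assuming TensorFlow/Keras and that you log the best metric and wait counter yourself (the subclass name and `initial_best`/`initial_wait` parameters are hypothetical, not part of the Keras API):

```python
import tensorflow as tf


class ResumableReduceLROnPlateau(tf.keras.callbacks.ReduceLROnPlateau):
    """Hypothetical subclass that restores plateau-tracking state after the
    built-in reset that runs at the start of every fit() call."""

    def __init__(self, initial_best=None, initial_wait=0, **kwargs):
        super().__init__(**kwargs)
        self._initial_best = initial_best  # best monitored value from the previous run
        self._initial_wait = initial_wait  # epochs already waited without improvement

    def on_train_begin(self, logs=None):
        # The parent implementation resets best/wait/cooldown here,
        # so we re-apply the carried-over state afterwards.
        super().on_train_begin(logs)
        if self._initial_best is not None:
            self.best = self._initial_best
            self.wait = self._initial_wait


# Example resume: values would come from your own log of the previous run.
reduce_lr = ResumableReduceLROnPlateau(
    initial_best=0.4321,   # hypothetical best val_loss before the job was killed
    initial_wait=3,        # hypothetical epochs already spent waiting
    monitor="val_loss", factor=0.5, patience=5, min_lr=1e-6)

# model = tf.keras.models.load_model("epoch_09.h5")  # optimizer LR is restored too
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           initial_epoch=9, epochs=100, callbacks=[reduce_lr])
```

You would still need to persist `best` and `wait` somewhere yourself (e.g. a small JSON file written by a custom callback each epoch), since Keras does not checkpoint callback state in the .h5 file.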

Thank you.

Topic learning-rate keras neural-network machine-learning

Category Data Science
