What is going on with this kind of validation loss graph?

I am using stock prices and a set of technical-indicator values to train a TensorFlow model to predict whether to buy, sell, or hold. I think I'm going about this right: first I run a learning-rate scheduler that increases the learning rate each epoch, and from that run's graph I pick the learning rate at the point where the training loss and validation loss first make their steepest slope downward. I then use that rate for the real training run. With that rate both losses decrease smoothly at first, but eventually the training loss keeps going down while the validation loss turns around and starts climbing again. I don't know what that means, whether it's bad or good, haha. I've been at this all day.
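For reference, here is roughly what I mean by the scheduler step, as a minimal sketch. The function names (`range_test_lr`, `steepest_descent_lr`) and the growth factor are my own placeholders, not from any library; in Keras the schedule function would be passed to `tf.keras.callbacks.LearningRateScheduler`.

```python
def range_test_lr(epoch, base_lr=1e-6, growth=1.3):
    """Exponentially increase the learning rate each epoch for the range test."""
    return base_lr * growth ** epoch

def steepest_descent_lr(lrs, losses):
    """Pick the learning rate where the loss drops fastest (most negative slope)."""
    slopes = [losses[i + 1] - losses[i] for i in range(len(losses) - 1)]
    best = min(range(len(slopes)), key=lambda i: slopes[i])
    return lrs[best]

# Example: losses recorded at each learning rate during the range test.
lrs = [range_test_lr(e) for e in range(5)]
losses = [1.0, 0.9, 0.5, 0.45, 0.6]
chosen = steepest_descent_lr(lrs, losses)
```

In an actual Keras run I would do something like `model.fit(..., callbacks=[tf.keras.callbacks.LearningRateScheduler(range_test_lr)])` and read the losses out of the returned `History` object.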

Tags: learning-rate, convergence, tensorflow, loss-function

Category: Data Science
