Validation loss keeps fluctuating about training loss
I am training a Keras model for multi-target regression with a custom loss function, with the goal of getting predictions accurate to below 0.01 with respect to that loss. As the plot of the losses below shows, both the training and validation loss quickly drop below the target value; the training loss seems to converge rather quickly, while the validation loss keeps fluctuating around the training loss value. Even though the loss is below the target threshold, could such fluctuations indicate a problem with the model fit? Or could the validation set simply be too small compared to the training set? The sizes are |training| = 13500 and |validation| = 3400 (roughly an 80/20 split). I am training with mini-batches of size 16.
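For concreteness, here is a minimal sketch of the kind of setup described above. The architecture, the feature/target dimensions, and the custom loss itself are placeholder assumptions (the question does not specify them); only the split sizes and the batch size come from the question.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def custom_loss(y_true, y_pred):
    # Placeholder loss; the actual custom loss in the question is unspecified.
    return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)

# Hypothetical multi-target regression data, matching the quoted split:
# 13500 training samples and 3400 validation samples.
n_features, n_targets = 10, 3
x_train = np.random.rand(13500, n_features).astype("float32")
y_train = np.random.rand(13500, n_targets).astype("float32")
x_val = np.random.rand(3400, n_features).astype("float32")
y_val = np.random.rand(3400, n_targets).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(n_targets),  # one linear output per regression target
])
model.compile(optimizer="adam", loss=custom_loss)

# Batch size 16, as in the question. Note that the validation loss reported
# by fit() is computed once per epoch over the full validation set, while the
# training loss is an average over the epoch's mini-batch losses.
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    batch_size=16, epochs=50)
```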
Topic: convergence, keras, loss-function, deep-learning
Category: Data Science