Increasingly negative R2 coinciding with decreasing MSE during hyperparameter optimisation

I have a densely connected NN and I'm running hyperparameter optimisation for a multi-target regression output. During the tuning runs, KerasTuner monitors val_loss at each epoch.

During training I see absurdly large negative R2 values (i.e. a terribly fitted model) that climb towards 0 (and hopefully on towards 1), mostly while MSE drops too. Occasionally, though, val_r2 jumps to an even larger negative value while every other metric improves, including the training R2. What can make the validation R2 dramatically worse while every other train/val metric decreases or holds steady?

Epoch 1/50 loss: 0.0072 - r2: -4645.4219 - mae: 0.0220 - val_loss: 0.0067 - val_r2: -4447.6040

Epoch 7/50 loss: 0.0071 - r2: -2647.1272 - mae: 0.0210 - val_loss: 0.0067 - val_r2: -2932.4895

Epoch 8/50 loss: 0.0071 - r2: -2530.7327 - mae: 0.0210 - val_loss: 0.0067 - val_r2: -23403.6367

Epoch 9/50 loss: 0.0071 - r2: -2692.3059 - mae: 0.0210 - val_loss: 0.0067 - val_r2: -11448.6631

Epoch 10/50 loss: 0.0071 - r2: -2530.1213 - mae: 0.0210 - val_loss: 0.0067 - val_r2: -4318.5527

Epoch 11/50 loss: 0.0071 - r2: -2763.3567 - mae: 0.0210 - val_loss: 0.0067 - val_r2: -3317.2510
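One plausible explanation (a sketch, not a diagnosis of this exact model): R2 is computed as 1 - SS_res/SS_tot, so its denominator is the variance of the targets in the evaluated batch or split. If the validation targets have very small variance, the denominator is near zero and tiny absolute changes in MSE translate into enormous swings in R2, even though MSE and MAE barely move, which matches the logs above. The synthetic data and offsets below are invented for illustration only:

```python
import numpy as np

# Nearly constant validation targets -> tiny SS_tot denominator.
# (Hypothetical data; the scale 1e-3 is an assumption for illustration.)
rng = np.random.default_rng(0)
y_val = 0.5 + 1e-3 * rng.standard_normal(1000)

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

# Two predictions whose MSEs differ only slightly...
pred_a = y_val + 0.08   # MSE = 0.0064
pred_b = y_val + 0.09   # MSE = 0.0081

print(mse(y_val, pred_a), r2_score(y_val, pred_a))
print(mse(y_val, pred_b), r2_score(y_val, pred_b))
# ...yet both R2 values are hugely negative, and the small MSE
# difference is amplified into a change of thousands in R2,
# because SS_tot is close to zero.
```

The same amplification happens per batch: if R2 is accumulated batch-wise and one validation batch happens to contain nearly identical target values, that batch alone can drag the epoch's val_r2 to a huge negative number without any real change in the model.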

Topic: mse, r-squared, loss-function, neural-network, machine-learning

Category: Data Science
