Neural network is non-deterministic on validation

We are trying to solve a regression problem using transfer learning: we take ResNet50 and add a linear activation layer at the end of it. Each image input consists of 3 channels of synthetic wavelet images (not RGB). Since ResNet uses ReLU as its activation function, and the wavelet transform produces negative values, we have shifted all the data in our images (3-D matrices) to be positive.
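The shift described above can be sketched as follows. This is a minimal illustration, assuming a per-image global shift by the minimum value; the image shape and the random "wavelet" data are placeholders, not our actual preprocessing code:

```python
import numpy as np

# Hypothetical stand-in for one 3-channel synthetic wavelet image.
# Wavelet coefficients are signed, so negative values are expected.
rng = np.random.default_rng(0)
wavelet_img = rng.normal(0.0, 1.0, size=(224, 224, 3))

# Shift the whole 3-D matrix so every value is non-negative,
# so it plays well with ReLU-based networks like ResNet50.
shifted = wavelet_img - wavelet_img.min()

assert shifted.min() >= 0.0        # all values now non-negative
assert shifted.shape == wavelet_img.shape
```

Whether the shift is computed per image or once over the whole training set is a design choice; a per-image shift (as above) discards the images' absolute scale.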

Our label data is between -5 and 5.

We discovered that when we run our training process several times and then predict on the validation data set, we get a huge difference in the results. Here are some examples of training the model on the exact same training data set with exactly the same hyperparameters:

Train 1 validation set prediction results:
max Predictions Value    : [2.605783]
min Predictions Value    : [0.71650916]
avg Predictions Value    : 1.938421
median Predictions Value : 1.9630035

Train 2 validation set prediction results:
max Predictions Value    : [3.7345936]
min Predictions Value    : [0.438244]
avg Predictions Value    : 1.1411991
median Predictions Value : 1.0634146

Train 3 validation set prediction results:
max Predictions Value    : [1.6383451]
min Predictions Value    : [0.24169573]
avg Predictions Value    : 0.8020503
median Predictions Value : 0.8167548

Train 4 validation set prediction results:
max Predictions Value    : [2.3159726]
min Predictions Value    : [0.6428349]
avg Predictions Value    : 1.0716639
median Predictions Value : 1.0022478
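The summary statistics above are straightforward to reproduce. A minimal sketch, using a hypothetical array of validation predictions (the values here are illustrative, not our actual outputs):

```python
import numpy as np

# Hypothetical validation-set predictions from one training run.
preds = np.array([2.605783, 0.71650916, 1.938421, 1.9630035, 2.1])

print("max Predictions Value    :", preds.max())
print("min Predictions Value    :", preds.min())
print("avg Predictions Value    :", preds.mean())
print("median Predictions Value :", np.median(preds))
```

Note that the mean and median being close (as in each run above) only says the per-run distribution is roughly symmetric; it says nothing about agreement between runs.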

We are using the same network, hyperparameters and data. In every model the train and validation losses are very similar (within ±5% of each other).

  1. Why is the range of prediction values so different between runs?
  2. Why don't we get any negative predictions (the train dataset is balanced: 50% positive and 50% negative labels)?
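For context on question 1: one known source of run-to-run variance is unseeded randomness (weight initialization, data shuffling, dropout). A minimal sketch of fixing the Python and NumPy seeds; the framework-specific calls mentioned in the comment (`tf.random.set_seed`, `torch.manual_seed`) are additional steps we would also need, not shown here:

```python
import random
import numpy as np

def set_seeds(seed):
    # Fix Python's and NumPy's RNGs. A real training run would also
    # need the framework seed (tf.random.set_seed / torch.manual_seed)
    # and deterministic GPU kernels for fully repeatable results.
    random.seed(seed)
    np.random.seed(seed)

set_seeds(42)
a = np.random.normal(size=3)   # e.g. a random weight initialization
set_seeds(42)
b = np.random.normal(size=3)   # same seed -> identical "run"

assert np.allclose(a, b)
```

Even with all seeds fixed, some GPU operations are non-deterministic by default, so exact repeatability may require enabling the framework's deterministic-ops mode.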

Topic: cnn, regression

Category: Data Science
