Validation loss and validation accuracy stay the same in NN model
I am trying to train a Keras neural-network regression model for music emotion prediction from audio features. (I am a beginner with neural networks and am doing this as a study project.) I have 193 features for training/prediction, and the model should predict valence and arousal values.
I have prepared a NN model with 5 layers:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='elu', input_dim=193))
model.add(Dense(200, activation='elu'))
model.add(Dense(200, activation='elu'))
model.add(Dense(100, activation='elu'))
model.add(Dense(2, activation='elu'))
And this is how I compile it (loss, optimizer, and metrics):
model.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['accuracy'])
When I train this model, I get this graph of the loss and validation metrics:
So the model trains and reaches an accuracy of 0.9 on the training data, but on the validation data the accuracy stays at ~0.5.
I don't know how to interpret this graph. I don't think this is overfitting, because the validation accuracy doesn't fall; it just stays the same. How can I try to fix this?
Update:
I tried adding dropout and regularization, and it worked in the sense that I can now clearly see I have an overfitting problem. But now I am stuck again.
I cannot get my model to decrease the validation loss. It always plateaus at about 0.3.
I tried changing my model architecture, data preprocessing, and optimizer, and nothing helped.
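For reference, the dropout-and-regularization change looked roughly like this (a sketch using tf.keras; the dropout rate 0.3 and L2 factor 1e-4 are illustrative values, not tuned ones):

```python
# Sketch of adding Dropout and L2 weight regularization to the model above.
# The rate (0.3) and L2 factor (1e-4) are illustrative, not tuned values.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.regularizers import l2

model = Sequential()
model.add(Dense(100, activation='elu', input_dim=193, kernel_regularizer=l2(1e-4)))
model.add(Dropout(0.3))
model.add(Dense(200, activation='elu', kernel_regularizer=l2(1e-4)))
model.add(Dropout(0.3))
model.add(Dense(200, activation='elu', kernel_regularizer=l2(1e-4)))
model.add(Dropout(0.3))
model.add(Dense(100, activation='elu', kernel_regularizer=l2(1e-4)))
model.add(Dense(2, activation='elu'))  # two outputs: valence and arousal
model.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['accuracy'])
```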
Topic validation keras regression accuracy neural-network
Category Data Science