Validation loss and validation accuracy stay the same in NN model

I am trying to train a Keras NN regression model for music emotion prediction from audio features. (I am a beginner with NNs and I am doing this as a study project.) The model takes 193 features for training/prediction and should predict valence and arousal values.

I have prepared a NN model with 5 layers:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='elu', input_dim=193))
model.add(Dense(200, activation='elu'))
model.add(Dense(200, activation='elu'))
model.add(Dense(100, activation='elu'))
model.add(Dense(2, activation='elu'))

And these are my loss and optimizer settings:

model.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['accuracy'])

When I train this model, I get this graph for loss and validation:

So the model trains and reaches an accuracy of 0.9 on the training data, but on the validation data the accuracy won't improve; it stays at ~0.5.

I don't know how to interpret this graph. I don't think this is overfitting, because the validation accuracy doesn't fall; it stays the same. How can I try to fix this?

Update: I tried adding dropout and regularization, and it worked in the sense that I can now clearly see that I have an over-fitting problem. But now I am stuck again: I cannot get my model to decrease the validation loss. It always plateaus at about 0.3 validation loss. I tried changing my model architecture, data preprocessing, and optimizer function, and nothing helped.

Topic validation keras regression accuracy neural-network

Category Data Science


For a regression model, you should not apply an activation function to the output layer: a Dense layer produces a linear output by default when no activation is given. Your NN should look like this:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='elu', input_dim=193))
model.add(Dense(200, activation='elu'))
model.add(Dense(200, activation='elu'))
model.add(Dense(100, activation='elu'))
model.add(Dense(2))  # no activation: linear output for regression

model.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['mse'])

You say it's a regression task, predicting valence and arousal values, yet you use accuracy as a performance metric. That does not make much sense, so your accuracy graph doesn't really tell you much. MSE is a valid performance metric for regression tasks, so your loss graph is more descriptive of what is going on. The loss graph clearly displays the characteristic shape of over-fitting, so I would recommend adding regularization to your model.

This can be done, for example, by incorporating dropout and L1/L2 regularization.
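As a sketch, here is one way to add both to the architecture from the question. The layer sizes come from the question; the dropout rate (0.3) and L2 factor (1e-4) are assumed starting values that you would tune on validation loss:

```python
# Regularized version of the model: L2 weight penalties on the hidden
# layers plus dropout between them. Rates/factors are assumptions to tune.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.regularizers import l2

model = Sequential()
model.add(Dense(100, activation='elu', input_dim=193, kernel_regularizer=l2(1e-4)))
model.add(Dropout(0.3))
model.add(Dense(200, activation='elu', kernel_regularizer=l2(1e-4)))
model.add(Dropout(0.3))
model.add(Dense(200, activation='elu', kernel_regularizer=l2(1e-4)))
model.add(Dropout(0.3))
model.add(Dense(100, activation='elu', kernel_regularizer=l2(1e-4)))
model.add(Dense(2))  # linear output for the two regression targets

model.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['mse'])
```

Dropout layers are only active during training, so they reduce co-adaptation of units without affecting predictions at inference time.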
