Training loss decreasing while validation loss is not decreasing

I am wondering why the validation loss of this regression problem is not decreasing while the training loss is. I have tried several methods, such as making the model simpler, adding early stopping, trying various learning rates, and adding regularizers, but none of them has worked. Any suggestions would be appreciated. Here is my code and my outputs:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

optimizer = keras.optimizers.Adam(learning_rate=1e-3)
model = Sequential()
model.add(LSTM(units=50, activation='relu',
               activity_regularizer=tf.keras.regularizers.l2(1e-2),
               return_sequences=True,
               input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(Dropout(0.2))
model.add(LSTM(units=50, activation='relu',
               activity_regularizer=tf.keras.regularizers.l2(1e-2),
               return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(y_train.shape[1]))
model.compile(optimizer=optimizer, loss='mae')
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
history = model.fit(x_train, y_train, epochs=10, batch_size=16,
                    validation_split=0.3, verbose=1)

Topic: machine-learning-model, validation, overfitting, training, keras

Category: Data Science


It looks like the model overfits very rapidly (within just a few epochs). I would start by combining all the approaches you mentioned: making the model simpler, adding early stopping, trying various learning rates, and adding regularization. If that does not work, make the model even simpler (e.g. remove one of the LSTM layers), lower the learning rate further, and add stronger regularization; see the sketch below.
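As a rough illustration (not a tuned configuration; the layer size, dropout rate, and learning rate below are assumptions for the sketch), a simplified variant could use a single, smaller LSTM layer and a lower learning rate:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

# Sketch of a deliberately smaller model: one LSTM layer instead of two,
# fewer units, and a lower learning rate. All values are illustrative.
simple_model = Sequential([
    LSTM(units=16, input_shape=(x_train.shape[1], x_train.shape[2])),  # default tanh activation
    Dropout(0.2),
    Dense(y_train.shape[1]),
])
simple_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                     loss='mae')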

On a separate note, the EarlyStopping callback is defined but, as far as I can see, never used: it is not passed to model.fit, so it has no effect. It also monitors 'loss' (the training loss), whereas stopping to prevent overfitting should monitor 'val_loss'.
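For reference, here is a sketch of how the callback could be wired in, monitoring the validation loss so that stopping is driven by validation performance:

# Pass the callback to fit() via the callbacks argument and monitor the
# validation loss; restore_best_weights rolls back to the best epoch.
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3,
                                            restore_best_weights=True)
history = model.fit(x_train, y_train, epochs=10, batch_size=16,
                    validation_split=0.3, verbose=1, callbacks=[callback])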

Also, it is often a good idea to go back and run some sanity checks on the input data.
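For example (assuming x_train and y_train are float NumPy arrays), quick checks for missing values and value ranges can rule out common data issues:

import numpy as np

# Basic sanity checks before blaming the model itself.
assert not np.isnan(x_train).any(), "NaNs in features"
assert not np.isnan(y_train).any(), "NaNs in targets"
# LSTMs generally train better on scaled inputs (e.g. roughly [-1, 1] or [0, 1]).
print("feature range:", x_train.min(), x_train.max())
print("target range:", y_train.min(), y_train.max())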
