Training loss decreasing while validation loss is not decreasing
I am wondering why the validation loss in this regression problem is not decreasing. I have tried several things, such as making the model simpler, adding early stopping, using various learning rates, and adding regularizers, but none of them has worked. Any suggestions would be appreciated. Here are my code and my outputs:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

# `learning_rate` replaces the deprecated `lr` argument
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
model = Sequential()
model.add(LSTM(units=50, activation='relu',
               activity_regularizer=tf.keras.regularizers.l2(1e-2),
               return_sequences=True, input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(Dropout(0.2))
model.add(LSTM(units=50, activation='relu',
               activity_regularizer=tf.keras.regularizers.l2(1e-2), return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(y_train.shape[1]))
model.compile(optimizer=optimizer, loss='mae')
# early stopping on the training loss; the callback must be passed to fit() to take effect
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
history = model.fit(x_train, y_train, epochs=10, batch_size=16,
                    validation_split=0.3, callbacks=[callback], verbose=1)
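To make the comparison between the two losses easier to see, here is a minimal sketch of how the curves recorded by fit() could be plotted, assuming matplotlib is available; 'loss' and 'val_loss' are the keys Keras stores in history.history when validation_split is used:

# Sketch: plot training vs. validation loss from the History object
import matplotlib.pyplot as plt

plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('MAE')
plt.legend()
plt.show()

If the training curve keeps falling while the validation curve stays flat or rises, that gap is the overfitting I am asking about.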
Tags: machine-learning-model, validation, overfitting, training, keras
Category: Data Science