Model loss is low but predictions are wrong

I have 100 samples with the following data:

[1, 2, 3, 4] = [4, 8]
[5, 6, 7, 8] = [12, 48]
[9, 10, 11, 12] = [20, 120]
...
[397, 398, 399, 400] = [796, 159200]

The data on the left of the = is the model input, and the output is two values: (0th element + 2nd element, 1st element * 3rd element).

For example, given [1, 2, 3, 4]:

1 + 3 = 4 and 2 * 4 = 8,

so the output for [1, 2, 3, 4] is [4, 8].
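
For reference, the whole dataset can be built with a few lines of NumPy (a sketch; the names x and y are mine):

import numpy as np

# 100 samples: [1, 2, 3, 4], [5, 6, 7, 8], ..., [397, 398, 399, 400]
x = np.arange(1, 401).reshape(100, 4)
# targets: (0th element + 2nd element, 1st element * 3rd element)
y = np.column_stack((x[:, 0] + x[:, 2], x[:, 1] * x[:, 3]))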

My model is as follows:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential()
# each sample is a sequence of 4 timesteps with 1 feature
model.add(LSTM(10, activation='relu', return_sequences=False, input_shape=(4, 1)))
model.add(Dropout(0.2))
# model.add(LSTM(100, activation='linear', return_sequences=True))
# model.add(LSTM(25, activation='linear', return_sequences=False))

model.add(Dense(2))  # two outputs: the sum and the product
model.summary()
model.compile(optimizer='adam', loss='mse')
model.fit(x_scaled, y_scaled, epochs=100)
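
For completeness, x_scaled and y_scaled come from a min-max scaling step along these lines (a sketch; I'm assuming sklearn's MinMaxScaler here, fitted on the training data):

from sklearn.preprocessing import MinMaxScaler

x_scaler = MinMaxScaler()
y_scaler = MinMaxScaler()
# the LSTM expects input shaped (samples, timesteps, features) = (100, 4, 1)
x_scaled = x_scaler.fit_transform(x).reshape(-1, 4, 1)
y_scaled = y_scaler.fit_transform(y)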

After 100 epochs I get a loss of 4.8234e-04, but when I test on [41, 42, 43, 44] the model outputs [47202.832, 7965998.] instead of the expected [84, 1848].
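
The test step looks roughly like this (a sketch; x_scaler and y_scaler are the scalers from above, and the exact calls in my code may differ slightly):

test_scaled = x_scaler.transform([[41, 42, 43, 44]]).reshape(1, 4, 1)
pred = y_scaler.inverse_transform(model.predict(test_scaled))
print(pred)  # prints [47202.832 7965998.] instead of the expected [84, 1848]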

I tried adding the dropout layer, but the predictions are still wrong. How can I fix this?

Topic: lstm, keras, dropout, python

Category: Data Science
