Should the model be defined again before training it on new data?

I want to fit the LSTM model to a new data set on each iteration of a loop, so I have implemented it like this:

#...............................imports.................................
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
from sklearn.preprocessing import MinMaxScaler

# df, n_input, n_features and nse are defined earlier (not shown here)

#...............................define model............................
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_input, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()

for k, v in enumerate(nse.get_fno_lot_sizes()):
    if v not in ('^NSEI', 'NIFTYMIDCAP150.NS', 'NIFTY_FIN_SERVICE.NS', '^NSEBANK'):
        #----------- create training data ---------------------
        train = df[['close']].iloc[:int(len(df) * 0.8)]
        scaler = MinMaxScaler()
        scaler.fit(train)
        scaled_train = scaler.transform(train)

        #----------- wrap the series in a generator -----------
        generator = TimeseriesGenerator(scaled_train, scaled_train,
                                        length=n_input, batch_size=1)

        #----------- fit model ---------------------------------
        model.fit(generator, epochs=10)

Or should the model definition be inside the for loop?
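For clarity, this is a minimal sketch of the alternative I have in mind, with the model re-created and re-compiled on every iteration (same variables and imports as above):

for k, v in enumerate(nse.get_fno_lot_sizes()):
    if v not in ('^NSEI', 'NIFTYMIDCAP150.NS', 'NIFTY_FIN_SERVICE.NS', '^NSEBANK'):
        # fresh, untrained model for every symbol
        model = Sequential()
        model.add(LSTM(100, activation='relu', input_shape=(n_input, n_features)))
        model.add(Dense(1))
        model.compile(optimizer='adam', loss='mse')

        # ...same scaling, generator and fit code as in the loop above...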

I am asking this because I do not see any significant change in the loss when the model is trained on subsequent data sets in the for loop.
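To check this, I capture the History object that fit() returns and print the per-epoch loss on each iteration, roughly like this (inside the same loop as above):

        history = model.fit(generator, epochs=10)
        # history.history['loss'] holds the loss for each of the 10 epochs on this data set
        print(v, history.history['loss'])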

Topic: loss, lstm, training

Category: Data Science
