LSTM for multiple time series regression with extremely large ranges
I have the following question for those who have encountered the same dilemma as me:
My goal is to develop an LSTM RNN for multi-step prediction over multiple time series representing the daily sales of different products. The problem I face is that the series have very different ranges: some stay below 100 units per time step, while others exceed 10,000. Since I want a single model that learns all the different time series, I built one common array with all the predictor sequences and another with the target sequences.
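To make the setup concrete, here is a minimal sketch of how I stack the windows, assuming placeholder data; `make_windows`, `all_series`, and the window lengths are just illustrative names, not my actual pipeline:

```python
import numpy as np

def make_windows(series, n_in, n_out):
    # Slide a window over one 1-D series to build
    # (input sequence, multi-step target) pairs.
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X), np.array(y)

# all_series: placeholder list of 1-D arrays, one per product,
# with deliberately different ranges (~100 vs ~10,000 units).
all_series = [np.random.rand(365) * 100, np.random.rand(365) * 10000]

X_parts, y_parts = zip(*(make_windows(s, n_in=28, n_out=7) for s in all_series))
X = np.concatenate(X_parts)[..., np.newaxis]  # (samples, timesteps, 1) for Keras
y = np.concatenate(y_parts)                   # (samples, n_out)
```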
My latest approach was to scale only the predictor sequences, all of them together, into the range (0, 1), hoping that the target sequences would stay proportional to the predictors. What I mean is: higher scaled input predictors should lead to higher predictions. Initially I scaled the input sequences and the target sequences individually, per series, before stacking them into the input array, but I was not satisfied with the prediction accuracy. After switching to the approach described above (scaling all predictor sequences together and leaving the target sequences unscaled), my problem got worse: the network no longer converges and the loss (MSE) does not decrease, although I am using a fairly large stacked LSTM network with 5 hidden layers and 10,000,000 parameters.
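In case it helps, a minimal sketch of the two scaling variants I describe, assuming scikit-learn's `MinMaxScaler` and the placeholder `X`, `y`, and `all_series` from the snippet above:

```python
from sklearn.preprocessing import MinMaxScaler

# Latest approach: one scaler fit jointly on ALL predictor windows,
# so every series is mapped into (0, 1) on a shared global scale;
# the targets y are deliberately left on their original scale.
scaler = MinMaxScaler(feature_range=(0, 1))
n_samples, n_steps, n_feat = X.shape
X_scaled = (scaler.fit_transform(X.reshape(-1, 1))
                  .reshape(n_samples, n_steps, n_feat))
# y stays unscaled, spanning roughly 0 .. 10,000+

# Earlier approach: one scaler per series, fit before stacking,
# applied to both predictors and targets of that series.
per_series_scalers = [MinMaxScaler().fit(s.reshape(-1, 1)) for s in all_series]
```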
I would like some advice on how best to handle regression for multiple time series that take values in very different ranges.
Thank you
Topic stacked-lstm keras time-series python
Category Data Science