Batch processing with variable length sequences

I have many time series with different lengths, and I would like to know the best practices for fitting them to a Bidirectional LSTM model. The problem is sequence-to-sequence binary classification, so for every time step I want to predict a binary class.

Currently, I create a tensor for each data frame with shape (1, None, #Features) and fit each tensor to the model separately.

Would it be better to combine all the data frames into one tensor of shape (#Time Series, None, #Features) and fit them all at once? Does this even make a difference?

Or could it be better to go with a sliding-window approach and split each time series into smaller windows?

I can't specify a max length for the time series, so I think I cannot use pad_sequences from Keras.

Topic: keras, tensorflow, classification, time-series

Category: Data Science


Best practice is to make each time series the same length when training a Bidirectional LSTM, so the sequences can be batched into a single tensor.

You say you can't specify the maximum length, so another approach is to pick a fixed length yourself. If a series is too short, pad it; if it is too long, trim it.
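As a sketch of that pad-or-trim step, here is a minimal NumPy helper (the function name `pad_or_trim` is my own, not a library API). It forces every series to a chosen fixed length and also returns a boolean mask of the real time steps; in Keras you could get a similar effect with `tf.keras.preprocessing.sequence.pad_sequences(..., maxlen=...)` plus a `Masking` layer in front of the Bidirectional LSTM so padded steps are ignored.

```python
import numpy as np

def pad_or_trim(sequences, maxlen, pad_value=0.0):
    """Force every (timesteps, features) array to exactly `maxlen` steps.

    Short sequences are zero-padded at the end, long ones are trimmed.
    Returns the stacked batch of shape (n, maxlen, features) and a
    boolean mask marking the real (non-padded) time steps.
    """
    n = len(sequences)
    n_features = sequences[0].shape[1]
    batch = np.full((n, maxlen, n_features), pad_value, dtype=np.float32)
    mask = np.zeros((n, maxlen), dtype=bool)
    for i, seq in enumerate(sequences):
        length = min(len(seq), maxlen)  # trim if longer than maxlen
        batch[i, :length] = seq[:length]
        mask[i, :length] = True
    return batch, mask

# Two toy series with 2 features: one shorter, one longer than maxlen=5.
series = [np.ones((3, 2)), np.ones((7, 2))]
batch, mask = pad_or_trim(series, maxlen=5)
print(batch.shape)        # (2, 5, 2)
print(mask.sum(axis=1))   # [3 5]
```

The mask is what lets a per-time-step loss skip the padded positions, so padding does not distort training of the sequence-to-sequence classifier.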
