LSTM with variable time steps

I'm reading this post that describes how to train LSTMs with variable time-step lengths. But does that have repercussions for how I prepare the data? Should I preprocess the time series into varying permutations? E.g., should the input user_a,bad,[(t1,req1), (t2,req2), (t3,req3)] be divided into prefixes like the following (I've added a rough code sketch of what I mean after the list):

user_a,bad,[(t1,req1)]
user_a,bad,[(t1,req1), (t2,req2)]
user_a,bad,[(t1,req1), (t2,req2), (t3,req3)]
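Concretely, this is a minimal sketch of the prefix expansion I have in mind, assuming TensorFlow/Keras with masking over zero-padded steps. The encode_request function, N_FEATURES, and the tiny model are hypothetical placeholders I made up for illustration, not anything from the linked post:

import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences

def expand_to_prefixes(sequence, label):
    # Turn [(t1, req1), (t2, req2), ...] into one training sample per prefix.
    samples, labels = [], []
    for end in range(1, len(sequence) + 1):
        samples.append(sequence[:end])
        labels.append(label)
    return samples, labels

N_FEATURES = 8  # hypothetical size of the per-request feature vector

def encode_request(timestamp, request):
    # Placeholder for real feature extraction from a single request.
    return np.random.rand(N_FEATURES)

raw = [("t1", "req1"), ("t2", "req2"), ("t3", "req3")]
prefixes, labels = expand_to_prefixes(raw, 1)  # 1 = "bad"

encoded = [[encode_request(t, r) for t, r in prefix] for prefix in prefixes]
X = pad_sequences(encoded, padding="pre", dtype="float32", value=0.0)  # pad to the longest prefix
y = np.array(labels)

model = models.Sequential([
    layers.Masking(mask_value=0.0, input_shape=(None, N_FEATURES)),  # skip padded steps
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),  # P(user is bad)
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=1, verbose=0)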

For context, I'm looking to classify bad users based on their requests. All requests from bad users are labelled bad, but presumably more requests give the model more signal to work with. At prediction time, however, the requests arrive one at a time.
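For that prediction setting, one option I'm considering is simply re-scoring the prefix seen so far every time a new request arrives. A minimal sketch, reusing the hypothetical model and encode_request from the snippet above:

history = []  # grows as the same user makes more requests

def score_new_request(timestamp, request):
    history.append(encode_request(timestamp, request))
    x = np.array(history, dtype="float32")[np.newaxis, ...]  # shape (1, steps, N_FEATURES)
    return float(model.predict(x, verbose=0)[0, 0])

print(score_new_request("t4", "req4"))  # running estimate that the user is bad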


Or should the time series be preprocessed in some other way, e.g. into subsequences like:

user_a,bad,[(t1,req1)]
user_a,bad,[(t2,req2), (t3,req3)]
user_a,bad,[(t1,req1), (t3,req3)]
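By which I mean something like sampling windows or subsets instead of strict prefixes. A minimal sketch of the contiguous-window variant (the last example above would additionally drop individual steps); sample_windows is my own placeholder and raw is reused from the first snippet:

import random

def sample_windows(sequence, label, n_samples=3, seed=0):
    # Sample random contiguous windows from one user's request sequence.
    rng = random.Random(seed)
    samples, labels = [], []
    for _ in range(n_samples):
        start = rng.randrange(len(sequence))
        end = rng.randrange(start, len(sequence)) + 1  # guarantees end > start
        samples.append(sequence[start:end])
        labels.append(label)
    return samples, labels

windows, window_labels = sample_windows(raw, 1)  # e.g. [(t2,req2), (t3,req3)]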

Tags: neural, lstm, rnn, time-series

