What are the key differences between an MLP with lagged features and an RNN?

I've been working with MLPs for a while. Whenever I assumed that the past values of a feature might be useful for predicting future values of Y, I would simply create a new column in my data frame holding Feature(t-1). I would then repeat this for further lags t-2, t-3, ..., t-n.
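For concreteness, this is roughly how I build those lag columns (a minimal pandas sketch with made-up column names and data):

```python
import pandas as pd

# Toy data frame; "feature" and "y" are placeholder names.
df = pd.DataFrame({"y": range(10), "feature": range(100, 110)})

n_lags = 3  # create Feature(t-1) ... Feature(t-3)
for k in range(1, n_lags + 1):
    df[f"feature_t-{k}"] = df["feature"].shift(k)

# Drop the leading rows that have no complete lag history.
df = df.dropna().reset_index(drop=True)
print(df.head())
```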

Besides the obvious curse-of-dimensionality problem, I am worried that the MLP doesn't know how to weight these time-lagged features, which now sit in separate, unrelated columns.

So in a nutshell:

  1. Is the above approach wrong?

  2. How does an RNN solve this?

Topic mlp rnn deep-learning time-series machine-learning

Category Data Science


Let's assume you have 3 features and consider 5 time steps for each feature. That means you have 15 values as X inputs.

Machine-learning techniques other than RNNs treat these values as 15 individual input columns.

An RNN instead treats the data as a sequence of 5 arrays, each containing the 3 features for one time step. For a single record, we feed these 5 arrays into the RNN cell one time step at a time. When the cell processes the second (or any subsequent) time step, it also receives the important information it has carried forward from the previous time steps through its hidden state.
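Here is a minimal sketch of the shape difference (using Keras with made-up toy data; layer sizes and the number of records are arbitrary):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 100 records, 5 time steps, 3 features per step (values are random).
n_samples, n_steps, n_features = 100, 5, 3
X_seq = np.random.rand(n_samples, n_steps, n_features)   # RNN input: (100, 5, 3)
X_flat = X_seq.reshape(n_samples, n_steps * n_features)  # MLP input: (100, 15)
y = np.random.rand(n_samples, 1)

# MLP: the 15 lagged values are just 15 independent input columns.
mlp = keras.Sequential([
    keras.Input(shape=(n_steps * n_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),
])

# RNN: the same record is fed one time step (3 features) at a time,
# and the hidden state carries information from earlier steps forward.
rnn = keras.Sequential([
    keras.Input(shape=(n_steps, n_features)),
    layers.SimpleRNN(32),
    layers.Dense(1),
])

mlp.compile(optimizer="adam", loss="mse")
rnn.compile(optimizer="adam", loss="mse")
mlp.fit(X_flat, y, epochs=2, verbose=0)
rnn.fit(X_seq, y, epochs=2, verbose=0)
```

The key point is the input shape: the MLP sees a flat vector of 15 numbers with no notion of order, while the RNN sees 5 ordered steps of 3 features and shares the same weights across every step.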
