Composite Input into Seq2Seq LSTM Network

I have a seq2seq problem in which each example consists of multiple input sequences, not a single one as in traditional seq2seq problems. For example, in language translation we usually feed the source sentence into an LSTM encoder, and on the other side the decoder tries to produce the target sentence, which we compare against the reference translation. In that setting the input is essentially a matrix of one-hot encoded vectors, one per token.
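To make the single-input case concrete, here is a minimal sketch of what I have in mind (the vocabulary size, batch shape, and the choice of PyTorch are just illustrative assumptions): the one-hot encoded source sequence goes through one LSTM encoder whose final state would then initialize the decoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes only (assumed, not from a real dataset).
vocab_size = 1000    # source vocabulary size
hidden_size = 256
batch, seq_len = 8, 20

# A batch of source sentences as token ids, then one-hot encoded.
src = torch.randint(0, vocab_size, (batch, seq_len))
one_hot = F.one_hot(src, num_classes=vocab_size).float()   # (batch, seq_len, vocab_size)

# Standard single-input LSTM encoder.
encoder = nn.LSTM(input_size=vocab_size, hidden_size=hidden_size, batch_first=True)
outputs, (h, c) = encoder(one_hot)   # h, c summarize the whole sequence for the decoder
```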

Now, is there a variation of the LSTM autoencoder or of the attention mechanism that takes into account that we have multiple inputs rather than a single one as in the example above? A naive sketch of what I mean follows. I am not sure whether my question is clear; if not, please let me know.
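To clarify what I mean by multiple inputs, here is the only workaround I can think of (the two feature sizes, the assumption that the sequences are aligned in time, and the per-time-step concatenation are all illustrative assumptions, not a known solution): the parallel input sequences are concatenated at every time step and fed into a single encoder. My question is whether there is an established autoencoder or attention variant that handles such composite inputs more directly.

```python
import torch
import torch.nn as nn

# Hypothetical example: two parallel input sequences per sample,
# aligned in time, each with its own feature size (assumed values).
feat_a, feat_b, hidden_size = 50, 30, 128
batch, seq_len = 8, 20

seq_a = torch.randn(batch, seq_len, feat_a)   # first input stream  (batch, time, features)
seq_b = torch.randn(batch, seq_len, feat_b)   # second input stream (batch, time, features)

# Naive workaround: concatenate the feature vectors at every time step
# and feed the combined sequence into one LSTM encoder.
encoder = nn.LSTM(input_size=feat_a + feat_b, hidden_size=hidden_size, batch_first=True)

combined = torch.cat([seq_a, seq_b], dim=-1)   # (batch, seq_len, feat_a + feat_b)
outputs, (h, c) = encoder(combined)            # single summary state for the decoder
```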

Topic: stacked-lstm, sequence-to-sequence, lstm

Category: Data Science
