When to use GRU over LSTM?

The key difference between a GRU and an LSTM is that a GRU has two gates (reset and update gates) whereas an LSTM has three gates (namely input, output and forget gates).

Why would we use a GRU when the LSTM clearly gives us more control over the network (since it has three gates)? In which scenarios is a GRU preferred over an LSTM?

Topic: gru, lstm, deep-learning, neural-network

Category: Data Science


Full GRU Unit

$ \tilde{c}_t = \tanh(W_c [G_r * c_{t-1}, x_t ] + b_c) $

$ G_u = \sigma(W_u [ c_{t-1}, x_t ] + b_u) $

$ G_r = \sigma(W_r [ c_{t-1}, x_t ] + b_r) $

$ c_t = G_u * \tilde{c}_t + (1 - G_u) * c_{t-1} $

$ a_t = c_t $
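The GRU equations above can be sketched as a single NumPy time step. This is an illustrative implementation, not library code; the parameter names mirror the symbols in the equations, and the weight/bias shapes are assumptions for the concatenated-input formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(c_prev, x_t, p):
    """One GRU time step in the notation above.

    c_prev: previous hidden state, shape (h,)
    x_t:    current input, shape (n,)
    p:      dict with weights W_c, W_u, W_r of shape (h, h + n)
            and biases b_c, b_u, b_r of shape (h,)
    """
    concat = np.concatenate([c_prev, x_t])
    G_u = sigmoid(p["W_u"] @ concat + p["b_u"])  # update gate
    G_r = sigmoid(p["W_r"] @ concat + p["b_r"])  # reset (relevance) gate
    # Candidate state is computed from the reset-gated previous state
    c_tilde = np.tanh(p["W_c"] @ np.concatenate([G_r * c_prev, x_t]) + p["b_c"])
    # New state interpolates between the candidate and the old state
    c_t = G_u * c_tilde + (1.0 - G_u) * c_prev
    return c_t  # a_t = c_t: the full state is exposed as the output
```

Note how a single gate $G_u$ controls both how much new information enters and how much old state is kept, via $G_u$ and $1 - G_u$.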

LSTM Unit

$ \tilde{c}_t = \tanh(W_c [ a_{t-1}, x_t ] + b_c) $

$ G_u = \sigma(W_u [ a_{t-1}, x_t ] + b_u) $

$ G_f = \sigma(W_f [ a_{t-1}, x_t ] + b_f) $

$ G_o = \sigma(W_o [ a_{t-1}, x_t ] + b_o) $

$ c_t = G_u * \tilde{c}_t + G_f * c_{t-1} $

$ a_t = G_o * \tanh(c_t) $
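For comparison, the LSTM equations can be sketched the same way. Again an illustrative sketch, not library code, with assumed shapes matching the concatenated-input formulation above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(a_prev, c_prev, x_t, p):
    """One LSTM time step in the notation above.

    a_prev: previous output, shape (h,)
    c_prev: previous cell state, shape (h,)
    x_t:    current input, shape (n,)
    p:      dict with weights W_c, W_u, W_f, W_o of shape (h, h + n)
            and biases b_c, b_u, b_f, b_o of shape (h,)
    """
    concat = np.concatenate([a_prev, x_t])
    c_tilde = np.tanh(p["W_c"] @ concat + p["b_c"])  # candidate cell state
    G_u = sigmoid(p["W_u"] @ concat + p["b_u"])      # update (input) gate
    G_f = sigmoid(p["W_f"] @ concat + p["b_f"])      # forget gate
    G_o = sigmoid(p["W_o"] @ concat + p["b_o"])      # output gate
    c_t = G_u * c_tilde + G_f * c_prev               # independent update and forget
    a_t = G_o * np.tanh(c_t)                         # output is gated separately
    return a_t, c_t
```

Unlike the GRU, the update and forget gates are independent, and a separate output gate decides how much of the cell state is exposed.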

As can be seen from the equations, the LSTM has a separate update gate and forget gate, while the GRU couples them through $G_u$ and $1 - G_u$. This makes the LSTM more expressive but also more complex. There is no simple rule for deciding which to use in a particular case; you generally have to try both and compare their performance empirically. However, because the GRU is simpler than the LSTM, it typically trains faster and is more computationally efficient.
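To make the efficiency point concrete, here is a rough parameter count for both units under the concatenated-input formulation used above (the function name and the example sizes are illustrative):

```python
def rnn_param_count(hidden, inputs, n_matrices):
    """Parameters for a recurrent unit built from n_matrices weight
    matrices, each mapping the concatenated [state, input] vector
    (hidden + inputs values) to a hidden-sized output, plus a bias
    vector per matrix."""
    return n_matrices * (hidden * (hidden + inputs) + hidden)

h, n = 256, 128
lstm_params = rnn_param_count(h, n, 4)  # W_c, W_u, W_f, W_o
gru_params = rnn_param_count(h, n, 3)   # W_c, W_u, W_r
print(lstm_params, gru_params)  # 394240 295680 -> the GRU is 25% smaller
```

The GRU needs three weight matrices where the LSTM needs four, so for the same hidden size it has 3/4 the parameters, which is where the training-speed advantage comes from.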

Credits: Andrew Ng


GRU is related to LSTM in that both use gating to control the flow of information and to mitigate the vanishing gradient problem. Here are some key points on GRU vs LSTM:

  • The GRU controls the flow of information like the LSTM unit, but without a separate memory cell. It simply exposes the full hidden state at each step, without an output gate to control it.
  • GRU is relatively new and, from my perspective, its performance is on par with LSTM while being computationally more efficient (a less complex structure, as pointed out). So we are seeing it used more and more.

For a detailed description, you can explore this research paper on arXiv.org; it explains all of this brilliantly.


Hope it helps!


One clarification here: GRU stands for Gated Recurrent Unit, not "Gradient Recurrent Unit". The difference between a GRU and an LSTM lies in the gating structure, not in the training algorithm; both are trained with gradient descent, usually via a momentum-based optimizer such as Adam. Reading up on Adam-style optimizers is worthwhile, but it is orthogonal to the GRU-vs-LSTM choice.

GRU is also not an outdated concept; it remains a standard recurrent building block in TF and other frameworks.


GRU can be preferable to LSTM because it is easy to modify and doesn't need a separate memory cell; it is therefore faster to train than LSTM while often giving comparable performance.


To complement the already great answers above:

  • From my experience, GRUs train faster and perform better than LSTMs with less training data if you are doing language modeling (not sure about other tasks).

  • GRUs are simpler and thus easier to modify, for example adding new gates in case of additional input to the network. It's just less code in general.

  • LSTMs should in theory remember longer sequences than GRUs and outperform them in tasks requiring modeling long-distance relations.

Some additional papers analyze GRUs and LSTMs in more depth.


The answer really depends on the dataset and the use case. It's hard to tell definitively which is better.

  • The GRU exposes its complete memory, unlike the LSTM, so applications where that is an advantage may benefit from it. Also, on why to use a GRU: it is computationally cheaper than an LSTM since it has only 2 gates, and if its performance is on par with the LSTM, why not?
  • This paper demonstrates excellently, with graphs, the superiority of gated networks over a simple RNN, but clearly states that it cannot conclude which of the two is better. So, if you are unsure which to use as your model, I'd suggest training both and picking the better one.
