Using recurrent neural networks for modeling errors in IMUs

Inertial measurement units (IMUs), usually composed of accelerometers and gyroscopes, are well known to have inherent errors in their data, originating from bias, random-walk noise, temperature dependence, and so on, which together create a highly non-linear error behaviour. Typically, extended Kalman filters are used to estimate and remove these errors for stable measurement of orientations and angular velocities, but even this is not entirely accurate: some higher-order errors are ignored or approximated, and the Markov assumption discards the influence of values older than the immediately previous one when predicting the next. The highest levels of accuracy in IMUs are usually obtained only after rigorous factory calibration (which, in turn, makes the good ones very expensive).

In a scenario like this, how applicable would recurrent neural networks be for modeling these errors? Assume my 'training data' consists of the accelerometer and gyroscope values, which can be fused to obtain a noisy orientation estimate, together with a much more precise orientation estimate coming from another sensor (for example, a very accurate GPS). Would it be possible to replace the functionality of the Kalman filter with an RNN for error estimation and sensor output prediction?

Tags: lstm, estimators, rnn

Category: Data Science


In fact, you don't need an RNN model. Given training data, you can map the inputs (gyroscope, accelerometer, initial angles and frequency) to the output with a standard ANN.
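
As a rough illustration of that idea (not the answerer's exact setup), the sketch below maps a flat vector of IMU readings to orientation angles with a small feedforward network in Keras; the feature layout, layer sizes and the random placeholder arrays are assumptions standing in for real logged data and ground truth.

    # Assumed sketch: a plain feedforward network mapping a flat vector of IMU
    # readings to orientation angles. Feature layout, layer sizes and the random
    # placeholder arrays are illustrative, not a tuned recipe.
    import numpy as np
    import tensorflow as tf

    n_samples = 10_000
    X = np.random.randn(n_samples, 9).astype("float32")   # stand-in for 3 gyro + 3 accel + 3 initial-angle features
    y = np.random.randn(n_samples, 3).astype("float32")   # stand-in for ground-truth roll/pitch/yaw

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(9,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(3),                          # regress the orientation directly
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=10, batch_size=128, validation_split=0.1)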


Yes, you can do sensor fusion, given that you have access to ground truth during training.

You should not try to model the errors with your RNN. Instead, you should directly build a MISO RNN model (multiple inputs -> your sensors; single output -> the estimate of the ground truth).
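
Here is a minimal sketch of such a MISO model, assuming windows of 6-axis IMU samples as input and three orientation angles from the reference sensor as the target; the window length, layer sizes and placeholder arrays are illustrative assumptions, not a recommended configuration.

    # Assumed sketch of a MISO recurrent model: windows of raw 6-axis IMU samples
    # in, one fused orientation estimate (the ground-truth target) out.
    import numpy as np
    import tensorflow as tf

    window, channels = 100, 6      # 100 timesteps of 3-axis gyro + 3-axis accel (assumed)
    n_windows = 5_000
    X = np.random.randn(n_windows, window, channels).astype("float32")  # stand-in for sensor sequences
    y = np.random.randn(n_windows, 3).astype("float32")                 # stand-in for reference orientations

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(window, channels)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(3),  # single output head: the estimate of the ground truth
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=10, batch_size=64, validation_split=0.1)

Training against the ground-truth sensor in this way folds the error correction and the fusion into one learned mapping, rather than estimating the errors as a separate quantity.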


The only way to find out is to try it, but I doubt it. A Kalman filter makes specific assumptions about the "priors", i.e., about the nature of how the signals are related over time. These are based on knowledge of the physics of movement and how it affects the sensor values, combined with some assumptions about the likelihood of different kinds of movements. Assuming the model and assumptions are accurate, this helps provide a more accurate estimate of the orientation.
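
To make that concrete, here is a minimal scalar sketch (deliberately much simpler than the extended Kalman filters actually used for attitude estimation) of how such a prior enters the filter: the state is propagated by a motion model that integrates the gyro rate, then corrected by an occasional absolute measurement. The sample period and noise variances are assumed values for illustration.

    # Minimal scalar Kalman filter sketch (not a full attitude EKF): the state is
    # an angle propagated by integrating the gyro rate (the motion-model prior),
    # then corrected by an occasional absolute measurement. dt, Q and R are assumed.
    dt = 0.01                      # sample period in seconds (assumed)
    Q, R = 1e-4, 1e-2              # assumed process and measurement noise variances

    def kf_step(theta, P, gyro_rate, z=None):
        # Predict: the motion model says the angle advances by gyro_rate * dt
        theta = theta + gyro_rate * dt
        P = P + Q
        # Update: fold in an absolute orientation measurement z when one is available
        if z is not None:
            K = P / (P + R)        # Kalman gain: how much to trust the measurement
            theta = theta + K * (z - theta)
            P = (1.0 - K) * P
        return theta, P

    theta, P = 0.0, 1.0            # initial angle estimate and its variance
    theta, P = kf_step(theta, P, gyro_rate=0.5)           # gyro-only prediction step
    theta, P = kf_step(theta, P, gyro_rate=0.5, z=0.012)  # step with an absolute fix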

In contrast, a recurrent neural network has no knowledge built in and no useful "priors". It has no model of the physics of movement and makes no assumptions. In principle, this makes it more general. But in practice, it means that, if you have a limited amount of data, it is likely to be less accurate.

In general, a strong prior and useful model can be very helpful. In the limit, as you have an unlimited amount of data, you might not need a model. But with a limited amount of data, the model can be helpful. The Kalman filter provides a pretty reasonable model, and when trying to work with data from IMUs, we typically have only a limited amount of data.

The place where RNNs could be more effective is if the assumptions made by the Kalman filter's model do not correspond to reality. Then you could imagine that an RNN might learn the true dynamics, because it doesn't assume a particular model, whereas the Kalman filter is locked into one. I don't particularly expect this to happen in practice, but it's a possibility.

Overall, from first principles, I would expect/predict RNNs to be less effective than Kalman filtering. But that's just speculation/guessing. If you really want to know, the way to find out for sure is to try some experiments.
