How to apply a Kalman filter for cleaning time-series data effectively without much optimization?

Someone gave me a tip to use a Kalman filter on my dataset. How time-intensive is it to get a good Kalman filter running, compared to simple interpolation methods like

df.fillna(method="ffill")

which take basically no effort?
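For reference, a minimal pandas baseline on a hypothetical numeric series with gaps (note that in newer pandas versions the `method=` argument of `fillna` is deprecated in favor of dedicated methods like `ffill()`):

```python
import numpy as np
import pandas as pd

# Hypothetical series with a gap of two missing values
s = pd.Series([1.0, np.nan, np.nan, 4.0, 5.0])

# Forward-fill: propagate the last observed value into the gap
ffilled = s.ffill()

# Linear interpolation: often a better zero-effort baseline for smooth signals
interp = s.interpolate()

print(ffilled.tolist())  # [1.0, 1.0, 1.0, 4.0, 5.0]
print(interp.tolist())   # [1.0, 2.0, 3.0, 4.0, 5.0]
```

Both run in one pass over the data, so they scale to hundreds of millions of rows without tuning.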

If one or two iterations are enough to get useful results that come very close to the real missing values, then I am willing to take the effort to implement it. (Dataset length: 100,000 up to 200 million rows.)

If it needs to be tuned like a neural network, which can be costly in terms of time, isn't it better to simply use an LSTM?
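For scale, a basic Kalman filter does not need gradient-style training at all: with a fixed random-walk model it is a single linear pass over the series. Below is a minimal sketch (my own illustration, not from the question) that fills NaNs by carrying the filter's prediction through gaps; the noise variances `q` and `r` are assumed constants, not optimized:

```python
import numpy as np

def kalman_impute(y, q=1.0, r=1.0):
    """Fill NaNs in a 1-D series with a scalar random-walk Kalman filter.

    q: assumed process-noise variance, r: assumed measurement-noise variance.
    Neither is tuned here; one forward pass, O(n) time.
    """
    n = len(y)
    x = np.empty(n)                          # state estimates (the cleaned series)
    first = np.flatnonzero(~np.isnan(y))[0]  # first observed index
    xk, pk = y[first], r                     # initialize state and variance

    for t in range(n):
        pk = pk + q                          # predict: variance grows by q
        if not np.isnan(y[t]):
            k = pk / (pk + r)                # Kalman gain
            xk = xk + k * (y[t] - xk)        # update toward the observation
            pk = (1.0 - k) * pk
        # if y[t] is missing, the prediction itself is the estimate
        x[t] = xk
    return x

y = np.array([1.0, 2.0, np.nan, np.nan, 5.0, 6.0])
filled = kalman_impute(y)
```

Note this forward-only filter behaves like a smarter forward-fill inside gaps; a smoother (a forward pass plus a backward pass) would also use future observations and interpolate through the gap, at roughly twice the cost. Either way it is far cheaper than training an LSTM.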

Topic: dataframe, interpolation, pandas, python, data-cleaning

Category: Data Science
