Predicting Player Positions from Previous Positions with LSTMs

I am trying to use LSTMs to predict player positions in a field game. As a sanity check, I try to overfit 8 slightly different time series. For this overfitting task I use only the players' positions. A data sample looks like this:

[Blue are the player trajectories corresponding to the green targets I want to predict, while yellow are the enemies and red is the ball]

When trying to overfit 8 future positions for a single player (one player is a single blue track with one green target), this happens:

[Where red are the targets and blue are the predictions. So all predictions are basically a single point]

So the model basically collapses to the mean of the targets, and I don't know why this is happening.
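A constant prediction is exactly what minimizing the loss looks like when the model ignores its inputs: under MAE the best constant is the per-coordinate median of the targets (under MSE it would be the mean). A quick NumPy check with made-up 2-D targets:

```python
import numpy as np

# Hypothetical 2-D targets, standing in for 8 player positions to overfit.
targets = np.array([[10.0, 5.0], [12.0, 7.0], [30.0, 40.0], [11.0, 6.0],
                    [13.0, 8.0], [9.0, 4.0], [28.0, 35.0], [12.0, 6.0]])

def mae(pred):
    """Mean absolute error of a single constant prediction."""
    return np.abs(targets - pred).mean()

median = np.median(targets, axis=0)
mean = targets.mean(axis=0)

# Under MAE, the per-coordinate median is at least as good a constant
# guess as the mean.
print(mae(median) <= mae(mean))  # True
```

So if the network cannot (or does not) learn a mapping from inputs to outputs, gradient descent still drives it toward this degenerate constant solution.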

My model looks like this:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(128, return_sequences=True))
model.add(LSTM(128, return_sequences=False))
model.add(Dense(64, activation='relu'))
model.add(Dense(2, activation='linear'))
model.compile(loss='mae', optimizer='adam')

My training data has input shape [BatchSize, SequenceLength, FeatureLen] and output shape [BatchSize, Target], which here are [8, 20, 28] and [8, 2] respectively. I also tried the output shape [8, 1, 2], but that did not change anything.
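For reference, a self-contained sketch of the setup with random stand-in data of the stated shapes, assuming `tensorflow.keras` and an explicit `input_shape` on the first layer:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Dummy data with the shapes from the question: 8 samples,
# 20 timesteps, 28 features per step; 2-D (x, y) targets.
inp = np.random.rand(8, 20, 28).astype("float32")
out = np.random.rand(8, 2).astype("float32")

model = Sequential([
    LSTM(128, return_sequences=True, input_shape=(20, 28)),
    LSTM(128),                      # return_sequences=False is the default
    Dense(64, activation="relu"),
    Dense(2),                       # linear output for (x, y)
])
model.compile(loss="mae", optimizer="adam")

# The model maps [8, 20, 28] inputs to [8, 2] predictions.
print(model.predict(inp, verbose=0).shape)  # (8, 2)
```

This confirms the shapes themselves are consistent, so the collapse must come from something else (the data or the loss surface, not a shape mismatch).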

Then I train the model with:

history = model.fit(inp, out, epochs=150, verbose=False)  # fit() returns a History object, not the loss

Does anyone have an idea why the LSTM is collapsing to the mean?

Topic: lstm, keras, sports, time-series

Category: Data Science


It turned out the error was in the data: I was trying to predict the positions without normalizing them. After normalizing the positions, everything worked as expected.
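A minimal sketch of what such a normalization could look like, assuming per-coordinate min-max scaling fitted on the training positions only (the field dimensions and coordinates are made up):

```python
import numpy as np

# Hypothetical raw pitch coordinates, e.g. metres on a 105 x 68 field.
positions = np.array([[10.0, 5.0], [52.5, 34.0], [100.0, 60.0]])

# Fit the scaling statistics on the training positions only.
pos_min = positions.min(axis=0)
pos_max = positions.max(axis=0)

def normalize(p):
    """Scale positions into [0, 1] per coordinate for the network."""
    return (p - pos_min) / (pos_max - pos_min)

def denormalize(p):
    """Map network outputs back to pitch coordinates."""
    return p * (pos_max - pos_min) + pos_min

scaled = normalize(positions)
restored = denormalize(scaled)
print(np.allclose(restored, positions))  # True
```

The same `pos_min`/`pos_max` must be reused at inference time so that predictions can be mapped back to real coordinates.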
