How to measure accuracy of a route prediction
I developed a new route prediction algorithm and I am trying to find a metric that indicates how good a prediction is.
This metric is meant to be used offline, meaning that the goal is not to measure the quality of a prediction when it is needed in real time. Instead, we are given a set $R=\{r_1,r_2,...r_{|R|}\}$ of routes that occurred in the past, and for each $r_i\in R$ we take a small prefix of $r_i$ and provide it as input to the algorithm, which in turn outputs a predicted route $p_i$.
Therefore, given the set $R=\{r_1,r_2,...r_{|R|}\}$ and the corresponding set of predictions $P=\{p_1,p_2,...p_{|P|}\}$, I want to compare each pair of routes $(p_i,r_i)$ to determine how different $p_i$ is from $r_i$.
Can anyone point me in the right direction?
My first idea was to compute the area enclosed between the two routes $p_i$ and $r_i$, but I do not know how to interpret the resulting number in terms of a good or bad prediction.
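To make the kind of comparison I am after concrete, here is a minimal sketch of one candidate pairwise metric, the discrete Fréchet distance, assuming each route is simply a list of $(x, y)$ coordinates (the representation and the choice of metric are just illustrative, not part of my algorithm):

```python
import math
from functools import lru_cache

def discrete_frechet(p, r):
    """Discrete Fréchet distance between two polylines p and r,
    each given as a list of (x, y) points.

    Intuitively: the shortest 'leash' that lets two walkers traverse
    p and r from start to end, moving only forward along their routes.
    """
    @lru_cache(maxsize=None)
    def c(i, j):
        # Distance between the i-th point of p and the j-th point of r.
        d = math.dist(p[i], r[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # Advance along p, along r, or along both, whichever is cheapest,
        # and take the worst leash length seen so far.
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(p) - 1, len(r) - 1)

# Identical routes are at distance 0; parallel routes offset by 1 unit
# are at distance 1.
print(discrete_frechet([(0, 0), (1, 0)], [(0, 1), (1, 1)]))
```

A metric like this gives a number per pair $(p_i, r_i)$, but it leaves me with the same interpretation problem as the area idea: what threshold separates a good prediction from a bad one?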
Thanks for your attention.
Topic metric prediction machine-learning
Category Data Science