Which error metric is good for measuring accuracy?

I am estimating water depth from satellite data (predicted values) and would like to validate my results against bathymetric lidar data collected in the field, which I believe to be more accurate (observed values). The number of observations varies with water depth: for example, I have about 300 observations in the 0-10 m range, whereas the deeper range (10-20 m) has far fewer (~50 points). I have been using RMSE to measure accuracy because I want to penalize larger errors, but I am wondering whether there is a better error metric that is not sensitive to the number of observations. In other words, for the 10-20 m range with 50 points my RMSE is ~6 m, and I suspect it would be lower if I had more observations, whereas for shallow water (0-10 m) my RMSE is much lower, perhaps because I have many more observations there.

Topic: rmse, metric

Category: Data Science


Since this is a regression problem, there are many possible evaluation metrics. Besides RMSE, common regression metrics include mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE).
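As a rough illustration, here is a minimal sketch of comparing these metrics within each depth bin. The arrays observed and predicted are hypothetical stand-ins for your lidar and satellite-derived depths, and scikit-learn is assumed to be installed:

import numpy as np
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             mean_absolute_percentage_error)

# Hypothetical data: 'observed' = lidar depths (m), 'predicted' = satellite-derived depths (m)
rng = np.random.default_rng(0)
observed = rng.uniform(0, 20, size=350)
predicted = observed + rng.normal(0, 1 + 0.2 * observed)  # errors grow with depth

# Evaluate each metric separately for the shallow (0-10 m) and deep (10-20 m) bins
for lo, hi in [(0, 10), (10, 20)]:
    mask = (observed >= lo) & (observed < hi)
    obs, pred = observed[mask], predicted[mask]
    rmse = np.sqrt(mean_squared_error(obs, pred))
    mae = mean_absolute_error(obs, pred)
    # Note: MAPE blows up when observed depths are close to zero (shallow bin)
    mape = mean_absolute_percentage_error(obs, pred)
    print(f"{lo}-{hi} m (n={mask.sum()}): RMSE={rmse:.2f} m, MAE={mae:.2f} m, MAPE={mape:.2%}")

RMSE weights large errors more heavily than MAE, so comparing the two per bin gives a sense of whether a few outliers are driving the deep-water score.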

However, it sounds like the real issue is that the error variance differs across target values, and that the deep-water bin has far fewer points. If you switch to Bayesian regression, the model will report uncertainty alongside each prediction, which better reflects how the error and your confidence vary across the range of target values.
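For instance, here is a minimal sketch using scikit-learn's BayesianRidge. The feature matrix X and depths y are hypothetical (in practice X would hold your satellite-derived features and y the lidar depths); the point is that the model returns a predictive standard deviation with each prediction:

import numpy as np
from sklearn.linear_model import BayesianRidge

# Hypothetical training data: X = satellite-derived feature(s), y = lidar depth (m)
rng = np.random.default_rng(1)
X = rng.uniform(0, 20, size=(350, 1))                 # e.g. a single reflectance-based feature
y = X.ravel() + rng.normal(0, 1 + 0.2 * X.ravel())    # noise grows with depth

model = BayesianRidge()
model.fit(X, y)

# return_std=True returns a per-point predictive standard deviation that combines
# the learned noise level with the model's parameter uncertainty
X_new = np.array([[5.0], [15.0]])
mean, std = model.predict(X_new, return_std=True)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"feature={x:.1f}: predicted depth = {m:.2f} m +/- {s:.2f} m")

Reporting predictions as "depth plus or minus uncertainty" per bin is often more informative than a single RMSE computed from only ~50 points.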
