How to evaluate model accuracy at the tail of the empirical distribution?

I am fitting a nonlinear regression on a stationary dependent variable, and I want to forecast extreme values of this variable precisely. That is, when my model predicts extreme values, I want those predictions to be highly accurate; less extreme forecasts (e.g. those near the mean) do not need to be as accurate.

What are some useful metrics, with favorable statistical properties, for comparing multiple models when tail accuracy matters?
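To make the question concrete, below is a minimal sketch (in Python) of the kind of tail-weighted error I could hand-roll myself; the function name, the fixed tail weight, and the 90th-percentile cutoff are all arbitrary choices for illustration only. I am asking whether there are established metrics with better statistical properties than an ad hoc scheme like this.

```python
import numpy as np

def tail_weighted_rmse(y_true, y_pred, tail_weight=5.0, quantile=0.90):
    """RMSE that up-weights errors on observations in the tails of y_true.

    tail_weight and quantile are illustrative defaults, not recommendations.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Flag observations beyond the lower/upper empirical quantiles as "tail".
    lo, hi = np.quantile(y_true, [1 - quantile, quantile])
    in_tail = (y_true <= lo) | (y_true >= hi)
    weights = np.where(in_tail, tail_weight, 1.0)
    return np.sqrt(np.average((y_true - y_pred) ** 2, weights=weights))

# Toy comparison of two hypothetical models on a heavy-tailed holdout set.
rng = np.random.default_rng(0)
y = rng.standard_t(df=3, size=1000)                            # heavy-tailed target
pred_a = y + rng.normal(0, 0.3, size=1000)                     # uniform noise level
pred_b = y + rng.normal(0, 0.1, size=1000) * (1 + np.abs(y))   # worse in the tails
print(tail_weighted_rmse(y, pred_a))
print(tail_weighted_rmse(y, pred_b))
```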

Tags: model-evaluations, cross-validation, predictive-modeling
