Listwise learning to rank with negative sample relevance

A typical listwise learning-to-rank (L2R) algorithm tries to learn the ranking of docs $\{x_i\}_{i=1}^m$ corresponding to a query $q$. If we use a correlation coefficient to label the relevance between docs and the query, then the label $y_i\in[0, 1]$. The larger $y_i$ is, the more relevant the doc $x_i$ is to $q$. Most L2R algorithms, such as approximated rank/NDCG and ListMLE, focus more on the ranking accuracy of positively correlated docs (i.e. $y_i$ close to 1) by giving them larger weight in the loss function. This is common in information retrieval problems.
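For concreteness, here is a minimal sketch of the ListMLE loss mentioned above: the Plackett-Luce negative log-likelihood of the permutation induced by sorting docs by their labels. The function name and toy numbers are my own for illustration, not from any particular library.

```python
import numpy as np

def listmle_loss(scores, labels):
    """ListMLE: negative log-likelihood of the label-induced order
    under the Plackett-Luce model. (Illustrative helper, not a
    reference implementation.)"""
    order = np.argsort(-labels)        # permutation that sorts docs by relevance
    s = scores[order]                  # model scores in that target order
    # loss = -sum_i [ s_i - log(sum_{j >= i} exp(s_j)) ]
    loss = 0.0
    for i in range(len(s)):
        loss += np.log(np.sum(np.exp(s[i:]))) - s[i]
    return loss

scores = np.array([2.0, 1.0, 0.5])     # scores already agree with the labels
labels = np.array([0.9, 0.6, 0.1])
loss = listmle_loss(scores, labels)
```

Scoring the docs in the same order as the labels gives a smaller loss than scoring them in the reversed order, which is the sense in which ListMLE rewards ranking accuracy at the top of the list.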

However, if the relevance between docs and the query can also be negative, i.e. $y_i\in[-1,1]$, and the ranking accuracy of both strongly positive and strongly negative docs matters, how can L2R algorithms handle this problem?
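To see why this is a problem, here is a small numeric illustration assuming the standard exponential gain $2^{y}-1$ (the question does not fix a gain function, so this choice is my assumption): for $y\in[-1,0)$ the gain is bounded in $(-0.5, 0)$, so even the most negative doc carries far less weight than a $y=1$ doc, and misranking it near the top costs comparatively little.

```python
import numpy as np

def dcg(labels_in_rank_order):
    # Standard DCG with exponential gain 2^y - 1 and log2 position
    # discount (an assumed, conventional choice).
    gains = 2.0 ** np.asarray(labels_in_rank_order) - 1.0
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    return float(np.sum(gains * discounts))

# A strongly negative doc (y = -1) placed last vs. first:
good = dcg([1.0, 0.5, -1.0])   # negative doc correctly ranked last
bad  = dcg([-1.0, 1.0, 0.5])   # negative doc wrongly ranked first
```

The DCG gap between the two orderings is real but small, since the negative doc's gain magnitude is capped at $|2^{-1}-1| = 0.5$; this asymmetry is one reason standard listwise losses pay little attention to large-negative docs.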

Topic learning-to-rank machine-learning

Category Data Science
