How to explain a stable NDCG@K in an extreme multi-label recommender model

I am working on a multi-label recommender project and am trying to evaluate it as a ranking problem.

I compute Recall@K and Precision@K, and both look reasonable: recall increases and precision decreases as I try higher K values, which is expected.

However, NDCG@K increases up to a certain K and then stays constant. How can this behaviour be explained?
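For illustration, here is a minimal sketch of binary-relevance NDCG@K with made-up relevance labels (the ranking below is hypothetical, not from the question's model). It shows one common cause of a plateau: once K passes the rank of the last relevant item, both DCG@K and the ideal DCG@K stop growing, so NDCG@K stays flat.

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@K for a ranked list of binary relevance labels."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical ranking: relevant items at ranks 1, 3 and 4, none afterwards.
rels = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]
for k in (1, 2, 3, 4, 5, 10):
    print(f"NDCG@{k} = {ndcg_at_k(rels, k):.4f}")
```

Running this, NDCG@4, NDCG@5 and NDCG@10 come out identical, because no relevant item appears beyond rank 4. Recall@K would also saturate here, while Precision@K keeps falling, which matches the pattern described above.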

Tags: ndcg, metric, multilabel-classification, ranking

Category: Data Science
