How is there an inverse relation between precision and recall?

What I know: first,

Precision $= \frac{TP}{TP+FP}$

Recall $= \frac{TP}{TP+FN}$

What the book says:

A model that declares every record has high recall but low precision.

I understand that if the number of predicted positives is high, precision will be low. But how will recall be high if the number of predicted positives is high?

A model that assigns a positive class to every test record that matches one of the positive records in the training set has very high precision but low recall.

I am not able to properly comprehend how there is an inverse relation between precision and recall.

Here is a document I found, but I could not understand it from that either.

https://www.creighton.edu/fileadmin/user/HSL/docs/ref/Searching_-_Recall_Precision.pdf



There is an overall inverse relationship, but not a strictly monotone one. See e.g. the precision-recall curves in the sklearn examples.
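To see that overall-but-not-monotone relationship concretely, here is a minimal sketch of a precision-recall curve using scikit-learn; the synthetic dataset and logistic-regression model are just illustrative choices, not anything from the question.

```python
# Sketch: sweep the decision threshold and watch precision and recall trade off.
# Dataset and model are illustrative assumptions, not part of the original question.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # predicted probability of the positive class

# precision_recall_curve sweeps the threshold over all scores; as the threshold
# drops, recall rises while precision tends to fall (not strictly monotonically).
precision, recall, thresholds = precision_recall_curve(y_test, scores)
for p, r, t in zip(precision[:-1], recall[:-1], thresholds):
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```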

A model that declares every record [to be positive class] has high recall but low precision.

If the model declares every record positive, then $TP=P$ and $FP=N$ (and $FN=TN=0$). So recall is 1; and the precision is $P/(P+N)$, i.e. the proportion of positives in the sample. (That may be "low" or not.)
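A quick numeric check of that case (the labels below are a made-up toy sample, only to show the arithmetic):

```python
# A classifier that declares every record positive:
# recall is 1 by construction, precision equals the positive prevalence P/(P+N).
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 3 positives, 7 negatives
y_pred = [1] * len(y_true)                 # every record predicted positive

print(precision_score(y_true, y_pred))     # 3 / (3 + 7) = 0.3, i.e. P / (P + N)
print(recall_score(y_true, y_pred))        # 3 / (3 + 0) = 1.0
```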

Rather than addressing your second quote directly, it may help to examine (nearly) the opposite case to the above. Suppose your classifier makes only one positive prediction; assuming the model rank-orders reasonably well, it is very likely a true positive. Then $TP=1$, $FP=0$, and $TN$ and $FN$ are both large. So precision is 1, and recall is very small.

The second quote makes the assumption there more solid: every positive prediction is a true positive (assuming no opposite-class clones), but there are very few positive predictions.
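The same arithmetic for this extreme, again with invented scores and labels just to illustrate the point:

```python
# The opposite extreme: only the single most confident prediction is called positive.
# If that top-ranked record really is positive, precision is 1 but recall is tiny.
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
scores = np.array([0.95, 0.1, 0.2, 0.7, 0.05, 0.6, 0.3, 0.15, 0.55, 0.25])

y_pred = np.zeros_like(y_true)
y_pred[np.argmax(scores)] = 1              # only the highest-scored record is positive

print(precision_score(y_true, y_pred))     # 1 / (1 + 0) = 1.0
print(recall_score(y_true, y_pred))        # 1 / (1 + 3) = 0.25
```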
