Regularizing the intercept

I am reading The Elements of Statistical Learning, and regarding regularized logistic regression it says:

As with the lasso, we typically do not penalize the intercept term

and I am wondering: in which situations would you penalize the intercept?

Looking at regularization in general, couldn't one think of scenarios where penalizing the intercept would lead to a better EPE (expected prediction error)? Although we increase the bias, wouldn't we in some scenarios still reduce the EPE?

EDIT: It might be that we can't reduce the EPE while penalizing the intercept. But are there scenarios where the following statement isn't correct: the model will get a lower expected prediction error if we do not penalize the intercept?
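To make the question concrete, here is a small simulation sketch of what I mean. The data-generating process, the penalty value, and the use of ridge regression (rather than logistic regression) are my own illustrative choices, not anything from the book:

```python
# Fit ridge regression with and without the intercept included in the penalty,
# and compare held-out prediction error over repeated simulations.
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam, penalize_intercept):
    """Closed-form ridge solution on [1, X]; optionally exempt the intercept from the penalty."""
    Xb = np.column_stack([np.ones(len(X)), X])      # prepend intercept column
    P = np.eye(Xb.shape[1])
    if not penalize_intercept:
        P[0, 0] = 0.0                               # leave the intercept unpenalized
    return np.linalg.solve(Xb.T @ Xb + lam * P, Xb.T @ y)

def ridge_predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Illustrative true model with a large intercept, so penalizing it should hurt.
n, p, lam = 50, 5, 10.0
err_pen, err_nopen = [], []
for _ in range(200):
    X = rng.normal(size=(n, p))
    y = 5.0 + X @ np.ones(p) + rng.normal(size=n)             # true intercept = 5
    X_test = rng.normal(size=(1000, p))
    y_test = 5.0 + X_test @ np.ones(p) + rng.normal(size=1000)
    for penalize, errs in [(True, err_pen), (False, err_nopen)]:
        beta = ridge_fit(X, y, lam, penalize)
        errs.append(np.mean((y_test - ridge_predict(beta, X_test)) ** 2))

print("test MSE, intercept penalized:    ", np.mean(err_pen))
print("test MSE, intercept not penalized:", np.mean(err_nopen))
```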


Of course there are scenarios where it makes sense to penalize the intercept, if doing so aligns with domain knowledge.

However, in the real world we more often do not just penalize the magnitude of the intercept but force it to be exactly zero. This happens in cases where we expect the output to be 0 when all inputs are 0.
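As an illustration, here is a minimal scikit-learn sketch of the two common choices (the toy data and the alpha value are just assumptions): Ridge leaves the intercept unpenalized by default, while fit_intercept=False forces it to zero, which is appropriate when the response should be 0 at zero inputs.

```python
# Compare an estimated (unpenalized) intercept with an intercept forced to zero.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=100)   # true line passes through the origin

ridge_free = Ridge(alpha=1.0).fit(X, y)                        # intercept estimated, not penalized
ridge_zero = Ridge(alpha=1.0, fit_intercept=False).fit(X, y)   # intercept fixed at 0

print("free intercept:", ridge_free.intercept_, ridge_free.coef_)
print("zero intercept:", ridge_zero.intercept_, ridge_zero.coef_)
```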
