Regularizing the intercept
I am reading The Elements of Statistical Learning, and regarding regularized logistic regression it says:

"As with the lasso, we typically do not penalize the intercept term."

I am wondering: in which situations would you want to penalize the intercept?
Looking at regularization in general, couldn't one imagine scenarios where penalizing the intercept would lead to a better EPE (expected prediction error)? Although penalizing the intercept increases the bias, wouldn't the accompanying reduction in variance still lower the EPE in some scenarios?
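To make the question concrete, here is a minimal sketch of what I mean by penalizing the intercept: a hand-rolled L2-penalized logistic regression in NumPy/SciPy. The toy data, the penalty strength lam, and the penalize_intercept flag are my own illustration, not from the book.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data with a true intercept far from zero (imbalanced classes).
n, p = 500, 3
X = rng.normal(size=(n, p))
true_beta = np.array([1.0, -2.0, 0.5])
true_intercept = 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-(true_intercept + X @ true_beta))))

def penalized_nll(params, X, y, lam, penalize_intercept):
    """L2-penalized negative log-likelihood; params[0] is the intercept."""
    b0, beta = params[0], params[1:]
    z = b0 + X @ beta
    nll = np.sum(np.logaddexp(0.0, z) - y * z)  # stable log(1 + e^z) - y*z
    pen = lam * np.sum(beta ** 2)
    if penalize_intercept:
        pen += lam * b0 ** 2  # the term ESL says to leave out
    return nll + pen

lam = 10.0
for flag in (False, True):
    fit = minimize(penalized_nll, np.zeros(p + 1),
                   args=(X, y, lam, flag), method="BFGS")
    print(f"penalize_intercept={flag}: intercept = {fit.x[0]:+.3f}")
```

Penalizing the intercept shrinks it toward zero, which pulls the fitted probabilities toward 1/2 regardless of the base rate in the data; that is the behaviour I am asking about.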
EDIT: It might be that we can't reduce the EPE by penalizing the intercept. But are there scenarios where the following statement isn't correct: "The model will achieve a lower expected prediction error if we do not penalize the intercept"?
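One could probe this statement empirically by approximating the EPE with the average held-out negative log-likelihood over many simulated training sets. This is again only a sketch under my own toy data model (same setup as above, with the replication counts and penalty strength chosen arbitrarily); with a true intercept of 2, I would expect the unpenalized-intercept variant to come out ahead, but the comparison itself is the point.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def simulate(n, rng):
    # Same toy model as above: true intercept = 2, so classes are imbalanced.
    X = rng.normal(size=(n, 3))
    z = 2.0 + X @ np.array([1.0, -2.0, 0.5])
    return X, rng.binomial(1, 1 / (1 + np.exp(-z)))

def fit(X, y, lam, penalize_intercept):
    def obj(params):
        b0, beta = params[0], params[1:]
        z = b0 + X @ beta
        pen = lam * (np.sum(beta ** 2) + (b0 ** 2 if penalize_intercept else 0.0))
        return np.sum(np.logaddexp(0.0, z) - y * z) + pen
    return minimize(obj, np.zeros(X.shape[1] + 1), method="BFGS").x

def heldout_log_loss(params, X, y):
    z = params[0] + X @ params[1:]
    return np.mean(np.logaddexp(0.0, z) - y * z)

# Average held-out log loss over many replications approximates the EPE.
losses = {False: [], True: []}
for _ in range(200):
    Xtr, ytr = simulate(50, rng)    # small training set: the penalty matters
    Xte, yte = simulate(2000, rng)  # large test set: low-noise loss estimate
    for flag in (False, True):
        losses[flag].append(heldout_log_loss(fit(Xtr, ytr, 5.0, flag), Xte, yte))

for flag, vals in losses.items():
    print(f"penalize_intercept={flag}: mean held-out log loss = {np.mean(vals):.4f}")
```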
Topic: regularization
Category: Data Science