What is the intuition behind decreasing the slope when using regularization?
While training a logistic regression model, regularization can help distribute the weights and avoid over-reliance on any particular weight, making the model more robust.
E.g.: suppose my input vector is 4-dimensional, with values [1, 1, 1, 1]. The output can be 1 if my weight vector is [1, 0, 0, 0] or [0.25, 0.25, 0.25, 0.25]. The L2 penalty would favour the latter weight vector (because pow(1, 2) > 4*pow(0.25, 2), i.e. 1 > 0.25). I understand intuitively why L2 regularization can be beneficial here.
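Here is a minimal sketch of the comparison I mean (assuming NumPy; the vectors are just the toy values from above):

```python
import numpy as np

# Both weight vectors give the same prediction for x = [1, 1, 1, 1],
# but their L2 penalties (sum of squared weights) differ.
x = np.array([1.0, 1.0, 1.0, 1.0])
w_sparse = np.array([1.0, 0.0, 0.0, 0.0])
w_spread = np.array([0.25, 0.25, 0.25, 0.25])

print(x @ w_sparse, x @ w_spread)                     # 1.0 and 1.0
print(np.sum(w_sparse ** 2), np.sum(w_spread ** 2))   # 1.0 vs 0.25
```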
But in the case of linear regression, L2 regularization reduces the slope. Why does only reducing the slope give better performance? Is increasing the slope also an alternative?
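For example, here is a rough sketch of the behaviour I am asking about (assuming scikit-learn; the data and the alpha value are made up purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Synthetic 1-D data with true slope 3.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3.0 * X.ravel() + rng.normal(scale=2.0, size=50)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=100.0).fit(X, y)   # L2-regularized linear regression

print("OLS slope:  ", ols.coef_[0])    # close to 3
print("Ridge slope:", ridge.coef_[0])  # smaller in magnitude, shrunk toward 0
```

The ridge slope is always pulled toward zero, never pushed above the OLS slope, and that is the part I do not have an intuition for.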
Topic: regularization
Category: Data Science