Why does using gradient descent instead of stochastic gradient descent improve performance?
Currently, I'm running two types of logistic regression.
- logistic regression with SGD
- logistic regression with GD
implemented as follows:

from sklearn.linear_model import SGDClassifier, LogisticRegression

# loss='log_loss' gives logistic regression (it was called 'log' in older scikit-learn)
sgd = SGDClassifier(loss='log_loss', max_iter=1000, penalty='l1', alpha=0.001)
logreg = LogisticRegression(solver='liblinear', max_iter=100, penalty='l1', C=0.1)
Never mind the exact hyperparameter values; I've tuned both models with GridSearchCV and tried multiple combinations, roughly as in the sketch below.
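(The parameter grids here are just placeholders for the combinations I actually searched; X_train and y_train stand for my training data.)

from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.model_selection import GridSearchCV

search_sgd = GridSearchCV(
    SGDClassifier(loss='log_loss', penalty='l1', max_iter=1000),
    param_grid={'alpha': [0.0001, 0.001, 0.01, 0.1]},  # placeholder grid
    scoring='accuracy',
    cv=5,
)
search_logreg = GridSearchCV(
    LogisticRegression(solver='liblinear', penalty='l1', max_iter=100),
    param_grid={'C': [0.01, 0.1, 1, 10]},  # placeholder grid
    scoring='accuracy',
    cv=5,
)
# search_sgd.fit(X_train, y_train)
# search_logreg.fit(X_train, y_train)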
When calculating accuracy, logistic regression with GD performs better than with SGD. I want to understand why this is the case. Is using GD instead of SGD one way to mitigate underfitting?
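For reference, here is a minimal, self-contained version of the comparison I'm running (make_classification is just a stand-in for my actual data, and the shared scaling pipeline is there because SGD is sensitive to feature scale):

from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for my dataset
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Both models see identically scaled features for a fair comparison
sgd = make_pipeline(
    StandardScaler(),
    SGDClassifier(loss='log_loss', penalty='l1', alpha=0.001,
                  max_iter=1000, random_state=0),
)
logreg = make_pipeline(
    StandardScaler(),
    LogisticRegression(solver='liblinear', penalty='l1', C=0.1, max_iter=100),
)

for name, model in [('SGD', sgd), ('GD', logreg)]:
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))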
Topic sgd gradient-descent logistic-regression python machine-learning
Category Data Science