Should I use or tune the `reg_lambda` or `reg_alpha` hyperparameters when using a tree booster in XGBoost?

XGBoost has two types of boosters:

  • tree boosters (gbtree, dart)
  • a linear booster (gblinear)

Since `reg_alpha` (L1, LASSO) and `reg_lambda` (L2, ridge) are linear regularization parameters, should I set or tune them when using the tree boosters?

Essentially, I want to shrink my hyperparameter search space, and I was wondering whether these linear regularization parameters have any effect on the objective function of the tree boosters.
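To make the search-space motivation concrete, here is a minimal sketch of how dropping those two parameters shrinks an exhaustive grid search. The parameter values are purely illustrative, not recommendations:

```python
from itertools import product

# Hypothetical tuning grid for a gbtree booster
# (values are illustrative, not recommendations).
full_space = {
    "max_depth": [3, 6, 9],
    "learning_rate": [0.01, 0.1, 0.3],
    "reg_alpha": [0.0, 0.1, 1.0],    # L1 penalty
    "reg_lambda": [0.5, 1.0, 2.0],   # L2 penalty
}

# Same grid with the two regularization terms left at their defaults.
reduced_space = {k: v for k, v in full_space.items()
                 if k not in ("reg_alpha", "reg_lambda")}

def grid_size(space):
    """Number of configurations an exhaustive grid search would try."""
    n = 1
    for values in space.values():
        n *= len(values)
    return n

print(grid_size(full_space))     # 81 configurations
print(grid_size(reduced_space))  # 9 configurations
```

So fixing `reg_alpha` and `reg_lambda` at their defaults cuts this grid by a factor of nine, which is why I only want to keep them in the search if they actually matter for tree boosters.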

Topic lasso ridge-regression xgboost scikit-learn machine-learning

Category Data Science
