Does hyperparameter tuning a Decision Tree and then using it in AdaBoost individually, versus tuning both simultaneously, yield the same results?

My predicament is as follows. I performed hyperparameter tuning on a standalone Decision Tree classifier and obtained the best parameters. Now it is standalone AdaBoost's turn, and here is where my problem lies: if I use the tuned Decision Tree from earlier as the base_estimator in AdaBoost and then tune only AdaBoost's hyperparameters, will that yield the same results as tuning an untuned AdaBoost with an untuned Decision Tree base_estimator simultaneously, i.e. searching over the hyperparameters of both AdaBoost and the Decision Tree together? A sketch of the first (sequential) approach is shown below.
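For concreteness, here is a minimal sketch of that sequential approach, assuming a scikit-learn version in which AdaBoostClassifier still accepts base_estimator (newer releases renamed it to estimator); the parameter grids and the synthetic data are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)  # placeholder data

# Step 1: tune the Decision Tree on its own.
tree_search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [1, 3, 5, 10], "min_samples_leaf": [1, 5, 20]},
    cv=5,
)
tree_search.fit(X, y)

# Step 2: freeze the tuned tree and tune only AdaBoost's own hyperparameters.
ada_search = GridSearchCV(
    AdaBoostClassifier(base_estimator=tree_search.best_estimator_, random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "learning_rate": [0.1, 0.5, 1.0]},
    cv=5,
)
ada_search.fit(X, y)
```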

Topic: hyperparameter-tuning, adaboost, gridsearchcv, decision-trees, scikit-learn

Category: Data Science


No, generally optimizing two parts of a modeling pipeline separately will not work as well as searching over all the parameters simultaneously.

In your particular case, this is easy to see: the optimal standalone tree will probably be much deeper than the optimal trees inside an AdaBoost ensemble. A single tree (probably) needs to split quite a bit to avoid being dramatically underfit, whereas AdaBoost generally performs best with "weak learners"; in particular, a "decision stump", i.e. a depth-1 tree, is often what gets selected. A joint search over both sets of hyperparameters, sketched below, lets the search discover this on its own.
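Here is a hedged sketch of the simultaneous search, under the same scikit-learn assumptions as above: GridSearchCV reaches the base estimator's hyperparameters through the nested base_estimator__ prefix, so the tree depth and AdaBoost's own parameters are searched in one pass, and the depth-1 stump can win if it really is best.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)  # placeholder data

ada = AdaBoostClassifier(
    base_estimator=DecisionTreeClassifier(random_state=0),
    random_state=0,
)

# One grid over both the ensemble's and the base tree's hyperparameters.
param_grid = {
    "base_estimator__max_depth": [1, 2, 3, 5],   # includes the depth-1 "stump"
    "n_estimators": [50, 100, 200],
    "learning_rate": [0.1, 0.5, 1.0],
}

joint_search = GridSearchCV(ada, param_grid, cv=5)
joint_search.fit(X, y)
print(joint_search.best_params_)
```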
