Why does CatBoost outperform other boosting algorithms?
While working with multiple datasets, I have noticed that CatBoost with its default parameters tends to outperform LightGBM and XGBoost with their default parameters, even on tabular datasets with no categorical features.
I assume this has something to do with the way CatBoost constructs its decision trees, but I wanted to confirm that theory. If anyone could elaborate on why it performs better even on non-categorical data, that would be great! Thanks in advance!
Topic catboost lightgbm boosting decision-trees python
Category Data Science