What quantile is used for the initial DummyRegressor for Gradient Boosting Regressor in scikit-learn?

According to the documentation of Scikit-Learn Gradient Boosting Regressor:

init: estimator or ‘zero’, default=None: An estimator object that is used to compute the initial predictions. init has to provide fit and predict. If ‘zero’, the initial raw predictions are set to zero. By default a DummyEstimator is used, predicting either the average target value (for loss=’ls’), or a quantile for the other losses.

So what quantile is used for the DummyRegressor if the loss function is 'huber'? Is it the 0.5 quantile, i.e. the median?

I need this information because I am reconstructing the predictor for the Gradient Boosting Regressor for use in another software environment.



Yes, a GBM with Huber loss initializes with the median. The relevant bit of code is the init_estimator method of the loss class, in the file _gb_losses.py. For HuberLossFunction it is:

def init_estimator(self):
    return DummyRegressor(strategy='quantile', quantile=.5)

(source)
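Since you are reconstructing the predictor in another environment, it may help to confirm this directly on a fitted model. The following is a minimal sketch with made-up toy data, assuming a scikit-learn version that exposes the fitted initial estimator as init_ and the per-stage trees as estimators_: it checks that the initial prediction equals the median of the training targets, and that adding the learning-rate-scaled tree outputs on top of it reproduces predict.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy data, only used to inspect the fitted model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] + 0.1 * rng.normal(size=200)

gbr = GradientBoostingRegressor(loss="huber", n_estimators=50, random_state=0)
gbr.fit(X, y)

# For Huber loss the initial estimator is a DummyRegressor that predicts
# the median of the training targets; these two values should agree.
print(gbr.init_.predict(X[:1])[0], np.median(y))

# Manual reconstruction of the prediction: initial (median) value plus the
# learning-rate-scaled sum of the individual regression trees.
manual = gbr.init_.predict(X).ravel()
for tree in gbr.estimators_[:, 0]:
    manual += gbr.learning_rate * tree.predict(X)

print(np.allclose(manual, gbr.predict(X)))  # expected: True

If the reimplementation in the other environment follows the same recipe (median of the training targets plus the scaled tree outputs), it should reproduce scikit-learn's predictions for the Huber loss.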
