How to reconstruct a scikit-learn predictor for Gradient Boosting Regressor?

I would like to train my model in scikit-learn but export the final fitted Gradient Boosting Regressor elsewhere, so that I can make predictions directly on another platform.

I am aware that the individual decision trees used by the regressor can be accessed via regressor.estimators_[i, 0].tree_. What I would like to know is how to combine these decision trees to reproduce the final regression prediction.

Topic natural-gradient-boosting representation prediction gbm scikit-learn

Category Data Science


A fitted GradientBoostingRegressor contains two kinds of estimators: the initial predictor and the sub-estimators (the boosted trees).

init_
The estimator that provides the initial prediction. Set via the init argument or loss.init_estimator.
estimators_
ndarray of DecisionTreeRegressor of shape (n_estimators, 1)
The collection of fitted sub-estimators.
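
Both are attributes of the fitted model, so you can inspect them directly. A quick sketch (assuming model is an already-fitted GradientBoostingRegressor):

print(type(model.init_))              # the initial estimator; a DummyRegressor by default in recent scikit-learn
print(model.estimators_.shape)        # (n_estimators, 1)
print(type(model.estimators_[0, 0]))  # DecisionTreeRegressor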

The contribution of every tree after the first (i.e. init) estimator is scaled by the learning rate.
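
In other words, sketching the additive model with the attribute names above, the prediction for a single sample x is:

prediction(x) = init_.predict(x) + learning_rate * sum_m estimators_[m, 0].predict(x)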

You can reproduce this prediction with the code below:

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Example data and model; replace with your own fitted regressor and test set
X_train, y_train = make_regression(n_samples=200, n_features=5, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

trees = model.estimators_  # ndarray of shape (n_estimators, 1)

x = X_train[10, :]  # a single sample to be predicted
y_pred = model.init_.predict(x.reshape(1, -1))  # prediction from the init estimator

for tree in trees:
    pred = tree[0].predict(x.reshape(1, -1))  # prediction from one sub-estimator
    y_pred = y_pred + model.learning_rate * pred  # scale by the learning rate and accumulate

print(y_pred)
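
As a sanity check, the reconstructed value should match model.predict, which computes the same additive sum internally. And since the goal is to run the model on another platform, the split structure of each tree can be read from its tree_ attribute; a minimal sketch of both:

import numpy as np

# The manual reconstruction should agree with scikit-learn's own prediction
print(np.allclose(y_pred, model.predict(x.reshape(1, -1))))

# To export the ensemble, walk the structure of each fitted tree
for m, tree in enumerate(model.estimators_[:, 0]):
    t = tree.tree_
    # t.children_left / t.children_right : child node indices (-1 marks a leaf)
    # t.feature / t.threshold            : split feature index and threshold per node
    # t.value                            : value stored at each node (leaf values drive the prediction)
    print(m, t.node_count)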
