The prediction of a decision tree will lie within the range of the target, because in the end each record either falls into a leaf holding a single target value (if the depth is not controlled) or into a leaf whose value is the average of multiple targets.
With the second approach too, the prediction cannot cross the range of the target.
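A minimal sketch of this (my own illustration on synthetic data, not part of the original argument): the leaf values of a fitted DecisionTreeRegressor are averages of training targets, so its predictions cannot leave [y.min(), y.max()].

from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(random_state=42)
tree = DecisionTreeRegressor(max_depth=3, random_state=42).fit(X, y)
preds = tree.predict(X)

# Every prediction is a leaf average of training targets,
# so both checks print True.
print(preds.min() >= y.min(), preds.max() <= y.max())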
Coming to Ensembling -
Bagging -
Bagging simply averages multiple trees, so again the prediction will remain within the target's range.
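The same check, as a sketch with sklearn's BaggingRegressor (my own illustration, using the default tree base estimator): the bagged prediction is an average of per-tree predictions, each of which already lies within the target range.

from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor

X, y = make_regression(random_state=42)
# The default base estimator is a decision tree; the ensemble averages them.
bag = BaggingRegressor(n_estimators=50, random_state=42).fit(X, y)
preds = bag.predict(X)

# An average of values inside [y.min(), y.max()] stays inside it, so both checks print True.
print(preds.min() >= y.min(), preds.max() <= y.max())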
Adaptive boosting
Here we re-weight the records for each successive tree (giving more weight to the ones the earlier trees predicted poorly).
This does not change the range an individual tree can predict. The final prediction is a weighted average of all the trees, so again it will remain within the target's range.
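A similar sketch with sklearn's AdaBoostRegressor (again my own illustration): re-weighting the rows changes which trees get built, but the combined prediction is still assembled only from individual tree outputs.

from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor

X, y = make_regression(random_state=42)
ada = AdaBoostRegressor(n_estimators=50, random_state=42).fit(X, y)
preds = ada.predict(X)

# The combined prediction is a weighted combination of tree outputs,
# so it stays inside the training target range and both checks print True.
print(preds.min() >= y.min(), preds.max() <= y.max())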
Gradient Boosting
Here we add each new tree based on the prediction error (residual) of the trees built so far.
In very simple language: let's assume the target is 100. The first tree predicts 70, so the second tree is trained on the remaining 30; let's assume it predicts 20. Growing many trees with this approach, we end up with predictions like -
70 + 20 + 6 + 2 + 1 + 0.5 + 0.2 + ......
It will not cross 100.
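A tiny numeric sketch of that sum (my own illustration; only the first two stages roughly match the 70 and 20 above): each stage fits a fraction of the remaining residual, so the running total creeps toward 100 from below and never overshoots it.

target = 100.0
prediction = 0.0
for stage in range(8):
    residual = target - prediction
    step = 0.7 * residual          # each new tree recovers ~70% of what is left
    prediction += step
    print(f"stage {stage}: +{step:.2f} -> running prediction {prediction:.2f}")
# The running prediction approaches 100 but never crosses it.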
Edit, after Ben's comment -
The above logic (for GB) will not hold if your learning rate is too high, because then the residual grows with every new tree and the prediction can reach any value.
Gradient boosting performs gradient descent on the function itself, so the target for the next tree depends on the residual and the learning rate. With a learning rate that is too high and enough trees, the value blows up.
See this code snippet: with learning_rate=2.1 and n_estimators=100, a target maximum of about 398 becomes roughly 1.6 Mn.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression()
# A learning rate this high makes the residuals grow at every boosting stage.
model = GradientBoostingRegressor(max_depth=1, n_estimators=100, learning_rate=2.1, random_state=42)
model.fit(X, y)
preds = model.predict(X)
print(preds.min(), y.min(), preds.max(), y.max())
-1246776.29 || -487.87 || 1586302.24 || 398.12
If n_estimators=10, it has not blown up yet; it needs more trees for the growth to compound:
-277.83 || -393.27 || 118.32 || 594.82
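For contrast (my own addition, not part of the original experiment, with a fixed random_state so the data is reproducible): essentially the same setup with a typical learning_rate below 1.0 keeps the predictions inside the observed target range.

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(random_state=42)
model = GradientBoostingRegressor(max_depth=1, n_estimators=100, learning_rate=0.1, random_state=42)
model.fit(X, y)
preds = model.predict(X)

# With shrinkage below 1 the residuals keep shrinking instead of growing,
# so the min/max of the predictions stay within [y.min(), y.max()].
print(preds.min(), y.min(), preds.max(), y.max())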
Hence, the answer to your question is ~~No~~ Yes (theoretically; in practice we mostly keep the learning rate below 1.0 for smooth learning).