Bagging vs pasting in ensemble learning
This is a quote from "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" by Aurelien Geron:
"Bootstrapping introduces a bit more diversity in the subsets that each predictor is trained on, so bagging ends up with a slightly higher bias than pasting, but this also means that predictors end up being less correlated so the ensemble’s variance is reduced."
I can't understand why bagging, as compared to pasting, results in higher bias and lower variance. Can anyone provide an intuitive explanation of this?
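For context, in Scikit-Learn the two schemes differ only in how each predictor's training subset is drawn: bagging samples with replacement (bootstrap), pasting samples without replacement. Below is a minimal sketch using BaggingClassifier, where the `bootstrap` flag switches between the two; the make_moons dataset and the hyperparameter values are just illustrative choices, not from the book.

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Bagging: each tree is trained on a subset drawn WITH replacement,
# so a given subset can contain duplicates and misses some instances.
bagging = BaggingClassifier(
    DecisionTreeClassifier(), n_estimators=500,
    max_samples=100, bootstrap=True, random_state=42)

# Pasting: identical setup, but subsets are drawn WITHOUT replacement,
# so each subset contains only distinct training instances.
pasting = BaggingClassifier(
    DecisionTreeClassifier(), n_estimators=500,
    max_samples=100, bootstrap=False, random_state=42)

for name, clf in [("bagging", bagging), ("pasting", pasting)]:
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))
```

The quote is about why the sampling-with-replacement variant tends toward slightly higher bias per predictor but lower variance for the ensemble as a whole.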
Tags: bagging, bias, ensemble, variance, machine-learning
Category: Data Science