Base model in ensemble learning
I've been doing some research on ensemble learning and read that base models with high variance are often recommended (I can't remember exactly which book I read this in).
But it seems counter-intuitive: wouldn't having base models with low variance (i.e., ones that already do well on the test set) be better than combining multiple weak base models?
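To make the question concrete, here is a minimal sketch of the kind of comparison I have in mind (my own example, using scikit-learn and a synthetic dataset; the `estimator` parameter name assumes a recent scikit-learn version): bagging a high-variance base model (an unpruned decision tree) versus bagging a low-variance one (a decision stump).

```python
# My own illustration: compare a single model vs. its bagged ensemble,
# for a high-variance base learner (deep tree) and a low-variance one (stump).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic dataset just for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# High-variance base model: an unpruned decision tree (overfits on its own).
deep = DecisionTreeClassifier(max_depth=None, random_state=0)
# Low-variance base model: a heavily restricted (shallow) tree.
shallow = DecisionTreeClassifier(max_depth=1, random_state=0)

for name, base in [("deep tree", deep), ("stump", shallow)]:
    single = cross_val_score(base, X, y, cv=5).mean()
    bagged = cross_val_score(
        BaggingClassifier(estimator=base, n_estimators=100, random_state=0),
        X, y, cv=5,
    ).mean()
    print(f"{name}: single={single:.3f}, bagged={bagged:.3f}")
```

In runs like this I would expect bagging to help the deep tree much more than the stump, which is what I am trying to understand intuitively.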
Topic bagging ensemble-modeling machine-learning
Category Data Science