How does XGBoost train in parallel?
Here is what I know about boosting: we train a model on the data, then either update the weights of the wrongly predicted examples or try to minimize the remaining loss in the next model. So it is basically a sequential process in which the output of one model is fed into the next one. For concreteness, a sketch of the loop I have in mind is below.
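This is a minimal sketch of sequential gradient boosting as I understand it (plain squared-error boosting on residuals using scikit-learn trees; the dataset and hyperparameters are just placeholders), where round t clearly depends on round t-1:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, random_state=0)

learning_rate = 0.1
prediction = np.zeros_like(y)  # start from a constant (zero) model
trees = []

for t in range(50):
    residuals = y - prediction            # errors of the ensemble so far
    tree = DecisionTreeRegressor(max_depth=3, random_state=0)
    tree.fit(X, residuals)                # weak learner fit to the residuals
    prediction += learning_rate * tree.predict(X)  # update the ensemble
    trees.append(tree)
```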
XGBoost is said to run in parallel via data parallelization or model parallelization, and I don't understand how that can be the case: if the weak learners run in parallel on different nodes, how is the output of the first weak learner fed into the next one? Isn't that similar to the bagging or Random Forest technique? I know I must be wrong, but I can't see how a technique like this can work in parallel.
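To show what I mean by "in parallel", here is a sketch assuming the scikit-learn wrapper, where `n_jobs` controls the number of threads used during training (the dataset and parameter values are just placeholders):

```python
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = xgb.XGBClassifier(
    n_estimators=100,  # boosting rounds -- aren't these sequential?
    n_jobs=4,          # ...yet this makes training use 4 threads
)
model.fit(X, y)
```

So my question is: what exactly is being parallelized here, if the boosting rounds themselves have to happen one after another?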
Topic boosting xgboost ensemble-modeling python machine-learning
Category Data Science