Minimum number of samples to train XGBoost without overfitting
When using neural networks for image processing, I learned a rule of thumb: to avoid overfitting, supply at least 10 training examples for every neuron.
Is there a similar rule of thumb for classifiers such as XGBoost, presumably one taking into account the number of features and the number of estimators?
And, considering the 'curse of dimensionality', shouldn't such a rule of thumb say that n_training grows geometrically in n_dimensions, rather than linearly?
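For what it's worth, in the absence of a known rule I have been probing this empirically with a crude learning curve: train on growing subsets and watch the train/validation gap. Here is a minimal sketch of that idea (the dataset is synthetic and the hyperparameters are arbitrary placeholders, not recommendations):

```python
# Minimal sketch: estimate where the train/validation gap stops shrinking
# as n_training grows, for a fixed n_dimensions (50 features here).
# Assumes the scikit-learn wrapper shipped with the xgboost package.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic classification data, purely illustrative.
X, y = make_classification(n_samples=20_000, n_features=50,
                           n_informative=20, random_state=0)
X_pool, X_val, y_pool, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0)

for n_train in [100, 300, 1_000, 3_000, 10_000]:
    # Placeholder hyperparameters; in practice these interact with
    # the sample size question being asked above.
    model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(X_pool[:n_train], y_pool[:n_train])
    gap = (model.score(X_pool[:n_train], y_pool[:n_train])
           - model.score(X_val, y_val))
    print(f"n_train={n_train:>6}: train-val accuracy gap = {gap:.3f}")
```

But this only measures overfitting after the fact for one dataset; what I'm asking for is a general rule that predicts the required sample size up front.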
Topic overfitting xgboost neural-network classification
Category Data Science