Why does the performance of DL models keep increasing with the volume of data, while that of ML models plateaus or even decreases?
I have read several articles claiming that, for example, DL handles large amounts of data better than classical ML.
For example, one source says:
The performance of machine learning algorithms decreases as the amount of data increases
Another says that the performance of ML models will plateau.
As far as I understand, more data is generally better: it lets us fit complex models without overfitting, and the algorithms learn the underlying patterns better, yielding more accurate outputs. This should apply to both DL and classical ML.
So I am confused by the statements from the cited sources; I would appreciate it if someone could elaborate on this matter.
Topic: theory
Category: Data Science