Rate of convergence - comparison of supervised ML methods
I am working on a project with datasets that have few labelled examples, and I am looking for references on the rate of convergence of different supervised ML techniques with respect to training-set size — i.e., their learning curves or sample efficiency.
My understanding is that, in general, boosting algorithms and other models available in scikit-learn, such as SVMs, reach a given accuracy with less training data than neural networks do. However, I cannot find any academic papers that explore, empirically or theoretically, how much data different methods need before they reach n% accuracy. I only know this from experience and various blog posts.
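For concreteness, here is a minimal sketch of the kind of empirical comparison I mean, using scikit-learn's `learning_curve` on a synthetic dataset (the dataset, models, hyperparameters, and 90% threshold are just placeholders):

```python
# Sketch: compare how quickly mean CV accuracy grows with training-set
# size for a few off-the-shelf estimators, and report the smallest
# training size at which each one first reaches 90% accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

# Placeholder dataset; in practice this would be the sparse-label data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

models = {
    "SVM (RBF)": SVC(),
    "Gradient boosting": GradientBoostingClassifier(),
    "Neural network (MLP)": MLPClassifier(max_iter=1000),
}

train_sizes = np.linspace(0.05, 1.0, 10)
for name, model in models.items():
    sizes, _, test_scores = learning_curve(
        model, X, y,
        train_sizes=train_sizes,
        cv=5,
        scoring="accuracy",
    )
    mean_acc = test_scores.mean(axis=1)
    # Smallest training size whose mean CV accuracy reaches the threshold.
    reached = sizes[mean_acc >= 0.90]
    threshold = reached[0] if len(reached) else None
    print(f"{name}: first size reaching 90% accuracy: {threshold}")
```

What I am after is academic work that studies this kind of curve systematically across method families, rather than one-off experiments like the above.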
For this question I am ignoring semi-supervised and weakly supervised methods. I am also ignoring transfer learning.
Topic: convergence, supervised-learning, reference-request, machine-learning
Category: Data Science