Would all classification models perform similarly in a theoretical and ideal scenario?
Imagine that we have infinite computational power, an infinite amount of data, and an unlimited amount of time for a model to train. In such a scenario, we want to binary-classify some data.
My question is: would all classification models (we can leave out linear models, because they won't be able to learn non-linear boundaries) perform similarly? In other words, is the set of problems solvable (in principle) by each non-linear classification algorithm the same? You can assume an arbitrary number of layers and neurons in a neural network, an arbitrary number of trees with arbitrary depths in a random forest, and so on.
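For concreteness, here is a small sketch of the kind of comparison I have in mind (using scikit-learn; the model sizes and the XOR-style dataset are arbitrary illustrative choices, not part of the question itself): two very different model families fitting the same non-linear boundary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Toy non-linear problem: XOR-style labels, which no linear model can separate.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)

# Two very different non-linear model families, each given plenty of capacity.
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=5000,
                    random_state=0).fit(X, y)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Both families can represent this boundary given enough capacity.
print(mlp.score(X, y), forest.score(X, y))
```

In this toy case both models fit the boundary; the question is whether that equivalence holds in general, for any boundary, in the infinite-capacity limit.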
I know that this question may not be of practical use in a realistic world like the one we live in, but I want to know whether, in theory, some models face specific obstacles that others don't.
Topic theory model-selection classification machine-learning
Category Data Science