Uniform convergence guarantee on sample complexity
I can't understand why the uniform convergence property guarantees an upper bound, and not a lower bound, on the sample complexity, as stated in [1]:
Corollary 4.4. If a class $H$ has the uniform convergence property with a function $m^{UC}_H$ then the class is agnostically PAC learnable with the sample complexity $$m_H(\epsilon ,\delta) \leq m^{UC}_H(\epsilon/2,\delta)$$ Furthermore, in that case, the $ERM_H$ paradigm is a successful agnostic PAC learner for $H$.
From what I understand: fix $\epsilon, \delta \in (0, 1)$ and a probability distribution $D$ over $Z$. If we have a sample set $S$ with $$|S| \geq m^{UC}_H(\epsilon/2,\delta),$$ then, by the definition of the uniform convergence property, with probability at least $1 - \delta$ the sample $S$ is $\epsilon/2$-representative.
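Here, by "$\epsilon/2$-representative" I mean the definition of an $\epsilon$-representative sample from [1]: $S$ is $\epsilon$-representative (with respect to $Z$, $H$, the loss $\ell$, and $D$) if $$\forall h \in H, \quad |L_S(h) - L_D(h)| \leq \epsilon.$$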
Then, by Lemma 4.3, for $h_S = ERM_H(S)$, $$L_D(h_S) \leq \min_{h \in H} L_D(h) + \epsilon.$$ Therefore, by definition, the class $H$ is agnostic PAC learnable with sample complexity $$m_H(\epsilon, \delta) \geq m^{UC}_H(\epsilon/2,\delta).$$
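(To spell out the step I am relying on: if $S$ is $\epsilon/2$-representative, then for every $h \in H$, $$L_D(h_S) \leq L_S(h_S) + \frac{\epsilon}{2} \leq L_S(h) + \frac{\epsilon}{2} \leq L_D(h) + \frac{\epsilon}{2} + \frac{\epsilon}{2} = L_D(h) + \epsilon,$$ where the middle inequality uses that $h_S$ minimizes the empirical risk $L_S$ over $H$.)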
Is that proof correct? Thank you.
[1] Shai Shalev-Shwartz and Shai Ben-David, *Understanding Machine Learning: From Theory to Algorithms*, p. 32 (https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/)