VC Dimensions in Machine Learning

Hello, I'm learning about VC dimensions in machine learning. Consider the class of classifiers $H$ where $h \in H$ if $h : \mathbb{R} \rightarrow \{0,1\}$, which I believe is simply the set of all binary classifiers (mappings from the real numbers to 0 or 1). With $\mathcal{X} = \mathbb{R}$, is it true that the VC dimension is $\infty$?

My intuition is that $H$ is a rich set of classifiers containing every possible mapping from the real numbers to 0 or 1. So, for instance, the input-output pairs $(9,0)$ and $(9,1)$ are both realised in $H$, as are $(-3.1,0)$ and $(-3.1,1)$. This means that whichever set of points I pick on the real line (no matter how many), an adversary could label them with 0s and 1s any way they want, and I could still find a classifier in $H$ that gives the correct labelling for every configuration, since both the 0 and the 1 labelling are available in $H$ for every point. Since $H$ can therefore shatter an arbitrarily large set of points, can I conclude that $VC_{dim}(H) = \infty$? (I try to make this concrete in the sketch below.)
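Here is a minimal sketch of my shattering intuition, with names I made up just for illustration: for any finite set of points and any labelling an adversary picks, $H$ should contain a classifier reproducing that labelling exactly (here, a lookup table that defaults to 0 off the sample).

```python
from itertools import product

def make_classifier(points, labels):
    """Return an h in H, i.e. a function R -> {0, 1}, matching the labelling."""
    table = dict(zip(points, labels))
    return lambda x: table.get(x, 0)  # value off the sample is arbitrary

points = [9.0, -3.1, 2.5, 7.0]  # any finite set would do
for labelling in product([0, 1], repeat=len(points)):  # all 2^n labellings
    h = make_classifier(points, labelling)
    assert tuple(h(x) for x in points) == labelling  # some h in H realises it
print(f"H shatters this set of {len(points)} points")
```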

Am I thinking about this correctly? Something also tells me I'm wrong, because $(9,0)$ and $(9,1)$ shouldn't both belong to a single $h \in H$: a function maps each real number $x$ to exactly one of 0 and 1, so mapping $9$ to both would contradict the definition of a function. So now I'm debating whether I interpreted the class of classifiers $H$ correctly.
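The way I think the two views might be reconciled, if my second reading is right, is that $(9,0)$ and $(9,1)$ are realised by two different functions in $H$ rather than by one function, e.g.:

```python
# My attempted resolution: (9, 0) and (9, 1) are not two pairs inside one
# function; they are realised by two DIFFERENT members of H.
h1 = lambda x: 0  # a constant classifier in H with h1(9) == 0
h2 = lambda x: 1  # another classifier in H with h2(9) == 1
assert h1(9) == 0 and h2(9) == 1  # no single function maps 9 to both values
```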

Thanks in advance!

