VC Dimension of Certain Hypothesis Classes

I'm still unsure how to get the VC dimension of certain hypothesis classes. For example, suppose $\mathcal{X} = \mathbb{R}$, $\mathcal{Y} = \{0,1\}$, and the hypothesis class is $\mathcal{H} = \{h\}$, where $h(x) = 1$ if $|x| \in S$ with $S = [1,2) \cup [3,4) \cup [5,6) \cup \dots$, and $h(x) = 0$ otherwise. My intuition is this: I pick a set of points $\{x_1, \dots, x_n\} \subseteq \mathcal{X}$, an adversary labels those points arbitrarily, and then I must find a classifier in $\mathcal{H}$ that reproduces the adversary's labelling.

Here it seems the VC dimension is 0, because there are no parameters and only a single classifier in $\mathcal{H}$. For any single point I pick: if $x$ satisfies $|x| \in S$, the adversary labels it 0 and my lone classifier gets it wrong; if $|x| \notin S$, the adversary labels it 1 and my classifier is again wrong. So no single point can be shattered, and the VC dimension is 0.

My question now is: to say the VC dimension is at least $n$, is it enough to find just one set of $n$ points that is shattered? Or do I have to prove that $\mathcal{H}$ shatters every set of $n$ points in the space?
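To make the "single classifier" argument concrete, here is a small sketch (my own illustration, not from any textbook) that brute-force checks shattering: a set of points is shattered iff the hypotheses realize all $2^n$ labellings. With only one hypothesis, only one labelling is ever realized, so no single point is shattered, while the empty set is (hence VC dimension 0):

```python
def h(x):
    """The single classifier: h(x) = 1 iff |x| lies in S = [1,2) U [3,4) U [5,6) U ..."""
    a = abs(x)
    return 1 if (a >= 1 and int(a) % 2 == 1) else 0

def shatters(hypotheses, points):
    """True iff the hypothesis set realizes every one of the 2^n labellings of `points`."""
    realized = {tuple(g(x) for x in points) for g in hypotheses}
    return len(realized) == 2 ** len(points)

H = [h]  # the class contains exactly one classifier

print(shatters(H, []))     # True: the empty set is trivially shattered
print(shatters(H, [1.5]))  # False: a point in S only ever gets label 1
print(shatters(H, [2.5]))  # False: a point outside S only ever gets label 0
```

Since the largest shattered set is the empty set, the VC dimension is 0, matching the argument above.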

What if we modify $\mathcal{H}$ slightly by adding a threshold parameter $t$, so that a classifier outputs 1 only when both $x \in S$ and $x \geq t$ hold, and 0 otherwise? Now I think the VC dimension becomes 1 (we can shatter a single point but not two). I have to make sure to pick a point $x \in S$, so that both labels are achievable. If $x \notin S$, every classifier in $\mathcal{H}$ labels it 0, so the adversary just labels it 1 and none of my classifiers can be correct. But if I pick $x \in S$: when the adversary labels it 1, I choose a classifier whose threshold lies to the left of $x$; when the adversary labels it 0, I choose a threshold to the right of $x$. And no two points $x_1 < x_2$ can be shattered, because the labelling $(1, 0)$ would require $x_1 \geq t$ and $x_2 < t$ simultaneously, which is impossible.
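The same brute-force check can be run on the thresholded class. This sketch assumes the garbled condition above is indeed $x \geq t$, and uses a small finite grid of thresholds to stand in for the continuum (enough to realize all the distinct behaviors on the test points):

```python
def in_S(x):
    """|x| in S = [1,2) U [3,4) U [5,6) U ..."""
    a = abs(x)
    return a >= 1 and int(a) % 2 == 1

def make_h(t):
    """h_t(x) = 1 iff x is in S and x >= t (assumed reading of the threshold condition)."""
    return lambda x: 1 if (in_S(x) and x >= t) else 0

def shatters(hypotheses, points):
    """True iff the hypothesis set realizes every one of the 2^n labellings of `points`."""
    realized = {tuple(g(x) for x in points) for g in hypotheses}
    return len(realized) == 2 ** len(points)

# a few representative thresholds (hypothetical choices for illustration)
H = [make_h(t) for t in [0.0, 1.25, 1.75, 3.5, 10.0]]

print(shatters(H, [1.5]))       # True: a point in S gets both labels as t varies
print(shatters(H, [2.5]))       # False: a point outside S is always labelled 0
print(shatters(H, [1.5, 3.6]))  # False: the labelling (1, 0) is unreachable
```

The two-point failure is exactly the $(1,0)$ labelling: any threshold small enough to label $x_1 = 1.5$ with 1 also labels $x_2 = 3.6$ with 1. So a single well-chosen point is shattered but no pair is, consistent with VC dimension 1.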

Is this how I should be thinking about VC-dimensions? I'm still feeling pretty confused. Thank you!

Topic: vc-theory, machine-learning

Category: Data Science
