PAC Learnability - Notation

The following is from the textbook *Understanding Machine Learning: From Theory to Algorithms*:

Definition of PAC Learnability: A hypothesis class $\mathcal H$ is PAC learnable if there exist a function $m_H : (0, 1)^2 \rightarrow \mathbb{N}$ and a learning algorithm with the following property: For every $\epsilon, \delta \in (0, 1)$, for every distribution $D$ over $X$, and for every labeling function $f : X \rightarrow \{0,1\}$, if the realizable assumption holds with respect to $\mathcal H,D,f$ then when running the learning algorithm on $m \ge m_H(\epsilon,\delta)$ i.i.d. examples generated by $D$ and labeled by $f$, the algorithm returns a hypothesis $h$ such that, with probability of at least $1 - \delta$ (over the choice of the examples), $L_{(D,f)}(h) \le \epsilon$.

1) In the function definition $m_H : (0, 1)^2 \rightarrow \mathbb{N}$; what does a) 0 and 1 in the bracket, b) the integer 2, and c) $\rightarrow \mathbb{N}$ refer to?



The explanations are as follows:

  • $m_H:(0,1)^2 \rightarrow \mathbb N$ is notation similar to $f:\mathbb R^n\rightarrow \mathbb N$, which means the function takes an $n$-dimensional input consisting of real numbers only. In the case of PAC learning, the input is $2$-dimensional and consists of numbers strictly between $0$ and $1$, which stand for the values of $\epsilon$ and $\delta$, respectively.

  • The integer $2$, as explained above, is the dimension of the input vector.

  • $\rightarrow \mathbb N$ means the output is a natural number. In the case of PAC learning, $m_H$ maps each pair $(\epsilon, \delta)$ to a natural number; simply put, $m_H(\epsilon, \delta) = m$, where $m \in \mathbb N$ is the number of i.i.d. training examples sufficient to guarantee, with probability at least $1 - \delta$, that the returned hypothesis has error at most $\epsilon$.
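To make the signature $m_H:(0,1)^2 \rightarrow \mathbb N$ concrete, here is a minimal Python sketch using the standard bound for a *finite* hypothesis class in the realizable setting, $m_H(\epsilon,\delta) = \lceil \log(|\mathcal H|/\delta)/\epsilon \rceil$ (the function name and the choice of $|\mathcal H| = 1000$ below are illustrative, not from the question):

```python
import math

def sample_complexity(epsilon: float, delta: float, hypothesis_class_size: int) -> int:
    """Sample complexity m_H(epsilon, delta) for a finite hypothesis class
    under the realizable assumption: m = ceil(log(|H| / delta) / epsilon).

    The inputs epsilon and delta must lie in the open interval (0, 1),
    matching the domain (0, 1)^2; the output is a natural number,
    matching the codomain N.
    """
    assert 0 < epsilon < 1 and 0 < delta < 1, "epsilon, delta must be in (0, 1)"
    return math.ceil(math.log(hypothesis_class_size / delta) / epsilon)

# Example: |H| = 1000 hypotheses, accuracy 0.1, confidence 0.95 (delta = 0.05)
m = sample_complexity(epsilon=0.1, delta=0.05, hypothesis_class_size=1000)
print(m)  # -> 100 training examples suffice
```

Note how the types mirror the notation: two real arguments from $(0,1)$ go in, and one natural number (a sample size) comes out.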
