Does a linear classifier create a linear decision boundary in the input feature space?

I have read a lot, but I am still not able to get the following concepts:

(1) If a classifier is given, how do we know whether it is a linear or a non-linear classifier? (I am interested in a step-by-step procedure for making this judgement.)

(2) If a classifier is linear, then its decision boundary is linear. (True or False?)

(3) If a decision boundary is linear, then its classifier is linear. (True or False?)

Now, let's suppose we have two features, $X_1$ and $X_2$, the data is linearly separable, and we are using logistic regression as the classifier. In this case logistic regression is a linear classifier, because our classifier makes its decision based on a linear combination of the features, i.e. whether $\beta_0 + \beta_1 X_1 + \beta_2 X_2 \ge \theta$, where $\theta$ is our threshold. The decision boundary is $\beta_0 + \beta_1 X_1 + \beta_2 X_2 = \theta$, which is a hyperplane (a straight line in this two-dimensional case). So, in a gist, it is a linear classifier and the decision boundary is linear too.
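As a minimal sketch of this (my own illustration, not part of the original question), one can fit scikit-learn's `LogisticRegression` on two linearly separable clusters and read the learned line $\beta_0 + \beta_1 X_1 + \beta_2 X_2 = 0$ (the threshold $\theta = 0$ on the log-odds corresponds to probability 0.5) straight from the fitted coefficients:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two well-separated Gaussian clusters in the (X1, X2) plane
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
               rng.normal([3, 3], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)
b0 = clf.intercept_[0]
b1, b2 = clf.coef_[0]

# The decision boundary is the straight line b0 + b1*X1 + b2*X2 = 0:
# the set of points where the predicted probability is exactly 0.5.
print(f"boundary: {b0:.2f} + {b1:.2f}*X1 + {b2:.2f}*X2 = 0")
print("training accuracy:", clf.score(X, y))
```

Because the boundary is a single straight line, the separable clusters are classified perfectly; the cluster locations and seed here are arbitrary choices for the demonstration.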

Let's move to the other side: if the data is not linearly separable, a non-linear decision boundary is needed in the input space, so plain logistic regression is not going to work. Hence, consider polynomial logistic regression, $\beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1^{2} + \beta_4 X_2^{2} + \beta_5 X_1 X_2 = \theta$, which is no longer a linear function of the original features $X_1, X_2$ (although it is still linear in the expanded polynomial features). So is it correct to say that polynomial logistic regression is not a linear classifier in the input space, and that it creates a non-linear decision boundary in the input feature space?
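To make this concrete (again my own sketch, with an assumed circular class pattern), one can compare plain logistic regression against the same model fed degree-2 polynomial features via scikit-learn's `PolynomialFeatures`. The degree-2 model's boundary is a conic curve in the original $(X_1, X_2)$ space, so it can separate a class region that no straight line can:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (400, 2))
# Class 1 lives inside a circle: not linearly separable in (X1, X2)
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.5).astype(int)

linear = LogisticRegression().fit(X, y)
poly = make_pipeline(PolynomialFeatures(degree=2),
                     LogisticRegression()).fit(X, y)

print("plain logistic regression accuracy:  ", linear.score(X, y))
print("degree-2 logistic regression accuracy:", poly.score(X, y))
```

The plain model can do no better than a straight cut through the circle, while the degree-2 model recovers the circular boundary almost exactly, illustrating that the same estimator draws a non-linear boundary in the input space once the features are polynomial.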

Could anyone help me understand the concepts of classifiers and decision boundaries?

Topic linearly-separable machine-learning-model classification machine-learning

Category Data Science
