Your understanding that an RBF kernel can make points linearly separable only if they are arranged in a perfectly circular way is not correct. For your dataset, it is actually very easy to use the RBF kernel to separate the two classes.
For simplicity, let's assume you are using an SVM classifier.
It is true that the RBF kernel itself is spherical (its level sets are circles in 2-D), not elliptical. However, the decision boundary is determined by all of the support vectors together. Unless one of the classes contributes only a single support vector, the decision boundary is usually NOT spherical.
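One way to see this: the decision function of a kernel SVM is a weighted sum of kernel "bumps" centered at the support vectors (this is the standard SVM formulation, not something specific to your data),

$$
f(x) = \sum_{i \in \text{SV}} \alpha_i y_i \, K(x_i, x) + b,
$$

so with several support vectors per class, the level set $f(x) = 0$ is shaped by many overlapping spherical bumps and need not be spherical itself.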
You can easily visualize the decision boundary given by the SVM. For example, I simply modified the code here (using `sklearn.svm`) to use your dataset and set `C=10`; the result looks like:
![decisionboundary](https://i.stack.imgur.com/t7VO8.png)
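For reference, here is a minimal sketch of that kind of visualization, roughly following the linked sklearn example. The toy data below is made up for illustration; in practice you would plug in your own points:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm

# Hypothetical 2-D toy data standing in for your dataset
X = np.array([[1.0, 1.5], [2.0, 1.0], [1.5, 2.5], [3.0, 3.0],
              [4.0, 1.0], [0.5, 3.5], [3.5, 0.5], [2.5, 2.0],
              [1.0, 4.0], [4.0, 4.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

# Fit an SVM with the RBF kernel; C=10 as in the figure above
clf = svm.SVC(kernel='rbf', C=10, gamma='scale')
clf.fit(X, y)

# Evaluate the decision function on a grid to draw the boundary
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 300),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 300))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, Z > 0, alpha=0.3)            # predicted regions
plt.contour(xx, yy, Z, levels=[0], colors='k')    # decision boundary f(x) = 0
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k')
plt.show()
```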
So a more general question is:
How does the "shape" of the kernel affect the shape of the decision boundary?
We can visualize the effect with some self-defined kernels. In the figure below, the upper row plots the shape of each kernel used (the same color denotes the same kernel value), while the lower row shows the SVM decision boundary obtained with that kernel.
![kernels](https://i.stack.imgur.com/uNax8.png)
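To give a concrete idea of how such a self-defined kernel can be plugged in, `sklearn.svm.SVC` accepts a callable as its `kernel` argument. The particular kernel below is just an illustrative "elliptical" variant of the RBF, not the exact ones used in the figure:

```python
import numpy as np
from sklearn import svm

def elliptical_rbf(X, Y, gamma_x=1.0, gamma_y=0.2):
    """An RBF-like kernel that weights the two input dimensions
    differently, so its level sets are ellipses rather than circles."""
    dx = X[:, [0]] - Y[:, [0]].T   # pairwise differences in dimension 0
    dy = X[:, [1]] - Y[:, [1]].T   # pairwise differences in dimension 1
    return np.exp(-(gamma_x * dx**2 + gamma_y * dy**2))

# A callable kernel K(X, Y) returning the Gram matrix drops in
# exactly like the built-in 'rbf' kernel
X = np.array([[1.0, 1.5], [2.0, 1.0], [3.0, 3.0],
              [4.0, 1.0], [0.5, 3.5], [3.5, 0.5]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = svm.SVC(kernel=elliptical_rbf, C=10).fit(X, y)
```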
So we can see that the "shape" of the kernel does affect the shape of the decision boundary, but mainly in its details. The overall shape is still determined by the actual support vectors. BTW, in this case all 17 data points are support vectors.
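If you want to check this on your own fit, the fitted `SVC` exposes the support vectors directly:

```python
# After clf.fit(X, y):
print(clf.n_support_)        # number of support vectors per class
print(clf.support_vectors_)  # the support vectors themselves
```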