If an SVM decision boundary is the perpendicular bisector of the line connecting the support vectors, why iterate for it using a loss function?

Would it not make more sense to use some linear algebra to find the normal vector of the decision boundary directly? Or is that more computationally expensive?

Topic: linear-algebra, svm, machine-learning

Category: Data Science


You have to find the support vectors first! Identifying which training points end up as support vectors is exactly what the optimization does; until it has run, you do not know which points the boundary should be built from, so there is nothing to apply your linear algebra to.

Also, the premise in the title is not quite correct: there can be many support vectors (often more than two), so "the line connecting them" does not necessarily exist.
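To make this concrete, here is a minimal sketch (assuming scikit-learn is available, with a hypothetical toy dataset from make_blobs) that fits a linear SVM and inspects its support vectors: even on cleanly separable 2-D data there are typically more than two of them, and the weight vector w and intercept b of the boundary only become available after the optimization has finished.

```python
# Minimal sketch: fit a linear SVM and look at its support vectors.
# Assumes scikit-learn; the dataset and parameters are illustrative only.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Toy 2-D, roughly linearly separable data (hypothetical example).
X, y = make_blobs(n_samples=40, centers=2, cluster_std=1.2, random_state=0)

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# Usually more than two points end up as support vectors,
# so there is no single "line connecting the support vectors".
print("Support vectors per class:", clf.n_support_)
print("Support vectors:\n", clf.support_vectors_)

# The boundary w.x + b = 0 is only known once the optimization
# has identified the support vectors and their dual weights.
w = clf.coef_[0]
b = clf.intercept_[0]
print("w =", w, " b =", b)
```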
