SVM behavior when regularization parameter equals 0
I read on this Wikipedia page the following about soft-margin SVM:
"The parameter $λ$ determines the trade-off between increasing the margin size and ensuring that the $x_i$ lie on the correct side of the margin. Thus, for sufficiently small values of $λ$, the second term in the loss function will become negligible, hence, it will behave similar to the hard-margin SVM, if the input data are linearly classifiable, but will still learn if a classification rule is viable or not."
I can't understand why, when $\lambda = 0$, the algorithm should behave like a hard-margin SVM. If $\lambda = 0$, it seems to me that the algorithm has no incentive to optimize the margin at all. Doesn't it just reduce to a perceptron in that case, since the algorithm only "cares" about classifying all the training data correctly, without converging to any particular solution regarding the margin?
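To make my confusion concrete, here is a minimal sketch of the experiment I have in mind, using scikit-learn's `SVC` (an assumption on my part: scikit-learn parametrizes regularization as $C$, which plays the role of $1/(2n\lambda)$ up to scaling, so $\lambda \to 0$ corresponds to $C \to \infty$):

```python
# Minimal sketch: does a tiny lambda (huge C) still produce a
# maximum-margin separator, or an arbitrary perceptron-like one?
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two linearly separable Gaussian blobs.
X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=+2.0, size=(50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

for C in [1e-2, 1.0, 1e6]:  # C = 1e6 approximates the lambda -> 0 limit
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_[0]
    margin = 2.0 / np.linalg.norm(w)  # geometric margin width, 2/||w||
    print(f"C={C:g}: train accuracy={clf.score(X, y):.2f}, "
          f"margin width={margin:.3f}")
```

Even in this setup I would expect the solver to return *some* perfect separator for large $C$, but I don't see why it should be the maximum-margin one.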
I'd appreciate a clarification on this issue.
Topic hinge-loss regularization svm machine-learning
Category Data Science