What are practical differences between kernel k-means and spectral clustering?
I've lately been wondering about the kernel k-means and spectral clustering algorithms and the differences between them.
I know that spectral clustering is a broader term and that different settings affect how it works, but one popular variant applies K-means clustering to the spectral embedding of the affinity matrix.
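To make that concrete, here is a rough sketch of the variant I have in mind, in Python with NumPy and scikit-learn. The RBF affinity and the symmetric Laplacian normalisation are just illustrative choices on my part, not the only way to set it up:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import KMeans

def spectral_clustering(X, n_clusters, gamma=1.0):
    # Affinity matrix (RBF kernel is just one common choice).
    A = rbf_kernel(X, gamma=gamma)
    # Symmetric normalised Laplacian: L = I - D^{-1/2} A D^{-1/2}.
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt
    # Spectral embedding: eigenvectors for the k smallest eigenvalues of L.
    eigvals, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, :n_clusters]
    # Row-normalise (Ng-Jordan-Weiss style) and run ordinary k-means on the embedding.
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)
```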
Kernel K-means, on the other hand, works with the affinity (kernel) matrix directly, computing distances to cluster centroids from the kernel entries. One immediate, theoretical difference is therefore that it omits the spectral embedding step, i.e. it does not look for a lower-dimensional representation of the data via eigenvectors.
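For comparison, this is roughly how I understand kernel K-means: the distance from each point to each cluster is computed straight from the kernel matrix, with no eigendecomposition anywhere. Again just a sketch of the textbook Lloyd-style update, not a reference implementation:

```python
import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=100, seed=None):
    """Lloyd-style kernel k-means operating directly on the kernel/affinity matrix K."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(n_clusters, size=n)  # random initial assignment
    for _ in range(n_iter):
        dist = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            mask = labels == c
            n_c = mask.sum()
            if n_c == 0:
                dist[:, c] = np.inf  # empty cluster: never assign to it
                continue
            # ||phi(x_i) - mu_c||^2 = K_ii - 2/|c| * sum_{j in c} K_ij
            #                         + 1/|c|^2 * sum_{j,l in c} K_jl
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / n_c
                          + K[np.ix_(mask, mask)].sum() / n_c**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```

So something like `kernel_kmeans(rbf_kernel(X), n_clusters=3)` would consume exactly the same affinity matrix that the spectral variant above starts from, just without the embedding step.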
I think skipping the eigendecomposition could be beneficial with large samples (many observations to cluster), but does it provide any benefit with small sample sizes, say 10 to 20 observations?
What are the other practical implications of using one of these algorithms versus the other (e.g. which one is more sensitive to changes in the affinity matrix)?
Topic: spectral-clustering, kernel, k-means, clustering
Category: Data Science