PCA vs. Kernel PCA: which one should I use for high-dimensional data?
I have a dataset with a large number of features (>>3). For computational reasons, I would like to apply dimensionality reduction. At this point I could use different techniques:

- standard PCA
- Kernel PCA
- LLE
- ...

My problem is choosing the right approach: the number of features is so high that I cannot know beforehand what the distribution of the points looks like. I could inspect it visually only for 3D (or lower-dimensional) data, but in my case I have far more dimensions than that.
I know, for example, that if the set of points were linearly separable I could use standard PCA, whereas if it formed something like concentric circles, Kernel PCA would be the better option.
So how can I know beforehand which dimensionality reduction technique to use for high-dimensional data?
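To illustrate the contrast I mean, here is a minimal sketch (assuming scikit-learn is available; the toy dataset, the rbf kernel, and the `gamma=10` value are illustrative choices, not a general recipe): project a concentric-circles dataset onto a single component with each method, then check how linearly separable each projection is with a simple classifier.

```python
# Sketch: PCA vs. Kernel PCA on toy concentric circles.
# Dataset parameters and gamma are illustrative assumptions.
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA
from sklearn.linear_model import LogisticRegression

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Reduce to one dimension with each method.
Z_pca = PCA(n_components=1).fit_transform(X)
Z_kpca = KernelPCA(n_components=1, kernel="rbf", gamma=10).fit_transform(X)

# Score how linearly separable each 1-D projection is.
acc_pca = LogisticRegression().fit(Z_pca, y).score(Z_pca, y)
acc_kpca = LogisticRegression().fit(Z_kpca, y).score(Z_kpca, y)

print(f"PCA accuracy:        {acc_pca:.2f}")
print(f"Kernel PCA accuracy: {acc_kpca:.2f}")
# On this kind of data the rbf Kernel PCA projection should be far
# more separable than the linear PCA one, since linear PCA cannot
# "unroll" the radial structure.
```

In practice, of course, one cannot generate such a toy picture of real high-dimensional data, which is exactly the difficulty in the question above.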
Topic linearly-separable kernel pca feature-selection dimensionality-reduction
Category Data Science