Bayesian Linear Regression Using the Kernel Trick vs. Constructing Features Using Kernels to Prototypes

How different is Bayesian linear regression done via the GP approach (the kernel trick) from Bayesian linear regression on explicit features built by evaluating a kernel against a set of prototype points? As far as I know, this very basic question is not well documented. Exact GPs have the disadvantage that they are expensive: inference cost grows cubically with the number of samples, since it requires solving against the n x n kernel matrix, whereas a fixed set of m prototype features keeps the model parametric.
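To make the comparison concrete, here is a minimal sketch in plain NumPy contrasting the two approaches on a toy problem. Everything in it is assumed for illustration: a squared-exponential kernel with unit lengthscale, synthetic 1-D data, a unit Gaussian prior on the weights, and an evenly spaced grid of prototype points.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Toy 1-D data (assumed for illustration)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
Xs = np.linspace(-3, 3, 100)[:, None]   # test inputs
noise = 0.1 ** 2                        # assumed known noise variance

# --- Approach 1: exact GP regression (kernel trick), O(n^3) ---
K = rbf(X, X)
alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)
gp_mean = rbf(Xs, X) @ alpha

# --- Approach 2: Bayesian linear regression on kernel features
#     phi(x) = k(x, z_j) for m prototype points z_j, O(n m^2) ---
Z = np.linspace(-3, 3, 15)[:, None]         # prototypes (here just a grid)
Phi = rbf(X, Z)                             # n x m design matrix
A = Phi.T @ Phi / noise + np.eye(len(Z))    # posterior precision, unit prior
w_mean = np.linalg.solve(A, Phi.T @ y / noise)
blr_mean = rbf(Xs, Z) @ w_mean

# The two posterior means generally differ
print(np.max(np.abs(gp_mean - blr_mean)))
```

Note that approach 2 is itself a (degenerate) GP under the weight-space view: with a unit prior its effective kernel is sum_j k(x, z_j) k(z_j, x'), which is not the original kernel, so the two posteriors coincide only in special cases.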

I tried researching this topic but haven't found any relevant paper discussing it. Any paper that discusses this, or at least covers the drawbacks of the prototype-feature approach, or any thoughts on the topic, would be great!

Topic: kernel, bayesian, regression, feature-selection, machine-learning

Category: Data Science
