Learning the uncertainty of an ML algorithm

I have a regression GAM (Generalized Additive Model) and I want to learn its uncertainty (the variance of my residuals or predictions as a function of my inputs).

I have already used a Bayesian approach to turn my GAM into a Gaussian process so I can construct a covariance matrix, but this approach is not scalable due to the high dimensionality of my problem.

I am trying to find an approach that treats the current model as a black box and observes only the inputs and the residuals. The closest thing I found is quantile regression, but I was wondering whether there is a deep-learning approach that learns the variance from the input.
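For reference, here is a minimal sketch of the quantile-regression route on residuals, using scikit-learn's `GradientBoostingRegressor` with the pinball loss (the synthetic data and the 16%/84% quantile choice are illustrative assumptions, not part of my actual setup):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: residuals of a black-box model, with noise that grows with x.
n = 2000
X = rng.uniform(0.0, 1.0, size=(n, 1))
resid = rng.normal(0.0, 0.2 + X[:, 0])  # heteroscedastic residuals

# One model per quantile of the conditional residual distribution.
q_lo = GradientBoostingRegressor(loss="quantile", alpha=0.16).fit(X, resid)
q_hi = GradientBoostingRegressor(loss="quantile", alpha=0.84).fit(X, resid)

# The 16-84% band width approximates 2 standard deviations under a Gaussian noise model.
Xq = np.array([[0.1], [0.9]])
lo, hi = q_lo.predict(Xq), q_hi.predict(Xq)
width = hi - lo
```

This gives an interval width per input without touching the underlying model, which is the black-box property I am after.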

Most of the deep-learning approaches I found estimate the mean and the variance simultaneously (deep Bayes, MVE, MC dropout, ...).

A naive approach I am currently implementing is a neural network that learns the variance as a function of my input by minimizing the negative log-likelihood of my residuals under a zero-mean Gaussian, but I haven't found any paper or resources on this approach.
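To make the idea concrete, here is a toy version of that objective with a linear model for log-variance standing in for the neural network (the synthetic data and the linear parametrization are illustrative assumptions; in practice the two parameters would be the weights of the network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the residuals of the fitted GAM: zero mean,
# standard deviation growing with the input x.
n = 2000
x = rng.uniform(0.0, 1.0, size=n)
r = rng.normal(0.0, 0.2 + x)  # heteroscedastic residuals

# Model: log sigma^2(x) = w * x + b.  Per-point NLL of a zero-mean Gaussian:
#   L = 0.5 * (log sigma^2 + r^2 / sigma^2)   (constants dropped)
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    log_var = w * x + b
    # dL/d(log_var) = 0.5 * (1 - r^2 * exp(-log_var))
    grad = 0.5 * (1.0 - r**2 * np.exp(-log_var))
    w -= lr * np.mean(grad * x)
    b -= lr * np.mean(grad)

# Predicted conditional standard deviation at a query input.
def pred_std(xq):
    return np.exp(0.5 * (w * xq + b))
```

The exponential parametrization keeps the predicted variance positive, and the fitted `pred_std` should recover the increasing noise level of the synthetic data.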

Do you have any ideas on the problem, any resources, or an opinion on my current approach?

Thanks

Topic bayesian-networks deep-learning python

Category Data Science

