Does a deployed model have epistemic or aleatoric uncertainty?

Aleatoric uncertainty refers to randomness in the outcome of an experiment that is due to inherently random effects.

Epistemic uncertainty refers to the ignorance of the decision-maker, due, for example, to a lack of data.

Aleatoric uncertainty is irreducible, while epistemic uncertainty can be mitigated (e.g., by adding more data).

When we deploy an ML model in production, can we distinguish between epistemic and aleatoric uncertainty?

Tags: uncertainty, statistics, machine-learning

Category: Data Science


In the bias-variance decomposition, aleatoric uncertainty is represented by the irreducible error (inherently and irreducibly random). The remaining terms represent model mismatch due to imprecise knowledge of the data-generating process.
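As a reminder, the standard decomposition of the expected squared error of an estimator $\hat{f}$ (a textbook formula, written here for reference) is:

$$
\mathbb{E}\left[(y - \hat{f}(x))^2\right] = \underbrace{\left(\mathbb{E}[\hat{f}(x)] - f(x)\right)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}\left[\left(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\right)^2\right]}_{\text{variance}} + \underbrace{\sigma^2}_{\text{irreducible error}}
$$

Here $\sigma^2$ is the variance of the noise in $y$ around the true function $f(x)$: this is the aleatoric part, while bias and variance reflect epistemic shortcomings of the model and the training data.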

One way to quantify aleatoric uncertainty is as the average predictive uncertainty over several models trained on the same problem: the uncertainty due to model mismatch tends to average out across models, leaving only the irreducible uncertainty.
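This averaging idea can be sketched with an ensemble-style decomposition for classifiers (the function names and the toy numbers below are illustrative, not from the original post): the entropy of the ensemble-averaged prediction is the total uncertainty, the average of the per-model entropies approximates the aleatoric part, and their difference (the models' disagreement) approximates the epistemic part.

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy of a categorical distribution, in nats.
    return -np.sum(p * np.log(np.clip(p, 1e-12, 1.0)), axis=axis)

def decompose_uncertainty(ensemble_probs):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    ensemble_probs: array of shape (n_models, n_samples, n_classes),
    each row a model's predicted class probabilities.
    Returns (total, aleatoric, epistemic) per sample.
    """
    mean_probs = ensemble_probs.mean(axis=0)         # ensemble-averaged prediction
    total = entropy(mean_probs)                      # total predictive uncertainty
    aleatoric = entropy(ensemble_probs).mean(axis=0) # expected per-model entropy
    epistemic = total - aleatoric                    # disagreement between models
    return total, aleatoric, epistemic

# Toy example: three models, two inputs, binary classification.
# Input 0: all models agree on a 50/50 prediction -> purely aleatoric.
# Input 1: models disagree -> a large epistemic component.
probs = np.array([
    [[0.5, 0.5], [0.9, 0.1]],
    [[0.5, 0.5], [0.1, 0.9]],
    [[0.5, 0.5], [0.5, 0.5]],
])
total, aleatoric, epistemic = decompose_uncertainty(probs)
```

For input 0 the epistemic term is essentially zero (the models all make the same prediction), while for input 1 it is clearly positive, which is exactly the separation the answer above describes: disagreement across models signals reducible, epistemic uncertainty.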
