Propagating -infs in PyTorch and outliers in general

I am using a loss which requires sampling from probability distributions to do Monte Carlo integration. Sometimes legitimate training data can produce -inf/NaN. This is expected behaviour, since a data point may be far enough from the model that its probability is too small to represent in float32. Needless to say, switching to float64 etc. is not a solution.

The problem is that a -inf turns into NaN when the gradient is calculated through logsumexp, sinh, and MultivariateNormal.logpdf, and the NaN then propagates all the way back to the loss, resulting in a crash.
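For reference, a minimal sketch of the failure mode with logsumexp: its backward evaluates exp(x - lse), which is well defined when only some inputs are -inf, but when every input has underflowed the result is also -inf and the backward computes exp(-inf - (-inf)) = exp(nan) = nan:

```python
import torch

# One -inf among finite inputs is harmless: the backward of logsumexp
# is exp(x - lse), and exp(-inf - finite) = 0.
x = torch.tensor([0.0, float("-inf")], requires_grad=True)
torch.logsumexp(x, dim=0).backward()
print(x.grad)  # tensor([1., 0.])

# But if *all* inputs are -inf, lse is -inf too, and the backward
# evaluates exp(-inf - (-inf)) = exp(nan) = nan.
y = torch.tensor([float("-inf"), float("-inf")], requires_grad=True)
torch.logsumexp(y, dim=0).backward()
print(y.grad)  # tensor([nan, nan])
```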

How do I best solve this issue? I can't remove these data at the start because this problem is model dependent.

The loss is essentially a sum over a data-length ELBO array at the end. I've tried replacing bad values resulting from outlier data with just a very large negative log-likelihood: elbo[~torch.isfinite(elbo)] = torch.scalar_tensor(-1e20), but this still produces non-finite gradients.
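Overwriting the values after the fact doesn't help, because the chain rule still multiplies the (zero) gradient routed to the overwritten entries by the nan local gradient of the op that produced them, and 0 * nan = nan. One workaround is a two-pass scheme: run the forward once under torch.no_grad() to find the offending points, then build the graph only over the survivors, so -inf never enters the backward pass at all. A sketch, where compute_elbo and the penalty constant are hypothetical stand-ins (here a naively computed log-density that underflows for outliers):

```python
import torch

mu = torch.tensor(0.0)

def compute_elbo(x):
    # Toy stand-in for the real per-datum ELBO: a log-density computed
    # naively, so it underflows to -inf for points far from mu.
    return torch.log(torch.exp(-0.5 * (x - mu) ** 2))

x = torch.tensor([0.5, 30.0], requires_grad=True)  # second point is an outlier

# Pass 1 (no grad): find the points whose elbo is non-finite.
with torch.no_grad():
    keep = torch.isfinite(compute_elbo(x))

# Pass 2: build the graph only over the surviving points, so no -inf/nan
# ever enters the backward computation. Dropped points contribute a fixed
# (hypothetical) penalty so the loss still accounts for them without
# breaking the gradient.
penalty = -1e6
elbo = compute_elbo(x[keep])
loss = -(elbo.sum() + penalty * (~keep).sum())
loss.backward()
print(x.grad)  # finite everywhere; zero at the dropped point
```

The mask is recomputed each step, so which points are dropped can change as the model moves, which matches the "model dependent" constraint above.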

Using torch.autograd.set_detect_anomaly(True) just highlights what I already know: some of the operations that contribute to the loss produce NaN/inf.
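If filtering the data up front isn't viable, another option is to sanitize gradients with a tensor hook on the tensor feeding the unstable op, so any nan/inf its backward produces is zeroed before propagating further back. A sketch (torch.nan_to_num requires PyTorch >= 1.8); note this silently zeroes those points' contribution rather than modelling them, so it trades correctness for robustness:

```python
import torch

x = torch.tensor([float("-inf"), float("-inf")], requires_grad=True)
# The hook runs on the gradient flowing *into* x during backward and
# replaces nan/inf entries with 0 before they are accumulated into
# x.grad (or passed on to earlier layers).
x.register_hook(lambda g: torch.nan_to_num(g, nan=0.0, posinf=0.0, neginf=0.0))

loss = torch.logsumexp(x, dim=0)  # -inf, so its backward produces nan
loss.backward()
print(x.grad)  # tensor([0., 0.])
```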

What is the best way to take these data into account?

Thanks

Topic: gradient, pytorch, outlier

Category: Data Science
