Regressing over tiny floats with Neural Networks

I am trying to regress over very small floats, roughly in the range [9e-3, 1e-2]. Most of the targets fall within this range.

Using plain MSE (Mean Squared Error) loss and backpropagating against it does not lead to very good results. The network usually gets the answer in the right neighbourhood but fails to achieve even decent precision. This suggests that MSE penalizes such small differences too weakly.
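For concreteness, here is a minimal sketch of the effect I mean (NumPy; the values are illustrative, not my actual data):

```python
import numpy as np

# Targets of the magnitude described above, roughly [9e-3, 1e-2]
y_true = np.array([9.0e-3, 9.5e-3, 1.0e-2])
# A prediction that is "in the right neighbourhood" but imprecise
y_pred = np.array([8.0e-3, 1.1e-2, 9.0e-3])

mse = np.mean((y_pred - y_true) ** 2)
# A 10-20% relative error still gives an MSE on the order of 1e-6,
# so the raw loss (and hence its gradient) is tiny.
print(mse)

# Rescaling the targets to unit scale (e.g. multiplying by 100)
# turns the same relative error into a loss of ordinary magnitude.
scale = 100.0
mse_scaled = np.mean((scale * y_pred - scale * y_true) ** 2)
print(mse_scaled)  # equals scale**2 * mse
```

Scaling the targets like this helps somewhat, but I am not sure it is the standard approach.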

I have looked through some articles and results published by others, but none of them seem to address this problem directly.

What are the common techniques for training networks on such small-magnitude data? What loss and activation functions are usually used in this setting?

Topic: methodology, deep-learning, machine-learning

Category: Data Science
