How often can algorithms improve on the quality of input data? In other words, can they extrapolate information?

I've sometimes thought it might be possible to add data points through well-informed interpolation or random-draw procedures on top of real-world data, but I wonder how common or reasonable this is in general. One could argue that the input data is bandwidth-limited, so every downstream process is necessarily confined to that resolution and there is no way to gain genuinely new samples.

Topic: interpolation

Category: Data Science


There are a variety of algorithms that can create additional input data.

Two common examples are data augmentation and synthetic minority oversampling (SMOTE).

The efficacy of these techniques depends on the domain and the data.
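As a minimal sketch of the oversampling idea, the snippet below generates synthetic minority-class points SMOTE-style: each new point is a random interpolation between an existing sample and one of its k nearest minority-class neighbors. The function name and parameters are illustrative, not from any particular library; note that the new points live inside the convex hull of the originals, so no information beyond the input "resolution" is truly added.

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_like(X, k=3, n_new=5):
    """Create synthetic samples by interpolating between a random
    sample and one of its k nearest neighbors (SMOTE-style sketch)."""
    X = np.asarray(X, dtype=float)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        # Euclidean distances from sample i to every sample
        d = np.linalg.norm(X - X[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbors)
        lam = rng.random()                   # interpolation weight in [0, 1]
        new.append(X[i] + lam * (X[j] - X[i]))
    return np.array(new)

# Toy minority class: a handful of points near (1, 1)
minority = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1], [1.1, 1.2]])
synthetic = smote_like(minority, k=2, n_new=3)
print(synthetic.shape)  # (3, 2)
```

Because each synthetic point is a convex combination of two real points, the output always stays within the bounding box of the original data, which is exactly the "bandwidth-limited" caveat raised in the question.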
