Using mathematical derivatives of the input data to augment training data

I'm thinking about how to design a basic feedforward neural network that can predict future data points given past data points. I'm very new to neural network design, so I'm wondering whether there is a best practice for extracting as much information as possible from the input data. Would it make sense to provide the network with mathematically computed derivatives of the input (e.g., finite differences of the time series), or are feedforward networks capable of learning derivatives internally? And if they aren't, would something like an RNN be able to do this?

Basically, what I'm asking is:

Is it good practice to provide a neural network with derivatives of the input data, so the network has more information to work with, or does that just complicate training because the network could learn these derivatives itself?
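
For concreteness, here's a minimal sketch of what I mean by "providing derivatives" (using NumPy; the noisy sine series, the window size, and the `make_windows` helper are all made up for illustration):

```python
import numpy as np

# Toy series: a noisy sine wave standing in for the real data (placeholder).
t = np.linspace(0, 10 * np.pi, 1000)
series = np.sin(t) + 0.05 * np.random.randn(t.size)

# Numerical derivatives via central differences; np.gradient keeps the
# output the same length as the input, unlike np.diff.
d1 = np.gradient(series)  # approximate slope
d2 = np.gradient(d1)      # approximate curvature

window = 16  # how many past points the network sees (arbitrary choice)

def make_windows(*channels, window=window):
    """Stack sliding windows of each channel into one feature matrix.

    Each row holds `window` past values of every channel; the target
    is the next raw value of the series.
    """
    n = channels[0].size - window
    X = np.column_stack([
        np.stack([c[i:i + window] for i in range(n)])
        for c in channels
    ])
    y = channels[0][window:]
    return X, y

# Baseline input: raw values only.
X_raw, y = make_windows(series)

# Augmented input: raw values plus first and second derivatives.
X_aug, _ = make_windows(series, d1, d2)

print(X_raw.shape, X_aug.shape)  # (984, 16) vs (984, 48)
```

The augmented matrix would then be fed to the feedforward network in place of the raw one. My question is whether those extra derivative columns actually help, or whether the network could learn the equivalent of `np.gradient` on its own.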

Topic: pretraining, machine-learning-model, training

Category: Data Science
