Are fewer training epochs better in the following scenario?

I have a scenario in which the training data is generated in response to what the neural-network-backed actor is doing. In essence, the environment gives the network feedback on every mistake as it makes it, and no matter how many mistakes it makes, more feedback keeps arriving. Given that this is statistical grouping in essence, would it not make more sense to back-propagate fewer times per piece of feedback? Wouldn't that, over a very long run, produce more fine-grained results? Or is there an aspect of gradient descent I'm missing that requires 100+ passes over the same training data, even when the dataset is changing constantly or when the values fall easily into statistical groups with multiple variables and conditions?
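To make it concrete, the loop I'm describing looks roughly like the sketch below (PyTorch is used purely for illustration; the model, `get_feedback`, the loss, and all the sizes are placeholders for my actual setup):

```python
import torch
import torch.nn as nn

# placeholder actor network and optimizer
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

EPOCHS_PER_FEEDBACK = 1   # the value I'm asking about: 1 vs. 100+

def get_feedback():
    # stand-in for the environment reacting to the actor's latest mistake
    x = torch.randn(1, 8)
    y = torch.randn(1, 2)
    return x, y

for step in range(10_000):              # feedback keeps arriving indefinitely
    x, y = get_feedback()
    for _ in range(EPOCHS_PER_FEEDBACK): # how many times to reuse one sample
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                  # one back-propagation pass per repetition
        optimizer.step()
```

The question is whether `EPOCHS_PER_FEEDBACK = 1` (a single update per piece of feedback) is preferable to repeating each sample many times, given that fresh data never runs out.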

Topics: epochs, training, neural-network

Category: Data Science
