Should I reshuffle the training set when benchmarking neural networks?

I'm trying to set up a fair benchmark across several RNN models, where each model is trained until convergence with a fixed random seed. Because the task is very costly, I can only run each model once and then compare their performance.

By reshuffling the training set, I would change the sequence of mini-batches (and hence the sequence of stochastic gradients) every epoch, which is generally thought to help models converge to minima that generalize better. But assuming my random seed is fixed and the training time per epoch is not equal across models, every model would end up seeing a different sequence of permutations of the training set. I'm worried that this discrepancy would affect the fairness of the benchmark.

Without reshuffling the training set, however, I cannot rule out that my fixed seed (i.e., one fixed data ordering) biases the results toward some models over others.

My question is: given that I can run each model only once and want the results to be comparably fair, is reshuffling the training set every epoch a good idea?
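One way to get the benefits of per-epoch reshuffling while keeping every model's data order identical is to derive the shuffle seed from the fixed base seed plus the epoch index, rather than drawing from a shared RNG stream that each model consumes at a different rate. A minimal sketch (the function name and seeds are hypothetical, using NumPy's `default_rng`, which accepts a sequence of integers as its seed):

```python
import numpy as np

def epoch_permutation(n_samples, base_seed, epoch):
    # Seed a fresh generator from (base_seed, epoch) so that every
    # model sees exactly the same permutation at epoch k, no matter
    # how many epochs it runs or how it otherwise consumes the RNG.
    rng = np.random.default_rng([base_seed, epoch])
    return rng.permutation(n_samples)

# Same epoch -> same order for every model; different epochs -> reshuffled.
perm_a = epoch_permutation(10, base_seed=42, epoch=0)
perm_b = epoch_permutation(10, base_seed=42, epoch=0)
perm_c = epoch_permutation(10, base_seed=42, epoch=1)
```

With this scheme, each model still sees a freshly shuffled training set every epoch, but the permutation at any given epoch is identical across models, so unequal per-epoch training times no longer cause the orderings to diverge.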

Topic: training, data, research, deep-learning, neural-network

Category: Data Science
