Is it right to argue that a testing dataset is not needed when evaluating the performance of a GAN?
For my final degree project I have been working on a GAN to solve a certain image enhancement task. The problem I'm currently working on has extremely limited data available, due to the physical constraints of taking such pictures. For this reason I used the paired dataset for training, and an unpaired dataset to check whether the images generated from the unpaired inputs have the same distribution as the ground truth in the paired one, which turned out to be true.
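For reference, this kind of distribution-level comparison is commonly quantified with the Fréchet Inception Distance (FID). Below is a minimal sketch of the underlying computation, assuming you already have feature embeddings for both sets of images (the standard FID uses Inception-v3 pooling features; any fixed embedding works for the math):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between two sets of feature vectors.

    feats_real, feats_fake: (n_samples, n_features) arrays of image
    embeddings (e.g. Inception-v3 pool features, as in standard FID).
    """
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_f = np.cov(feats_fake, rowvar=False)

    # Matrix square root of the product of the covariances
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # numerical noise can add tiny imaginary parts

    diff = mu_r - mu_f
    return diff @ diff + np.trace(sigma_r + sigma_f - 2.0 * covmean)
```

A lower distance means the generated distribution is closer to the real one; note this says nothing about per-image fidelity to a specific ground truth.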
At this point, though, I would like to provide a comparison metric between the ground truth and the generator's prediction, but I'm limited by the fact that I have used the only useful paired dataset for training. Now, to my understanding the generator's learning is completely unsupervised: its input is just random noise, from which it learns to model the data distribution of the desired domain. Would it be sensible to argue that a train/test split is not needed to evaluate the performance of a GAN?
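For concreteness, the paired comparison I have in mind is a full-reference metric such as PSNR or SSIM, which is exactly what requires held-out ground-truth/prediction pairs. A minimal sketch of what that evaluation would look like (assuming scikit-image ≥ 0.19 and images as float arrays scaled to [0, 1]):

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def paired_scores(ground_truth, prediction):
    """Full-reference scores between one ground-truth image and the
    generator's output; both are (H, W, 3) float arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(ground_truth, prediction, data_range=1.0)
    ssim = structural_similarity(ground_truth, prediction,
                                 data_range=1.0, channel_axis=-1)
    return psnr, ssim
```

These scores are only meaningful on pairs the generator never saw during training, which is the crux of my question.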
Topic: gan, computer-vision, deep-learning
Category: Data Science