Why is the Mixup method called a data augmentation technique?
I am a bit confused about the Mixup data augmentation technique, so let me explain the problem briefly:
What is Mixup
For further detail, you may refer to the original paper ("mixup: Beyond Empirical Risk Minimization", Zhang et al., arXiv:1710.09412).
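For reference, my understanding of the core operation from the paper is a convex combination of two random training examples $(x_i, y_i)$ and $(x_j, y_j)$:

$$\tilde{x} = \lambda x_i + (1 - \lambda)\, x_j, \qquad \tilde{y} = \lambda y_i + (1 - \lambda)\, y_j, \qquad \lambda \sim \mathrm{Beta}(\alpha, \alpha)$$

where $\alpha$ is a hyperparameter and the labels $y$ are one-hot encoded.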
We double or quadruple the data using classic augmentation techniques (e.g., jittering, scaling, magnitude warping). For instance, if the original data set contained 4000 samples, there would be 8000 samples in the data set after doubling, as in the sketch below.
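Here is a minimal sketch of what I mean by classic augmentation (assuming NumPy time-series data; `jitter` and the array shapes are my own illustrative choices, not from any particular library):

```python
import numpy as np

def jitter(x, sigma=0.03):
    """Classic jittering: add small Gaussian noise to every time step."""
    return x + np.random.normal(loc=0.0, scale=sigma, size=x.shape)

# Hypothetical data set: 4000 univariate series, 128 time steps each.
X = np.random.randn(4000, 128)
y = np.random.randint(0, 2, size=4000)

# Appending one jittered copy of every sample doubles the data set: 4000 -> 8000.
X_aug = np.concatenate([X, jitter(X)], axis=0)
y_aug = np.concatenate([y, y], axis=0)
print(X_aug.shape)  # (8000, 128)
```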
On the other hand, according to my understanding, in Mixup data augmentation we do not add new data; rather, we mix pairs of samples and their labels and use these mixed samples for training to produce a more regularized model (see the sketch below). Am I correct? If so, why is the Mixup method referred to as data augmentation, given that we only mix existing samples and do not artificially increase the size of the data set?
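Here is a minimal sketch of how I understand Mixup, applied at the batch level (`mixup_batch` is a hypothetical helper of my own, not the paper's code):

```python
import numpy as np

def mixup_batch(X, y_onehot, alpha=0.2):
    """Mix each sample with a randomly chosen partner from the same batch."""
    lam = np.random.beta(alpha, alpha)           # mixing coefficient for this batch
    idx = np.random.permutation(len(X))          # random partner for every sample
    X_mix = lam * X + (1.0 - lam) * X[idx]       # convex combination of inputs
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[idx]  # ...and of labels
    return X_mix, y_mix
```

Note that `X_mix` has the same shape as `X`, so the number of samples the model sees per epoch does not change; only the samples themselves are different on every call. This is exactly what confuses me about calling it "augmentation".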
Topic data-augmentation deep-learning neural-network time-series dataset
Category Data Science