Should I normalise images pixel-wise for a pretrained VGG16 model?
My goal is to use a pretrained VGG16 (excluding the top layer) to compute feature vectors. I want to compute the embedding per image, one at a time (no training involved), rather than feeding batches to the network, since the network is only used to compute embeddings, not for classification. In batch training I understand the importance of batch normalisation, but for a single image should I normalise it pixel-wise? All I can think of is that it might reduce the influence of illumination in an image. Am I missing something?
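For context, here is a minimal sketch of the kind of per-image feature extraction I mean, assuming TensorFlow/Keras (the function name `embed` is mine; `preprocess_input` is Keras's standard VGG16 preprocessing, which subtracts the ImageNet channel means the network was trained with):

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# Load VGG16 without the top classifier; pooling='avg' collapses the last
# conv block's feature maps into a single 512-dimensional vector.
model = VGG16(weights="imagenet", include_top=False, pooling="avg")

def embed(image):
    """Compute a feature vector for a single HxWx3 image (one-image batch)."""
    x = np.asarray(image, dtype=np.float32)[np.newaxis]  # add batch dim of 1
    x = preprocess_input(x)  # channel-mean subtraction, as in VGG16 training
    return model.predict(x, verbose=0)[0]

vec = embed(np.random.randint(0, 256, size=(224, 224, 3)))
print(vec.shape)  # (512,)
```
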
Topic one-shot-learning image-preprocessing feature-extraction
Category Data Science