Should I normalise images pixel-wise for a pretrained VGG16 model?

My goal is to use a pretrained VGG16, excluding the top layer, to compute feature vectors. I want to compute the embedding per image, one by one (no training involved), rather than feeding batches to the network, since the network is only used to compute embeddings, not for classification. In batch training I understand the importance of batch normalisation, but for a single image, should I normalise it pixel-wise? All I can think of is that it might reduce the influence of illumination in an image. Am I missing something?

Topic one-shot-learning image-preprocessing feature-extraction

Category Data Science


It all depends on how the original pretrained model was trained. If it was trained with normalized data, you should also normalize your data before giving it to the model. Otherwise, the input distribution will not match what the network saw during training, and you will probably get poor results.

VGG16 pretrained weights expect normalized data as input, so the answer is yes, you should normalize the data. The exact preprocessing depends on the framework the weights come from: for example, torchvision's VGG16 expects images scaled to [0, 1] and then normalized per channel (subtracting the mean and dividing by the standard deviation), while Keras's `preprocess_input` for VGG16 subtracts the per-channel means only.
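As a minimal sketch, assuming torchvision-style preprocessing with the commonly used ImageNet channel statistics (the function name `normalize` is just illustrative):

```python
import numpy as np

# ImageNet per-channel statistics commonly used with torchvision pretrained weights
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(image):
    """Normalize an HxWx3 RGB image with values already scaled to [0, 1].

    Subtracts the per-channel mean and divides by the per-channel
    standard deviation, broadcasting over the spatial dimensions.
    """
    return (image - IMAGENET_MEAN) / IMAGENET_STD
```

Frameworks usually ship this as a ready-made transform (e.g. `torchvision.transforms.Normalize`), so in practice you would reuse the library's own preprocessing rather than hand-rolling it.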

If your image domain is similar to ImageNet, you may use the same mean and standard deviation statistics. If your images are from a very different domain, you should compute your own statistics (source).
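Computing your own statistics amounts to averaging each channel over all pixels of all images in your dataset. A sketch, assuming the images fit in memory as a single NumPy array (for large datasets you would accumulate running sums instead):

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std over a stack of images.

    `images` is an (N, H, W, 3) array with values in [0, 1]; axes 0-2
    (image index and spatial dimensions) are reduced, leaving one mean
    and one std per colour channel.
    """
    mean = images.mean(axis=(0, 1, 2))
    std = images.std(axis=(0, 1, 2))
    return mean, std
```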
