Input image dimensions for PyTorch VGG16

I have implemented the code from this tutorial on image feature extraction:

https://towardsdatascience.com/image-feature-extraction-using-pytorch-e3b327c3607a?gi=7b5fd7b03ed1

What confuses me is that both a 224×224 input image and a 448×448 input image work fine.
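For reference, this is a minimal sketch of the check I ran, using torchvision's pretrained vgg16 and dummy tensors in place of the tutorial's preprocessing pipeline:

```python
import torch
from torchvision import models

# Load the pretrained VGG16 as in the tutorial (weights left unchanged)
model = models.vgg16(pretrained=True)
model.eval()

# Both input sizes run through the full network without errors
for size in (224, 448):
    x = torch.randn(1, 3, size, size)  # dummy batch: (N, C, H, W)
    with torch.no_grad():
        out = model(x)
    print(size, out.shape)  # torch.Size([1, 1000]) for both sizes
```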

As I understand it, the pretrained VGG16 (with its trained weights left unchanged) only accepts 224×224 input images.
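Here is where I believe that 224×224 expectation comes from (my own inspection of torchvision's vgg16, not from the tutorial):

```python
from torchvision import models

model = models.vgg16(pretrained=True)

# The classifier's first Linear layer expects a flattened feature map of
# 512 channels x 7 x 7 = 25088 values, which is exactly what the
# convolutional stack produces from a 224x224 input
# (five 2x2 max-pools: 224 / 2**5 = 7)
print(model.classifier[0])  # Linear(in_features=25088, out_features=4096, bias=True)
print(512 * 7 * 7)          # 25088
```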

I suppose the first layer

(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))

can accept larger images, but that the pretrained weights cannot extend to inputs of larger dimensions. Am I right?
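To illustrate what I mean, here is a small check (my own sketch, again with dummy input tensors) showing that the convolution's weights are independent of the input's spatial size:

```python
import torch
from torchvision import models

conv1 = models.vgg16(pretrained=True).features[0]

# The kernel shape depends only on the channel counts and kernel size,
# never on the input's height and width
print(conv1.weight.shape)  # torch.Size([64, 3, 3, 3])

# So the same pretrained 3x3 kernels slide over any spatial size
print(conv1(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 64, 224, 224])
print(conv1(torch.randn(1, 3, 448, 448)).shape)  # torch.Size([1, 64, 448, 448])
```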

