PCA: better performance with 300 components than with 400 — why?

I am building a content-based image retrieval system.

I extract feature maps of size 1024x1x1 using an arbitrary backbone.
I then apply PCA to the extracted features in order to reduce their dimensionality,
using either nb_components=300 or nb_components=400.
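For reference, here is a minimal sketch of that reduction step and of how to inspect the variance each choice of k retains. The data here is random stand-in features (the real features would come from the backbone), and the PCA is done directly via SVD rather than any particular library:

```python
import numpy as np

# Hypothetical stand-in: 5000 feature vectors of dimension 1024,
# playing the role of the flattened 1024x1x1 backbone features.
rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 1024))

# PCA via SVD of the mean-centered data matrix.
centered = features - features.mean(axis=0)
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

# Variance explained by each principal component, as a fraction of the total.
explained_variance = singular_values ** 2 / (features.shape[0] - 1)
explained_ratio = explained_variance / explained_variance.sum()
cum = np.cumsum(explained_ratio)

print(f"k=300 retains {cum[299]:.3f} of the variance")
print(f"k=400 retains {cum[399]:.3f} of the variance")

# Project onto the first k components to get the reduced features.
reduced_300 = centered @ components[:300].T  # shape (5000, 300)
reduced_400 = centered @ components[:400].T  # shape (5000, 400)
```

As expected, the cumulative explained variance at k=400 is always at least that at k=300; the question is why retrieval performance does not follow the same ordering.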
I obtained the following performance results (dim_pca blank means no PCA was applied).

Is there any explanation for why k=300 works better than k=400? As I understand it, k=400 is supposed to explain more variance than k=300. Is this a mistake on my part, or a perfectly acceptable and understandable result?

Thank you very much

Topic: computer-vision, dimensionality-reduction, machine-learning

Category: Data Science
