Dealing with pre-trained models for grayscale images

I would like to do transfer learning using one of the well-known networks such as VGG, ResNet, Inception, etc.

The problem is that my images are grayscale (1 channel), while all of the above-mentioned models were trained on the ImageNet dataset, which consists of RGB images.

One possible solution is to repeat the image array 3 times along a new channel axis to make it 3-channel.

Is this really the only solution for that? Is it a good solution? Are there any other solutions?

Topic transfer-learning inception keras image-classification deep-learning

Category Data Science


Repeating the grayscale array along a new last axis is straightforward with NumPy:

import numpy as np

print(grayscale_batch.shape)  # (64, 224, 224)

# Stack the single channel three times along a new last (channel) axis.
rgb_batch = np.repeat(grayscale_batch[..., np.newaxis], 3, -1)
print(rgb_batch.shape)  # (64, 224, 224, 3)
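
The repeated batch can then be fed into a frozen ImageNet backbone for feature extraction. Below is a minimal sketch using tf.keras, assuming VGG16 as the base model, 224x224 inputs, and pixel values in the 0-255 range that preprocess_input expects; the random batch is only there to make the example self-contained.

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# Dummy grayscale batch standing in for real data (values in 0-255).
grayscale_batch = np.random.randint(0, 256, size=(64, 224, 224)).astype("float32")
rgb_batch = np.repeat(grayscale_batch[..., np.newaxis], 3, -1)  # (64, 224, 224, 3)

# Load ImageNet weights without the classification head and freeze them.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Extract convolutional features for the pseudo-RGB batch.
features = base.predict(preprocess_input(rgb_batch))
print(features.shape)  # (64, 7, 7, 512)

These features (or the frozen base plus a small trainable head) can then be used for the downstream grayscale classification task.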
