Representation Learning - Self-supervision methods that do well with a limited number of classes when we have a lot of unlabeled data

I understand that a contrastive learning approach such as SimCLR has an inherent problem when the number of classes is low (say 2, 3, 5, 6, maybe even 10). The problem is that the chance of picking a negative sample with the same label as the images in the positive pair is not low (for example, a dog contrasted against another dog). With K balanced classes, a uniformly sampled in-batch negative shares the anchor's class with probability of roughly 1/K, so these false negatives become common when K is small.
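To make that premise concrete, here is a minimal back-of-the-envelope sketch in plain Python. The batch size of 256, balanced classes, and uniform in-batch negative sampling are assumptions for illustration, not something stated in the question:

```python
# Rough estimate of how contaminated SimCLR-style in-batch negatives
# become as the number of classes K shrinks. Illustrative numbers only;
# assumes balanced classes and uniform sampling over the dataset.

def false_negative_stats(num_classes: int, batch_size: int = 256):
    """Expected false-negative contamination for one anchor.

    In SimCLR, each anchor is contrasted against the other
    2 * batch_size - 2 views in the batch, treated as negatives.
    """
    p_collision = 1.0 / num_classes        # P(a negative matches the anchor's class)
    num_negatives = 2 * batch_size - 2     # negatives per anchor in SimCLR
    expected_false = p_collision * num_negatives
    # P(at least one false negative among this anchor's negatives)
    p_at_least_one = 1.0 - (1.0 - p_collision) ** num_negatives
    return expected_false, p_at_least_one

for k in (2, 4, 10, 100, 1000):
    exp_fn, p_any = false_negative_stats(k)
    print(f"K={k:5d}: ~{exp_fn:6.1f} false negatives per anchor "
          f"(P(at least one) = {p_any:.3f})")
```

Under these assumptions, with K = 4 roughly a quarter of the 510 negatives per anchor (~128) share the anchor's class, while with K = 1000 the expected count drops to about 0.5.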

Which contrastive learning approaches do better on such problems, where we have, say, 4 classes rather than 100 or 1000? Are there good unsupervised/self-supervised approaches for learning a good representation when we have a lot of unlabeled data but only a few underlying classes?

Topic: representation, unsupervised-learning, computer-vision, deep-learning

Category: Data Science
