Transfer Learning on ResNets/VGGs -- Validation accuracy never exceeds 75%
I am trying to classify skin cancer images into two categories -- malignant and benign. The literature suggests that a pre-trained ResNet/VGG network achieves more than 90% accuracy on this task. However, with my dataset, whatever I try, the validation accuracy never exceeds 75%.
I am using a well-balanced dataset with 500 malignant and 500 benign images in the training set. The number of images is on the smaller side; however, given that I am using transfer learning and that other papers report more than 90% accuracy with only ~1000 images, I believe the problem lies in my transfer-learning logic rather than in the data itself.
I have additionally tried the following:
- Used different batch sizes (8/16/32/64/128) and different learning rates (from 1e-7 to 0.1)
- Unfroze the pretrained ResNet/VGG (changed requires_grad from False to True) after the validation error plateaued and updated the weights for a few more epochs, i.e. fine-tuned the pretrained network once the validation accuracy plateaued (see the sketch after this list)
- Decayed the learning rate with a scheduler
- Used other networks: Inception v3, GoogLeNet
- Replaced the single fully connected layer with a Dropout-FC-ReLU-FC head (also sketched below)
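To be concrete, the unfreezing/fine-tuning step and the alternative classifier head looked roughly like this. This is a simplified, self-contained sketch; the dropout probability, the hidden width of 256, the fine-tuning learning rate, and the scheduler step/gamma are representative values rather than the exact numbers from every run:

    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    # frozen pretrained backbone (same setup as the core logic below)
    model_ft = models.resnet18(pretrained=True)
    for param in model_ft.parameters():
        param.requires_grad = False

    # dropout-FC-relu-FC head instead of a single FC layer
    # (256 is one hidden width I tried, not a fixed choice)
    num_ftrs = model_ft.fc.in_features
    model_ft.fc = nn.Sequential(
        nn.Dropout(0.5),
        nn.Linear(num_ftrs, 256),
        nn.ReLU(),
        nn.Linear(256, 2),
    )

    # ... after the validation accuracy plateaus, unfreeze everything and fine-tune
    for param in model_ft.parameters():
        param.requires_grad = True

    # smaller learning rate for fine-tuning, decayed with a scheduler
    optimizer_ft = optim.SGD(model_ft.parameters(), lr=1e-4, momentum=0.9)
    scheduler = optim.lr_scheduler.StepLR(optimizer_ft, step_size=5, gamma=0.1)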
This is the core logic:
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model_ft = models.resnet18(pretrained=True)  # elsewhere in my code, I set requires_grad=False for all resnet18 parameters
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)  # new binary classification head (trainable by default)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
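The training loop follows the standard PyTorch train/validate pattern. This is only a rough sketch of what I am doing, not my exact code: `dataloaders` is assumed to be a dict with 'train' and 'val' DataLoaders built from the transforms below, and `num_epochs` varies per experiment:

    num_epochs = 25  # varies per experiment

    for epoch in range(num_epochs):
        for phase in ['train', 'val']:
            model_ft.train() if phase == 'train' else model_ft.eval()
            running_corrects = 0
            for inputs, labels in dataloaders[phase]:
                inputs, labels = inputs.to(device), labels.to(device)
                optimizer_ft.zero_grad()
                # only track gradients during the training phase
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model_ft(inputs)
                    loss = criterion(outputs, labels)
                    if phase == 'train':
                        loss.backward()
                        optimizer_ft.step()
                running_corrects += (outputs.argmax(dim=1) == labels).sum().item()
            print(f'epoch {epoch} {phase} acc: '
                  f'{running_corrects / len(dataloaders[phase].dataset):.3f}')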
and this is how I normalize the data. I have tried cropping, rotating, and horizontal flipping (sketched after the transforms below), but none of them boosted performance, so for now, in order to preserve relevant clinical features, I decided not to touch the data:
from torchvision import transforms

data_transforms = {
    'train': transforms.Compose([
        transforms.Resize(224),
        transforms.ToTensor(),
        # ImageNet mean/std, matching the pretrained weights
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}
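For reference, the augmented training transform I tried (and then dropped) looked roughly like this; the crop size, rotation angle, and flip probability shown here are representative values rather than the exact settings from every run:

    from torchvision import transforms

    augmented_train = transforms.Compose([
        transforms.Resize(256),
        transforms.RandomCrop(224),          # cropping
        transforms.RandomRotation(20),       # rotating
        transforms.RandomHorizontalFlip(),   # horizontal flipping
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])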
Can someone please let me know what I could do to improve the model's performance? Without much data pre-processing, we think the model should achieve well above 85% accuracy on the validation set.