Fine-tuning a CNN with imbalanced data gives good results - not sure why
I'm fine-tuning CaffeNet for a 3-class classification problem. I have 100 instances of class A, 90 instances of class B and 30 instances of class C. I expected my net to be biased toward classes A and B, but I'm actually getting quite good results. I know that CaffeNet doesn't handle imbalanced data for me. Could it be because my entire training set fits into one batch? Or because I'm not really training from scratch, but mostly reusing CaffeNet's pretrained weights? Thanks
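For context, here is a minimal sketch (assuming Python with NumPy, and optionally pycaffe) of how one might quantify this imbalance as per-class weights and, if it ever did hurt accuracy, feed them to Caffe via a diagonal infogain matrix for an InfogainLoss layer. This is just one common remedy for illustration, not part of my current setup, and the output file name is hypothetical.

```python
import numpy as np
# import caffe  # only needed to write H out as a binaryproto

# Class counts from the question: A=100, B=90, C=30
counts = np.array([100, 90, 30], dtype=np.float64)

# Inverse-frequency weights, normalized so they average to 1
weights = counts.sum() / (len(counts) * counts)
print(dict(zip("ABC", np.round(weights, 2))))  # {'A': 0.73, 'B': 0.81, 'C': 2.44}

# One way to apply them in Caffe: a diagonal infogain matrix H
# (shape 1x1xKxK) consumed by an InfogainLoss layer
H = np.zeros((1, 1, 3, 3), dtype=np.float32)
H[0, 0] = np.diag(weights.astype(np.float32))
# blob = caffe.io.array_to_blobproto(H)
# with open("infogain_H.binaryproto", "wb") as f:  # hypothetical file name
#     f.write(blob.SerializeToString())
```

The ratio here (roughly 3:1 between the largest and smallest class) is fairly mild, so it may simply not be enough to bias the net, especially when most weights come from pretraining.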
Topic cnn caffe class-imbalance deep-learning
Category Data Science