Baseline model and transfer learning
I've tried to find guidance on using transfer learning when building baseline models for ML projects (a CNN in my case), but I haven't found any clear best practices on the matter.
My reasoning is that a baseline model should not be pretrained: it adds complexity before there is any evidence that we need it. But this wouldn't be the first time my intuition turned out to be wrong in data science.
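To make the distinction concrete, here is a minimal sketch of what I mean, assuming PyTorch/torchvision and a hypothetical 10-class task: the same architecture used once as a plain, randomly initialised baseline and once as a transfer-learning variant.

```python
import torch.nn as nn
from torchvision import models

# Plain baseline: same architecture, randomly initialised weights (no pretraining).
baseline = models.resnet18(weights=None)
baseline.fc = nn.Linear(baseline.fc.in_features, 10)  # hypothetical 10 target classes

# Transfer-learning variant: ImageNet weights, only the classification head replaced.
pretrained = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
pretrained.fc = nn.Linear(pretrained.fc.in_features, 10)
```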
I'd like to invite more experienced data scientists and engineers to the discussion. What is your opinion on this topic?
Edit: Moreover, I would ask the same question about augmentations such as image rotation, scaling, etc. for CNNs: should they be part of a baseline model, or added only later?
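For reference, this is the kind of augmentation I have in mind, sketched with torchvision transforms (the specific parameter values are only illustrative):

```python
from torchvision import transforms

# Augmented training pipeline: random flips, rotation, and scale/crop jitter.
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

# Plain baseline pipeline: same preprocessing, no augmentation.
plain_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
```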