Transfer Deep Learning from one aerial imagery dataset to many others
I am new to Deep Learning but have been able to use RasterVision successfully to predict building footprints within a set of aerial imagery.
This aerial imagery dataset covers one province of New Zealand. Now that I have a model that predicts successfully in this province, I am interested in how I could use it to predict in the many other regions of New Zealand. The problem is that these regions are captured with different sensors, at different resolutions, and with different color balancing applied. (I have tried using my model in another region with poor results: 70% recall, as opposed to 92% in the region it was trained on.)
I am guessing I could take my model as a base to begin training in another region...
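To make concrete what I mean by using the model as a base: a minimal fine-tuning sketch in PyTorch, where the already-trained network is loaded and only part of it (here, the prediction head) is left trainable for the new region. The model class, layer names, and weights file are my own placeholders, not RasterVision's actual API.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained building-footprint model;
# in practice this would be the network trained on the first region.
class TinySegModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, 1)  # building / not-building logits

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinySegModel()
# model.load_state_dict(torch.load("region_a_weights.pt"))  # weights from the first region

# Freeze the backbone so only the head adapts to the new region's imagery.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"fine-tuning {trainable} of {total} parameters")
```

After a few epochs one could unfreeze the backbone with a lower learning rate; that is the usual fine-tuning recipe, though I don't know whether it is enough to bridge the sensor differences here.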
My question is: is it conceivable that a single model could be trained to predict with acceptable accuracy across many regions with differing resolutions (0.1m to 0.7m) and different color balancing, or is the approach to take a base model and retrain it for every different imagery dataset (which is obviously less ideal)?
Are there examples of such an approach across such differing aerial/satellite imagery?
I note this question answers some of this with regard to resolution. What I am just as interested in is the impact of differing color balancing across datasets, and how to manage it.
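One idea I have seen for the color-balancing issue is to train with random photometric jitter so the model stops relying on a particular color calibration. A minimal sketch of what I mean (the function and parameter names are my own, not from any specific library):

```python
import torch

def color_jitter(img: torch.Tensor,
                 brightness: float = 0.2,
                 contrast: float = 0.2) -> torch.Tensor:
    """Randomly perturb brightness and contrast of a (C, H, W) float
    image tensor with values in [0, 1], to simulate the differing
    color balancing of other regions' imagery during training."""
    # Draw random scale factors in [1 - x, 1 + x].
    b = 1.0 + (torch.rand(1).item() * 2 - 1) * brightness
    c = 1.0 + (torch.rand(1).item() * 2 - 1) * contrast
    mean = img.mean()
    # Scale brightness, then stretch contrast around the image mean.
    out = (img * b - mean) * c + mean
    return out.clamp(0.0, 1.0)
```

Applied per training chip, this is a cheap way to expose the model to a wider range of color balances than the single source dataset provides; whether it covers the actual sensor differences between regions is something I would have to test.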
The other imagery datasets I want to start predicting on include these
Topic image-segmentation image-classification machine-learning
Category Data Science