Algorithm for learning image distortion?
I'm looking for tools to characterize the relationship between the gridded outputs of multiple physical models as an image distortion. For instance, given 2-d pictures of the temperature distribution in two rooms, one might characterize the difference between them as a contraction of an upper layer of warm air.
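For concreteness, here is a minimal sketch of that forward picture, with an invented temperature field and an invented warp (both purely illustrative); it assumes only NumPy and SciPy:

```python
import numpy as np
from scipy.ndimage import map_coordinates

ny, nx = 64, 64
rows, cols = np.mgrid[0:ny, 0:nx].astype(float)

# Warm layer at the top (row 0), cooler air below.
temperature = 30.0 - 10.0 * rows / (ny - 1)  # degrees C

# A smooth warp that squeezes the upper rows together: output row i
# samples source row sqrt(i * (ny - 1)) >= i, so the warm layer thins.
src_rows = np.sqrt(rows * (ny - 1))
contracted = map_coordinates(temperature, [src_rows, cols], order=1)
```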
The inverse problem I am interested in is inferring this contraction using the two fields as inputs. I understand that this problem is often underdetermined, and I am prepared to regularize as necessary by imposing, e.g., length-scale constraints on the image distortion.
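One family of methods that seems to fit is variational optical flow / deformable image registration. Below is a sketch using scikit-image's TV-L1 optical flow, assuming the two fields already live on a common grid; the `attachment` weight trades data fidelity against smoothness of the recovered displacement field, which plays the role of the regularization mentioned above (the function name and default are illustrative):

```python
from skimage.registration import optical_flow_tvl1

def estimate_distortion(field_a, field_b, attachment=15.0):
    """Estimate the displacement field warping field_b toward field_a.

    Returns (v, u): per-pixel row and column displacements.
    """
    # TV-L1 optical flow solves a variational problem whose total
    # variation penalty smooths the recovered displacement field.
    v, u = optical_flow_tvl1(field_a, field_b, attachment=attachment)
    return v, u

# e.g., with the synthetic fields from the previous sketch:
# v, u = estimate_distortion(temperature, contracted)
```

Deformable-registration toolkits (e.g., SimpleITK with a B-spline transform) give more direct control over the length scale through the control-point spacing, if that regularization matters more than per-pixel flexibility.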
Can anyone point me towards any tools or algorithms to explore this problem?
Topic: transformation, image-preprocessing
Category: Data Science