Multi-task learning for improving segmentation
I am building a multi-task model whose main task is segmentation and whose auxiliary task is either denoising or image inpainting. The goal is to improve segmentation quality through multi-task learning: certain representations are shared across the tasks, which should reduce noise in the learned features and increase segmentation accuracy.
Currently, my approach is as follows:
1. Manually add noise to the image (or cut a hole, if the auxiliary task is inpainting).
2. Fit a model using the corrupted image from step 1 as input, with two labels: the original image and the segmentation mask. So X = manuallyChangedImage and Y = [OriginalImage, segmentationMask].
3. Test the model on a test set of images that are not manually changed (no noise or holes).
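For concreteness, the corruption in step 1 looks roughly like this (a simplified NumPy sketch; the noise level `sigma` and hole `size` are the kind of hyperparameters I have been varying):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img, sigma=0.1):
    """Additive Gaussian noise for the denoising variant."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def cut_hole(img, size=8):
    """Zero out a random square patch for the inpainting variant."""
    out = img.copy()
    h, w = img.shape[:2]
    y = rng.integers(0, h - size)
    x = rng.integers(0, w - size)
    out[y:y + size, x:x + size] = 0.0
    return out

img = rng.random((32, 32))   # stand-in for a training image in [0, 1]
x_noisy = add_noise(img)     # input X for the denoising task
x_holed = cut_hole(img)      # input X for the inpainting task
```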
So far, image inpainting has not improved my results, and denoising has actually decreased performance quite a bit. I know there are other factors to consider, such as the loss weight for each task and the parameters of the corruption itself (amount of noise added, size of the hole, etc.), but I have not obtained better results even after varying these hyperparameters extensively.
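To make step 2 concrete, here is a simplified sketch of how I combine the two heads and weight the losses (PyTorch; the toy architecture and `lambda_aux` are placeholders for my actual network and the auxiliary loss weight I have been tuning):

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with two heads: image reconstruction
    (the auxiliary denoising/inpainting task) and segmentation
    (the main task). Toy architecture for illustration only."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.recon_head = nn.Conv2d(32, in_ch, 3, padding=1)      # predicts the clean image
        self.seg_head = nn.Conv2d(32, n_classes, 3, padding=1)    # predicts the mask logits

    def forward(self, x):
        z = self.encoder(x)           # shared representation
        return self.recon_head(z), self.seg_head(z)

model = MultiTaskNet()
x = torch.randn(4, 1, 32, 32)              # X: corrupted input image
y_img = torch.randn(4, 1, 32, 32)          # label 1: original image
y_mask = torch.randint(0, 2, (4, 32, 32))  # label 2: segmentation mask

recon, seg = model(x)
lambda_aux = 0.1  # auxiliary loss weight (hyperparameter)
loss = nn.CrossEntropyLoss()(seg, y_mask) + lambda_aux * nn.MSELoss()(recon, y_img)
loss.backward()
```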
Seeing my results with denoising has made me think that there might be a basic error in my approach, since I am training the model on noisy images but testing on clean ones.
My question is: Does my described approach even make sense?