Segmentation Network produces noisy output
I've implemented SegNet and a SegNet ReLU variant in PyTorch. I'm using it as a proof of concept for now, but what really bothers me is the noise in the network's output. With Adam I seem to get slightly less noise, whereas with SGD the noise increases. I can see the loss going down and the cross-validation accuracy rising to 98-99%, and yet the noise is still there.
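Roughly, the training setup looks like the sketch below. The model here is just a placeholder convolution, not the actual SegNet, and the learning rates and batch shapes are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Placeholder model: a single conv producing 2-class logits (stands in for SegNet).
model = nn.Conv2d(3, 2, kernel_size=3, padding=1)
criterion = nn.CrossEntropyLoss()

# Swapping optimizers is a one-line change; Adam vs. SGD as described above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

images = torch.randn(4, 3, 224, 224)          # dummy batch of document images
masks = torch.randint(0, 2, (4, 224, 224))    # dummy per-pixel 2-class targets

for step in range(10):
    optimizer.zero_grad()
    logits = model(images)                    # (N, 2, 224, 224) class logits
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
```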
On the left is the input image, then the ground-truth mask, and finally the actual output from the network. There are 1024 samples per class and two classes, which are very consistent since the documents are highly structured. I'm using the vanilla SegNet architecture (same kernel sizes, striding, and padding) on 224x224 inputs.
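For context, a minimal SegNet-style encoder-decoder (max-pooling indices reused for unpooling) on a 224x224 input with two classes looks roughly like this; the layer sizes are illustrative, not the full VGG-based SegNet:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True)
        )
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)  # keep max locations
        self.unpool = nn.MaxUnpool2d(2, stride=2)                   # SegNet-style upsampling
        self.dec = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 3, padding=1)                # per-pixel class logits
        )

    def forward(self, x):
        x = self.enc(x)
        x, idx = self.pool(x)    # downsample, remember where the maxima were
        x = self.unpool(x, idx)  # upsample using the stored indices
        return self.dec(x)

model = TinySegNet(num_classes=2)
logits = model(torch.randn(1, 3, 224, 224))  # (1, 2, 224, 224)
pred_mask = logits.argmax(dim=1)             # hard per-pixel prediction, (1, 224, 224)
```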
What could explain this noise, and how could I potentially address the issue?
Topic: image-segmentation, convolution, machine-learning
Category: Data Science