How to force a NN to output the same output given a reversed input?
I want to choose an architecture that can deal with an input symmetry.
As input, I have a sequence of zeros and ones, like [1, 1, 1, 0, 1, 0], and at the output layer I have N neurons that output a categorical distribution like [0.3, 0.4, 0.3].
How can I force the NN to output the same distribution when I feed it the reversed copy, i.e. [0, 1, 0, 1, 1, 1]?
A simple way is just to train on each example twice (a minimal sketch in code follows the list):
- feed the straight sequence [1, 1, 1, 0, 1, 0] with target [0.3, 0.4, 0.3]
- feed the reversed sequence [0, 1, 0, 1, 1, 1] with target [0.3, 0.4, 0.3]
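In PyTorch, this augmentation could look something like the sketch below. The toy model, the single training pair, and the hyperparameters are hypothetical placeholders, and the soft probability targets in CrossEntropyLoss assume PyTorch >= 1.10:

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier: 6 inputs -> 3-class distribution (logits).
model = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()  # accepts soft probability targets (PyTorch >= 1.10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.tensor([[1., 1., 1., 0., 1., 0.]])  # batch of one sequence
target = torch.tensor([[0.3, 0.4, 0.3]])      # desired output distribution

for step in range(100):
    optimizer.zero_grad()
    # Train on the sequence and on its reversal with the same target,
    # pushing the network toward reversal invariance.
    loss = (loss_fn(model(x), target)
            + loss_fn(model(torch.flip(x, dims=[1])), target))
    loss.backward()
    optimizer.step()
```

Note that this only encourages the invariance through the data; it does not guarantee it architecturally.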
Or maybe there are more elegant ways? What type of architecture should I use, or maybe I need to play with loss functions?
Topic weight-initialization machine-learning-model deep-learning
Category Data Science