How does Keras optimize a network with multiple outputs?

I currently have a neural network that takes 3 numbers as inputs and outputs 3 numbers. I've attached a picture of the network below, and my code is accessible through the following link: [Google Colab notebook](https://colab.research.google.com/drive/1q0Lvw4p_vxogmAu8QpYn5HRxAojuEtTY?usp=sharing).

Firstly, as you can see, the network consists of a main branch (the first two layers) connected to three sub-branches that correspond to the three outputs. My understanding of multi-output back-propagation is that the forward pass is straightforward, but on the backward pass, because the target outputs for the sub-branches differ, the weights in the sub-branch layers should end up different for each sub-branch, while the main-branch weights are shared. However, if you look at my code, after printing the weights over 2 epochs of training, the sub-branch weights actually appear to be identical whenever they sit on the same layer. For example, the weights of sub_1_dense2, sub_2_dense2, and sub_3_dense2 are all the same. I'm not sure why this is, and I've tried changing a bunch of things to debug it with no luck.
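For anyone who doesn't want to open the notebook, here is a minimal sketch of the kind of setup I'm describing (layer sizes, activations, and the toy data are just illustrative assumptions, not the exact notebook code): a shared trunk feeding three sub-branches, trained briefly, and then the corresponding sub-branch kernels compared after each epoch.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Shared trunk: the "main branch" of two Dense layers.
inputs = layers.Input(shape=(3,), name="main_input")
x = layers.Dense(8, activation="relu", name="main_dense1")(inputs)
x = layers.Dense(8, activation="relu", name="main_dense2")(x)

# Three sub-branches, one per output head.
outputs = []
for i in (1, 2, 3):
    b = layers.Dense(4, activation="relu", name=f"sub_{i}_dense1")(x)
    b = layers.Dense(4, activation="relu", name=f"sub_{i}_dense2")(b)
    outputs.append(layers.Dense(1, name=f"sub_{i}_output")(b))

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="sgd", loss="mse")

# Toy data: three different targets, one per output.
X = np.random.rand(32, 3)
y = [np.random.rand(32, 1) for _ in range(3)]

# Train for 2 epochs and check whether the same-layer weights
# of the three sub-branches stay identical or diverge.
for epoch in range(2):
    model.fit(X, y, epochs=1, verbose=0)
    w1 = model.get_layer("sub_1_dense2").get_weights()[0]
    w2 = model.get_layer("sub_2_dense2").get_weights()[0]
    w3 = model.get_layer("sub_3_dense2").get_weights()[0]
    print(f"epoch {epoch}: sub_1 == sub_2:", np.allclose(w1, w2),
          "| sub_2 == sub_3:", np.allclose(w2, w3))
```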

Lastly, I just want to say that the point of this project is to manually confirm how multi-output back-propagation works in Keras, not to build a prediction model. If anyone has the answer to this question or a better idea of how to check, feel free to drop suggestions. Thanks!

Tags: multi-output, keras, backpropagation, neural-network, optimization

Category: Data Science
