Backpropagation of filter coefficients in Convolutional Neural Networks

I'm starting to learn how convolutional neural networks work, and I have a question about the filters. Apparently, these are randomly initialized when the model is created, and then, as data is fed through, they are corrected via backpropagation just like ordinary weights.

However, how does this work for filters? To my understanding, backpropagation works by calculating how much each weight contributed to the total error after an output has been predicted, and then correcting it accordingly. I've been thinking about how this might work with filters, and the thing that bothers me is that all coefficients in a single filter seem to have contributed equally to the final error. So, suppose you start with a matrix (filter) of random numbers, then backpropagation happens: if all the coefficients get corrected in the same way, wouldn't you end up with a matrix that is just the original one shifted by a constant?
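To make the question concrete, here is a minimal NumPy sketch (a toy example of my own, not taken from any library) of how the gradient of a squared-error loss with respect to a single 3x3 filter is computed during backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))   # input patch
w = rng.standard_normal((3, 3))   # filter, randomly initialized
t = rng.standard_normal((3, 3))   # dummy target for the output feature map

def conv2d(x, w):
    """Valid 2D convolution (cross-correlation, as in most CNN libraries)."""
    oh = x.shape[0] - w.shape[0] + 1
    ow = x.shape[1] - w.shape[1] + 1
    y = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            y[i, j] = np.sum(x[i:i + 3, j:j + 3] * w)
    return y

y = conv2d(x, w)
dL_dy = 2 * (y - t)               # gradient of squared-error loss w.r.t. the output

# backward pass: the gradient w.r.t. filter coefficient (a, b) is the sum,
# over all output positions, of dL_dy times the input value that coefficient
# was multiplied with at that position
dL_dw = np.zeros_like(w)
for a in range(3):
    for b in range(3):
        dL_dw[a, b] = np.sum(dL_dy * x[a:a + 3, b:b + 3])

print(dL_dw)
```

Each entry of `dL_dw` is a weighted sum over a different slice of the input, which is the part of the mechanics my question is about.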

Any help or insight you might be able to provide me in this matter is greatly appreciated!

Tags: convolutional-neural-network, backpropagation, neural-network

Category: Data Science
