What is the big-O computational complexity of deep neural architectures?
I was wondering what the big-O computational complexity of a deep neural architecture is, broadly speaking, as a function of the network's parameters (depth, layer widths, input size, etc.). How do recurrent, convolutional, and other architectures change it?
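To make the question concrete, here is a minimal sketch (my own illustration, not from any specific reference) of the standard per-layer forward-pass cost formulas I have in mind, counted in multiply-accumulate operations:

```python
# Hypothetical sketch: forward-pass multiply-accumulate (MAC) counts for
# common layer types, to make "dependent on different parameters" concrete.
# These are the textbook per-layer costs, not tied to any framework.

def dense_macs(n_in, n_out):
    # A fully connected layer is a matrix-vector product: O(n_in * n_out).
    return n_in * n_out

def conv2d_macs(h, w, c_in, c_out, k):
    # A conv layer applies a k x k x c_in kernel at each of h * w output
    # positions for each of c_out filters: O(h * w * c_in * c_out * k^2).
    return h * w * c_in * c_out * k * k

def rnn_step_macs(n_in, n_hidden):
    # One vanilla RNN step: input-to-hidden plus hidden-to-hidden products,
    # O(n_in * n_hidden + n_hidden^2); multiply by sequence length T overall.
    return n_in * n_hidden + n_hidden * n_hidden

if __name__ == "__main__":
    print(dense_macs(1024, 512))          # 524288
    print(conv2d_macs(32, 32, 3, 64, 3))  # 1769472
    print(rnn_step_macs(128, 256))        # 98304
```

So my question is essentially how these per-layer terms compose into an overall big-O for full architectures, and whether that characterization is standard.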
Topic time-complexity deep-learning neural-network machine-learning
Category Data Science