What is the big-O computational complexity of deep neural architectures?

I was wondering what the big-O computational complexity of a deep neural architecture is, in a broad sense (as a function of the network's parameters). How do recurrent, convolutional, and other architectures change it?
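To make the question concrete, here is a rough sketch of how per-layer forward-pass cost is commonly counted in multiply-accumulate operations (MACs); the formulas are the standard ones for dense, convolutional, and vanilla recurrent layers, while the helper names are my own, not from any particular library:

```python
# Rough forward-pass cost (multiply-accumulates, MACs) per layer type.
# Helper names are illustrative, not from any library.

def dense_macs(n_in, n_out):
    """Fully-connected layer: one MAC per (input, output) pair
    -> O(n_in * n_out)."""
    return n_in * n_out

def conv2d_macs(h_out, w_out, c_in, c_out, k):
    """2-D convolution: each output element costs c_in * k * k MACs
    -> O(h_out * w_out * c_out * c_in * k^2)."""
    return h_out * w_out * c_out * c_in * k * k

def rnn_step_macs(n_hidden, n_in):
    """Vanilla RNN step: input-to-hidden plus hidden-to-hidden products
    -> O(n_hidden * (n_in + n_hidden)) per step; multiply by the
    sequence length T for a full sequence."""
    return n_hidden * n_in + n_hidden * n_hidden

print(dense_macs(1024, 1024))          # 1048576
print(conv2d_macs(32, 32, 64, 64, 3))  # 37748736
print(rnn_step_macs(512, 128))         # 327680
```

Summing such per-layer counts over all layers (and, for training, multiplying by roughly a constant factor for the backward pass) gives the overall cost, which is why the answer depends on which layer types the architecture uses.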

Topic time-complexity deep-learning neural-network machine-learning

Category Data Science
