As you increase the number of hidden layers in a neural network, the number of weights associated with those layers also grows. More weights raise the memory requirements of your model, and because there are more parameters to optimize, training takes longer.
Beyond a certain depth, model accuracy on held-out data also tends to degrade due to over-fitting. The sketch below illustrates how quickly the parameter count grows with depth.
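A minimal sketch of the parameter growth, assuming PyTorch; the layer widths and input/output dimensions here are illustrative, not from the original answer:

```python
# Count parameters in an MLP as the number of hidden layers grows.
# More parameters -> more memory and more optimization work per step.
import torch.nn as nn

def make_mlp(num_hidden_layers, width=256, in_dim=784, out_dim=10):
    layers = [nn.Linear(in_dim, width), nn.ReLU()]
    for _ in range(num_hidden_layers - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)

for depth in (1, 4, 16):
    model = make_mlp(depth)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{depth:2d} hidden layers -> {n_params:,} parameters")
```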

ResNet is a great example: it adds skip (residual) connections in extremely deep networks, letting signals and gradients bypass layers, which is how it achieves strong performance at depth.
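A minimal sketch of the residual idea, assuming PyTorch; real ResNet blocks use convolutions and batch normalization, whereas this simplified fully connected version only shows the skip connection itself:

```python
# A residual block learns F(x) and adds the input x back,
# so very deep stacks can still propagate gradients.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.fc1(x))
        out = self.fc2(out)
        return self.act(out + x)  # skip connection: add the input back

x = torch.randn(8, 64)
block = ResidualBlock()
print(block(x).shape)  # torch.Size([8, 64])
```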


The number of hidden units is a hyperparameter that controls model capacity. Increasing it not only raises time and memory requirements but can also lead to overfitting (see also Section 11.4.1 of the Deep Learning Book).
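A minimal sketch of the width effect, again assuming PyTorch with illustrative dimensions: the hidden-to-hidden weight matrix grows roughly quadratically with the number of hidden units, which is why widening a layer inflates capacity so quickly.

```python
# Parameter count as the hidden width grows: the width x width
# weight matrix dominates, so capacity scales roughly quadratically.
import torch.nn as nn

for width in (64, 256, 1024):
    model = nn.Sequential(nn.Linear(784, width), nn.ReLU(),
                          nn.Linear(width, width), nn.ReLU(),
                          nn.Linear(width, 10))
    n_params = sum(p.numel() for p in model.parameters())
    print(f"width={width:4d} -> {n_params:,} parameters")
```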
