Why use one regularisation technique over another?

Why should I prefer L1 over L2 in a fully connected layer or a convolutional layer?

Why use dropout between two layers when there is the option of regularising one layer (or both) with something like L1 or L2? One would also have the flexibility to use a different regularisation technique at each layer.

Much of the time, trying out different techniques and comparing their performance costs time and money. So, when should I prefer one regularisation technique over another?
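
To illustrate the kind of per-layer flexibility mentioned above, here is a minimal sketch (assuming the Keras API; the layer sizes and regularisation strengths are arbitrary placeholders) that mixes L1 on a convolutional layer, dropout between layers, and L2 on a fully connected layer:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Hypothetical model: each layer gets its own regularisation choice.
model = tf.keras.Sequential([
    # Convolutional layer with an L1 penalty on its kernel weights.
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1),
                  kernel_regularizer=regularizers.l1(1e-4)),
    layers.Flatten(),
    # Dropout between layers instead of (or alongside) a weight penalty.
    layers.Dropout(0.5),
    # Fully connected layer with an L2 penalty instead.
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

The question is how to choose among such combinations without simply trying them all.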

Tags: perceptron, regularization, deep-learning, machine-learning
