About different structures of neural networks
https://www.mathworks.com/help/deeplearning/ref/fitnet.html is the tutorial I am following to understand fitting data to a function. I have a few doubts regarding structure and terminology, which are the following:
1. Number of hidden layers in the model
By hidden layer, we mean a layer that sits between the input and output layers. If the number of hidden layers = 1 with 10 hidden neurons (as shown in the second figure), is it essentially the kind of neural network that is termed an MLP? Is my understanding correct? In general,
- if the number of hidden layers = 0, we call the NN a perceptron.
- If the number of hidden layers is >= 1 but less than 3, the NN becomes an MLP. Is the picture in the link that of an MLP, since it contains 1 hidden layer of 10 neurons?
- if the number of hidden layers is > 3, the NN is called a deep NN, aka deep learning.
Is that correct?
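To make the distinction concrete, here is a minimal sketch in plain NumPy (not fitnet; the weights are random placeholders) contrasting a perceptron, which has no hidden layer, with a one-hidden-layer network of 10 neurons like the one in the linked figure:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))  # 5 samples, 3 input features

# Perceptron (0 hidden layers): input maps directly to output
W_out = rng.normal(size=(3, 1))
b_out = np.zeros(1)
y_perceptron = x @ W_out + b_out

# Network with 1 hidden layer of 10 sigmoid neurons, then a linear output
W_h = rng.normal(size=(3, 10))
b_h = np.zeros(10)
h = 1.0 / (1.0 + np.exp(-(x @ W_h + b_h)))  # hidden-layer activations
W_o = rng.normal(size=(10, 1))
y_mlp = h @ W_o

print(y_perceptron.shape, y_mlp.shape)
```

Both produce one output per sample; the only structural difference is the intermediate hidden layer.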
2. Linear vs nonlinear mapping function
The resulting model eventually learns to map the input data to the output data.
- Do we call the machine learning model itself linear or nonlinear? Or is this term associated with the mapping function?
- Which layer's mapping function determines this? Based on which layer's activation function do we say that the mapping function, or the model, is linear or nonlinear? For example, in this picture the last layer is the output layer, and its activation function looks like an identity/linear function. But the hidden layer has a sigmoid activation function, which is nonlinear. Therefore, is this model a nonlinear function?
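One way to check this empirically is to test whether the model satisfies the defining property of a linear map, f(2x) = 2 f(x). A small sketch (random placeholder weights, not a trained fitnet model) with a sigmoid hidden layer and an identity output layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W_h = rng.normal(size=(1, 10))  # hidden-layer weights (placeholder values)
b_h = rng.normal(size=10)
W_o = rng.normal(size=(10, 1))  # linear (identity) output layer

def model(x):
    # sigmoid hidden layer followed by a purely linear output layer
    return sigmoid(x @ W_h + b_h) @ W_o

x = np.array([[1.0]])
# For a linear map these two values would be equal; here they differ,
# because the sigmoid hidden layer makes the whole composition nonlinear.
print(model(2 * x), 2 * model(x))
```

So even with an identity output activation, one nonlinear hidden layer is enough to make the overall input-to-output mapping nonlinear.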
Tags: mlp, perceptron, terminology, neural-network
Category: Data Science