Multilayer perceptron does not converge
I have been coding my own multilayer perceptron in MATLAB, and it runs without error. My training inputs, x, take values from 1 to 360, and the training targets, y, are $\sin(x)$.
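For reference, the data setup looks roughly like this (a hypothetical reconstruction, assuming x is measured in degrees, hence `sind`; with radians it would be `sin` instead):

```matlab
% Hypothetical reconstruction of the data setup, assuming degrees.
x = 1:360;     % training inputs
y = sind(x);   % training targets, in [-1, 1]
```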
The problem is that my MLP decreases the cost only for the first few iterations and then gets stuck at 0.5. I have tried adding momentum, but it does not help, and increasing the number of layers or the number of neurons does not help either. I am not sure why this is happening.
I have uploaded the files for your reference here.
In summary, my code does the following:
I normalize my input data using either min-max scaling or the z-score.
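As a minimal sketch of what I mean by the two options (the variable name `normx` is just for illustration, not my actual code):

```matlab
% Illustrative versions of the two normalizations:
normx = (x - min(x)) / (max(x) - min(x));  % min-max: rescales to [0, 1]
normx = (x - mean(x)) / std(x);            % z-score: zero mean, unit std
```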
Then I initialize random weights and biases in the range -1 to 1:
```matlab
for i = 1:length(nodesateachlayer)-1
    weights{i} = 2*rand(nodesateachlayer(i), nodesateachlayer(i+1)) - 1;
    bias{i} = 2*rand(nodesateachlayer(i+1), 1) - 1;
end
```
Then I do a forward pass, where the input is multiplied by the weights, the bias is added, and the result is passed through a transfer function (sigmoid):
```matlab
for i = 2:length(nodesateachlayer)
    stored{i} = nactivate(bsxfun(@plus, weights{i-1}'*stored{i-1}, bias{i-1}), activation);
end
```
Then I calculate the error and do a backward pass:
```matlab
dedp = 1/length(normy)*error;                    % gradient of cost w.r.t. output
for i = length(stored)-1:-1:1
    dpds = derivative(stored{i+1}, activation);  % derivative of the activation
    deds = dpds'.*dedp;                          % gradient w.r.t. pre-activation
    dedw = stored{i}*deds;                       % gradient w.r.t. weights
    dedb = ones(1,rowno)*deds;                   % gradient w.r.t. biases
    dedp = (weights{i}*deds')';                  % propagate to the previous layer
    weights{i} = weights{i} - rate.*dedw;        % gradient-descent update
    bias{i} = bsxfun(@minus, bias{i}, rate.*dedb');
end
```
I plot the cost at every iteration to watch the descent.
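The plotting itself is just something like this, where `costhistory` is a hypothetical vector of the per-iteration cost that I record inside the training loop:

```matlab
% costhistory holds the cost recorded at each iteration of the
% training loop, e.g. costhistory(iter) = cost;
costhistory = rand(1, 100);              % placeholder data for illustration
plot(1:numel(costhistory), costhistory); % cost vs. iteration
xlabel('iteration');
ylabel('cost');
```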
I assume there is something wrong with the code, so where could the error lie?