Neural Net gradient descent

I am planning to write my own neural network library in C++ and have been reading through other people's code to make sure I am on the right track.

Below is some sample code that I am trying to learn from.

Everything in that code made sense to me except the gradient descent part, where the weights are updated by adding the learning-rate-scaled gradient rather than subtracting it.

Shouldn't we take the negative of the gradient to reach the optimum?

Lines 137-157:

https://github.com/huangzehao/SimpleNeuralNetwork/blob/master/src/neural-net.cpp
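For reference, the update I expected is the textbook rule w <- w - eta * dE/dw. Here is a minimal sketch of what I mean (my own illustration with made-up names, not the repo's code):

    #include <cstddef>
    #include <vector>

    // Textbook gradient descent step: move each weight AGAINST its gradient.
    void gradientDescentStep(std::vector<double>& weights,
                             const std::vector<double>& grads, // dE/dw per weight
                             double eta)                       // learning rate
    {
        for (std::size_t i = 0; i < weights.size(); ++i)
            weights[i] -= eta * grads[i]; // minus sign: step downhill
    }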

The strange thing is that the code works fine, which is what puzzles me.

I have asked everyone I know, but they were all confused as well.
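The only explanation I can think of is that the sign might already be folded into the stored gradient: if the error term is computed as (target - output) instead of (output - target), then the stored value is the negative of the true gradient, and adding it still moves downhill. Here is a self-contained toy example of that convention (my own code, not from the repo), using a single tanh neuron:

    #include <cmath>
    #include <iostream>

    // With E = 0.5 * (target - output)^2 and output = tanh(w * input),
    // the true gradient is dE/dw = -(target - output) * (1 - output^2) * input.
    // If the code stores g = (target - output) * (1 - output^2), i.e. the
    // sign-flipped gradient, then w += eta * input * g equals w -= eta * dE/dw.
    int main()
    {
        double w = 0.5, input = 1.0, target = 0.8, eta = 0.15;
        for (int step = 0; step < 100; ++step) {
            double output = std::tanh(w * input);
            double g = (target - output) * (1.0 - output * output);
            w += eta * input * g; // '+' works: g already carries the minus sign
        }
        std::cout << "final output: " << std::tanh(w * input) << "\n"; // ends near 0.8
        return 0;
    }

Is that what is going on here, or am I missing something?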

Here is a video walkthrough of building the neural network library; it uses the same code as above.

https://vimeo.com/19569529

Topic c++ gradient-descent neural-network machine-learning

Category Data Science
