Can't find the bug in my RMSProp implementation



I'm trying to implement RMSProp in my own neural network library so I can understand the 'under-the-hood' operations, but this specific implementation is not converging, and I can't figure out why.

I'm pretty sure I followed the formula (RMSProp + momentum).
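As I understand it, the per-weight update I'm trying to follow is (with $\gamma$ the decay rate, $\eta$ the learning rate, $\mu$ the momentum constant, $g_t$ the gradient for this weight, and $\epsilon$ a small stabilizer):

$$E[g^2]_t = \gamma \, E[g^2]_{t-1} + (1 - \gamma) \, g_t^2$$

$$v_t = \mu \, v_{t-1} + \frac{\eta \, g_t}{\sqrt{E[g^2]_t + \epsilon}}$$

$$w_{t+1} = w_t - v_t$$

Here is my code: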

//Step 1 - Compute a hidden neuron (this == ClNeuron) error gradient (TanH)
double tmpBuffer = 0.00;
for (std::size_t i = 0; i < this->m_output_connections.size(); i++)
{
    ClNeuron* target_neuron = (ClNeuron*)m_output_connections[i]->m_target_neuron;
    tmpBuffer += (target_neuron->m_error_gradient * this->m_output_connections[i]->m_weight);
}

//Get the TanH derivative of this neuron's last output
this->m_error_gradient = tmpBuffer * this->TanHDerivative(this->m_result_buffer);
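For anyone checking the gradient math: the tanh derivative in terms of the already-activated output $y = \tanh(x)$ is $1 - y^2$, so TanHDerivative should reduce to something like this minimal sketch (the signature is illustrative, and it assumes m_result_buffer stores the post-activation value):

double ClNeuron::TanHDerivative(double activated_output)
{
    //Derivative of tanh expressed via the activated output:
    //d/dx tanh(x) = 1 - tanh(x)^2 = 1 - y^2
    //(assumes the caller passes the post-activation value)
    return 1.0 - activated_output * activated_output;
}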




//Step 2 - For each of this neuron's input weights, compute the new weight's change
for (std::size_t i = 0; i < this->m_input_connections.size(); i++)
{
    double new_weight_delta = this->m_input_connections[i]->m_learning_rate * this->m_error_gradient * this->m_input_connections[i]->m_data;

    this->m_input_connections[i]->m_last_error_gradient_mean_square = 0.9 * this->m_input_connections[i]->m_last_error_gradient_mean_square
        + (1 - 0.9) * (new_weight_delta * new_weight_delta);

    this->m_input_connections[i]->m_momentum = this->m_input_connections[i]->m_momentum_constant * this->m_input_connections[i]->m_momentum
        + this->m_input_connections[i]->m_learning_rate * new_weight_delta / std::sqrt(this->m_input_connections[i]->m_last_error_gradient_mean_square) + 0.000000001;

    //Make the actual weight update
    this->m_input_connections[i]->m_weight -= this->m_input_connections[i]->m_momentum;
}
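For comparison, here is a stand-alone sketch of the textbook update for a single weight, the way I read the formula above. All names (RmsPropState, rmsprop_step, and the parameters) are illustrative placeholders, not members of my actual library:

#include <cmath>

struct RmsPropState
{
    double mean_square = 0.0; //running average E[g^2]
    double velocity    = 0.0; //momentum term v
};

void rmsprop_step(double& weight, double gradient, RmsPropState& state,
                  double decay, double lr, double momentum_constant, double epsilon)
{
    //E[g^2]_t = gamma * E[g^2]_{t-1} + (1 - gamma) * g_t^2
    state.mean_square = decay * state.mean_square + (1.0 - decay) * gradient * gradient;

    //v_t = mu * v_{t-1} + eta * g_t / sqrt(E[g^2]_t + epsilon)
    state.velocity = momentum_constant * state.velocity
                   + lr * gradient / std::sqrt(state.mean_square + epsilon);

    //w_{t+1} = w_t - v_t
    weight -= state.velocity;
}

If my loop above diverges from this somewhere, that is probably my bug, but I can't spot it.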

Would anyone be kind enough to point me in the right direction? Many thanks!

Tags: c++, backpropagation, implementation

