Analytical gradients from tf.gradients don't match approximate gradients

I have a trained neural network (NN) with independent inputs x1, x2, …, xn and a scalar output y.

Input x1 is a scalar, and tf.gradients(y, x1) returns a negative value. However, calculating approximate gradients via $\frac{NN(x1 + \Delta) - NN(x1-\Delta)}{2\Delta}$ with $\Delta \to 0$ yields a positive value.
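For reference, this is the kind of central-difference check I mean. The function `f` below is a hypothetical stand-in for the trained network (the real NN and its inputs are assumed); for any smooth function, the analytical derivative and the central difference should agree closely:

```python
import numpy as np

def f(x1):
    # Toy smooth scalar function standing in for NN(x1) with the
    # other inputs held fixed.
    return np.sin(x1) + 0.5 * x1 ** 2

def f_grad(x1):
    # Analytical derivative of the toy f, playing the role of
    # tf.gradients(y, x1).
    return np.cos(x1) + x1

def central_difference(func, x, delta=1e-5):
    # Approximate gradient: (f(x + Δ) - f(x - Δ)) / (2Δ)
    return (func(x + delta) - func(x - delta)) / (2 * delta)

x0 = 1.3
analytic = f_grad(x0)
approx = central_difference(f, x0)
print(analytic, approx)
```

For a smooth function the two agree to roughly $\Delta^2$, so a sign disagreement like mine should not happen under these conditions.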

The following is a visualization of my problem. In blue, y = NN(inputs) is plotted against x1 for all inputs seen in the training data. Judging by these points, it seems reasonable to me that tf.gradients(y, x1) < 0, since y decreases as x1 increases.

However, when I perturb only the x1 value about a fixed point, it appears that y increases with x1, which is the opposite of what the analytical gradient implies.
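To make the two observations concrete, here is a toy illustration (an assumed function, not my actual network) of how the scatter trend across training data can differ from the partial derivative at a fixed point when the inputs are correlated in the data:

```python
import numpy as np

def f(x1, x2):
    # Toy function: the partial derivative w.r.t. x1 is +1 everywhere.
    return x1 - 2.0 * x2

# Training data in which x2 happens to track x1 (correlated inputs).
x1_train = np.linspace(0.0, 1.0, 5)
x2_train = x1_train
y_train = f(x1_train, x2_train)  # equals -x1_train: decreasing in the scatter

# Perturbing only x1 about a fixed point, with x2 held fixed:
x1_0, x2_0, delta = 0.5, 0.5, 1e-4
approx = (f(x1_0 + delta, x2_0) - f(x1_0 - delta, x2_0)) / (2 * delta)

print(y_train)  # trend decreases as x1 increases
print(approx)   # ≈ +1: y increases when only x1 is perturbed
```

In this toy case the scatter slopes downward while the partial derivative is positive, so the two visual impressions need not agree; in my case, though, it is the analytical and numerical gradients themselves that disagree.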

In general, what does it mean when analytical gradients and approximate gradients don't match? Am I misinterpreting the meaning of tf.gradients?

Topic: gradient, tensorflow, optimization, machine-learning

Category: Data Science
