Question about grad() from Deep Learning with Python by Chollet
On page 58 of the second edition of Deep Learning with Python, Chollet illustrates a forward and backward pass through a computation graph. The computation graph is given by:
$$ x \to x_1 := w \cdot x \to x_2 := x_1 + b \to \text{loss} := |y_\text{true} - x_2|. $$
We are given that $x=2$, $w=3$, $b=1$, $y_{\text{true}}=4$. When running the backward pass, he calculates $$ \operatorname{grad}(\text{loss}, x_2) = \operatorname{grad}(|4 - x_2|, x_2) = 1. $$
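For concreteness, here is a minimal sketch (my own, not from the book) that reproduces the forward pass with these values and checks $\operatorname{grad}(\text{loss}, x_2)$ with a finite-difference estimate:

```python
# Forward pass with the values from the example
x, w, b, y_true = 2.0, 3.0, 1.0, 4.0

x1 = w * x                 # 6.0
x2 = x1 + b                # 7.0
loss = abs(y_true - x2)    # |4 - 7| = 3.0

# Numerical estimate of d(loss)/d(x2) at x2 = 7 (central difference)
eps = 1e-6
grad_x2 = (abs(y_true - (x2 + eps)) - abs(y_true - (x2 - eps))) / (2 * eps)

print(x1, x2, loss, grad_x2)  # 6.0 7.0 3.0 ~1.0
```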
Why is the following not true: $$ \operatorname{grad}(\text{loss}, x_2) = \begin{cases} 1, & x_2 > 4 \\ -1, & x_2 < 4 \end{cases} $$
Tags: gradient, backpropagation
Category: Data Science