Beyond its use in deep learning, backpropagation has been applied in many other areas, ranging from weather forecasting to analyzing numerical stability. In fact, the algorithm has been reinvented several times in different fields. The general, application-independent name is "reverse-mode differentiation" [3]; a small illustrative sketch follows below.
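To make the term concrete, here is a minimal sketch (not from the cited sources, and deliberately simplified) of reverse-mode differentiation on scalar values: each node records the local partial derivatives with respect to its inputs during the forward pass, and a single backward sweep over the graph applies the chain rule in reverse to accumulate the derivative of the output with respect to every input. The class and function names are illustrative, not from any particular library.

    # Minimal reverse-mode differentiation sketch (hypothetical names, for illustration only).
    class Var:
        def __init__(self, value, parents=()):
            self.value = value          # result of the forward computation
            self.parents = parents      # list of (parent Var, local partial derivative)
            self.grad = 0.0             # adjoint, filled in by the backward sweep

        def __add__(self, other):
            return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            return Var(self.value * other.value,
                       [(self, other.value), (other, self.value)])

        def backward(self):
            # Build a topological order so each node is processed after all nodes that use it.
            order, seen = [], set()
            def visit(v):
                if v not in seen:
                    seen.add(v)
                    for p, _ in v.parents:
                        visit(p)
                    order.append(v)
            visit(self)
            self.grad = 1.0             # d(output)/d(output) = 1
            for v in reversed(order):
                for p, local in v.parents:
                    p.grad += v.grad * local   # chain rule, accumulated in reverse

    # Example: f(x, y) = (x + y) * x, so df/dx = 2x + y and df/dy = x.
    x, y = Var(3.0), Var(2.0)
    f = (x + y) * x
    f.backward()
    print(f.value, x.grad, y.grad)   # 15.0 8.0 3.0

The point of the sketch is that one backward sweep yields the gradient with respect to all inputs at roughly the cost of one extra forward pass, which is exactly the property that makes backpropagation practical for training neural networks.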
The modern version of backpropagation, that is, the reverse mode of automatic differentiation, was first published by Seppo Linnainmaa in 1970 [1]. He used it as a tool for estimating the effects of arithmetic rounding errors on the results of complex expressions [2].
Gerardi Ostrowski discovered and used it some five years earlier in the context of certain process models in chemical engineering [2].
In the 1960s, Hachtel et al. considered the optimization of electronic circuits, using the costate equation of initial value problems and its discretizations to compute gradients in the reverse mode for explicitly time-dependent problems [2].
Other researchers who discovered it include Bert Speelpenning, who arrived at the reverse mode via compiler optimization when asked to automatically generate efficient code for Jacobians of stiff ODEs [2].
More information can be found in Deep Learning in Neural Networks: An Overview, in Who invented backpropagation? [1], and in the references therein.
[1] Who invented backpropagation?
[2] Who Invented the Reverse Mode of Differentiation?
[3] Calculus on Computational Graphs: Backpropagation