neural network function approximation with constraints
I would like to approximate a function $f(\cdot)$ by means of a neural network, given a finite set of observations $\{(x_i, f(x_i))\}$ where $x_i\in\mathbb{R}^n$ and $i=1,\dots,N$. However, I have some prior knowledge on how this function should behave, for example that it is monotonic in the first coordinate.
Are there methodologies that account for this type of shape constraint when training a (D)NN?
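One common family of approaches adds a soft penalty to the training loss that discourages violations of the constraint. As a minimal sketch (not a definitive method), the snippet below uses a tiny NumPy MLP with arbitrary fixed weights and a hinge on finite differences along the first coordinate; all names (`f`, `monotonicity_penalty`, `lam`) are hypothetical, and in practice one would compute the same penalty on the live network inside the training loop of an autodiff framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer MLP f: R^n -> R, a stand-in for the network
# being trained (weights are random here, purely for illustration).
n, hidden = 3, 16
W1 = rng.normal(size=(hidden, n))
b1 = np.zeros(hidden)
W2 = rng.normal(size=hidden)
b2 = 0.0

def f(x):
    """Forward pass for a batch of inputs x with shape (batch, n)."""
    h = np.tanh(x @ W1.T + b1)
    return h @ W2 + b2

def monotonicity_penalty(x, eps=1e-2):
    """Soft penalty for non-monotonicity in the first coordinate.

    Takes a finite difference f(x + eps*e1) - f(x) and hinges on its
    negative part, so the term is zero wherever f is non-decreasing
    along coordinate 0 at the sampled points.
    """
    x_shift = x.copy()
    x_shift[:, 0] += eps
    diff = f(x_shift) - f(x)
    return np.mean(np.maximum(-diff, 0.0))

# During training one would minimize data_loss + lam * penalty.
x = rng.normal(size=(128, n))
y = rng.normal(size=128)  # placeholder targets
data_loss = np.mean((f(x) - y) ** 2)
lam = 10.0
total_loss = data_loss + lam * monotonicity_penalty(x)
```

A soft penalty only encourages monotonicity at the sampled points; hard guarantees instead require architectural constraints, e.g. restricting the weights acting on the first coordinate to be non-negative with monotone activations.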
Topic objective-function loss-function neural-network
Category Data Science