Neural network function approximation with constraints

I would like to approximate a function $f(\cdot)$ by means of a neural network, given a finite set of observations $f(x_i)$ where $x_i\in\mathbb{R}^n$ and $i=1,\dots,N$. However, I have some prior knowledge of how this function should behave, for example that it is monotonic in the first coordinate.

Are there methodologies that account for this type of shape constraint when training a (D)NN?

Topic: objective-function, loss-function, neural-network

Category: Data Science


One option is to synthetically generate additional data that exhibits the desired behaviour and provide it as training data to the model. Deep neural networks are good at picking up consistent patterns: if the training data is always monotonic in the first coordinate, the model will tend to learn that, although nothing guarantees monotonicity outside the training distribution.
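As a minimal sketch of this idea, suppose (purely for illustration) that the underlying function is known to be monotone nondecreasing in the first coordinate; the `f` below is a hypothetical stand-in used to generate dense synthetic training data, with a numerical sanity check on one slice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target function, monotone nondecreasing in x[:, 0]
# (derivative w.r.t. x0 is 2 + 1.5*x0**2 > 0 everywhere).
def f(x):
    return np.sin(x[:, 1]) + 2.0 * x[:, 0] + 0.5 * x[:, 0] ** 3

# Dense synthetic training set in R^2.
X = rng.uniform(-1, 1, size=(1000, 2))
y = f(X)

# Sanity check on a slice: with x1 fixed, f grows with x0.
x0 = np.linspace(-1, 1, 100)
grid = np.column_stack([x0, np.full(100, 0.25)])
vals = f(grid)
assert np.all(np.diff(vals) >= 0)
```

A model trained on `(X, y)` sees only monotone-in-`x0` examples, which encourages (but does not enforce) monotone behaviour.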

Another option is to use a library such as TensorFlow Lattice, which lets you add "common sense" shape constraints (e.g. monotonicity in a given feature) directly to the model.
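TensorFlow Lattice enforces such constraints through constrained lattice interpolation; a simpler hard-constraint idea in the same spirit is to restrict weight signs. The NumPy sketch below (untrained, random weights; all names are illustrative) uses the standard sufficient condition: non-negative first-layer weights on the first coordinate, non-negative output weights, and a monotone activation make the network monotone nondecreasing in that coordinate by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network on R^2. Monotonicity in x[0] is guaranteed because:
#  - first-layer weights on x[0] are clipped to be non-negative,
#  - all output-layer weights are non-negative,
#  - tanh is monotone increasing.
# Weights on x[1] stay unconstrained, so that coordinate is free.
W1 = rng.normal(size=(8, 2))
W1[:, 0] = np.abs(W1[:, 0])           # non-negative weights on first coordinate
b1 = rng.normal(size=8)
W2 = np.abs(rng.normal(size=(1, 8)))  # non-negative output weights
b2 = rng.normal(size=1)

def net(x):
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)

# Numerical check: sweep x[0] with x[1] fixed; outputs must be nondecreasing.
outs = [net(np.array([x0, 0.3])) for x0 in np.linspace(-3, 3, 50)]
assert all(b >= a - 1e-12 for a, b in zip(outs, outs[1:]))
```

In practice the clipping (or an equivalent non-negative parameterisation such as `exp` or `softplus`) would be applied after each gradient step during training.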
