Between CNN and MLP: neural network architecture for a "close to convolutional" problem?

I am looking to approximate a forward problem (expensive to calculate precisely) with a NN. Input and output are vectors of identical length. Although the mapping is not linear, the output somewhat resembles a convolution of the input with a kernel, except that the kernel is not constant: it varies smoothly with the offset (position) along the vector. I can only provide a limited training set, so I'm looking for a way to exploit this smoothness.
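To make the structure concrete, here is a minimal NumPy sketch of that kind of forward model: each output element is a weighted sum over its input neighbourhood, but the weights drift smoothly with position. The kernel shape and its variation are purely illustrative assumptions, not the actual expensive forward problem.

```python
import numpy as np

def position_varying_conv(x, half_width=3):
    """Toy forward model: convolution whose kernel changes smoothly with position."""
    n = len(x)
    y = np.zeros(n)
    offsets = np.arange(-half_width, half_width + 1)
    for i in range(n):
        # Hypothetical smoothly position-dependent kernel: a Gaussian
        # whose width grows slowly along the vector.
        sigma = 1.0 + 0.5 * i / n
        kernel = np.exp(-(offsets / sigma) ** 2)
        kernel /= kernel.sum()
        for k, w in zip(offsets, kernel):
            j = i + k
            if 0 <= j < n:
                y[i] += w * x[j]
    return y
```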

Correct me if I'm wrong (I'm completely new to ML/NN), but my understanding is that a simple MLP would learn the response at each offset more or less independently and would not really interpolate the pattern seen at neighbouring offsets. I would expect this to inflate the required training set, since the full pattern would have to be demonstrated at every offset separately. A CNN-like approach, on the other hand, would transfer the pattern across neighbouring offsets, but would fail to account for how it varies with position.
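A quick parameter-count comparison (PyTorch) of the two extremes may make the data-efficiency argument concrete; the vector length and kernel size below are arbitrary choices for illustration:

```python
import torch.nn as nn

n, k = 256, 7
# One weight per (input position, output position) pair: nothing is shared.
mlp_layer = nn.Linear(n, n)
# One small kernel shared across all positions.
conv_layer = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2)

print(sum(p.numel() for p in mlp_layer.parameters()))   # 65792
print(sum(p.numel() for p in conv_layer.parameters()))  # 8
```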

I'm wondering whether there is a common approach that accounts for both. I was thinking of either creating a variant of the convolutional layer with an additional offset-dependent constant input (sketched below), or some form of regularization that enforces smoothness across the MLP's output weights. Still, I would expect there to be a cookbook approach for such a case that I have simply missed?
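For the first idea, one hedged sketch (in the spirit of CoordConv) is to concatenate a normalized position channel to the input, so a shared convolutional kernel can still condition on where it is in the vector. The layer sizes and nonlinearity below are assumptions for illustration, not a recommendation:

```python
import torch
import torch.nn as nn

class CoordConv1d(nn.Module):
    """1-D convolution with an extra input channel carrying the normalized coordinate."""
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.conv = nn.Conv1d(in_channels + 1, out_channels,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x):                      # x: (batch, channels, length)
        b, _, n = x.shape
        coord = torch.linspace(-1.0, 1.0, n, device=x.device)
        coord = coord.view(1, 1, n).expand(b, 1, n)
        return self.conv(torch.cat([x, coord], dim=1))

# Tiny example model: two coordinate-aware conv layers mapping a vector
# to a vector of the same length.
model = nn.Sequential(
    CoordConv1d(1, 16, kernel_size=7), nn.ReLU(),
    CoordConv1d(16, 1, kernel_size=7),
)
y = model(torch.randn(4, 1, 256))   # output shape: (4, 1, 256)
```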

Topic mlp cnn convolutional-neural-network convolution efficiency

Category Data Science
