NNs for fitting highly oscillatory functions
In a scientific computing application of neural networks, I need to train several scalar-output neural networks to maximize a target/loss function (derived from the weak form of a PDE).
It is known from theoretical considerations that the functions which are optimal with respect to the target (i.e. the maximizers) are typically extremely oscillatory. I suspect this is why, in my first numerical experiments, typical network architectures, initializations and training procedures from computer vision and related fields perform poorly at fitting such oscillatory functions.
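For concreteness, here is a minimal sketch of the kind of experiment I mean. The target `sin(50*pi*x)` is only a hypothetical stand-in for an actual maximizer, and the architecture/optimizer are the "typical" choices mentioned above:

```python
# Minimal sketch: a standard tanh MLP fitted by least squares to a highly
# oscillatory target. The target here is a hypothetical stand-in, not the
# actual PDE functional.
import torch

torch.manual_seed(0)

x = torch.linspace(0.0, 1.0, 1024).unsqueeze(1)   # training points
y = torch.sin(50.0 * torch.pi * x)                # highly oscillatory target

model = torch.nn.Sequential(                      # "typical" small MLP
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5000):
    opt.zero_grad()
    loss = torch.mean((model(x) - y) ** 2)
    loss.backward()
    opt.step()

# In my experience the loss stalls far from zero: standard MLPs are known to
# fit low frequencies first ("spectral bias" / F-principle).
print(loss.item())
```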
Does anyone have ideas, suggestions or references for how to deal with this problem? Are there standard network architectures, activation functions, initializations and so on that work well when the optimal result is a highly oscillatory function? (The training points stem from certain quadrature rules, as sketched below, and in principle I can choose arbitrarily many of them, restricted of course by the computational cost.)
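For context, the training points come from a quadrature rule, roughly like this (Gauss-Legendre and the domain `[0, 1]` are only examples here; the actual rule and domain are problem-specific):

```python
# Sketch: training points taken as quadrature nodes, so a weak-form functional
# can be approximated as sum_i w[i] * integrand(x[i]).
import numpy as np

n = 256
nodes, weights = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]

a, b = 0.0, 1.0                          # hypothetical integration domain
x = 0.5 * (b - a) * nodes + 0.5 * (b + a)  # affine map of nodes to [a, b]
w = 0.5 * (b - a) * weights                # correspondingly rescaled weights

# The networks are then trained/maximized on exactly these x values, weighted
# by w; increasing n refines the rule at higher computational cost.
```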
Best regards
PM
Topics: weight-initialization, training, neural-network
Category: Data Science