One key benefit of genetic programming is that it can generate models capable of extrapolation. Neural nets tend to interpolate well within the range of their training data, but fall apart when asked to extrapolate beyond it.
Data fitting is a simple example that highlights this. Let the input data be $x = \{1, 2, 3\}$ and the responses be $y = \{1, 4, 9\}$.
In genetic programming, the natural choice here is symbolic regression, which evolves an algebraic expression to fit the data. Given this training data, it would likely evolve the expression $y = x^2$. That expression can take values outside the training range and map them to responses outside the training responses: supply 10 as input, and the evolved expression gives a predicted value of 100.
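As a rough illustration, here is what that could look like with the third-party gplearn library (an assumed choice; any symbolic regression package would do). Whether the search actually recovers $x^2$ from only three points depends on the random seed and the search settings, so treat this as a sketch rather than a guaranteed result:

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # assumes gplearn is installed

# The toy training data from above
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([1.0, 4.0, 9.0])

# Evolve expressions built from add/sub/mul; a small search is often
# enough to find something equivalent to mul(X0, X0), i.e. x^2
est = SymbolicRegressor(
    population_size=500,
    generations=20,
    function_set=("add", "sub", "mul"),
    random_state=0,
)
est.fit(X, y)

print(est._program)            # the evolved expression, e.g. mul(X0, X0)
print(est.predict([[10.0]]))   # extrapolates: ~100 if x*x was found
```

Because the model is an explicit algebraic expression, it applies just as well at $x = 10$ as at the training points.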
Alternatively, a neural net is very good at interpolating: it learns a network of weighted connections that map the training inputs to the training responses. But given new data, it can only apply the mapping it built to fit the training data. So a trained net would likely map $x \mapsto y$ almost perfectly on the training set, but if you were to give it 10 as input, well beyond the largest training value of 3, it would likely return a prediction close to the training responses it has seen (with saturating activations such as tanh, the output plateaus near 9) instead of 100.
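A minimal sketch of the same experiment with a small feed-forward net, here using scikit-learn's MLPRegressor as an assumed stand-in for any neural net (the exact off-range prediction varies with architecture and seed):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Same toy training data
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([1.0, 4.0, 9.0])

# A small tanh network; tanh units saturate outside the training range
net = MLPRegressor(
    hidden_layer_sizes=(32, 32),
    activation="tanh",
    max_iter=10000,
    random_state=0,
)
net.fit(X, y)

print(net.predict([[2.0]]))   # interpolation: close to the training response 4
print(net.predict([[10.0]]))  # extrapolation: plateaus near the training
                              # responses instead of approaching 100
```

(A ReLU network would instead extrapolate linearly beyond the training range, which is still far from the quadratic's 100.)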