avoiding premature convergence with neural networks (EAs)

I am currently writing a program that plays Snake on a 25×25 grid. It optimizes the weights of 300 candidate solutions (each solution being a different neural network) with an evolutionary strategy, i.e., by parent selection and random mutation. I decided not to apply crossover to the parent pool because of the black-box nature of MLPs and other neural networks. My population size is 300, and 10 parents are selected every generation (200 generations in total). Every solution that is not a parent is deleted to make room for 290 new mutated solutions based on the parents, as sketched below.
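For concreteness, here is a minimal sketch of that loop, assuming flat NumPy weight vectors; `evaluate_fitness` (one game of Snake per solution) and the Gaussian mutation step size are placeholders, not my actual code.

```python
import numpy as np

POP_SIZE, N_PARENTS, N_WEIGHTS, N_GENERATIONS = 300, 10, 190, 200
MUTATION_STD = 0.1  # assumed mutation step size, not a value from the post

def evolve(evaluate_fitness, rng=np.random.default_rng()):
    # Each row is one solution: a flat vector of 190 network weights.
    population = rng.normal(0.0, 1.0, size=(POP_SIZE, N_WEIGHTS))
    best = population[0]
    for gen in range(N_GENERATIONS):
        fitness = np.array([evaluate_fitness(w) for w in population])
        order = np.argsort(fitness)                  # ascending: best solution last
        parents = population[order[-N_PARENTS:]]     # truncation selection of 10 parents
        best = parents[-1]
        # Delete everything else and refill with 290 mutated copies of random parents.
        idx = rng.integers(0, N_PARENTS, size=POP_SIZE - N_PARENTS)
        noise = rng.normal(0.0, MUTATION_STD, size=(POP_SIZE - N_PARENTS, N_WEIGHTS))
        population = np.vstack([parents, parents[idx] + noise])
    return best
```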

My network structure (MLP) is the following: 6 input nodes, two hidden layers of 10 nodes each, and 3 output nodes.
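A minimal sketch of that architecture follows (no biases, so 6·10 + 10·10 + 10·3 = 190 weights); the tanh activations and reading the 3 outputs as turn-left / straight / turn-right are my assumptions, not stated in the post.

```python
import numpy as np

LAYER_SIZES = [6, 10, 10, 3]   # 6*10 + 10*10 + 10*3 = 190 weights, no biases

def unpack(weights):
    """Split the flat 190-element weight vector into the three weight matrices."""
    mats, start = [], 0
    for n_in, n_out in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:]):
        mats.append(weights[start:start + n_in * n_out].reshape(n_in, n_out))
        start += n_in * n_out
    return mats

def forward(weights, x):
    """Map the 6 inputs to 3 output scores; tanh hidden activations are an assumption."""
    w1, w2, w3 = unpack(weights)
    h1 = np.tanh(x @ w1)
    h2 = np.tanh(h1 @ w2)
    return h2 @ w3

# e.g. action = np.argmax(forward(best, observation)) to pick one of the 3 moves
```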

With my current version of the program, I am able to reach a steady score of 60 apples (or points) eaten (around 15 out of 200 have a score above 60). And this after only 50 generations! But now it seems I have hit a local optimum that I cannot overcome. Does anyone have an idea whether further progress is still possible, or have I hit the ceiling? The quickly rising performance might be a sign that my exploration/exploitation balance is not ideal, but with each solution containing 190 weights, more exploration seems too computationally expensive and would take forever on my 10th-gen i7 laptop :/.

Can I call this premature convergence, and is there a way to deal with it, or have I hit the limits of my approach?

Topic genetic-programming evolutionary-algorithms mlp optimization machine-learning

Category Data Science


You can take a look at dropout; it may help you get out of the local optimum you are stuck in.
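As a rough illustration, here is a minimal sketch of inverted dropout applied to a hidden activation vector; the 0.2 drop probability and the idea of applying it during fitness evaluation (rather than during gradient training, its usual setting) are assumptions on my part, not part of the answer above.

```python
import numpy as np

def dropout(h, p_drop=0.2, rng=np.random.default_rng()):
    """Randomly zero hidden activations and rescale the rest (inverted dropout)."""
    mask = rng.random(h.shape) >= p_drop   # keep each unit with probability 1 - p_drop
    return (h * mask) / (1.0 - p_drop)     # rescale so the expected activation is unchanged

# e.g. inside the forward pass: h1 = dropout(np.tanh(x @ w1))
```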
