What makes a problem good for an evolutionary strategy vs a genetic algorithm vs particle swarm optimization?

I understand that evolutionary strategies (ES), genetic algorithms (GA), and particle swarm optimization (PSO) are all algorithms used to solve optimization problems, but what might make a given problem better suited to one of those techniques than to the others?

For example, I read in an article that genetic algorithms are typically preferred for combinatorial problems, but beyond that one sentence, I haven't been able to find any source that explains when you might choose one of these techniques over the others.

Is there a general approach for deciding which technique to use, or is it more of a guess-and-check situation? It would be great if someone could help me understand when each of the three might be preferred.

Tags: evolutionary-algorithms, genetic-algorithms, optimization

Category: Data Science


Both ES and PSO are canonically described as real-valued optimization algorithms. You can formulate combinatorial variants (and plenty of people have), but the variants people learn first use operators that assume floating-point numbers as parameters. I suspect that's part of the answer you're looking for: if you were taught that ES is for real-valued problems, you naturally tend to think of something else as being better at combinatorial problems.
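To make that concrete, here is a minimal sketch (in Python, with illustrative parameter choices) of the canonical operators: ES mutation and the PSO velocity update both do arithmetic on floats, while the classic GA bit-flip operator only needs a binary genome.

```python
import random

def es_mutation(x, sigma=0.1):
    """ES-style Gaussian perturbation: assumes a real-valued genome."""
    return [xi + random.gauss(0.0, sigma) for xi in x]

def pso_velocity_update(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO velocity update: again, pure arithmetic on floats."""
    return [w * vi
            + c1 * random.random() * (pi - xi)
            + c2 * random.random() * (gi - xi)
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]

def ga_bitflip_mutation(bits, rate=0.05):
    """Classic GA operator: flips bits; no notion of distance on the reals."""
    return [1 - b if random.random() < rate else b for b in bits]
```

Nothing stops you from swapping these operators around (e.g., a swap-based "mutation" for PSO on permutations), which is exactly how the combinatorial variants arise; the point is only that the textbook versions above are tied to their representations.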

A short answer that's likely accurate is that one really isn't better than the other. We have No Free Lunch results showing this to be true over a sufficiently broad class of problems, and even in practice, GAs, ES, and PSO are fuzzy concepts that bleed into one another without sharp boundaries. I could take a GA, modify it to work on real-valued problems, and, with enough tweaking, end up with an ES, with no clear point at which it stopped being a GA and started being an ES. At any point along that journey it may or may not perform better on a specific problem. The same is true if I start from ES or PSO and build combinatorial versions: those versions will simply be better sometimes, on some problems.
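The lack of a sharp boundary is easy to see in code: strip an ES down to its simplest form and you get a generic perturb-and-select loop, which is also what a minimal GA with population size one looks like. Here is a sketch of a (1+1)-ES on a toy sphere function (the function, step size, and iteration count are all illustrative choices, not anything prescribed by the algorithm):

```python
import random

def sphere(x):
    """Toy objective: sum of squares, minimized at the origin."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(f, x, sigma=0.3, iters=500, seed=0):
    """(1+1)-ES: mutate one parent, keep the child only if it's no worse."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:
            x, fx = child, fc
    return x, fx

best, val = one_plus_one_es(sphere, [2.0, -3.0])
```

Swap the Gaussian perturbation for a bit-flip and the loop is a (1+1) GA on bitstrings; add a population and recombination and the labels blur further. Which variant wins on a given problem is exactly the empirical question the answer above describes.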

The deeper answer is that we just don't know. There aren't good theoretical models of why one algorithm outperforms another on a specific problem. I haven't been involved in this research field for a few years now, so I'm sure I'm unaware of some developments, but generally I think this is still an open question. You can sometimes take a specific problem and analyze what one particularly successful algorithm is doing, but the answer is often less than satisfying (e.g., it turns out my particular mutation operator is biased to jump just the right distance to escape a local optimum on this specific problem, or whatever).

I'm also aware of work from maybe 6-8 years ago attempting to build a dataset of problem/algorithm performance that could be fed to a machine learning algorithm in the hope of insights, but I don't think it ever reached the point where a general question like "why is this better for combinatorial problems?" could really be answered.
