Ensemble of different reservoirs (echo state networks)

Suppose I want to do reservoir computing to classify an input into the proper category (e.g. recognizing a handwritten letter).

Ideally, after training and testing a single reservoir, the output vector y would have one component close to 1 and all others close to 0. In practice this is not the case, and I don't want to make the reservoir bigger at the moment.

I was therefore thinking of combining the predictions of a number of independently trained reservoirs for higher accuracy. After all, each reservoir carries information not only about the 'most probable' output, but also about the 'second most probable' one, etc. I have since found out that the output of each reservoir can be renormalized to a probability vector with the 'softmax' function. But how should I combine the probabilities assigned by the individual reservoirs into a single total probability? The hope is that, with enough independent reservoirs, one element of the combined y would stand out clearly.
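
For concreteness, this is roughly the renormalization step I have in mind (a minimal NumPy sketch; `y_raw` is a hypothetical readout vector, not my actual data):

```python
import numpy as np

def softmax(scores):
    """Renormalize one reservoir's raw readout vector into probabilities."""
    z = scores - np.max(scores)   # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

y_raw = np.array([2.0, 0.5, -1.0])   # hypothetical readout for 3 classes
p = softmax(y_raw)                   # sums to 1; largest score -> largest prob
```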

Some ways that I thought of (sketched in code after this list):

  • Multiply the probabilities obtained from the different reservoirs element-wise (this seems the most intuitive to me)
  • Take the average of the probability vectors
  • For each letter to be classified, just follow the single reservoir that assigns the highest probability to any class, or the one with the largest gap between its first and second most probable class
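
Here is a rough sketch of how I imagine all three schemes, assuming the softmax outputs have been stacked into a matrix of shape (n_reservoirs, n_classes); the function and argument names are mine, just for illustration:

```python
import numpy as np

def combine(prob_matrix, scheme="product"):
    """Combine per-reservoir probability vectors into one prediction.

    prob_matrix: shape (n_reservoirs, n_classes); each row is the
    softmax output of one reservoir (assumed layout).
    """
    if scheme == "product":
        # Element-wise product; renormalize so the result sums to 1 again.
        combined = np.prod(prob_matrix, axis=0)
        return combined / combined.sum()
    if scheme == "average":
        return prob_matrix.mean(axis=0)
    if scheme == "most_confident":
        # Follow the single reservoir with the largest margin between its
        # first and second most probable class. (The 'highest single
        # probability' variant would instead be
        # prob_matrix[np.argmax(prob_matrix.max(axis=1))].)
        sorted_rows = np.sort(prob_matrix, axis=1)
        margins = sorted_rows[:, -1] - sorted_rows[:, -2]
        return prob_matrix[np.argmax(margins)]
    raise ValueError(f"unknown scheme: {scheme}")

# Tiny example with two reservoirs and three classes:
probs = np.array([[0.6, 0.3, 0.1],
                  [0.5, 0.4, 0.1]])
print(combine(probs, "product"))  # ~ [0.70, 0.28, 0.02]
```

One thing I already notice: the product (which amounts to summing log-probabilities) rewards consensus between reservoirs much more aggressively than the plain average does.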

Are such schemes appropriate? Or should I do something else? I saw a paper where a second layer was trained to combine the outputs of different reservoirs, but this seems too involved at the moment; something simple would be preferable.

Tags: ensemble-learning, softmax, probability, neural-network, machine-learning
