Best gradient-free methods to make a two-layer neural network converge on MNIST?

We've developed a two-layer neural network in C++ that works on the MNIST dataset. We don't want to use backpropagation. Are there effective gradient-free methods that still make the network converge to high accuracy?
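To make the question concrete, below is a minimal sketch of the kind of gradient-free training loop we have in mind: a (1+1) evolution strategy that mutates all weights with Gaussian noise and keeps a candidate only if the loss improves. The tiny `Net` struct, the sin(x) fitting task, and names like `sigma` are placeholders for illustration, not our actual code; for MNIST one would swap in the real forward pass and a cross-entropy loss over mini-batches.

```cpp
// Minimal sketch: (1+1) evolution strategy (hill climbing with Gaussian
// mutation) for training network weights without gradients. The toy
// two-layer network and sin(x) data are placeholders for a real setup.
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Toy two-layer network: input -> tanh hidden layer -> linear output.
struct Net {
    int in, hid, out;
    std::vector<double> w; // all weights and biases, flattened
    Net(int i, int h, int o)
        : in(i), hid(h), out(o), w((i + 1) * h + (h + 1) * o, 0.0) {}

    std::vector<double> forward(const std::vector<double>& x) const {
        std::vector<double> h(hid), y(out);
        int k = 0;
        for (int j = 0; j < hid; ++j) {
            double s = w[k++];                        // hidden bias
            for (int i2 = 0; i2 < in; ++i2) s += w[k++] * x[i2];
            h[j] = std::tanh(s);
        }
        for (int j = 0; j < out; ++j) {
            double s = w[k++];                        // output bias
            for (int i2 = 0; i2 < hid; ++i2) s += w[k++] * h[i2];
            y[j] = s;
        }
        return y;
    }
};

// Placeholder loss: mean squared error over a fixed toy dataset.
// Replace with cross-entropy over (a mini-batch of) MNIST.
double loss(const Net& net,
            const std::vector<std::vector<double>>& X,
            const std::vector<std::vector<double>>& Y) {
    double e = 0.0;
    for (size_t n = 0; n < X.size(); ++n) {
        auto y = net.forward(X[n]);
        for (int j = 0; j < net.out; ++j) {
            double d = y[j] - Y[n][j];
            e += d * d;
        }
    }
    return e / X.size();
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> gauss(0.0, 1.0);

    // Toy problem: learn y = sin(x) on a handful of points.
    std::vector<std::vector<double>> X, Y;
    for (int n = 0; n < 32; ++n) {
        double x = -3.0 + 6.0 * n / 31.0;
        X.push_back({x});
        Y.push_back({std::sin(x)});
    }

    Net best(1, 16, 1);
    for (auto& wi : best.w) wi = 0.1 * gauss(rng);    // small random init
    double bestLoss = loss(best, X, Y);

    double sigma = 0.1;                               // mutation step size
    for (int iter = 0; iter < 20000; ++iter) {
        Net cand = best;
        for (auto& wi : cand.w) wi += sigma * gauss(rng);
        double l = loss(cand, X, Y);
        if (l < bestLoss) {                           // keep improvements only
            best = cand;
            bestLoss = l;
            sigma *= 1.1;                             // 1/5th-rule-style adaptation
        } else {
            sigma *= 0.98;
        }
        if (sigma < 1e-6) sigma = 1e-6;
    }
    std::cout << "final loss: " << bestLoss << "\n";
    return 0;
}
```

This converges on the toy task, but we are unsure whether approaches in this family (evolution strategies, simulated annealing, CMA-ES, etc.) scale to MNIST-sized weight vectors with high final accuracy, which is the heart of the question.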

Topic: neural-network

Category: Data Science
