Best gradient-free methods to make a 2-layer neural network converge on MNIST?
We've developed a C++ neural network that works on the MNIST dataset, and we don't want to use backpropagation. Are there effective gradient-free methods that make the network converge to high accuracy? A minimal sketch of the kind of approach we have in mind is below.
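
For concreteness, here is a minimal sketch of one gradient-free approach we could try: a simple (1+1) evolution strategy over the flattened weight vector. Everything in the snippet is illustrative; `evaluate_fitness` is a self-contained toy stand-in for forwarding the MNIST validation set through our real network and returning its accuracy.

```cpp
#include <iostream>
#include <random>
#include <vector>

// Stand-in for the real evaluation: in our code this would forward the
// MNIST validation set through the 2-layer network and return accuracy.
// Here it is a toy objective (maximum at w = 1,...,1) so the sketch
// compiles and runs on its own.
double evaluate_fitness(const std::vector<double>& w) {
    double s = 0.0;
    for (double wi : w) s -= (wi - 1.0) * (wi - 1.0);
    return s;  // higher is better, like accuracy
}

int main() {
    const int dim = 100;        // number of weights (toy size)
    std::vector<double> w(dim, 0.0);

    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 1.0);

    double sigma = 0.1;         // mutation step size
    double best = evaluate_fitness(w);

    for (int it = 0; it < 2000; ++it) {
        // Perturb every weight with Gaussian noise.
        std::vector<double> cand = w;
        for (double& wi : cand) wi += sigma * noise(rng);

        double f = evaluate_fitness(cand);
        if (f > best) {          // keep the candidate only if it improves
            w = std::move(cand);
            best = f;
            sigma *= 1.1;        // rough 1/5th-success-rule step adaptation
        } else {
            sigma *= 0.98;
        }
    }
    std::cout << "best fitness: " << best << "\n";
    return 0;
}
```

Is this kind of scheme competitive on MNIST, or are there better gradient-free alternatives (e.g. population-based evolution strategies, SPSA, simulated annealing)?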
Topic neural-network
Category Data Science