Performance gain of a GPU when training DNNs

Currently, I train deep neural networks on my CPU (i7-6700K) using a TensorFlow build without AVX2 enabled. Training a network takes about 3 weeks, so I am looking for a (cheap) way to speed up the process. Is it better to compile TensorFlow with AVX2 enabled, or to buy a cheap[1] GPU such as the GeForce GTX 1650 Super (about 180€, 1408 CUDA cores)? What is the estimated performance gain from using a cheap[1] GPU?

[1] Cheap compared to current top-end GPUs.
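If a GPU is added, a quick way to confirm that the installed TensorFlow binary can actually use it is shown below. This is a minimal sketch assuming TensorFlow 2.x with the GPU-enabled `tensorflow` package installed; it is not specific to the GTX 1650 Super.

```python
import tensorflow as tf

# List any CUDA-capable GPUs TensorFlow can see; an empty list means
# training will silently fall back to the CPU.
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))

# True if this TensorFlow build was compiled with CUDA support.
print("Built with CUDA:", tf.test.is_built_with_cuda())
```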

Topic: hardware, gpu

Category: Data Science


Three years ago the rule of thumb was: about 15 times faster.

Your CPU does about 113 GFLOPS in single-precision floating point (source), while that GPU does about 3 TFLOPS (source), a theoretical ratio of roughly 27×.

My bet: somewhere between 15 and 30 times faster in practice, since data loading and host-to-device transfers eat into the theoretical ratio.
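Once a GPU is installed, a rough empirical comparison is easy to run. The sketch below (assuming TensorFlow 2.x; the device strings `/CPU:0` and `/GPU:0`, the 4096×4096 matrix size, and the repeat count are arbitrary choices) times dense matrix multiplications on both devices. Raw matmul throughput only approximates end-to-end training speed, but it gives a first estimate of the ratio.

```python
import time
import tensorflow as tf

def time_matmul(device, n=4096, repeats=10):
    """Time repeated dense matrix multiplications on the given device."""
    with tf.device(device):
        a = tf.random.normal((n, n))
        b = tf.random.normal((n, n))
        tf.linalg.matmul(a, b)           # warm-up run, excluded from timing
        start = time.perf_counter()
        for _ in range(repeats):
            c = tf.linalg.matmul(a, b)
        _ = c.numpy()                    # block until the device has finished
        return time.perf_counter() - start

cpu_time = time_matmul("/CPU:0")
print(f"CPU:0 : {cpu_time:.2f} s")

if tf.config.list_physical_devices("GPU"):
    gpu_time = time_matmul("/GPU:0")
    print(f"GPU:0 : {gpu_time:.2f} s (~{cpu_time / gpu_time:.1f}x faster)")
```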
