Would it be possible/practical to build a distributed deep learning engine by tapping into ordinary PCs' unused resources?
I started thinking about this in the context of Apple's new line of desktop chips with a dedicated Neural Engine. From what I hear, these chips are quite adept at deep learning workloads (as the name would imply). Since the average user presumably isn't optimizing cost functions on a regular basis, I was wondering whether it would be theoretically possible to use those idle resources to set up some kind of distributed network similar to a Spark cluster, except that instead of 5 to 20 dedicated systems, you would have maybe 200 to 300 nodes, each contributing whatever portion of its resources the owner decided to rent out.

I'm imagining an economic arrangement somewhat like a crypto mining pool, in which the owner of each node gets paid in proportion to their contribution. A toy sketch of what I have in mind follows below.
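To make the idea concrete, here's a purely illustrative simulation of the kind of arrangement I'm imagining (plain Python, single process, no real networking or Neural Engine access; every class name and the payout rule are hypothetical): a coordinator shards each gradient step across volunteer nodes in proportion to the compute they rent out, averages the results, and keeps a mining-pool-style ledger so payouts track work done.

```python
# Toy, self-contained sketch of the arrangement described above.
# Plain Python, no real networking or hardware access; all names and
# the payout rule are hypothetical illustrations, not a real framework.
import random

class VolunteerNode:
    """One ordinary PC renting out some fraction of its spare compute."""
    def __init__(self, node_id, rented_fraction):
        self.node_id = node_id
        self.rented_fraction = rented_fraction  # share of the machine offered

    def compute_gradient(self, w, batch):
        # Gradient of mean squared error for the model y = w * x on this shard.
        return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

class Coordinator:
    """Shards each step across nodes, averages gradients, tracks credit."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.credit = {n.node_id: 0 for n in nodes}  # mining-pool-style ledger

    def sgd_step(self, w, data, lr=0.1):
        total_frac = sum(n.rented_fraction for n in self.nodes)
        weighted_grads, used, start = 0.0, 0, 0
        for node in self.nodes:
            # Shard size is proportional to the compute the node rents out.
            size = max(1, int(len(data) * node.rented_fraction / total_frac))
            batch = data[start:start + size]
            start += size
            if not batch:
                continue
            weighted_grads += len(batch) * node.compute_gradient(w, batch)
            used += len(batch)
            self.credit[node.node_id] += len(batch)  # credit = samples processed
        return w - lr * weighted_grads / used

    def payouts(self, pool_reward):
        # Mining-pool-style split: pay in proportion to accumulated credit.
        total = sum(self.credit.values())
        return {nid: pool_reward * c / total for nid, c in self.credit.items()}

# Simulate 300 volunteer nodes jointly fitting w for y = 3x from noisy data.
random.seed(0)
data = [(x, 3 * x + random.gauss(0, 0.1)) for x in (random.random() for _ in range(3000))]
nodes = [VolunteerNode(i, random.uniform(0.1, 1.0)) for i in range(300)]
coord = Coordinator(nodes)

w = 0.0
for _ in range(200):
    w = coord.sgd_step(w, data)

print("learned w:", round(w, 3), "(true value 3)")
print("node 0 payout share of 100 units:", round(coord.payouts(100.0)[0], 4))
```

The simulation obviously glosses over the hard parts (network latency, unreliable or dishonest nodes, the cost of shipping data around), which is exactly why I'm asking whether this would be practical and not just possible.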
Topic: apache-spark, deep-learning, map-reduce, machine-learning
Category: Data Science