Local RTX 2080 is 3x faster than V100 on GCP?

I have a gaming rig with an i9 CPU, 32 GB of RAM, and an RTX 2080, and a GCP VM with 4 vCPUs, 52 GB of RAM, and a V100.

I'm training on the same dataset with the same toolchain on both machines, and these are the ETAs I get:

GCP VM: 16 days
Gaming rig: 5 days
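For context, this is roughly how I'd compare raw step throughput on the two machines (a minimal sketch with a placeholder model and random data, assuming a PyTorch-style loop; it isn't my actual training job):

```python
import time

import torch
import torch.nn as nn

# Placeholder model and batch; the point is to time steady-state step latency,
# not to reproduce the real workload.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 1024, device=device)
y = torch.randint(0, 10, (256,), device=device)

# Warm up so one-time CUDA initialization doesn't skew the measurement.
for _ in range(10):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

if device.type == "cuda":
    torch.cuda.synchronize()  # drain queued GPU work before starting the clock
start = time.perf_counter()
steps = 100
for _ in range(steps):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
if device.type == "cuda":
    torch.cuda.synchronize()
print(f"{(time.perf_counter() - start) / steps * 1000:.2f} ms/step on {device}")
```

On pure GPU compute like this, I would expect the V100 to win, which is why the real-job ETAs confuse me.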

How can a single $600 GPU outperform a $10k GPU by roughly 3x (16 days vs. 5 days)?

What's going on here?

And what should I even expect?
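If it helps with the diagnosis, a quick way I can check whether the V100 is actually busy during training is to poll nvidia-smi while the job runs (minimal sketch; low or spiky utilization would suggest the GPU is being starved by the input pipeline or CPU preprocessing, e.g. by the VM's 4 vCPUs, rather than being compute-bound):

```python
import subprocess

# Query instantaneous GPU and memory-bandwidth utilization via nvidia-smi.
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=utilization.gpu,utilization.memory",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
)
gpu_util, mem_util = (int(v) for v in out.stdout.strip().split(", "))
print(f"GPU utilization: {gpu_util}%  memory utilization: {mem_util}%")
```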

Tags: google-cloud, training, machine-learning
