How to evenly distribute data to multiple GPUs using Keras

I am using Keras 2.3.1 with the tensorflow-gpu 2.0.0 backend. When I train a model on two RTX 2080 Ti 11 GB GPUs, all of the data is allocated to '/gpu:0' and nothing changes on '/gpu:1'; the second GPU is not used at all.

However, either GPU works fine on its own if I select just one of them.

Moreover, the two GPUs run in parallel without any problem in PyTorch.
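
For reference, both cards do show up when TensorFlow lists the physical devices (tf.config.experimental is the TF 2.0 spelling):

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; both 2080 Tis are reported here.
# (tf.config.experimental is the TF 2.0 API; later releases expose
# tf.config.list_physical_devices directly.)
print(tf.config.experimental.list_physical_devices('GPU'))
```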

Following some examples, I tried to run on multiple GPUs with code like this:
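
A minimal sketch of the usual approach at this version, assuming keras.utils.multi_gpu_model (the model below is a placeholder, not my real network):

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model  # standard multi-GPU helper in Keras 2.3.1

# Placeholder model standing in for the real network.
model = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),
    Dense(10, activation='softmax'),
])

# Replicate the model on 2 GPUs; each batch should be split between them.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer='adam', loss='categorical_crossentropy')
```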

Below is the nvidia-smi output while the multi-GPU model is running:

My environment: CUDA 10.1, cuDNN 7.6.5.

Topic: gpu, keras, tensorflow, deep-learning



Check out the docs on TensorFlow GPU usage.
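
Among other things, that guide shows how to log where each op is placed, which is a quick way to confirm whether '/gpu:1' is being touched at all; a minimal check:

```python
import tensorflow as tf

# Print the device each op is placed on (covered in the GPU usage guide).
tf.debugging.set_log_device_placement(True)

# Pin a computation to the second GPU explicitly.
with tf.device('/gpu:1'):
    a = tf.random.uniform((1000, 1000))
    b = tf.matmul(a, a)

print(b.device)  # expect a device string ending in GPU:1
```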

If you want data parallelism, where a copy of your model runs on each GPU and the data is split between them, you can use tf.distribute.MirroredStrategy.
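
A minimal sketch of that approach (note the distribution strategies integrate with tf.keras rather than the standalone keras package; the model and data below are placeholders):

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for a real dataset.
x_train = np.random.random((1024, 100)).astype('float32')
y_train = np.random.randint(10, size=(1024,))

# MirroredStrategy replicates the model onto every visible GPU and
# splits each batch across the replicas.
strategy = tf.distribute.MirroredStrategy()
print('Replicas in sync:', strategy.num_replicas_in_sync)

# Variables must be created inside the strategy scope so they are
# mirrored on both GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(100,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# Each batch of 64 is split 32/32 across the two GPUs; nvidia-smi
# should now show activity on both devices during training.
model.fit(x_train, y_train, batch_size=64, epochs=2)
```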

The tf.distribute.Strategy docs are also a good source to read.

You should also profile your application; adding a second GPU can actually reduce performance, depending on where your bottleneck is (for example, an input pipeline that cannot feed two GPUs fast enough).
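
One low-effort way to profile a tf.keras run is the TensorBoard callback's profile_batch option (assumed available in TF 2.0; this reuses the model and data from the sketch above):

```python
import tensorflow as tf

# Trace one training batch so TensorBoard's Profile tab can show
# per-GPU utilization and input-pipeline stalls.
tb = tf.keras.callbacks.TensorBoard(log_dir='./logs', profile_batch=2)

model.fit(x_train, y_train, batch_size=64, epochs=1, callbacks=[tb])

# Then inspect with: tensorboard --logdir ./logs
```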
