How to evenly distribute data to multiple GPUs using Keras
I am using Keras 2.3.1 with the TensorFlow-GPU 2.0.0 backend. When I train a model on two RTX 2080 Ti 11 GB GPUs, all of the data is allocated to '/gpu:0' and nothing happens on '/gpu:1'; the second GPU is not used at all.
However, each GPU works fine if I select only that one.
Moreover, the two GPUs run in parallel without any problem in PyTorch.
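For comparison, here is a minimal sketch of the PyTorch data-parallel setup that does use both GPUs for me (the model and tensor shapes below are placeholders, not my real network):

```python
import torch
import torch.nn as nn

# Placeholder model; my real network is larger.
model = nn.Linear(100, 10)

# Wrap the model so each forward pass splits the batch
# across the two GPUs and gathers the outputs on GPU 0.
model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

x = torch.randn(256, 100).cuda()
out = model(x)  # 128 samples run on cuda:0, 128 on cuda:1
```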
Following some examples, I tried to run on multiple GPUs with code like this:
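(My original snippet is not reproduced here; below is a minimal sketch of the usual `keras.utils.multi_gpu_model` pattern for Keras 2.3.x that such examples follow, with a placeholder model and random data.)

```python
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

# Build the template model on the CPU so its weights live in host
# memory, as the Keras docs recommend for multi-GPU replication.
with tf.device('/cpu:0'):
    model = Sequential([
        Dense(128, activation='relu', input_shape=(100,)),
        Dense(10, activation='softmax'),
    ])

# Replicate the model onto both GPUs; each training batch should be
# split into two sub-batches, one per GPU.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer='adam', loss='categorical_crossentropy')

# Placeholder random data.
x = np.random.rand(1024, 100).astype('float32')
y = np.random.rand(1024, 10).astype('float32')
parallel_model.fit(x, y, batch_size=256, epochs=1)
```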
Below is the nvidia-smi output when I run a multi-GPU model.
I am using CUDA 10.1 and cuDNN 7.6.5.
Topic gpu keras tensorflow deep-learning
Category Data Science