Not able to connect to GPU on Google Colab
I'm trying to use TensorFlow with a GPU on Google Colab.
I followed the steps listed at https://www.tensorflow.org/install/gpu and confirmed that the GPU is visible and CUDA is installed with the following commands:
!nvcc --version
!nvidia-smi
This works as expected, giving:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Wed Nov 20 10:58:14 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   53C    P8    10W /  70W |      0MiB / 15079MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
So far so good. Next, I check whether the GPU is visible to TensorFlow:
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 16436294862263048894
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 18399082617569983288
physical_device_desc: "device: XLA_CPU device", name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 1461835910630192838
physical_device_desc: "device: XLA_GPU device"]
However, when I try to run even a simple operation on the GPU with TensorFlow, it throws an error (a minimal repro is included below). When I check whether the GPU is visible to TensorFlow, it returns False:
tf.test.is_gpu_available()
False
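For reference, this is the kind of minimal snippet that fails for me (a sketch of the "simple operation" mentioned above, written in TF 1.x graph-mode style; in TF 2.x eager mode the matmul line itself would raise):

import tensorflow as tf

# Pin a trivial matmul to the GPU explicitly. With no /device:GPU:0
# registered, placement fails and session.run raises an
# InvalidArgumentError ("Cannot assign a device for operation ...").
with tf.device('/device:GPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(c))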
What am I doing wrong, and how do I fix this?