GPU shows 0% utilization even though the model and tensors are on the GPU?
I am trying to run some PyTorch scripts on a remote GPU server. From the Ubuntu terminal I launch the script with CUDA_VISIBLE_DEVICES=0 python3 script.py (or whichever GPU index is free). I also use the following snippet in the code and call .to(device) on the model, the input tensors, and the target tensors:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # pick the GPU if one is visible, else fall back to CPU
print(device)
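
For context, the model, inputs, and targets are moved with .to(device) roughly like this (the model and tensor names below are placeholders, not my actual script):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(128, 10).to(device)              # model parameters moved to the GPU
inputs = torch.randn(64, 128).to(device)           # input batch moved to the GPU
targets = torch.randint(0, 10, (64,)).to(device)   # target tensor moved to the GPU

outputs = model(inputs)                            # forward pass runs on the GPU
loss = nn.functional.cross_entropy(outputs, targets)
loss.backward()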
I have confirmed that my model, input, and target tensors are on the CUDA device (checked roughly as shown below), yet the GPU shows 0% utilization throughout the run. What could I be missing?
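
For reference, the placement check was along these lines (illustrative; variable names stand in for the ones in my script):

print(next(model.parameters()).device)   # expected: cuda:0
print(inputs.device, targets.device)     # expected: cuda:0 cuda:0
print(torch.cuda.get_device_name(0))     # name of the GPU made visible by CUDA_VISIBLE_DEVICES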