GPU shows 0% utilization even when the model and tensors are mounted on the GPU?

I am trying to run some PyTorch scripts on a remote GPU server. I launch the script from the Ubuntu terminal as CUDA_VISIBLE_DEVICES=0 python3 script.py (or whichever device is available). I also used the following snippet in the code and called .to(device) on the model, input, and target tensors.

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
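
Continuing from that snippet, a minimal toy sketch of what I mean by moving the model and tensors (toy model and random data, not my actual script):

import torch.nn as nn

model = nn.Linear(10, 2).to(device)              # toy model, stands in for my real network
inputs = torch.randn(32, 10).to(device)          # input batch moved to the device
targets = torch.randint(0, 2, (32,)).to(device)  # target batch moved to the device
outputs = model(inputs)                          # forward pass runs on the selected device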

I have confirmed that my model, data, and target tensors are mounted on the CUDA device, but the GPU shows 0% utilization throughout the run. What could I be missing?
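
(For reference, placement can be verified with checks along these lines, using the names from the snippet above:)

print(next(model.parameters()).device)      # expect cuda:0 for the model weights
print(inputs.device, targets.device)        # expect cuda:0 for both tensors
print(torch.cuda.memory_allocated(device))  # non-zero once tensors live on the GPU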



The problem was caused by some modules I was importing that did not keep the tensors on the CUDA device through that part of the pipeline. That stage became the bottleneck and kept GPU utilization at zero.
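
In other words, if an imported helper does its work on the CPU (for example via NumPy), its output has to be moved back to the CUDA device before the model sees it. A rough sketch of the pattern, with an illustrative helper rather than my actual module:

import torch

def cpu_preprocess(batch):
    # illustrative stand-in for an imported module that silently
    # returns CPU data (e.g. via a NumPy round-trip)
    return batch.cpu().numpy() * 2.0

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
batch = torch.randn(32, 10, device=device)

processed = torch.as_tensor(cpu_preprocess(batch), device=device)  # move result back to the GPU
assert processed.device.type == device.type  # catch silent CPU fallbacks early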
