Gradients are becoming None in PyTorch

# AM is the autograd function for argmax with backpropagation
x, preds = model(id, mask)
preds.retain_grad()    # retain_grad() returns None, so printing its result is not useful
print(AM.apply(preds))
# compute the loss between actual and predicted values
loss = torch.mean(1 / (sample_score(x, AM.apply(preds)) + 1))
print(loss)
# loss.requires_grad = True
# add on to the total loss
x.retain_grad()
print(x.requires_grad)
print(preds.requires_grad)
loss.backward()
print(model.fc2.weight.grad)    # this prints None
total_loss = total_loss + loss.item()
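
AM is not defined in the snippet. For gradients to flow through an argmax at all, AM needs a custom backward, typically a straight-through estimator. A minimal sketch of what such a Function might look like, assuming it operates over the last dimension (only the name AM comes from the snippet; the implementation below is an assumption):

import torch

class AM(torch.autograd.Function):
    @staticmethod
    def forward(ctx, logits):
        # hard one-hot of the argmax along the last dimension
        idx = logits.argmax(dim=-1, keepdim=True)
        return torch.zeros_like(logits).scatter_(-1, idx, 1.0)

    @staticmethod
    def backward(ctx, grad_output):
        # straight-through estimator: pass the upstream gradient unchanged
        return grad_output

Even with such a backward, sample_score must be built from differentiable tensor operations; if it converts tensors to Python numbers or NumPy arrays anywhere, the graph is cut at that point and everything upstream ends up with grad = None.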

Here model.fc2.weight.grad prints as None after loss.backward(). How can I solve this?
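
A quick way to narrow down where the graph breaks (assuming the tensors above are still in scope) is to inspect grad_fn along the chain; the first tensor whose grad_fn is None is where autograd lost track:

print(loss.grad_fn)               # None here means the loss is detached from the graph
print(AM.apply(preds).grad_fn)    # should name the custom Function's backward
print(preds.grad_fn)              # None here points at the model output itself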

Topic: gradient, pytorch

Category: Data Science
