1D target tensor expected, multi-target not supported

I am trying to train my model. The model outputs a [4, 2] tensor, where 4 is the batch size and 2 corresponds to the two classes of a binary classification problem. After receiving the outputs, I find the index of the maximum element for each row, so its shape is now [4, 1], and the shape of my labels is [4, 1] as well. I cannot understand why I am still getting this error. Could someone please help me solve it? Also, the optimizer I am using is SGD and the loss criterion is cross-entropy.

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(dataloader, 0):
        # get the inputs; data is a dict holding the image and its label
        inputs, labels = data['image'], data['Status']

        # zero the parameter gradients
        optimizer.zero_grad()

        outputs = net(inputs.float())

        a = torch.max(outputs, 1).indices
        a = a.reshape(4, 1)
        a = a.float()
        labels = labels.float()
        print(a.shape, labels.shape)
        loss = criterion(a, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

This is the error I am getting.

torch.Size([4, 1]) torch.Size([4, 1])
RuntimeError                              Traceback (most recent call last)
<ipython-input-83-72f63a4db63e> in <module>()
     22         labels=labels.float()
     23         print(a.shape,labels.shape)
---> 24         loss = criterion(a, labels)
     25         loss.backward()
     26         optimizer.step()

2 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2822     if size_average is not None or reduce is not None:
   2823         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2824     return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   2825 
   2826 

RuntimeError: 1D target tensor expected, multi-target not supported

Also, the model is:

import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 16 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 16, 5)
        self.conv2 = nn.Conv2d(16, 32, 7)
        self.dropout1 = nn.Dropout2d(0.25)
        self.dropout2 = nn.Dropout2d(0.5)
        self.fc1 = nn.Linear(4608, 128)
        self.fc2 = nn.Linear(128, 16)
        self.fc3 = nn.Linear(16, 2)


    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square, you can specify with a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = self.dropout1(x)

        x = torch.flatten(x, 1) # flatten all dimensions except the batch dimension
        x = F.relu(self.fc1(x))
        
        x = F.relu(self.fc2(x))
        x = self.dropout2(x)
        x = self.fc3(x)
     
        return x


net = Net()
net = net.float()
print(net)

Topic: torch, training, convolutional-neural-network

Category: Data Science


If this is a binary classification problem, then your model only needs to predict a single output: a value between 0 and 1. A predicted value close to 0 indicates that the input likely belongs to your first class, and a predicted value close to 1 indicates that it likely belongs to the second class, as in the short sketch below.

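For instance, with a single raw output (a logit) per sample, a sigmoid turns it into a value between 0 and 1 that can be read as the probability of the second class. A small self-contained example with made-up logits (not your model's actual outputs):

import torch

logits = torch.tensor([[-2.1], [0.3], [1.7], [-0.4]])  # shape [4, 1]: one logit per sample
probs = torch.sigmoid(logits)                          # values between 0 and 1
preds = (probs > 0.5).long()                           # 0 = first class, 1 = second class
print(probs.squeeze(1))
print(preds.squeeze(1))
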
Then you can optimise your model using a loss function such as nn.BCELoss (which expects probabilities, i.e. outputs already passed through a sigmoid) or nn.BCEWithLogitsLoss (which expects raw logits and applies the sigmoid internally). This should avoid the error you are currently getting, as you will no longer be dealing with multiple output values per prediction.

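As a minimal sketch of how the training step could then look, assuming the final layer of Net is changed to nn.Linear(16, 1) so that outputs has shape [4, 1], and that data['Status'] holds 0/1 labels (both are assumptions about your setup, so adjust as needed):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()  # expects raw logits and applies the sigmoid internally

for epoch in range(2):
    running_loss = 0.0
    for i, data in enumerate(dataloader, 0):
        inputs, labels = data['image'], data['Status']

        optimizer.zero_grad()
        outputs = net(inputs.float())        # shape [batch, 1], raw logits
        target = labels.float().view(-1, 1)  # same shape and dtype as the outputs
        loss = criterion(outputs, target)    # loss on the raw outputs, no torch.max here
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

Note that the loss is computed on the raw model outputs. Taking torch.max(outputs, 1).indices before the loss, as in your current loop, not only produces the wrong shape for the criterion but also stops gradients from flowing back into the network, because argmax is not differentiable.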