How can I improve the validation loss in a regression problem?
I've developed a graph neural network using PyTorch Geometric.
My model looks like:
import torch
import torch.nn.functional as F
from torch.nn import Linear
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Three graph-convolution layers followed by a linear output layer
        self.conv1 = GCNConv(3, 32)
        self.conv2 = GCNConv(32, 64)
        self.conv3 = GCNConv(64, 32)
        self.fc1 = Linear(32, 10)

    def forward(self, data):
        x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr
        x = F.elu(self.conv1(x, edge_index))
        x = F.elu(self.conv2(x, edge_index))
        x = F.elu(self.conv3(x, edge_index))
        x = F.elu(self.fc1(x))
        return x
I've generated train and validation datasets and obtained the following learning curves:
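For context, a minimal sketch of the kind of training/validation loop that produces these curves (the MSE loss, Adam optimizer, and DataLoader settings below are placeholders, not my exact code):

import torch
import torch.nn.functional as F
from torch_geometric.loader import DataLoader

# Assumed setup: train_dataset / val_dataset are lists of torch_geometric.data.Data
# objects with 3 node features, an edge_index, and a per-node target data.y of size 10
# (matching the fc1 output). These names and shapes are assumptions for illustration.
model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=32)

for epoch in range(200):
    # Training pass: accumulate the mean MSE over all training batches
    model.train()
    train_loss = 0.0
    for batch in train_loader:
        optimizer.zero_grad()
        out = model(batch)
        loss = F.mse_loss(out, batch.y)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
    # Validation pass: no gradient updates, only loss tracking
    model.eval()
    val_loss = 0.0
    with torch.no_grad():
        for batch in val_loader:
            val_loss += F.mse_loss(model(batch), batch.y).item()
    print(f"epoch {epoch}: "
          f"train {train_loss / len(train_loader):.4f}, "
          f"val {val_loss / len(val_loader):.4f}")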
I do not know what actions I could take to improve the validation loss. I understand that the model is overfitting the data, but in a rather unusual way.
What's happening? Any suggestions?
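For concreteness, the only generic fix I can think of is adding more regularization, e.g. dropout between the convolutions and weight decay in the optimizer, but I'm not sure it addresses the odd shape of the curves. A rough sketch of what that would look like (the dropout probability and weight_decay value are arbitrary, not something I have tested):

import torch
import torch.nn.functional as F
from torch.nn import Linear
from torch_geometric.nn import GCNConv

class RegularizedGCN(torch.nn.Module):
    def __init__(self, dropout=0.5):
        super().__init__()
        self.conv1 = GCNConv(3, 32)
        self.conv2 = GCNConv(32, 64)
        self.conv3 = GCNConv(64, 32)
        self.fc1 = Linear(32, 10)
        self.dropout = dropout

    def forward(self, data):
        x, edge_index = data.x, data.edge_index
        # Same layers as before, with dropout applied only during training
        x = F.elu(self.conv1(x, edge_index))
        x = F.dropout(x, p=self.dropout, training=self.training)
        x = F.elu(self.conv2(x, edge_index))
        x = F.dropout(x, p=self.dropout, training=self.training)
        x = F.elu(self.conv3(x, edge_index))
        x = F.elu(self.fc1(x))
        return x

# Weight decay adds an L2 penalty on the parameters; the value is a guess
model = RegularizedGCN(dropout=0.5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)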
Topic graph-neural-network cross-validation deep-learning
Category Data Science