I am reading a paper on MEGNet, which is a GNN. The setting is that we have several molecules that share the same elements; for example, $CO_2$ and $COOH$ both contain $C$ and $O$. Now, if we learn the node embeddings of both graphs via representation learning, we will get different results because of the message-passing and read-out phases! In MEGNet, a giant graph is built with a single adjacency matrix. PyTorch does mention something about training multiple graphs in a single batch, but what …
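To make concrete what I mean by a "giant graph" batch, here is a minimal sketch in plain PyTorch (the toy 2-node and 3-node graphs, the feature sizes, and the variable names are my own illustration, not from the paper):

```python
import torch

# Two toy molecular graphs (hypothetical sizes, just for illustration).
adj1 = torch.tensor([[0., 1.],
                     [1., 0.]])
adj2 = torch.tensor([[0., 1., 1.],
                     [1., 0., 0.],
                     [1., 0., 0.]])
x1 = torch.randn(2, 8)   # node features, 8-dim (arbitrary)
x2 = torch.randn(3, 8)

# "Giant graph": block-diagonal adjacency, so message passing never crosses molecules.
adj_batch = torch.block_diag(adj1, adj2)   # shape (5, 5)
x_batch = torch.cat([x1, x2], dim=0)       # shape (5, 8)

# Graph index for each node, used later for a per-graph read-out.
graph_idx = torch.tensor([0, 0, 1, 1, 1])

# One round of (very simplified) message passing over the whole batch at once.
h = adj_batch @ x_batch

# Read-out: mean-pool the nodes back into one vector per molecule.
readout = torch.stack([h[graph_idx == g].mean(dim=0) for g in range(2)])
print(readout.shape)  # (2, 8): one graph-level vector per molecule
```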
I am reading an article about graph neural networks, and it mentions: "In this step, we extract all newly updated hidden states and create a final feature vector describing the whole graph. This feature vector can then be used as input to a standard machine learning model." What does it mean that this feature vector can be used as input to a standard machine learning model? Isn't machine learning all about obtaining the features in the first place? And …
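Here is how I currently picture that read-out step, as a rough sketch (mean pooling, the random tensors, and scikit-learn's LogisticRegression are my own choices, not from the article):

```python
import torch
from sklearn.linear_model import LogisticRegression

# Pretend these are the updated hidden states of one graph's nodes after message
# passing (5 nodes, 16-dim states; the sizes are just an illustration).
hidden_states = torch.randn(5, 16)

# Read-out: collapse all node states into one fixed-length vector for the whole graph,
# here by simple mean pooling.
graph_vector = hidden_states.mean(dim=0)   # shape (16,)

# That vector is now an ordinary feature row, so any "standard" model can consume it.
# With several graphs, their vectors stack into an (n_graphs, 16) feature matrix:
X = torch.stack([torch.randn(5, 16).mean(dim=0) for _ in range(20)]).numpy()
y = (X[:, 0] > 0).astype(int)              # fake labels, just for the demo

clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:3]))
```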
While there has been a lot of discussion about how to define the similarity between nodes in the embedding space, I haven't come across much about defining the similarity between nodes in the original, non-embedded graph. Any suggestions on how to define such a similarity?
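For context, the only concrete candidate I can think of is neighbourhood overlap (the Jaccard coefficient); here is a rough NetworkX sketch of what I mean (the graph and node choices are arbitrary, this is just my own example):

```python
import networkx as nx

# Toy graph; the structure is arbitrary, just to illustrate one possible definition.
G = nx.karate_club_graph()

def jaccard_similarity(G, u, v):
    """Similarity of two nodes in the original graph: overlap of their neighbourhoods."""
    nu, nv = set(G.neighbors(u)), set(G.neighbors(v))
    if not nu and not nv:
        return 0.0
    return len(nu & nv) / len(nu | nv)

print(jaccard_similarity(G, 0, 1))   # structurally close nodes -> higher score
print(jaccard_similarity(G, 0, 33))  # nodes from different communities -> lower score
```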
What is the relation between random walks, DeepWalk, and neighbour aggregation in GNNs? Please compare and contrast all three pairs. Thank you.
Regarding the difference in model design: it seems the difference is that GraphSAGE samples the data, but what is the difference in model architecture?
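To show what I mean, here is my own simplified sketch of a GraphSAGE-style layer that samples neighbours and mean-aggregates them (the class name, dimensions, and toy graph are assumptions for illustration, not the official implementation):

```python
import random
import torch
import torch.nn as nn

class SageMeanLayer(nn.Module):
    """Simplified GraphSAGE-style layer: sample a fixed number of neighbours,
    mean-aggregate them, and combine with the node's own state."""
    def __init__(self, in_dim, out_dim, num_samples=5):
        super().__init__()
        self.num_samples = num_samples
        self.lin = nn.Linear(2 * in_dim, out_dim)  # concat(self, neighbour mean)

    def forward(self, x, neighbors):
        # x: (num_nodes, in_dim); neighbors: dict node id -> list of neighbour ids
        out = []
        for v in range(x.size(0)):
            nbrs = neighbors[v]
            sampled = random.sample(nbrs, min(self.num_samples, len(nbrs)))
            agg = x[sampled].mean(dim=0) if sampled else torch.zeros_like(x[v])
            out.append(self.lin(torch.cat([x[v], agg])))
        return torch.relu(torch.stack(out))

# Toy usage on a hypothetical 4-node graph.
x = torch.randn(4, 8)
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
layer = SageMeanLayer(8, 16)
print(layer(x, neighbors).shape)  # (4, 16)
```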
It seems that in a GNN (graph neural network), in the transductive setting, we input the whole graph, mask the labels of the validation data, and predict the labels for the validation data. But it seems that in the inductive setting, we also input the whole graph (just sampled into batches), mask the labels of the validation data, and predict the labels for the validation data.
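Here is a minimal sketch of how I understand the transductive setup, with the whole graph fed in and only the validation labels masked (all sizes, the masks, and the one-layer model are invented for illustration):

```python
import torch

# Toy transductive setup (sizes and masks are invented for illustration).
num_nodes, feat_dim, num_classes = 6, 4, 2
x = torch.randn(num_nodes, feat_dim)
adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()
labels = torch.randint(0, num_classes, (num_nodes,))

train_mask = torch.tensor([1, 1, 1, 1, 0, 0], dtype=torch.bool)
valid_mask = ~train_mask          # "masked" nodes: their labels are hidden during training

W = torch.randn(feat_dim, num_classes, requires_grad=True)
opt = torch.optim.SGD([W], lr=0.1)

for _ in range(50):
    logits = adj @ x @ W                              # the whole graph goes through the model
    loss = torch.nn.functional.cross_entropy(
        logits[train_mask], labels[train_mask])       # loss computed only on training nodes
    opt.zero_grad()
    loss.backward()
    opt.step()

# Predict labels for the held-out ("validation") nodes, which were in the graph all along.
print(logits[valid_mask].argmax(dim=1))
```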