Graph Neural Network | How are node embeddings learned from several graphs?

I am reading the paper on MEGNet, which is a GNN. The setting is that we have several molecules that share the same elements; for example, $CO_2$ and $COOH$ both contain $C$ and $O$. Now, if we learn the node embeddings of the two graphs separately via representation learning, we will get different results because of the message-passing and read-out phases!
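To make the setup concrete, here is a toy sketch (my own variable names and dimensions, not the paper's code) of the kind of shared, per-element initial node features I have in mind: the same element indexes the same row of one embedding table, regardless of which molecule it sits in.

```python
import torch
import torch.nn as nn

# One embedding table shared by all molecules, indexed by atomic number,
# so every H, C or O atom starts from the same learned vector.
element_embedding = nn.Embedding(num_embeddings=100, embedding_dim=16)

# Atomic numbers: CO2 = (C, O, O); the COOH group = (C, O, O, H).
co2_atoms = torch.tensor([6, 8, 8])
cooh_atoms = torch.tensor([6, 8, 8, 1])

co2_x = element_embedding(co2_atoms)    # shape (3, 16)
cooh_x = element_embedding(cooh_atoms)  # shape (4, 16)

# Before any message passing, the carbon rows are identical across molecules.
assert torch.equal(co2_x[0], cooh_x[0])
```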

In MEGNet, a giant graph is built with an adjacency matrix. PyTorch does mention something about training multiple graphs in a single batch, but what I fail to see is how the two graphs in the figure below can have the same node embeddings (i.e., how the embedding of H in graph (a) and of H in graph (b) can be the same) if they never see each other. In other words, the message-passing and read-out steps, which are responsible for producing the node embeddings, will behave differently in graph (a) and graph (b), because message passing accounts for the structure of the graph and the two structures are clearly different. So how does the optimization achieve the same node embeddings for H in graph (a) and graph (b)?
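For reference, this is what I understand the "giant graph" batching to mean (again a toy sketch of my understanding, not MEGNet's actual implementation): the adjacency matrices are stacked into one block-diagonal matrix, so a single forward pass covers the whole batch, but no messages flow between the two molecules.

```python
import torch

# Toy adjacency matrices: graph (a) with 3 nodes, graph (b) with 4 nodes.
A_a = torch.tensor([[0., 1., 1.],
                    [1., 0., 0.],
                    [1., 0., 0.]])
A_b = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 1.],
                    [0., 1., 0., 0.],
                    [0., 1., 1., 0.]])

# The "giant graph": one block-diagonal adjacency matrix for the batch.
# The off-diagonal blocks are zero, so graph (a) and graph (b) never exchange messages.
A_batch = torch.block_diag(A_a, A_b)  # shape (7, 7)

# One shared weight matrix acts on every node in the batch,
# e.g. a single GCN-style propagation step X' = A X W.
X = torch.randn(7, 16)
W = torch.randn(16, 16)
X_new = A_batch @ X @ W               # shape (7, 16)
```

If I read this correctly, the weights are shared across the batch but the neighbourhoods are not, which is exactly why I do not see how H ends up with the same embedding in both graphs.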

Topic: gnn, embeddings

Category: Data Science
