Isn't graph embedding a step back from non-Euclidean space?

As I understand it, we use graph embedding to build a Euclidean representation of a non-Euclidean structure, the graph. Does that mean that conceptually we just take a step back to, maybe, more complex, but still grid-like processing?

Topics: graph-neural-network, embeddings, graphs

Category: Data Science


Take a look at the article below; it compares graph embedding with graph convolution, which takes the native graph structure as input.

https://towardsdatascience.com/graph-convolutional-networks-for-geometric-deep-learning-1faf17dee008
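To make the contrast concrete, here is a minimal sketch of a single graph-convolution propagation step (in the style of Kipf and Welling's GCN), written in plain numpy with a made-up toy graph. Note how the adjacency matrix enters the computation directly, with no prior flattening of the graph into a grid:

```python
import numpy as np

# Hypothetical toy graph: 4 nodes, undirected edges (0-1, 1-2, 2-3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = np.random.rand(4, 3)   # node features: 4 nodes, 3 features each
W = np.random.rand(3, 2)   # weight matrix mapping 3 -> 2 dimensions

# One GCN propagation step: H = relu(D^{-1/2} (A + I) D^{-1/2} X W).
# The adjacency matrix itself appears in the forward pass -- the layer
# consumes the native graph structure instead of a precomputed embedding.
A_hat = A + np.eye(4)                            # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))    # D^{-1/2} as a vector
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
H = np.maximum(A_norm @ X @ W, 0.0)              # relu
print(H.shape)  # (4, 2)
```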


Building embeddings is the first step in graph processing.

We mainly use them to apply mathematics to something like a plane or hyperplane. Almost all mathematical methods work in a linear space, especially the linear algebra we use in neural networks and computer data processing.

So, in fact, embedding is just one of the first steps we need to take in order to apply some math to graph data. We can embed node features into a lower dimension, and we can also embed an entire graph into some space we know how to work with. Then we apply functions, transformations, whatever we want, as sketched below.
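As a concrete illustration (one classical approach among many), here is a minimal sketch of spectral embedding, which maps each node to a point in Euclidean space using eigenvectors of the graph Laplacian; learned methods like DeepWalk or node2vec pursue the same goal with different machinery:

```python
import numpy as np
import networkx as nx

# Embed the nodes of a small example graph into R^2 using the
# eigenvectors of the graph Laplacian (classical spectral embedding).
G = nx.karate_club_graph()
L = nx.laplacian_matrix(G).toarray().astype(float)

# Eigenvectors of the 2 smallest non-zero eigenvalues give 2-D coordinates;
# np.linalg.eigh returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(L)
coords = eigvecs[:, 1:3]   # skip the trivial constant eigenvector

# Each node is now a point in Euclidean space, so any linear-algebra
# machinery (distances, classifiers, neural nets) applies directly.
print(coords.shape)        # (34, 2)
```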

When you build an embedding, your task is to represent the graph in a linear space so that it keeps as much of the structure it has in the non-linear "graph" space as possible.
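There is no single way to measure how well structure is kept, but as a rough, informal sanity check you can compare shortest-path distances in the graph with Euclidean distances in the embedding. The sketch below reuses the spectral embedding from above:

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

# Rebuild the spectral embedding from the previous sketch.
G = nx.karate_club_graph()
L = nx.laplacian_matrix(G).toarray().astype(float)
coords = np.linalg.eigh(L)[1][:, 1:3]

# Compare shortest-path distances in the graph with Euclidean
# distances between the embedded points, over all node pairs.
n = G.number_of_nodes()
sp = dict(nx.all_pairs_shortest_path_length(G))
graph_d, embed_d = [], []
for i in range(n):
    for j in range(i + 1, n):
        graph_d.append(sp[i][j])
        embed_d.append(np.linalg.norm(coords[i] - coords[j]))

# A higher rank correlation means graph distances are better preserved.
rho, _ = spearmanr(graph_d, embed_d)
print(f"rank correlation: {rho:.2f}")
```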

In this article you can read more about different approaches to graph embedding. I will just quote one sentence from it.

Machine learning algorithms are tuned for continuous data, hence why embedding is always to a continuous vector space.
