Is there any research on the relationship between the dimensions of a (word2vec) embedding space and how the human mind constructs meaning (or reality) through language?

Neuroscience is still trying to work out how the mind (and language) actually works. Is there any theory linking a low-dimensional embedding space (like word2vec) to a model of the mind or of language? Any theory from cognitive linguistics?

Topic: deepmind, word2vec, nlp

Category: Data Science


Some initial steps were taken here:

Connecting concepts in the brain by mapping cortical representations of semantic relations

To represent words as vectors, we used a pretrained word2vec model [21]. Briefly, this model was a shallow neural network trained to predict the neighboring words of every word in the Google News dataset, including about 100 billion words (https://code.google.com/archive/p/word2vec/). After training, the model was able to convert any English word to a vector embedded in a 300-dimensional semantic space (extracted through the software package Gensim [56] in Python). Note that the basis functions learned with word2vec should not be interpreted individually, but collectively as a space.
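As a minimal sketch of the word2vec setup described in that excerpt, assuming the pretrained Google News vectors (GoogleNews-vectors-negative300.bin) have been downloaded from the archive linked above, the 300-dimensional embeddings can be loaded with Gensim like this:

```python
from gensim.models import KeyedVectors

# Load the pretrained 300-dimensional Google News embeddings.
# `limit` is optional and only keeps memory use modest for a quick test.
wv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True, limit=500_000
)

vec = wv["king"]           # a 300-dimensional numpy vector
print(vec.shape)           # (300,)

# Nearest neighbors in the embedding space, by cosine similarity.
print(wv.most_similar("king", topn=5))
```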

Applying the encoding model to the differential vector of a word pair could effectively generate the cortical representation of the corresponding word relation. With this notion, we used the encoding model to predict the cortical representations of semantic relations. For each class of semantic relation, we calculated the relation vector of every word pair in that class, projected the relation vector onto the cortex using the encoding model, and averaged the projected patterns across word-pair samples in the class.
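A hedged sketch of that relation-vector procedure follows, assuming the fitted linear encoding model can be summarized by a weight matrix W (n_voxels x 300) that maps a 300-dimensional word vector to a cortical activation pattern; W, the function names, and the example word pairs here are illustrative placeholders, not the authors' code.

```python
import numpy as np

def relation_vector(wv, word_a, word_b):
    """Differential vector of a word pair, e.g. 'man' -> 'woman'."""
    return wv[word_b] - wv[word_a]

def predicted_relation_map(W, wv, word_pairs):
    """Project each pair's relation vector through the encoding model W,
    then average the projected cortical patterns across the class."""
    patterns = [W @ relation_vector(wv, a, b) for a, b in word_pairs]
    return np.mean(patterns, axis=0)   # shape: (n_voxels,)

# Illustrative use with a toy encoding model (random weights stand in
# for a model actually fitted to fMRI data).
n_voxels = 10_000
rng = np.random.default_rng(0)
W = rng.standard_normal((n_voxels, 300))

# `wv` is the Gensim KeyedVectors object loaded in the previous snippet.
gender_pairs = [("man", "woman"), ("king", "queen"), ("actor", "actress")]
relation_map = predicted_relation_map(W, wv, gender_pairs)
print(relation_map.shape)   # (10000,)
```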
