Addressing polysemy in NLP tasks

I'm looking for modern algorithms built on neural network language models that address polysemy in NLP tasks, including text classification, question answering, and topic modeling. Transfer and zero-shot learning methods are the most interesting to me. Are there any working solutions based on BERT and the Hugging Face Transformers library?

Tags: question-answering, classification, topic-model, nlp

Category: Data Science


Maybe this article will help you: "How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings."

It discusses how contextual word embeddings, such as those produced by BERT and GPT-2, can capture the different senses of a polysemous word, in contrast to static word embeddings like GloVe, which assign a single representation to each word regardless of context.
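
To make that contrast concrete, here is a minimal sketch using the Hugging Face Transformers library: the same word ("bank") gets different contextual vectors in different sentences, and vectors from same-sense contexts end up closer than vectors from different-sense contexts. The model name (`bert-base-uncased`), the use of the last hidden layer, and the assumption that the target word is a single subword token are all illustrative choices, not prescriptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word`'s first occurrence in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    # Locate the target word's token position (assumes it is a single subword).
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

river = word_embedding("He sat on the bank of the river.", "bank")
money = word_embedding("She deposited cash at the bank.", "bank")
loan = word_embedding("The bank approved my loan.", "bank")

cos = torch.nn.CosineSimilarity(dim=0)
print("river vs money:", cos(river, money).item())  # lower: different senses
print("money vs loan :", cos(money, loan).item())   # higher: same sense
```

Unlike a GloVe lookup table, nothing here is precomputed per word; each vector depends on the whole input sentence, which is exactly the property the paper analyzes.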
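
Since you also ask about zero-shot methods with Hugging Face Transformers, a sketch along these lines may be useful: the `zero-shot-classification` pipeline classifies text against labels it was never trained on. The model (`facebook/bart-large-mnli`), the example sentence, and the candidate labels are my own choices for illustration.

```python
from transformers import pipeline

# NLI-based zero-shot classification: no task-specific fine-tuning needed.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "The central bank raised interest rates to curb inflation."
labels = ["finance", "geography", "sports"]

result = classifier(text, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```

Because the underlying encoder is contextual, the polysemous "bank" here is resolved toward its financial sense by the surrounding words rather than by any fixed per-word vector.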
