Is there a better way to initialize sentence embeddings for unsupervised text clustering than GloVe word vectors?
For unsupervised text clustering, the key ingredient is the initial embedding of the text.
If we want to apply DeepCluster to text, the problem is how to obtain that initial embedding from a deep model. In my experience, BERT does not give good initial embeddings for this.
If we do not use a deep model, is there a better way to get sentence embeddings than GloVe word vectors?
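For reference, this is the baseline I mean by GloVe word vectors: average the pretrained word vectors of each sentence and feed the result to k-means. A minimal sketch, assuming gensim's pretrained GloVe vectors and scikit-learn are available (the model name, sample sentences, and cluster count are just illustrative):

```python
import numpy as np
import gensim.downloader as api
from sklearn.cluster import KMeans

# Assumption: gensim's pretrained "glove-wiki-gigaword-100" vectors (100-dim GloVe).
glove = api.load("glove-wiki-gigaword-100")

def sentence_embedding(sentence, kv):
    """Average the GloVe vectors of in-vocabulary tokens; zeros if none match."""
    tokens = sentence.lower().split()
    vectors = [kv[t] for t in tokens if t in kv]
    if not vectors:
        return np.zeros(kv.vector_size)
    return np.mean(vectors, axis=0)

# Illustrative sentences and cluster count.
sentences = [
    "the cat sat on the mat",
    "dogs are loyal pets",
    "stock prices fell sharply today",
    "the market rallied after the announcement",
]
X = np.vstack([sentence_embedding(s, glove) for s in sentences])

labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(X)
print(labels)
```

Averaging word vectors ignores word order and tends to wash out distinctive terms, which is why I am looking for a better initialization.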
Topic: representation, embeddings, word-embeddings, deep-learning, clustering
Category: Data Science