Is it acceptable to append information to word embeddings?

Let's say I have a 300-dimensional word embedding trained with Word2Vec, containing 10,000 word vectors.

I have additional data on the 10,000 words in the form of a vector (10,000 × 1) containing values between 0 and 1. Can I simply append this vector to the word embedding so that I have a 301-dimensional embedding? (A sketch of what I mean is below.)
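
Mechanically, the append is just a column concatenation. A minimal sketch with NumPy, using random arrays as stand-ins for the trained Word2Vec matrix and the extra feature vector:

```python
import numpy as np

# Stand-ins for the data described above (shapes match the question).
embeddings = np.random.rand(10_000, 300)   # hypothetical trained Word2Vec matrix
extra_feature = np.random.rand(10_000, 1)  # additional (10,000 x 1) vector, values in [0, 1]

# Append the extra feature as a 301st dimension.
augmented = np.hstack([embeddings, extra_feature])
print(augmented.shape)  # (10000, 301)
```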

I am looking to calculate similarities between word vectors using cosine similarity.
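
For reference, cosine similarity on the augmented vectors would be computed exactly as on the original 300-dimensional ones, i.e. u·v / (‖u‖ ‖v‖). A minimal sketch, assuming the `augmented` matrix from above:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors: dot product over the product of norms."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# e.g. similarity between the first two 301-dimensional word vectors
# sim = cosine_similarity(augmented[0], augmented[1])
```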

Tags: vector-space-models, word2vec, nlp
