What are different ways to reduce the size of a word2vec vectors file?

I am working on an application with memory constraints. We get the vectors from Python Gensim models, but we need to ship copies of them to a React Native mobile app and potentially to in-browser JS. I need to store the word2vec word vectors using as little memory as possible, so I am looking for ways to achieve that.

I already tried reducing the floating-point precision by rounding the values to 9 decimal places, which gave roughly a 1.5x reduction in size. I am okay with compromising on performance. Can anyone suggest some more alternatives?
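For reference, here is a minimal sketch of the next step in that direction, assuming Gensim 4.x and a hypothetical saved `KeyedVectors` file `word2vec.kv`: casting the stored float32 matrix down to float16 halves the size again, at the cost of some precision.

```python
import numpy as np
from gensim.models import KeyedVectors

# Load the full-precision vectors (the filename is a placeholder).
kv = KeyedVectors.load("word2vec.kv")

# Gensim stores vectors as float32 (4 bytes per value); casting to
# float16 halves the in-memory and on-disk footprint.
kv.vectors = kv.vectors.astype(np.float16)
kv.save("word2vec_fp16.kv")
```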

Topic: word2vec, word-embeddings, memory

Category: Data Science


You actually need to reduce the dimensionality of your embedding. Use a nonlinear dimensionality reduction algorithm such as LLE, UMAP, or an autoencoder to reduce the Word2Vec vectors from $n$ to $m$ dimensions.

Choosing $m$ is done through simple hyperparameter tuning for your model.
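For concreteness, a minimal sketch of the UMAP route, assuming Gensim 4.x and the umap-learn package; the filenames and the target dimension `m = 50` are placeholders to be tuned for your task.

```python
import umap
from gensim.models import KeyedVectors

kv = KeyedVectors.load("word2vec.kv")  # hypothetical filename

m = 50  # target dimension, chosen via hyperparameter tuning
reducer = umap.UMAP(n_components=m)
reduced = reducer.fit_transform(kv.vectors)  # shape: (vocab_size, m)

# Pack the reduced vectors back into a smaller KeyedVectors object
# so the usual Gensim save/load and lookup APIs still work.
small = KeyedVectors(vector_size=m)
small.add_vectors(kv.index_to_key, reduced)
small.save("word2vec_reduced.kv")
```

Note that fitting UMAP (or LLE) on a very large vocabulary can be slow; fitting on a sample of the vocabulary and calling `transform` on the rest is a common workaround.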
