What are the different ways to reduce the size of a word2vec vectors file?
I am working on an application with memory constraints. We get the vectors from Python Gensim models, but we need to transmit copies of them to a React Native mobile app and potentially to in-browser JS. I need the word2vec word vectors to use as little memory as possible, so I am looking for ways to achieve this.
I already tried reducing the floating-point precision to 9 decimal places and got a 1.5× reduction in file size. I am okay with a trade-off in performance. Can anyone suggest some more alternatives?
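For reference, here is a minimal sketch of one further option along the same lines: downcasting the vectors from float32 (Gensim's default) to float16, which halves the in-memory and on-disk size. The vocabulary size and dimensionality below are hypothetical placeholders, and the array stands in for the `KeyedVectors.vectors` matrix you would get from a real Gensim model.

```python
import numpy as np

# Hypothetical example: a vocabulary of 1,000 words with
# 300-dimensional vectors, stored as float32 (Gensim's default dtype).
vectors = np.random.rand(1000, 300).astype(np.float32)

# Downcast to float16: halves memory at the cost of roughly
# 3 decimal digits of precision, which is often tolerable
# for similarity lookups.
vectors_fp16 = vectors.astype(np.float16)

print(vectors.nbytes)       # 1,200,000 bytes
print(vectors_fp16.nbytes)  # 600,000 bytes
```

Saving the float16 array with `np.save` (and loading it in JS as a raw binary buffer) would also avoid the overhead of a text-based vector format.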
Topic word2vec word-embeddings memory
Category Data Science