Keep word2vec/fastText model loaded in memory without using an API
I have to use a fastText model to return word embeddings. In testing I was calling it through an API, but since there are too many words to compute embeddings for, the API calls are expensive. I would like to use fastText without the API. For that I need to load the model once and keep it in memory for further calls. How can this be done without using an API? Any help is highly appreciated.
Topic fasttext word2vec word-embeddings nlp
Category Data Science