Keep word2vec/fasttext model loaded in memory without using API

I have to use a FastText model to return word embeddings. During testing I was calling it through an API. Since there are too many words to compute embeddings for, the API calls turn out to be expensive. I would like to use FastText without an API. For that I need to load the model once and keep it in memory for further calls. How can this be done without using an API? Any help is highly appreciated.
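One common way to load a model once and reuse it across calls is a memoized loader inside a long-running process. The sketch below uses a stub `load_fasttext_model` as a stand-in for the real (expensive) load call, e.g. `fasttext.load_model("model.bin")` from the `fasttext` package or gensim's `FastText.load()`; the file name and the toy vectors are assumptions for illustration only.

```python
import functools

def load_fasttext_model():
    # Stub standing in for the real load, e.g.:
    #   import fasttext
    #   return fasttext.load_model("model.bin")  # hypothetical path
    print("loading model (expensive, happens only once)")
    return {"hello": [0.1, 0.2], "world": [0.3, 0.4]}  # toy stand-in

@functools.lru_cache(maxsize=1)
def get_model():
    # lru_cache keeps the loaded model object in memory,
    # so every later call returns the same instance instantly.
    return load_fasttext_model()

def embed(word):
    # Cheap in-process lookup; no network round-trip per word.
    return get_model().get(word)

print(embed("hello"))  # first call triggers the one-time load
print(embed("world"))  # reuses the in-memory model
```

As long as the process (for example a worker script or a web-app worker) stays alive, `get_model()` returns the same in-memory object, so per-word embedding lookups cost no more than a function call.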

Topic fasttext word2vec word-embeddings nlp

Category Data Science


It would be a good idea to write the embeddings to a database such as SQLite, Postgres, or RocksDB. I would particularly recommend RocksDB, since it stores data as key-value pairs, which suits your use case best: compute each word's embedding once, store it under the word as the key, and look it up on later calls instead of recomputing it. Populating the store takes a little extra time up front, but it resolves your main cost. If you are working directly in Python, SQLite and Postgres are well supported; if you are working in Java or .NET, RocksDB or LevelDB are a great fit. I hope this explanation helps.
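The key-value approach above can be sketched with Python's standard-library `sqlite3` module (RocksDB or LevelDB would follow the same put/get pattern through their own bindings). The table name, the toy vector, and the use of `pickle` for serialization are illustrative assumptions; in practice the vectors would come from the FastText model.

```python
import sqlite3
import pickle

# In-memory database for the sketch; use a file path for real persistence.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS embeddings (word TEXT PRIMARY KEY, vec BLOB)"
)

def put(word, vector):
    # Serialize the vector and store it under the word as the key.
    conn.execute(
        "INSERT OR REPLACE INTO embeddings VALUES (?, ?)",
        (word, pickle.dumps(vector)),
    )

def get(word):
    # Look the embedding up instead of recomputing it.
    row = conn.execute(
        "SELECT vec FROM embeddings WHERE word = ?", (word,)
    ).fetchone()
    return pickle.loads(row[0]) if row else None

put("hello", [0.1, 0.2, 0.3])  # toy vector; normally model output
print(get("hello"))  # [0.1, 0.2, 0.3]
print(get("missing"))  # None
```

The trade-off versus keeping the whole model in memory: a store like this only covers words you have precomputed, whereas FastText can also build vectors for out-of-vocabulary words from character n-grams, so the two approaches are often combined.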

About

Geeks Mental is a community that publishes articles and tutorials about Web, Android, Data Science, new techniques and Linux security.