Should I use Pad Sequence when using Word Vectors?
I have an imbalanced text dataset and I want to use word vectors to embed the words. When should I apply pad sequence: before or after the word vectors? I tried applying pad sequence after the word vectors, but my model's accuracy was low. If I apply pad sequence before the word vectors, how should I interpret the result? Both pad sequence and the word vectors give me numeric output, and both take tokens as input, which confuses me. Can someone point out my mistake and explain this flow?
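For reference, here is a minimal sketch of the flow as I understand it, using Keras with a pre-trained fastText embedding matrix. The data (`texts`) and the commented-out fastText lookup are placeholders for my own setup, not working code:

```python
# Sketch of the usual order: tokenize -> pad the token ids -> embed.
# Assumes TensorFlow/Keras; texts and the embedding matrix fill-in are placeholders.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras import layers, models

texts = ["good movie", "bad plot and bad acting"]  # placeholder corpus
max_len = 10     # fixed sequence length after padding
embed_dim = 300  # fastText vectors are 300-dimensional

# 1. Turn each text into a list of integer token ids.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

# 2. Pad the integer id sequences (not the vectors) to a fixed length.
padded = pad_sequences(sequences, maxlen=max_len, padding="post")

# 3. Build an embedding matrix: row i holds the fastText vector for word id i.
vocab_size = len(tokenizer.word_index) + 1  # +1 because id 0 is the padding token
embedding_matrix = np.zeros((vocab_size, embed_dim))
# for word, i in tokenizer.word_index.items():
#     embedding_matrix[i] = ft_model.get_word_vector(word)  # fastText lookup (placeholder)

# 4. The Embedding layer maps each padded token id to its vector,
#    so padding happens on ids, before any vectors appear.
model = models.Sequential([
    layers.Embedding(vocab_size, embed_dim,
                     weights=[embedding_matrix], trainable=False,
                     input_length=max_len),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```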
Topic fasttext word-embeddings class-imbalance neural-network python
Category Data Science