Size of the feature matrix after applying 6 1D kernels on one-hot encoded vectors

Suppose we are building a neural network over one-hot encoded vectors of characters, as follows:

  • For a given dataset, it is not practical to read the whole text, so we take a fixed number of characters, say 1014.

  • Then we apply 1D convolutions + pooling 6 times, with the following configuration:
      ◦ Kernel widths: 7, 7, 3, 3, 3, 3
      ◦ 1024 filters applied at each layer.

  • After applying this process six times, we obtain a feature matrix of size $1024\times 34$.

  • Finally, we apply an MLP for regression, classification, etc.

Question: Why do we get $1024\times 34$?
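These numbers match the character-level CNN of Zhang et al. (2015), so one way to see where 34 comes from is to trace the sequence length through the layers. The sketch below assumes (this is not stated in the question itself) stride-1 convolutions with no padding, and max-pooling of size 3 (stride 3) applied only after convolutional layers 1, 2 and 6, as in that architecture:

```python
def conv_out(length, kernel):
    """Length after a 1D convolution, stride 1, no padding."""
    return length - kernel + 1

def pool_out(length, size=3):
    """Length after non-overlapping max-pooling of the given size."""
    return length // size

length = 1014                 # characters fed to the network
kernels = [7, 7, 3, 3, 3, 3]  # kernel widths from the question
pool_after = {0, 1, 5}        # layers followed by pooling (assumed)

for i, k in enumerate(kernels):
    length = conv_out(length, k)
    if i in pool_after:
        length = pool_out(length)

print(length)  # -> 34
```

The length shrinks as 1014 → 1008 → 336 → 330 → 110 → 108 → 106 → 104 → 102 → 34. Since each of the 1024 filters produces one such length-34 feature vector, the final feature matrix is $1024\times 34$.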

Topic convolutional-neural-network ngrams nlp

Category Data Science
