How does AlexNet's flatten operation go from a 6x6x256 tensor to a 4096 vector?

In the AlexNet model, after the convolutional and pooling layers are completed, you end up with a 6x6x256 tensor. This needs to be flattened before it is fed into the fully connected part of the network. However, the flattened vector is said to have length 4096. How did the size of the tensor reduce? In the tutorials I have read, flattening does not change the number of elements, so I was expecting the flattened vector to have length 6 * 6 * 256 = 9216. Why does AlexNet's flatten end up with length 4096 and not 9216?
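For reference, here is a minimal NumPy sketch of what I understand flattening to do (the array here is just a stand-in for the pooling-layer output, not actual AlexNet activations):

```python
import numpy as np

# Stand-in for the output of AlexNet's final pooling layer:
# 6x6 spatial grid with 256 channels.
feature_map = np.zeros((6, 6, 256))

# Flattening is just a reshape; it preserves the element count.
flat = feature_map.reshape(-1)

print(flat.shape)  # (9216,)
print(6 * 6 * 256) # 9216 -- not 4096
```

So under this understanding, the flatten step alone should give 9216 values, which is why the 4096 figure confuses me.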

The AlexNet paper does not go into the details of the individual layers of the network.

Thanks

Tags: alex-net, cnn
