Hardware datapaths for weights and operands

The paper "Survey and Benchmarking of Machine Learning Accelerators" mentions:

"Conversely, pooling, dropout, softmax, and recurrent/skip connection layers are not computationally intensive since these types of layers stipulate datapaths for weight and data operands."

What exactly does this mean, to "stipulate datapaths for weight and data operands"? What are these specific datapaths, and how are they stipulated?
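For reference, the kind of datapath I assume the sentence refers to is the multiply-accumulate (MAC) inner loop of a layer, where a weight operand and a data (activation) operand meet at the same arithmetic unit on each step. A minimal sketch of that inner loop, with names of my own choosing rather than anything from the paper:

```python
# Minimal sketch of a MAC datapath as I understand it: on each step one
# weight operand and one data (activation) operand feed a multiplier,
# and the product is accumulated. Names are illustrative, not from the paper.
def mac_datapath(weights, data):
    acc = 0.0
    for w, x in zip(weights, data):  # one weight operand + one data operand per step
        acc += w * x                 # the multiply-accumulate the hardware provides a path for
    return acc

print(mac_datapath([0.5, -1.0, 2.0], [1.0, 2.0, 3.0]))  # -> 4.5
```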

These layers are contrasted with fully connected and convolutional layers, which might benefit more from dedicated AI accelerators:

"Overall, the most emphasis of computational capability for machine learning is on DNN and CNNs because they are quite computationally intensive [6], with the fully connected and convolutional layers being the most computationally intense."
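To make the scale difference behind that claim concrete, here is a back-of-the-envelope operation count. Convolution and fully connected layers perform one multiply-accumulate per weight/data operand pair, while pooling and softmax touch each data element only a handful of times. The layer shapes below are hypothetical, chosen only for illustration:

```python
# Rough per-forward-pass operation counts for a few common layer types.
# Shapes are made up for illustration; they are not from the survey.

def conv2d_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for a stride-1 'same' k x k convolution."""
    return h * w * c_in * c_out * k * k

def fc_macs(n_in, n_out):
    """Multiply-accumulates for a fully connected layer."""
    return n_in * n_out

def maxpool_ops(h, w, c, k):
    """Comparisons for a k x k max pool (no weights involved at all)."""
    return (h // k) * (w // k) * c * (k * k - 1)

def softmax_ops(n):
    """Roughly one exp, one add, and one divide per element."""
    return 3 * n

print(f"conv 3x3, 64->64 ch, 56x56 : {conv2d_macs(56, 56, 64, 64, 3):>12,} MACs")
print(f"fc 4096->4096              : {fc_macs(4096, 4096):>12,} MACs")
print(f"maxpool 2x2, 64 ch, 56x56  : {maxpool_ops(56, 56, 64, 2):>12,} ops")
print(f"softmax over 1000 classes  : {softmax_ops(1000):>12,} ops")
```

On these made-up shapes the convolution costs about 116 million MACs while the pooling layer costs about 150 thousand comparisons, which is the kind of gap the survey is pointing at.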

Topics: hardware, cnn, optimization, machine-learning

Category: Data Science
