Is it possible to implement a vectorized version of a Maxout activation function?

I want to implement an efficient, vectorized Maxout activation function using Python and NumPy. Maxout networks were introduced by Goodfellow et al. in the paper "Maxout Networks" (arXiv:1302.4389).

For example, if k = 2:

import numpy as np

def maxout(x, W1, b1, W2, b2):
    # x: (N, D), W1, W2: (D, M), b1, b2: (M,)
    return np.maximum(np.dot(x, W1) + b1, np.dot(x, W2) + b2)

where x is an N*D matrix (N examples, D input features).

Suppose k is an arbitrary value (say 5). Is it possible to avoid for loops when computing each Wx + b? I couldn't come up with a vectorized solution.
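For reference, the straightforward version I'm trying to avoid loops over the k pieces in Python (a minimal sketch, assuming the weights and biases are kept in Python lists Ws and bs):

import numpy as np

def maxout_loop(x, Ws, bs):
    # x: (N, D); Ws: list of k arrays of shape (D, M); bs: list of k arrays of shape (M,)
    out = np.dot(x, Ws[0]) + bs[0]
    for W, b in zip(Ws[1:], bs[1:]):
        # Elementwise maximum of the running result and the next affine map.
        out = np.maximum(out, np.dot(x, W) + b)
    return out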

Topic numpy activation-function deep-learning python machine-learning

Category Data Science


If you stack all k weight matrices into a single array W of shape (k, D, M) and all k bias vectors into an array b of shape (k, M), then you can compute every affine map with one call and take the maximum over the k axis:

np.max(np.dot(x, W) + b, axis=1)

Note that np.maximum only takes the elementwise maximum of two arrays; for arbitrary k you want np.max, which reduces along an axis.
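A minimal, self-contained sketch of that idea (assuming the k weight matrices each have shape (D, M) and are stacked along a new leading axis, with the biases stacked the same way; the helper name maxout_vec is just for illustration):

import numpy as np

def maxout_vec(x, W, b):
    # x: (N, D), W: (k, D, M) stacked weights, b: (k, M) stacked biases.
    # np.dot contracts the last axis of x (D) with the second-to-last axis of W (D),
    # producing all k affine maps at once with shape (N, k, M).
    z = np.dot(x, W) + b
    # Reduce over the k pieces; np.maximum only compares two arrays elementwise.
    return np.max(z, axis=1)

# Quick check against the k = 2 version from the question.
rng = np.random.default_rng(0)
N, D, M, k = 4, 3, 5, 2
x = rng.standard_normal((N, D))
Ws = [rng.standard_normal((D, M)) for _ in range(k)]
bs = [rng.standard_normal(M) for _ in range(k)]
W = np.stack(Ws)   # (k, D, M)
b = np.stack(bs)   # (k, M)
expected = np.maximum(np.dot(x, Ws[0]) + bs[0], np.dot(x, Ws[1]) + bs[1])
assert np.allclose(maxout_vec(x, W, b), expected)

The single np.dot call does the same work as k separate matrix multiplications, so the Python-level loop disappears entirely. If you prefer to stack W along the last axis with shape (D, M, k), np.einsum('nd,dmk->nmk', x, W) or np.tensordot(x, W, axes=1) followed by a max over the last axis works the same way.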
