Is it possible to implement a vectorized version of a Maxout activation function?
I want to implement an efficient, vectorized Maxout activation function using Python and NumPy. Here is the paper in which the Maxout network was introduced (by Goodfellow et al.).
For example, if k = 2:
import numpy as np

def maxout(x, W1, b1, W2, b2):
    # elementwise max of two affine maps; x is (N, D), W1/W2 are (D, M), b1/b2 are (M,)
    return np.maximum(np.dot(x, W1) + b1, np.dot(x, W2) + b2)
where x is an N×D matrix.
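For reference, this is roughly how I call the k = 2 version (the dimensions N = 4, D = 3, M = 2 are just made up for illustration):

N, D, M = 4, 3, 2
x = np.random.randn(N, D)
W1, W2 = np.random.randn(D, M), np.random.randn(D, M)
b1, b2 = np.random.randn(M), np.random.randn(M)
out = maxout(x, W1, b1, W2, b2)  # out has shape (4, 2)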
Suppose k is an arbitrary value (say, 5). Is it possible to avoid for loops when computing each Wx + b? I couldn't come up with any vectorized solution.
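For concreteness, here is a minimal sketch of the kind of thing I am hoping for, assuming the k weight matrices are stacked into a single (k, D, M) array W and the biases into a (k, M) array b (this stacked layout and the name maxout_k are my own assumptions, not something from the paper):

import numpy as np

def maxout_k(x, W, b):
    # x: (N, D) batch, W: (k, D, M) stacked weights, b: (k, M) stacked biases
    # compute all k affine transforms in one einsum -> shape (k, N, M)
    z = np.einsum('nd,kdm->knm', x, W) + b[:, None, :]
    # elementwise maximum over the k pieces -> shape (N, M)
    return z.max(axis=0)

# example usage with random data
N, D, M, k = 4, 3, 2, 5
x = np.random.randn(N, D)
W = np.random.randn(k, D, M)
b = np.random.randn(k, M)
out = maxout_k(x, W, b)  # out has shape (4, 2)

Is something along these lines correct and efficient, or is there a better way to lay out the weights?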
Topic numpy activation-function deep-learning python machine-learning
Category Data Science