How does exp(-z) work in a sigmoid function in neural networks when z is a matrix?

function g = sigmoid(z)

%SIGMOID Compute sigmoid function
%g = SIGMOID(z) computes the sigmoid of z.

g = 1.0 ./ (1.0 + exp(-z));

end

I'm going through the Andrew Ng Coursera course. I don't understand how exp(-z) can be computed directly when z is a matrix. How does this work?

Topic: matrix, logistic-regression, neural-network, octave

Category: Data Science


In many languages and libraries, operations that apply to a scalar can also be applied to vectors, matrices and tensors. They are simply applied element-wise, and the result is another vector, matrix, etc. of the same shape, with each value transformed by that function. In Octave, exp(-z) exponentiates every entry of -z individually, and the ./ operator performs element-wise division, so the same sigmoid function works whether z is a scalar, a vector, or a matrix.
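As a minimal sketch in Octave (the matrix values below are just made-up examples, not from the course):

% A 2x2 example matrix of pre-activation values (arbitrary numbers)
z = [0 1; -2 3];

% exp(-z) exponentiates each entry of -z individually;
% ./ then divides 1.0 by each entry of the resulting matrix
g = 1.0 ./ (1.0 + exp(-z));

% g is a 2x2 matrix: each entry is the sigmoid of the corresponding entry of z
disp(g);
% Expected output (approximately):
%   0.5000   0.7311
%   0.1192   0.9526

So calling sigmoid(z) from the question on a matrix returns a matrix of the same size, with the sigmoid applied to every element.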
