Is the cross-entropy cost function the same as the cross-entropy loss?

Is the cross-entropy cost function defined as $J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}y_{k}^{(i)}\log\left(\hat{p}_{k}^{(i)}\right)$ the same as the one implemented in sklearn.metrics.log_loss?

If not, what's the difference between them?

$m=\text{number of samples}$

$K=\text{number of classes}$
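
For a concrete check, here is a minimal sketch (the toy labels and predicted probabilities below are made up for illustration) that evaluates the formula above by hand and compares it with sklearn.metrics.log_loss:

```python
import numpy as np
from sklearn.metrics import log_loss

# Toy 3-class problem: true class indices for m = 4 samples
y_true = np.array([0, 2, 1, 2])

# Predicted probabilities, shape (m, K); each row sums to 1
p_hat = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.3, 0.6],
    [0.2, 0.5, 0.3],
    [0.1, 0.1, 0.8],
])

# Manual cross-entropy: J = -(1/m) * sum_i sum_k y_k^(i) * log(p_hat_k^(i)).
# With one-hot y, only the log-probability of the true class survives,
# so the double sum reduces to the mean negative log of p_hat at y_true.
m = len(y_true)
manual = -np.mean(np.log(p_hat[np.arange(m), y_true]))

print(manual)                    # ~0.4459
print(log_loss(y_true, p_hat))   # matches the manual value
```

If the two numbers agree (as they do here, ≈ 0.4459), that suggests log_loss is computing the same average negative log-probability of the true class that the formula describes.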
