Why does BatchNorm1d in PyTorch output all zeros in the following example (two lines of code)?
Here is the code:
import torch
import torch.nn as nn

x = torch.tensor([[1., 2., 3.], [1., 2., 3.]])
print(x)

# 3 features; eps=0 and momentum=0 so nothing obscures the arithmetic
batchnorm = nn.BatchNorm1d(3, eps=0, momentum=0)
print(batchnorm(x))
Here is what is printed:
tensor([[1., 2., 3.],
        [1., 2., 3.]])
tensor([[0., 0., 0.],
        [0., 0., 0.]], grad_fn=<NativeBatchNormBackward>)
What I am expecting is the following:
Using a hand calculation, let $x = (1, 2, 3)$. Then $E(x) = (1+2+3)/3 = 2$ and the unbiased sample variance is $\mathrm{Var}(x) = \frac{(1-2)^2 + (2-2)^2 + (3-2)^2}{3-1} = 1$, so the final output should be $y = \frac{(1,2,3) - 2}{\sqrt{1}} = (-1, 0, 1)$.
So, I am expecting the output of the batchnorm to be
tensor([[-1., 0., 1.],
        [-1., 0., 1.]])
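For reference, here is a minimal sketch of the per-row normalization I have in mind, assuming the mean and the unbiased variance are taken across each row's three features (that assumption may be exactly where I went wrong):

import torch

x = torch.tensor([[1., 2., 3.], [1., 2., 3.]])

# Assumed behaviour: normalize each row across its features
mean = x.mean(dim=1, keepdim=True)               # tensor([[2.], [2.]])
var = x.var(dim=1, keepdim=True, unbiased=True)  # tensor([[1.], [1.]])
print((x - mean) / var.sqrt())
# prints tensor([[-1., 0., 1.],
#                [-1., 0., 1.]])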
Can someone please explain where I went wrong?
Topic pytorch batch-normalization
Category Data Science