Why doesn't batch normalization 'zero out' a batch of size one?
I'm using TensorFlow. Consider the example below:
```python
>>> import tensorflow as tf
>>> x = tf.constant([-0.22630838], dtype=tf.float32)
>>> x
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([-0.22630838], dtype=float32)>
>>> tf.keras.layers.BatchNormalization()(x)
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([-0.22619529], dtype=float32)>
```
There doesn't seem to be any change at all, besides maybe some perturbation due to epsilon. Shouldn't a normalized sample of size one just be the zero tensor?
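To spell out that expectation, here is roughly the computation I have in mind (the epsilon value is just my assumption of the Keras default):

```python
import tensorflow as tf

x = tf.constant([-0.22630838], dtype=tf.float32)

# Normalize x against the statistics of this one-element batch.
mean = tf.reduce_mean(x)          # equals x itself for a single sample
var = tf.math.reduce_variance(x)  # 0.0 for a single sample
eps = 1e-3                        # assumed default epsilon of BatchNormalization

print(((x - mean) / tf.sqrt(var + eps)).numpy())  # [0.]
```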
I figured the problem might be that the batch size is 1 (the variance is zero in that case, so how would you scale it to variance 1?), but I've tried other simple examples with different shapes and with the `axis` parameter set to 0, 1, etc. None of them changes the input at all, really, as the sketch below illustrates.
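For example, the kind of thing I tried looks like this (the shape and axis values here are just illustrative):

```python
import tensorflow as tf

# A (2, 3) batch instead of a single element, trying different axis values.
x = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

for axis in (0, 1, -1):
    y = tf.keras.layers.BatchNormalization(axis=axis)(x)
    print(axis, y.numpy())  # each output is essentially identical to x
```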
Am I simply using the API incorrectly?
Tags: batch-normalization, keras, tensorflow