Matrix factorization: how to initialize weights and biases?

I have a matrix factorization model and I'm wondering how I should initialize its weights and biases.

When producing a prediction (recommendation), after computing the dot product and adding the biases, I want to apply a sigmoid function so the output lies between 0 and 1.

But introducing a sigmoid here also introduces a possible vanishing/exploding gradient problem. To counter that, I think the weights can be initialized with Xavier initialization. But what about the biases? Should I just draw them from a uniform distribution on (-0.01, 0.01), for example?
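To make the setup concrete, here is a minimal NumPy sketch of the scheme described above: Xavier-uniform factors, small-uniform biases, and a sigmoid over the dot product plus biases. All sizes and names (`n_users`, `n_factors`, `predict`, etc.) are illustrative assumptions, not part of the original question.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 100, 200, 32  # hypothetical sizes

# Xavier/Glorot uniform: limit = sqrt(6 / (fan_in + fan_out)).
# For an embedding of width n_factors, fan_in == fan_out == n_factors.
limit = np.sqrt(6.0 / (n_factors + n_factors))
U = rng.uniform(-limit, limit, size=(n_users, n_factors))   # user factors
V = rng.uniform(-limit, limit, size=(n_items, n_factors))   # item factors

# Biases: small uniform around zero, as suggested in the question
# (initializing them to exactly zero is also a common choice).
bu = rng.uniform(-0.01, 0.01, size=n_users)
bi = rng.uniform(-0.01, 0.01, size=n_items)

def predict(u, i):
    # dot product + biases, squashed to (0, 1) with a sigmoid
    return 1.0 / (1.0 + np.exp(-(U[u] @ V[i] + bu[u] + bi[i])))

p = predict(3, 7)
```

Because the factors start small and the biases start near zero, the sigmoid's input begins close to 0, where its gradient is largest, which is exactly what this kind of initialization is trying to achieve.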

Topic weight-initialization matrix-factorisation deep-learning neural-network machine-learning

Category Data Science


For matrix factorization, I usually see the factors initialized either from a uniform distribution on [0, 1), as in this (PyTorch) example, or from a truncated normal with mean=0.0 and std=1.0, as in this (TensorFlow) example.
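A small NumPy sketch of the two initializations mentioned above, assuming truncation at two standard deviations (a common convention, e.g. in TensorFlow's truncated normal; the exact cutoff is my assumption here, not stated in the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 32)  # hypothetical embedding shape

# Option 1: uniform on [0, 1)
W_uniform = rng.uniform(0.0, 1.0, size=shape)

# Option 2: truncated normal, mean=0.0, std=1.0, truncated at +/- 2 std,
# implemented here by simple rejection resampling.
W_trunc = rng.normal(0.0, 1.0, size=shape)
mask = np.abs(W_trunc) > 2.0
while mask.any():
    W_trunc[mask] = rng.normal(0.0, 1.0, size=mask.sum())
    mask = np.abs(W_trunc) > 2.0
```

Note that [0, 1) initialization keeps all factors non-negative at the start, while the truncated normal is centered at zero; which one works better tends to depend on the loss and on whether the dot product is passed through a sigmoid as in the question.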
