Why do I get different results at inference time even with fixed seed?

I am a complete beginner in deep learning and am playing with a voice cloning project. I trained a model on my dataset and used it to synthesize some sentences, and was surprised to get a very different output each time I ran the synthesis (ranging from very good quality to very poor with unintelligible content).

I understood that this was due to the initial state of the model being set up randomly via a random seed, but in the project I use, the seed is fixed to 1234 and used to initialize the random generators.

Can the outputs differ at inference time for the same input even with the same initial seed? Does it have to do with a bad dataset? What could be the reasons for this?

Thank you

Topic: pytorch, inference, deep-learning, dataset

Category: Data Science


In PyTorch, a typical gotcha that leads to this behavior is forgetting to put the model in evaluation mode when doing inference. You can do this by invoking .eval() on the model.

Evaluation mode changes the behavior of layers such as dropout and batch normalization, which are stochastic (or statistics-dependent) during training and would otherwise lead to non-deterministic results, as in the sketch below.
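Here is a minimal sketch of the effect, using a toy model (the layer sizes and dropout rate are just for illustration, not from your project):

```python
import torch
import torch.nn as nn

# Toy model containing the two usual suspects: batch norm and dropout.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.BatchNorm1d(32),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(32, 8),
)

x = torch.randn(4, 16)

model.train()                      # training mode: dropout masks are sampled each call
out1 = model(x)
out2 = model(x)
print(torch.allclose(out1, out2))  # usually False -- different dropout masks

model.eval()                       # eval mode: dropout off, batch norm uses running stats
with torch.no_grad():              # no_grad is good practice for inference
    out3 = model(x)
    out4 = model(x)
print(torch.allclose(out3, out4))  # True -- same input gives the same output
```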

Apart from that, unless you have explicitly stochastic elements in your model (e.g. explicit sampling from distributions), you should get deterministic results.
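If, on top of calling .eval(), you also want runs to be reproducible across script invocations, the seed usually has to be set for every generator involved. A rough sketch of what that can look like (the seed_everything helper name is mine, and the exact flags available depend on your PyTorch version):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 1234) -> None:
    # Seed the Python, NumPy, and PyTorch random number generators.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op if no GPU is available

seed_everything(1234)

# Optionally prefer deterministic kernels as well; this can be slower and
# will warn (or error) for operations without a deterministic implementation.
torch.use_deterministic_algorithms(True, warn_only=True)
torch.backends.cudnn.benchmark = False
```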
