Can using torch.cuda.empty_cache() decrease performance?

We're struggling with memory usage in a project that deploys multiple models to the same GPU (the models are a mix of PyTorch and TensorFlow).

It was suggested that we could call torch.cuda.empty_cache() to reclaim a few precious bytes, roughly as in the sketch below.
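For context, this is approximately what was suggested (a minimal sketch; the Linear layer and tensor sizes are just placeholders, not our actual models):

```python
import torch

# Placeholder model and input, for illustration only.
model = torch.nn.Linear(4096, 4096).cuda()
x = torch.randn(64, 4096, device="cuda")
y = model(x)

# Drop references so the underlying blocks become "unused" --
# they stay cached by PyTorch's allocator until explicitly released.
del x, y

# Ask the caching allocator to return unused blocks to the GPU driver,
# so other processes (e.g. a TensorFlow model) can use that memory.
torch.cuda.empty_cache()
```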

However, beyond the GPU time the call itself takes, will it adversely affect performance later on? For instance, will it force PyTorch to repopulate its cache (via fresh cudaMalloc calls) on the next allocation?
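To make the question concrete, here is a rough way we could measure both effects ourselves (a sketch, assuming a single ~1 GiB dummy tensor stands in for a model's working set; the report() helper is just for this example):

```python
import time
import torch

def report(tag):
    # memory_allocated: bytes currently held by live tensors
    # memory_reserved:  bytes the caching allocator holds from the driver
    print(f"{tag}: allocated={torch.cuda.memory_allocated() / 2**20:.1f} MiB, "
          f"reserved={torch.cuda.memory_reserved() / 2**20:.1f} MiB")

x = torch.randn(256, 1024, 1024, device="cuda")  # ~1 GiB of float32
del x
report("before empty_cache")  # reserved stays high: blocks are still cached

torch.cuda.empty_cache()
report("after empty_cache")   # reserved drops: blocks returned to the driver

# Time the next allocation: it can no longer be served from the (now empty)
# cache and has to go through the driver again.
torch.cuda.synchronize()
t0 = time.perf_counter()
x = torch.randn(256, 1024, 1024, device="cuda")
torch.cuda.synchronize()
print(f"re-allocation took {(time.perf_counter() - t0) * 1e3:.1f} ms")
```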

Topic: pytorch, memory

Category: Data Science
