Can a framework use both CPU and GPU in parallel for inference?

Can a framework use both the CPU and the GPU in parallel for running inference on a model? It seems possible, but I am wondering whether any of the frameworks, such as TensorFlow or PyTorch, have actually done this.

To explain further: can we use the CPU to execute one part of the model graph while, in parallel, the GPU executes another subgraph within the same inference pass?
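
To illustrate what I mean, here is a minimal PyTorch sketch (the two-branch model and layer sizes are made up for illustration, and it assumes a CUDA-capable GPU is available). Since CUDA kernel launches are asynchronous from the host's point of view, launching the GPU branch first should let the CPU branch run while the GPU is still computing:

```python
import torch
import torch.nn as nn

# Hypothetical two-branch model: one branch is kept on the CPU,
# the other is placed on the GPU, and the outputs are combined.
class TwoDeviceModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.cpu_branch = nn.Linear(128, 64)             # stays on the CPU
        self.gpu_branch = nn.Linear(128, 64).to("cuda")  # moved to the GPU

    def forward(self, x):
        # Launch the GPU branch first: CUDA ops are asynchronous,
        # so the CPU branch below can execute while the GPU works.
        gpu_out = self.gpu_branch(x.to("cuda"))
        cpu_out = self.cpu_branch(x)  # runs on the CPU in the meantime
        # Copying gpu_out back to the CPU synchronizes the two branches.
        return cpu_out + gpu_out.cpu()

model = TwoDeviceModel().eval()
with torch.no_grad():
    y = model(torch.randn(8, 128))
```

This only overlaps the two branches implicitly via CUDA's async execution; I am not sure whether either framework schedules such parallel subgraphs automatically, which is essentially what I am asking.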

Tags: pytorch, inference, gpu, tensorflow
