What is the difference between PyTorch's DataParallel and DistributedDataParallel?
I am going through this ImageNet example, and at line 88 the DistributedDataParallel module is used. When I searched for it in the docs, I couldn't find anything; however, I did find the documentation for DataParallel.
So I would like to know: what is the difference between the DataParallel and DistributedDataParallel modules?
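For context, here is a minimal sketch of how I understand the two wrappers are used (single-machine, CPU-only here; the model name `ToyModel` is my own, and the distributed setup is shown only as comments since it needs multiple processes):

```python
import torch
import torch.nn as nn

# Hypothetical toy model, just to have something to wrap.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = ToyModel()

# DataParallel: single-process, multi-threaded. It replicates the model
# across the visible GPUs on every forward pass and scatters the batch.
# With no GPUs available it simply runs the wrapped module as-is.
dp_model = nn.DataParallel(model)
out = dp_model(torch.randn(8, 4))
print(tuple(out.shape))  # (8, 2)

# DistributedDataParallel: multi-process. Each process keeps its own
# replica and gradients are all-reduced across processes, so it first
# needs a process group, roughly like this (not run here):
#
#   import torch.distributed as dist
#   dist.init_process_group("nccl", rank=rank, world_size=world_size)
#   ddp_model = nn.parallel.DistributedDataParallel(model, device_ids=[rank])
```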
Topic pytorch gpu distributed
Category Data Science