DDP run fails (single-GPU run is fine): ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1)

Published 2023-04-25 11:38:20 · Author: 石中火本火

An error occurs when running with DDP, but the same code runs without error on a single GPU.
The error log is as follows:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by
making sure all forward function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 1: 4 5 6 7
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
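
As a side note, the last line of the traceback points at a diagnostic switch. A minimal sketch of turning it on (my assumption: the variable has to be visible before torch.distributed is initialized, e.g. at the very top of the training script or in the shell that launches it):

```python
import os

# Ask torch.distributed to report, per rank, which parameters received no gradient.
# "INFO" prints the offending parameter names; "DETAIL" adds further (more costly) checks.
os.environ.setdefault("TORCH_DISTRIBUTED_DEBUG", "DETAIL")  # or "INFO"
```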

I initially suspected this was a bug in DDP, but reading the message carefully, the key line is:

This error indicates that your module has parameters that were not used in producing loss.

In other words, some parameters did not participate in producing the loss: if a parameter is defined in __init__ but never used in forward, this error appears. To make it easier to keep tuning the model, I had placed several candidate network modules in the __init__ function, so that I only needed to switch which module is called in forward (see the sketch below). On a single GPU this runs without error, but under DDP the gradients have to be reduced across GPUs, so to keep that reduction consistent DDP enforces this rule. The restriction can be relaxed via the find_unused_parameters argument mentioned in the error message, but it is better not to.
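
A minimal sketch of that pattern (the class and module names here are made up for illustration): several candidate blocks are registered in __init__, but forward only calls one of them, so the parameters of the others never receive gradients under DDP.

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.block_a = nn.Linear(16, 16)  # called in forward
        self.block_b = nn.Linear(16, 16)  # registered but never called:
                                          # its parameters never receive gradients
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        # Switching between block_a and block_b here is convenient for tuning,
        # but whichever one is not called becomes an "unused parameter" for DDP.
        return self.head(self.block_a(x))

# Wrapped as nn.parallel.DistributedDataParallel(Net().cuda(rank), device_ids=[rank]),
# the unused parameters of block_b trigger the error shown above during training.
```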
Solution: simply comment out (or remove) the modules defined in __init__ that are not used in forward.
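
For completeness, a hedged sketch of where both remedies go, assuming a script launched with torchrun and the hypothetical Net class from the sketch above; this is not the original training code.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn

def build_ddp_model():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    # Fix 1 (the one used in this post): keep only the modules that forward()
    # actually calls, so every registered parameter contributes to the loss.
    net = Net().cuda(local_rank)

    # Fix 2 (the fallback suggested by the error message): let DDP detect and
    # skip unused parameters. This adds an autograd-graph traversal every
    # iteration, which is why it is better avoided when Fix 1 is possible.
    model = nn.parallel.DistributedDataParallel(
        net,
        device_ids=[local_rank],
        find_unused_parameters=True,
    )
    return model
```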