For multi-GPU training, the script requires PyTorch's DistributedDataParallel (DDP), launched with one process per GPU via `torch.distributed.launch` (superseded by `torchrun` in recent PyTorch releases).
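
Below is a minimal DDP sketch, not the project's actual training script: the linear model, random dataset, and hyperparameters are placeholders chosen only to show the setup. It assumes the launcher sets the `LOCAL_RANK` environment variable, which `torchrun` does by default and `torch.distributed.launch` does when passed `--use_env`.

```python
# Minimal DDP training sketch (assumption: model/data are toy stand-ins).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # torchrun (or torch.distributed.launch --use_env) sets LOCAL_RANK
    # for each spawned process.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    model = torch.nn.Linear(10, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 10), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)  # shards the data across ranks
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shard split each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss_fn(model(x), y).backward()  # DDP all-reduces gradients here
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launching it on a single node with 4 GPUs would look like this (`train.py` is a hypothetical filename):

```bash
torchrun --nproc_per_node=4 train.py
# or, on older PyTorch versions:
python -m torch.distributed.launch --use_env --nproc_per_node=4 train.py
```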