① Import the packages: import torch.distributed as dist
② from torch.utils.data.distributed import DistributedSampler
③ Add the local_rank argument (filled in automatically by the launcher): parser.add_argument('--local_rank', type=int, default=-1), followed by args = parser.parse_args()
④ Initialize the process group: dist.init_process_group(backend='nccl', init_method='env://', rank=args.local_rank, world_size=world_size), where world_size is the number of GPUs being used; on a single node the local rank can double as the global rank
⑤ Shard the data across processes:
data_sampler = DistributedSampler(dataset_train, rank=args.local_rank, num_replicas=world_size)
dataloader_train = DataLoader(dataset_train, batch_size=2, num_workers=2, collate_fn=collater, drop_last=True, pin_memory=True, sampler=data_sampler)
where batch_size is the per-process batch size and can be set as needed
⑥ Select the GPU for this process:
torch.cuda.set_device(args.local_rank)
device = torch.device("cuda", args.local_rank)
⑦ Wrap the model for distributed training:
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], output_device=args.local_rank, find_unused_parameters=True)
⑧ Inside the for loop over training epochs, add data_sampler.set_epoch(epoch_num) so that each epoch reshuffles the shards differently
A minimal end-to-end sketch combining these steps follows this list.
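Putting the eight steps together, here is a minimal single-node sketch of what train.py could look like. The dataset, model, loss, and optimizer are placeholders (real code would use the notes' dataset_train, collater, and actual network); everything else follows the steps above and is meant to be started with the launch command shown below, not run directly.

import argparse
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # step 3: the launcher passes --local_rank to every process it spawns
    parser = argparse.ArgumentParser()
    parser.add_argument('--local_rank', type=int, default=-1)
    args = parser.parse_args()

    # step 6 (done early): bind this process to its own GPU before any NCCL work
    torch.cuda.set_device(args.local_rank)
    device = torch.device("cuda", args.local_rank)

    # step 4: single-node setup, so world_size = number of visible GPUs and the
    # local rank doubles as the global rank; MASTER_ADDR / MASTER_PORT come from
    # the environment variables set by torch.distributed.launch
    world_size = torch.cuda.device_count()
    dist.init_process_group(backend='nccl', init_method='env://',
                            rank=args.local_rank, world_size=world_size)

    # placeholder dataset standing in for dataset_train; collater is omitted
    # here, so the DataLoader falls back to the default collate_fn
    dataset_train = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))

    # step 5: each process reads a disjoint shard of the dataset
    data_sampler = DistributedSampler(dataset_train, rank=args.local_rank,
                                      num_replicas=world_size)
    dataloader_train = DataLoader(dataset_train, batch_size=2, num_workers=2,
                                  drop_last=True, pin_memory=True,
                                  sampler=data_sampler)

    # placeholder model; replace with the real network
    model = torch.nn.Linear(10, 1).to(device)

    # step 7: wrap the model; gradients are all-reduced across processes
    model = torch.nn.parallel.DistributedDataParallel(
        model, device_ids=[args.local_rank], output_device=args.local_rank,
        find_unused_parameters=True)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch_num in range(10):
        # step 8: reshuffle so each epoch assigns different samples to each GPU
        data_sampler.set_epoch(epoch_num)
        for x, y in dataloader_train:
            x = x.to(device, non_blocking=True)
            y = y.to(device, non_blocking=True)
            loss = torch.nn.functional.mse_loss(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()

if __name__ == '__main__':
    main()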
Launch command, using two GPUs as an example:
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py
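The launcher starts one copy of train.py per process requested by --nproc_per_node, sets MASTER_ADDR, MASTER_PORT, WORLD_SIZE, and RANK in each process's environment (which is what init_method='env://' relies on), and appends --local_rank=0,1,... to each command line, which step ③ parses. For example, a hypothetical four-GPU run on one machine would only change the GPU list and the process count:

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 train.py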
Reference:
https://www.cnblogs.com/JunzhaoLiang/p/13535952.html