Traceback (most recent call last):
File "C:/Users/CAF_d201/PycharmProjects/pythonProject/代码资源_PyTorch深度学习入门/CH4/TransferLearning.py", line 125, in <module>
train(alexnet,criterion,optimizer,epochs=2)
File "C:/Users/CAF_d201/PycharmProjects/pythonProject/代码资源_PyTorch深度学习入门/CH4/TransferLearning.py", line 89, in train
loss.backward()
File "D:\Anaconda3_5\envs\pytorch\lib\site-packages\torch\_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "D:\Anaconda3_5\envs\pytorch\lib\site-packages\torch\autograd\__init__.py", line 149, in backward
allow_unreachable=True, accumulate_grad=True)
RuntimeError: CUDA out of memory. Tried to allocate 144.00 MiB (GPU 0; 2.00 GiB total capacity; 282.16 MiB already allocated; 118.23 MiB free; 294.00 MiB reserved in total by PyTorch)
Since I only have a single GPU, I cannot run two training processes on it at the same time; doing so produces the error above. Closing the extra process resolved the issue.
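If the error comes back, one quick way to confirm whether memory is already taken before training is to query PyTorch's own memory counters (a minimal sketch; device index 0 is assumed). Note that memory held by another process will not show up in these counters and can only be freed by closing that process, e.g. after checking nvidia-smi.

import torch

if torch.cuda.is_available():
    # Total memory of GPU 0 versus what this process has already claimed.
    total = torch.cuda.get_device_properties(0).total_memory
    allocated = torch.cuda.memory_allocated(0)   # memory held by live tensors
    reserved = torch.cuda.memory_reserved(0)     # memory cached by PyTorch's allocator
    print(f"total: {total / 1024**2:.0f} MiB, "
          f"allocated: {allocated / 1024**2:.0f} MiB, "
          f"reserved: {reserved / 1024**2:.0f} MiB")
    # Release cached blocks held by this process back to the driver.
    torch.cuda.empty_cache()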