Unfortunately, in the current implementation the with-device statement does not work that way; it can only be used to switch between CUDA devices.
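For instance, a minimal sketch of what the statement actually changes (assuming a machine with at least two GPUs, otherwise index 1 is invalid):

import torch

print(torch.cuda.current_device())      # 0 - the default device
with torch.cuda.device(1):
    print(torch.cuda.current_device())  # 1 - switched inside the context
print(torch.cuda.current_device())      # 0 - restored after the block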
You still have to use the device argument to specify which device to use (or .cuda() to move the tensor to a specific GPU), for example:
# allocates a tensor on GPU 1
a = torch.tensor([1., 2.], device=cuda)
So, to access cuda:1:
cuda = torch.device('cuda')
with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.tensor([1., 2.], device=cuda)
And to access cuda:2:
cuda = torch.device('cuda')
with torch.cuda.device(2):
    # allocates a tensor on GPU 2
    a = torch.tensor([1., 2.], device=cuda)
However, tensors created without the device argument are still CPU tensors:
cuda = torch.device('cuda')
with torch.cuda.device(1):
    # allocates a tensor on CPU
    a = torch.tensor([1., 2.])
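A quick check (my addition, not part of the original answer) makes this visible; note that an explicit .cuda() inside the context does pick up device 1:

import torch

cuda = torch.device('cuda')
with torch.cuda.device(1):
    a = torch.tensor([1., 2.])
    print(a.device)         # cpu - the context does not change the default
    print(a.cuda().device)  # cuda:1 - an explicit transfer uses the context's device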
To sum it up:
No - unfortunately, with the current implementation of the with-device statement, it is not possible to use it in the way you described in your question.
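In practice that means naming the GPU explicitly, for example with a small helper like this (the helper is my own sketch, not from the docs):

import torch

def tensor_on_gpu(data, index=0):
    # pass the device explicitly instead of relying on the with-statement
    dev = torch.device(f'cuda:{index}') if torch.cuda.is_available() else torch.device('cpu')
    return torch.tensor(data, device=dev)

a = tensor_on_gpu([1., 2.], index=1)  # lands on cuda:1 (given at least two GPUs)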
Here are some more examples from the documentation:
cuda = torch.device('cuda') # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2') # GPU 2 (these are 0-indexed)
x = torch.tensor([1., 2.], device=cuda0)
# x.device is device(type='cuda', index=0)
y = torch.tensor([1., 2.]).cuda()
# y.device is device(type='cuda', index=0)
with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.tensor([1., 2.], device=cuda)
    # transfers a tensor from CPU to GPU 1
    b = torch.tensor([1., 2.]).cuda()
    # a.device and b.device are device(type='cuda', index=1)
    # You can also use ``Tensor.to`` to transfer a tensor:
    b2 = torch.tensor([1., 2.]).to(device=cuda)
    # b.device and b2.device are device(type='cuda', index=1)
    c = a + b
    # c.device is device(type='cuda', index=1)
    z = x + y
    # z.device is device(type='cuda', index=0)
    # even within a context, you can specify the device
    # (or give a GPU index to the .cuda call)
    d = torch.randn(2, device=cuda2)
    e = torch.randn(2).to(cuda2)
    f = torch.randn(2).cuda(cuda2)
    # d.device, e.device, and f.device are all device(type='cuda', index=2)
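Note that the documentation examples assume at least three GPUs (they touch cuda:2); a defensive variant (my addition) before hard-coding an index:

import torch

wanted = 2
# fall back to CPU when the machine has fewer GPUs than the example assumes
if torch.cuda.is_available() and torch.cuda.device_count() > wanted:
    d = torch.randn(2, device=torch.device(f'cuda:{wanted}'))
else:
    d = torch.randn(2)
print(d.device)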