1. Supplement on tensor computation
2. Computational Graph (计算图)

Under the hood, autograd in PyTorch is built on a computational graph, a directed acyclic graph (DAG) that records the relationships between operators and variables. The figure below shows the computational graph of z = wx + b.

[Figure: computational graph of z = wx + b. Image source: 深度学习框架PyTorch入门与实践 (Deep Learning Framework PyTorch: Getting Started and Practice), by Chen Yun]

Basic concepts:
Leaf nodes are the input variables.
Intermediate nodes, for example z.
Output nodes, for example y.
The grad attribute stores a node tensor's gradient value.
The grad_fn attribute stores the tensor's gradient function, also called the backward function, e.g. the MulBackward0 shown in the session below.
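As a quick warm-up before the interactive session, here is a minimal sketch (not from the book; the concrete values are chosen only for illustration) of how these attributes look for the z = wx + b graph:

import torch

x = torch.tensor(2.0)                      # leaf node: plain input, requires_grad defaults to False
w = torch.tensor(3.0, requires_grad=True)  # leaf node: parameter we want gradients for
b = torch.tensor(1.0, requires_grad=True)  # leaf node: parameter we want gradients for
z = w * x + b                              # node created by operators, so it carries a grad_fn

print(x.is_leaf, w.is_leaf, z.is_leaf)     # True True False
print(z.grad_fn)                           # <AddBackward0 ...>: backward function of the last op
z.backward()                               # backpropagate from z
print(w.grad, b.grad)                      # gradients accumulate in the leaf nodes: tensor(2.) tensor(1.)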
>>> import torch
>>> x=torch.ones(2)
>>> x
tensor([1., 1.])
>>> x.requires_grad
False
>>> x.requires_grad=true
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'true' is not defined
>>> x.requires_grad=True
>>> z=4*x
>>> z
tensor([4., 4.], grad_fn=<MulBackward0>)
>>> z.requires_grad
True
>>> y=z.norm
>>> y
<bound method Tensor.norm of tensor([4., 4.], grad_fn=<MulBackward0>)>
>>> y=z.norm()
>>> y
tensor(5.6569, grad_fn=<CopyBackwards>)
>>> y.backward()
>>> x.grad
tensor([2.8284, 2.8284])
>>> z.grad
__main__:1: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
>>> z.retain_grad
<bound method Tensor.retain_grad of tensor([4., 4.], grad_fn=<MulBackward0>)>
>>>
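The warning above says that z is not a leaf node, so autograd does not populate z.grad by default. Note also that the last line of the session only looked up the bound method; retain_grad() has to actually be called (with parentheses) before backward(). Below is a minimal sketch repeating the session's computation with the non-leaf gradient kept; the values in the comments are the expected results, since y = ||4x|| = 4*sqrt(2), dy/dz = z/||z|| and dy/dx = 4*z/||z||:

import torch

x = torch.ones(2, requires_grad=True)
z = 4 * x
z.retain_grad()          # called before backward(), so z.grad will be populated
y = z.norm()
y.backward()

print(z.grad)            # dy/dz = z / ||z|| ≈ tensor([0.7071, 0.7071])
print(x.grad)            # dy/dx = 4 * z / ||z|| ≈ tensor([2.8284, 2.8284]), matching the session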
The input x here is a one-dimensional vector. A tensor's requires_grad attribute defaults to False. Once requires_grad is set to True on a node, every node that depends on it also gets requires_grad=True by default, because computing gradients has to apply the chain rule through those dependent nodes, as the sketch below shows.
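A minimal sketch of this propagation rule (the tensors a, b, c are hypothetical examples, not from the session above):

import torch

a = torch.ones(2)                       # requires_grad defaults to False
b = torch.ones(2, requires_grad=True)   # explicitly ask for gradients
c = a + b                               # depends on b, so it must carry gradient information

print(a.requires_grad)                  # False
print(b.requires_grad)                  # True
print(c.requires_grad)                  # True: propagated from b so the chain rule can run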