Compiled from Long Liangqu (龙良曲)'s PyTorch tutorial videos. Video link:
【计算机-AI】PyTorch学这个就够了!
1. Introduction to PyTorch
PyTorch was released in January 2017 by Facebook AI Research (FAIR), built on top of Torch. It is a Python-based scientific computing package that provides two high-level features: 1. tensor computation with strong GPU acceleration (a NumPy-like API); 2. deep neural networks built on an automatic differentiation (autograd) system.
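A minimal sketch of both features, assuming a standard PyTorch install (all tensor shapes and values below are arbitrary examples):
import torch

# 1. GPU-accelerated tensor computation (falls back to CPU if no GPU is available)
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.rand(2, 3, device=device)
y = torch.rand(2, 3, device=device)
print(x + y)

# 2. Automatic differentiation: gradients of loss w.r.t. w are tracked by autograd
w = torch.ones(3, requires_grad=True)
loss = (w * torch.tensor([1., 2., 3.])).sum()
loss.backward()
print(w.grad)    # tensor([1., 2., 3.])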
2. Development Environment Setup
Check the CUDA version from the command line: nvcc -V
Check the Python version from the command line: python --version
Check the torch version and GPU availability from Python:
import torch
print(torch.__version__)
print('gpu:', torch.cuda.is_available())
3. numpy.genfromtxt()
import numpy as np

file_name = "data.csv"
# data.csv holds comma-separated rows such as:
# 10.3,6.6
# ...
points = np.genfromtxt(file_name, delimiter=",")   # load the file into a 2-D array
for i in range(len(points)):
    x = points[i, 0]   # first column
    y = points[i, 1]   # second column
4. Basic Data Types
4.1 All is about Tensor
| Python | PyTorch |
| --- | --- |
| Int | IntTensor of size() |
| float | FloatTensor of size() |
| Int array | IntTensor of size [d1, d2, …] |
| Float array | FloatTensor of size [d1, d2, …] |
| string | – |
4.2 How to denote string
- One-hot: [0, 1, 0, 0, …]
- Embedding: Word2vec, GloVe (both are sketched below)
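A minimal sketch of the two representations, using torch.nn.functional.one_hot and nn.Embedding (the vocabulary size and word indices are made-up examples):
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size = 5                  # toy vocabulary of 5 words
words = torch.tensor([1, 3])    # word indices, e.g. "hello" -> 1, "world" -> 3

# One-hot: each word becomes a sparse 0/1 vector of length vocab_size
print(F.one_hot(words, num_classes=vocab_size))

# Embedding: each word index is mapped to a dense, learnable vector
emb = nn.Embedding(num_embeddings=vocab_size, embedding_dim=3)
print(emb(words).shape)    # torch.Size([2, 3])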
4.3 Data type
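The most common tensor types can be checked directly from their dtype attribute; a short illustration (not an exhaustive list):
import torch

print(torch.FloatTensor([1.0]).dtype)    # torch.float32
print(torch.DoubleTensor([1.0]).dtype)   # torch.float64
print(torch.IntTensor([1]).dtype)        # torch.int32
print(torch.LongTensor([1]).dtype)       # torch.int64
print(torch.ByteTensor([1]).dtype)       # torch.uint8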
4.4 Type check
a = torch.randn(2, 3)
print(a)
print(a.type())                               # torch.FloatTensor
print(type(a))                                # <class 'torch.Tensor'>
print(isinstance(a, torch.FloatTensor))       # True
print(isinstance(a, torch.cuda.FloatTensor))  # False
a = a.cuda()                                  # requires a CUDA-capable GPU
print(isinstance(a, torch.cuda.FloatTensor))  # True
4.5 Dimension / rank
- Dimension 0: a scalar, e.g. a loss value
- Dimension 1: bias, or a single linear-layer input
- Dimension 2: a batch of linear-layer inputs
- Dimension 3: a batch of RNN inputs
- Dimension 4: a batch of CNN inputs, [b, c, h, w] (each rank is sketched below)
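A quick sketch of tensors with these typical ranks (all sizes are arbitrary examples):
import torch

loss = torch.tensor(1.3)             # dim 0: scalar
bias = torch.rand(10)                # dim 1: e.g. a bias vector
x_lin = torch.rand(32, 784)          # dim 2: [batch, features] for a linear layer
x_rnn = torch.rand(20, 32, 100)      # dim 3: [seq_len, batch, features] for an RNN
x_cnn = torch.rand(32, 3, 28, 28)    # dim 4: [b, c, h, w] for a CNN
for t in (loss, bias, x_lin, x_rnn, x_cnn):
    print(t.dim(), t.shape)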
4.6 Mixed
a = torch.rand(2, 3, 4, 5)
print(a.size())      # torch.Size([2, 3, 4, 5])
print(a.shape)       # same as size()
print(len(a.shape))  # 4
print(a.dim())       # 4
print(a.numel())     # total number of elements: 2*3*4*5 = 120
5. Creating Tensors
5.1 Import from numpy
import numpy as np

a = np.ones(2)
print(a)                    # [1. 1.]
print(torch.from_numpy(a))  # tensor([1., 1.], dtype=torch.float64)
5.2 Import from List
a = torch.tensor([2, 3.2])
print(a)
torch.Tensor(d1, d2, …) interprets its arguments as a shape and returns an uninitialized tensor, whereas torch.tensor(data) always takes existing data.
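A minimal illustration of the difference (the uninitialized values will vary from run to run):
import torch

print(torch.Tensor(2, 3))      # shape arguments: a 2x3 tensor with uninitialized values
print(torch.Tensor([2, 3.2]))  # a list is treated as data, like torch.tensor()
print(torch.tensor([2, 3.2]))  # tensor([2.0000, 3.2000])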
5.3 uninitialized
Uninitialized tensors contain whatever garbage happens to be in memory, which can be extremely large or small; they are not recommended unless you overwrite the values immediately (a quick look follows the list).
- torch.empty()
- torch.FloatTensor(d1, d2, d3)
- torch.IntTensor(d1, d2, d3)
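A quick look at what these return (the garbage values differ on every run):
import torch

print(torch.empty(2, 3))        # uninitialized memory, arbitrary values
print(torch.FloatTensor(2, 3))  # same idea: shape arguments, uninitialized data
print(torch.IntTensor(2, 3))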
5.4 set default type
print(torch.tensor([1.2, 3]).type())  # torch.FloatTensor (the default)
torch.set_default_tensor_type(torch.DoubleTensor)
print(torch.tensor([1.2, 3]).type())  # torch.DoubleTensor
5.5 rand / rand_like, randint, randn
Random initialization with these functions is the recommended way to create new tensors.
a = torch.rand(2, 1)                # uniform samples from [0, 1)
print(a)
print(torch.rand_like(a))           # same shape as a
print(torch.randint(1, 5, [2, 1]))  # integers in [1, 5)
torch.randn() draws samples from the standard normal distribution N(0, 1). torch.normal(mean=torch.full([10], 0.), std=torch.arange(1, 0, -0.1)) draws from a normal distribution with a custom mean and standard deviation for each element.
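A short sketch of both calls (the decreasing std values mirror the line above):
import torch

print(torch.randn(3, 3))                 # samples from N(0, 1)

mean = torch.full([10], 0.)              # per-element means, all 0
std = torch.arange(1, 0, -0.1)           # per-element stds: 1.0, 0.9, ..., 0.1
print(torch.normal(mean=mean, std=std))  # one sample per (mean, std) pair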
5.6 full
print(torch.full([], 7))   # 0-dim scalar tensor
print(torch.full([1], 7))  # 1-element tensor
5.7 arange / range
print(torch.arange(0, 10, 2))  # tensor([0, 2, 4, 6, 8]); end point excluded
print(torch.range(0, 10, 2))   # tensor([ 0., 2., 4., 6., 8., 10.]); deprecated, end point included
5.8 linspace / logspace
print(torch.linspace(0, 10, steps=3))  # 3 evenly spaced points: tensor([ 0.,  5., 10.])
print(torch.logspace(0, -1, steps=5))  # 10**x for 5 exponents evenly spaced from 0 to -1
5.9 Ones / zeros / eye
print(torch.ones(1, 2))
print(torch.zeros(1, 2))
print(torch.eye(2))
5.10 randperm
a = torch.rand(4, 3)
print(a)
idx = torch.randperm(3)  # a random permutation of 0..2
print(idx)
print(a[idx])            # rows of a reordered by idx (here idx happens to be [1, 0, 2])
"""
tensor([[0.1708, 0.2821, 0.8163],
[0.8898, 0.6628, 0.2350],
[0.3743, 0.4281, 0.5309],
[0.4996, 0.7259, 0.5485]])
tensor([1, 0, 2])
tensor([[0.8898, 0.6628, 0.2350],
[0.1708, 0.2821, 0.8163],
[0.3743, 0.4281, 0.5309]])
"""
6. Indexing and Slicing
Indexing and slicing work the same way as for Python lists: list[start:end:step].
- Indexing
- select first / last N
- select by steps
- select by specific index
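A brief sketch of these access patterns on a batch of images of shape [4, 3, 28, 28]:
import torch

a = torch.rand(4, 3, 28, 28)    # [batch, channel, height, width]
print(a[0].shape)               # indexing: torch.Size([3, 28, 28])
print(a[:2].shape)              # first 2 samples: torch.Size([2, 3, 28, 28])
print(a[-1:].shape)             # last sample: torch.Size([1, 3, 28, 28])
print(a[:, :, ::2, ::2].shape)  # by steps: torch.Size([4, 3, 14, 14])
print(a.index_select(0, torch.tensor([0, 2])).shape)  # specific indices: torch.Size([2, 3, 28, 28])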
6.1 …
... stands for as many full slices (:) as needed to cover all remaining dimensions.
a = torch.rand(4, 3, 28, 28)
print(a[0, 1].shape)                             # torch.Size([28, 28])
print(a[0, 0, 2, 4])                             # a single scalar element
print(a.index_select(2, torch.arange(8)).shape)  # torch.Size([4, 3, 8, 28])
print(a[:, :1, ...].shape)                       # torch.Size([4, 1, 28, 28])
6.2 select by mask
x = torch.randn(3, 4)
print(x)
mask = x.ge(0.5)                     # elementwise x >= 0.5
print(mask)
print(torch.masked_select(x, mask))  # returns the selected values as a flat 1-D tensor
"""
tensor([[-1.8692, 0.9687, -0.4978, 0.7836],
[-2.5662, 0.0487, 0.3978, -0.3676],
[-1.5896, -0.1129, -1.9687, 0.5585]])
tensor([[False, True, False, True],
[False, False, False, False],
[False, False, False, True]])
tensor([0.9687, 0.7836, 0.5585])
"""
6.3 select by flatten index
The tensor is flattened to 1-D first and then indexed by position.
src = torch.tensor([[4, 3, 5],
                    [6, 7, 8]])
print(torch.take(src, torch.tensor([0, 2, 5])))  # tensor([4, 5, 8])
7. Dimension Transformations
7.1 View / reshape
view must keep the total number of elements unchanged, and the new shape should carry a sensible meaning; otherwise the logical structure of the data is silently scrambled (data pollution).
a = torch.rand(4, 1, 28, 28)
print(a.view(4, 28*28).shape)  # torch.Size([4, 784]); 4*1*28*28 == 4*784
print(a.unsqueeze(4).shape)    # torch.Size([4, 1, 28, 28, 1])
7.2 Squeeze / unsqueeze
unsqueeze inserts a new dimension of size 1 without changing the underlying data; squeeze removes the given dimension only if its size is 1, otherwise the tensor is returned unchanged.
b = torch.tensor([1, 2])
print(b.unsqueeze(-1))     # tensor([[1], [2]]), shape [2, 1]
print(b.unsqueeze(0))      # tensor([[1, 2]]), shape [1, 2]
c = torch.rand(1, 31, 1, 1)
print(c.shape)             # torch.Size([1, 31, 1, 1])
print(c.squeeze().shape)   # torch.Size([31]): all size-1 dims removed
print(c.squeeze(0).shape)  # torch.Size([31, 1, 1])
print(c.squeeze(1).shape)  # torch.Size([1, 31, 1, 1]): dim 1 has size 31, left unchanged
7.3 Expand / repeat
- Expand: broadcasting, no memory copy
- Repeat: the data is physically copied
a = torch.rand(1, 32, 1, 1)
print(a.shape)                        # torch.Size([1, 32, 1, 1])
print(a.expand(4, 32, 14, 14).shape)  # arguments are target sizes: torch.Size([4, 32, 14, 14])
print(a.repeat(4, 32, 1, 1).shape)    # arguments are repeat counts: torch.Size([4, 1024, 1, 1])
# to match the expand result above, use a.repeat(4, 1, 14, 14)
7.4 Transpose / t / permute
t() can only transpose 2-D matrices; higher-dimensional tensors raise an error.
a = torch.rand(2, 3)
print(a.t().shape)                              # torch.Size([3, 2])
b = torch.rand(4, 3, 28, 28)
print(b.transpose(1, 3).transpose(1, 2).shape)  # torch.Size([4, 28, 28, 3]), i.e. [b, h, w, c]
print(b.permute(0, 2, 3, 1).shape)              # same result in a single call
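A related pitfall from 7.1: after transpose, the memory layout is no longer contiguous, so view() refuses to work until contiguous() is called. A minimal sketch, continuing with b from the snippet above:
bt = b.transpose(1, 3)              # shape [4, 28, 28, 3], memory no longer contiguous
print(bt.is_contiguous())           # False
# bt.view(4, -1) would raise a RuntimeError here
flat = bt.contiguous().view(4, -1)  # copy into contiguous memory first
print(flat.shape)                   # torch.Size([4, 2352])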
8. Broadcasting
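Broadcasting automatically expands size-1 (or missing leading) dimensions so that tensors of different shapes can be combined without copying data. A minimal sketch, adding a per-channel bias to a batch of feature maps:
import torch

feature_maps = torch.rand(4, 32, 14, 14)  # [b, c, h, w]
bias = torch.rand(32, 1, 1)               # one value per channel

# bias is treated as [1, 32, 1, 1] and broadcast to [4, 32, 14, 14]
out = feature_maps + bias
print(out.shape)                          # torch.Size([4, 32, 14, 14])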