Common Layers in Convolutional Neural Networks
| Type | Name | Purpose |
| --- | --- | --- |
| Conv | Convolution layer | Feature extraction |
| ReLU | Activation layer | Non-linear activation |
| Pool | Pooling layer | Downsampling |
| BatchNorm | Batch normalization layer | Normalize activations |
| Linear (Fully Connected) | Fully connected layer | Linear mapping |
| Dropout | Dropout layer | Regularization |
| ConvTranspose | Transposed convolution (deconvolution) | Upsampling |
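To see how these layers fit together, here is a minimal sketch of a tiny classifier that stacks them; all sizes (channels, input resolution, class count) are illustrative assumptions, not from the original post:

```python
import torch
import torch.nn as nn

# A toy CNN stacking the layer types from the table above.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # Conv: feature extraction
    nn.BatchNorm2d(8),                          # BatchNorm: normalize activations
    nn.ReLU(),                                  # ReLU: non-linear activation
    nn.MaxPool2d(2),                            # Pool: downsample by 2x
    nn.Flatten(),                               # flatten before the linear layer
    nn.Dropout(p=0.5),                          # Dropout: regularization
    nn.Linear(8 * 14 * 14, 10),                 # Linear: fully connected classifier
)
x = torch.randn(1, 1, 28, 28)                   # e.g. an MNIST-sized input
print(model(x).size())                          # torch.Size([1, 10])
```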
Usage of Each Layer Type in PyTorch
Convolution
| Convolution class | Purpose |
| --- | --- |
| torch.nn.Conv1d | 1D convolution |
| torch.nn.Conv2d | 2D convolution |
| torch.nn.Conv3d | 3D convolution |
| torch.nn.ConvTranspose1d | 1D transposed convolution |
| torch.nn.ConvTranspose2d | 2D transposed convolution |
| torch.nn.ConvTranspose3d | 3D transposed convolution |
```python
import torch
import torch.nn as nn

# 1D convolution: 16 -> 33 channels; kernel 3 with padding 1 and stride 1
# keeps the sequence length unchanged
m = nn.Conv1d(in_channels=16,
              out_channels=33,
              kernel_size=3,
              padding=1,
              stride=1)
input = torch.randn(1, 16, 50)  # (batch, channels, length)
output = m(input)
print(output.size())  # torch.Size([1, 33, 50])
```
```python
# 2D convolution with a square kernel and stride 2 (no padding)
m = nn.Conv2d(in_channels=16,
              out_channels=33,
              kernel_size=3,
              stride=2)
input = torch.randn(20, 16, 50, 100)  # (batch, channels, height, width)
output = m(input)
print(output.size())  # torch.Size([20, 33, 24, 49])
```
```python
# 2D convolution with a non-square kernel, unequal stride/padding, and dilation
m = nn.Conv2d(in_channels=16,
              out_channels=33,
              kernel_size=(3, 5),
              stride=(2, 1),
              padding=(4, 2),
              dilation=(3, 1))
output = m(input)  # reuses the (20, 16, 50, 100) input from above
print(output.size())  # torch.Size([20, 33, 26, 100])
```
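All of the printed shapes above follow the standard convolution output-size formula, out = floor((in + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1. A small helper (my own sketch, not a PyTorch API) that reproduces them:

```python
def conv_out(size, kernel, stride=1, padding=0, dilation=1):
    """Output length of one spatial dimension of a convolution."""
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

print(conv_out(50, 3, stride=1, padding=1))                   # 50 (Conv1d example)
print(conv_out(50, 3, stride=2), conv_out(100, 3, stride=2))  # 24 49
print(conv_out(50, 3, stride=2, padding=4, dilation=3))       # 26
print(conv_out(100, 5, stride=1, padding=2, dilation=1))      # 100
```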
```python
# 2D transposed convolution ("deconvolution"): upsamples the input
m = nn.ConvTranspose2d(in_channels=16,
                       out_channels=33,
                       kernel_size=3,
                       stride=2)
input = torch.randn(20, 16, 50, 100)
output = m(input)
print(output.size())  # torch.Size([20, 33, 101, 201])
```
```python
# Downsample with a strided convolution, then restore the original
# spatial size with a transposed convolution
input = torch.randn(1, 16, 12, 12)
downsample = nn.Conv2d(in_channels=16,
                       out_channels=16,
                       kernel_size=3,
                       stride=2,
                       padding=1)
upsample = nn.ConvTranspose2d(in_channels=16,
                              out_channels=16,
                              kernel_size=3,
                              stride=2,
                              padding=1,
                              output_padding=1)
output = downsample(input)
print(output.size())  # torch.Size([1, 16, 6, 6])
output = upsample(output)
print(output.size())  # torch.Size([1, 16, 12, 12])
```
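For transposed convolutions the output size is out = (in - 1)*stride - 2*padding + dilation*(kernel - 1) + output_padding + 1. The output_padding=1 above is what lets the upsample recover exactly 12x12, since both 11x11 and 12x12 inputs downsample to 6x6. A quick checker in the same spirit as conv_out:

```python
def conv_transpose_out(size, kernel, stride=1, padding=0, output_padding=0, dilation=1):
    """Output length of one spatial dimension of a transposed convolution."""
    return (size - 1) * stride - 2 * padding + dilation * (kernel - 1) + output_padding + 1

print(conv_transpose_out(50, 3, stride=2))   # 101
print(conv_transpose_out(100, 3, stride=2))  # 201
print(conv_transpose_out(6, 3, stride=2, padding=1, output_padding=1))  # 12
```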
Applying a Convolution to an Image
- Image loading: read the image with PIL.Image
- Tensor conversion: use torch.from_numpy() to convert the image to a Tensor
- Dimension changes: use tensor.squeeze() to remove dimensions and tensor.unsqueeze() to add them
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

torch.set_default_dtype(torch.float)

# Load a grayscale image and convert it to a float Tensor
image = Image.open('./data/lena_gray.jpg')
x = torch.from_numpy(np.array(image, dtype=np.float32))
print(x.size())  # (H, W)

# F.conv2d expects a 4D input: (batch, channels, height, width)
x = x.unsqueeze(0)  # add channel dim -> (1, H, W)
x = x.unsqueeze(0)  # add batch dim   -> (1, 1, H, W)
print(x.size())

# 3x3 Sobel kernel for horizontal (x-direction) edge detection
filter1 = torch.tensor([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=torch.float32)
# F.conv2d expects weights of shape (out_channels, in_channels, kH, kW)
filter1 = filter1.unsqueeze(0)
filter1 = filter1.unsqueeze(0)

out = F.conv2d(x, filter1)
print(out.size())

# Drop the batch and channel dims again before plotting
out = out.squeeze(0)
out = out.squeeze(0)
plt.imshow(out.numpy().astype(np.uint8), cmap='gray')
plt.axis('off')
plt.show()
```
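The same fixed kernel can also be wrapped in an nn.Conv2d module by copying it into the layer's weights; a sketch (using bias=False and a no_grad copy, my own choices, not from the original post):

```python
# Wrap the fixed Sobel kernel in an nn.Conv2d layer
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, bias=False)
with torch.no_grad():
    conv.weight.copy_(filter1)  # filter1 has shape (1, 1, 3, 3)
out2 = conv(x)                  # same computation as F.conv2d(x, filter1)
print(torch.equal(out2, F.conv2d(x, filter1)))  # True
```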
Pooling Layer
Pooling is generally done with one of two methods: max pooling or average pooling.
Pooling APIs provided by PyTorch:
```python
# Max pooling with a square window
m = nn.MaxPool2d(kernel_size=3, stride=2)
input = torch.randn(20, 16, 50, 32)
output = m(input)
print(output.size())  # torch.Size([20, 16, 24, 15])

# Max pooling with a non-square window
m = nn.MaxPool2d(kernel_size=(3, 2), stride=(2, 1))
output = m(input)
print(output.size())  # torch.Size([20, 16, 24, 31])
```
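Average pooling uses the same windowing logic through nn.AvgPool2d; a minimal sketch with the same input:

```python
# Average pooling: same window/stride arithmetic, but averages instead of taking the max
m = nn.AvgPool2d(kernel_size=3, stride=2)
output = m(input)
print(output.size())  # torch.Size([20, 16, 24, 15])
```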
Dropout Layer
A key property of the Dropout layer: the output Tensor has the same shape as the input Tensor.
Computation during training: each element is zeroed with probability p, and the surviving elements are scaled up, i.e. tensor_out = 1/(1-p) * tensor_input for the elements that are kept.
```python
# Element-wise dropout: each element is zeroed with probability p=0.2
m = nn.Dropout(p=0.2, inplace=False)
input = torch.randn(1, 5)
output = m(input)
print('input:', input, '\n',
      output, output.size())

# 2D dropout: zeroes entire channels (here there is only one) with probability p=0.2
m = nn.Dropout2d(p=0.2)
input = torch.randn(1, 1, 5, 5)
output = m(input)
print(output, output.size())
```
For example, with p=0.2 a kept element of value 1 becomes 1 * 1/(1-0.2) = 1.25.
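This scaling is easy to verify with an all-ones input, where every kept element should come out as exactly 1/(1-p); a quick sketch:

```python
# Every kept element of an all-ones input becomes 1 / (1 - 0.2) = 1.25
m = nn.Dropout(p=0.2)
input = torch.ones(1, 10)
output = m(input)
print(output)  # entries are either 0.0000 or 1.2500
```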
Fully Connected / Linear Layer
```python
import torch
import torch.nn as nn

# Linear (fully connected) layer: maps 20 input features to 30 output features
m = nn.Linear(in_features=20, out_features=30)
input = torch.randn(128, 20)  # (batch, features)
output = m(input)
print(output.size())  # torch.Size([128, 30])
```
```python
# Bilinear layer: combines two input feature vectors into one output
m = nn.Bilinear(in1_features=20, in2_features=30, out_features=40)
input1 = torch.randn(128, 20)
input2 = torch.randn(128, 30)
output = m(input1, input2)
print(output.size())  # torch.Size([128, 40])
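```

Under the hood, nn.Bilinear computes y_k = x1^T A_k x2 + b_k with a weight of shape (out_features, in1_features, in2_features); a sketch reproducing it with torch.einsum:

```python
# Reproduce nn.Bilinear by hand: m.weight has shape (40, 20, 30)
manual = torch.einsum('bi,oij,bj->bo', input1, m.weight, input2) + m.bias
print(torch.allclose(manual, output, atol=1e-5))  # True
```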
Normalization Layer
```python
import torch
import torch.nn as nn

# BatchNorm1d over a (batch, features) input; affine=False disables the
# learnable scale (gamma) and shift (beta) parameters
m = nn.BatchNorm1d(num_features=10, affine=False)
input = torch.randn(20, 10)
output = m(input)
print(output, output.size())  # torch.Size([20, 10])

# BatchNorm2d: normalizes each of the 100 channels over (batch, H, W)
m = nn.BatchNorm2d(num_features=100, affine=True)
input = torch.randn(20, 100, 35, 45)
output = m(input)
print(output.size())  # torch.Size([20, 100, 35, 45])
```
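In training mode, BatchNorm normalizes each channel to approximately zero mean and unit variance over the batch and spatial dimensions, which can be checked directly:

```python
# Per-channel statistics of the BatchNorm2d output: ~0 mean, ~1 std
print(output.mean(dim=(0, 2, 3)))  # values close to 0
print(output.std(dim=(0, 2, 3)))   # values close to 1
```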
Source: https://www.jianshu.com/p/343e1d994c39