conv1d code
Start from the simplest case: no bias, no padding, stride=1, and no grouped computation. These conditions can be added step by step later; for now, implement the most basic version to understand the underlying process.
"""
Created on Sat Mar 12 15:04:51 2022
@author: masteryi
"""
def myconv1d(infeat, convkernel, padding=0, stride=1):
    # padding and stride are not used yet; only the defaults are supported here.
    # Input shape: (batch, in_channels, length); kernel shape: (out_channels, in_channels, kernel_length)
    b, c, h = len(infeat), len(infeat[0]), len(infeat[0][0])
    out_c, in_c, lenk = len(convkernel), len(convkernel[0]), len(convkernel[0][0])
    # With no padding and stride 1, the output length is h - lenk + 1
    res = [[[0] * (h - lenk + 1) for _ in range(out_c)] for _ in range(b)]
    for i in range(b):                         # batch
        for j in range(out_c):                 # output channel
            for m in range(c):                 # input channel (summed into the same output)
                for n in range(h - lenk + 1):  # sliding-window position
                    ans = 0
                    for k in range(lenk):
                        ans += infeat[i][m][n+k] * convkernel[j][m][k]
                    res[i][j][n] += ans
    return res
infeat = [[[1,2,3,4], [1,2,4,3]]]
convkernel = [[[0,1,2], [0,2,1]], [[1,0,2], [1,2,0]], [[2,0,1], [2,1,0]]]
outfeat = myconv1d(infeat, convkernel)
print(outfeat)
from torch.nn.functional import conv1d
import torch
import numpy
infeat = torch.tensor(numpy.array(infeat))
convkernel = torch.tensor(numpy.array(convkernel))
outfeat_pytorch = conv1d(infeat, convkernel)
print(outfeat_pytorch)
The output is as follows, identical to the official PyTorch result:
[[[16, 22], [12, 20], [9, 16]]]
tensor([[[16, 22],
[12, 20],
[ 9, 16]]], dtype=torch.int32)
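As noted at the start, padding and stride can be added incrementally. Below is a minimal sketch of that extension; the name myconv1d_v2 and the zero-padding logic are my own additions, not from the original code:

```python
def myconv1d_v2(infeat, convkernel, padding=0, stride=1):
    b, c, h = len(infeat), len(infeat[0]), len(infeat[0][0])
    out_c, in_c, lenk = len(convkernel), len(convkernel[0]), len(convkernel[0][0])
    # Zero-pad each channel on both sides
    padded = [[[0] * padding + row + [0] * padding for row in sample]
              for sample in infeat]
    hp = h + 2 * padding
    # General output length: (h + 2*padding - lenk) // stride + 1
    out_len = (hp - lenk) // stride + 1
    res = [[[0] * out_len for _ in range(out_c)] for _ in range(b)]
    for i in range(b):
        for j in range(out_c):
            for m in range(c):
                for n in range(out_len):
                    start = n * stride  # window start moves by `stride`
                    for k in range(lenk):
                        res[i][j][n] += padded[i][m][start + k] * convkernel[j][m][k]
    return res

infeat = [[[1, 2, 3, 4], [1, 2, 4, 3]]]
convkernel = [[[0, 1, 2], [0, 2, 1]], [[1, 0, 2], [1, 2, 0]], [[2, 0, 1], [2, 1, 0]]]
print(myconv1d_v2(infeat, convkernel))  # [[[16, 22], [12, 20], [9, 16]]], same as above
print(myconv1d_v2(infeat, convkernel, padding=1, stride=2))
```

With padding=0 and stride=1 this reduces to the basic version, which is an easy sanity check against the output above.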
Thoughts
- For in_c input channels and out_c output channels, the convolution layer is built from out_c groups of kernels; each group contains in_c kernels, and each kernel has size k_h × k_w (for 1-D convolution, just a length-k vector).
- The kernels within a group each have their own weights and do not affect one another, and the kernel weights also differ between groups; so the layer's parameter count is c_out × c_in × k_h × k_w.
- Reference: Zhihu: 卷积神经网络CNN(2),详细认识卷积过程
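The parameter-count formula above can be checked against PyTorch directly; a small sketch using the same channel counts as the example (c_in = 2, c_out = 3, k = 3), with bias disabled to match our implementation:

```python
import torch.nn as nn

# Expected parameter count: c_out * c_in * k = 3 * 2 * 3 = 18
layer = nn.Conv1d(in_channels=2, out_channels=3, kernel_size=3, bias=False)
n_params = sum(p.numel() for p in layer.parameters())
print(layer.weight.shape)  # torch.Size([3, 2, 3])
print(n_params)            # 18
```

The weight tensor's shape (out_channels, in_channels, kernel_size) mirrors the convkernel nesting used in myconv1d.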