MobileNet
Traditional convolutional neural networks have large memory footprints and heavy computational costs, which makes them impractical to run on mobile and embedded devices.
The MobileNet network was proposed by a Google team in 2017 as a lightweight CNN aimed at mobile and embedded devices. Compared with traditional CNNs, it greatly reduces the number of parameters and the amount of computation at the cost of a small drop in accuracy (about 0.9% lower accuracy than VGG16, with only about 1/32 of VGG's parameters).
Comparison of standard convolution and depthwise convolution: in a depthwise convolution, each kernel has a single channel and operates on exactly one input channel, so the output has the same number of channels as the input.
Pointwise (PW) convolution is simply a 1×1 convolution, used afterwards to adjust the channel depth.
Together they form a depthwise separable convolution, and the computational cost drops dramatically:
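As a rough illustration (not from the original post), here is a minimal PyTorch sketch comparing the parameter count of a standard 3×3 convolution with a depthwise separable one built from a depthwise 3×3 convolution plus a pointwise 1×1 convolution; the channel sizes 64 and 128 are arbitrary example values:
import torch.nn as nn

in_ch, out_ch = 64, 128

# Standard 3x3 convolution: 3*3*64*128 = 73,728 weights
standard = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)

# Depthwise separable convolution:
#   depthwise 3x3 (groups=in_ch): 3*3*64 = 576 weights
#   pointwise 1x1:                64*128 = 8,192 weights
separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch, bias=False),
    nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
)

def count(m): return sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))  # 73728 vs 8768, roughly an 8x reduction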
MobileNet V1
Highlights of the network:
- Depthwise convolution (greatly reduces computation and parameter count)
- Two additional hyperparameters, α and β
Here α is the width multiplier, which scales the number of convolution kernels (channels) in each layer, and β is the resolution multiplier, which scales the resolution of the input image.
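For reference, the original paper writes the resolution multiplier as ρ rather than β; with both multipliers applied, the cost of one depthwise separable layer becomes
D_K · D_K · αM · ρD_F · ρD_F + αM · αN · ρD_F · ρD_F
where D_K is the kernel size, D_F the feature-map size, and M, N the numbers of input and output channels.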
MobileNet V2
The MobileNet V2 network was proposed by the Google team in 2018. Compared with MobileNet V1, it is more accurate and the model is smaller.
Highlights of the network:
- Inverted Residuals (inverted residual structure)
- Linear Bottlenecks
In contrast to the bottleneck in ResNet, which is wide at both ends and narrow in the middle, this block is narrow at both ends and wide in the middle, hence an inverted residual structure.
ReLU6 is used as the activation function.
In the last 1×1 convolution layer of the inverted residual block, a linear activation is used instead of ReLU.
Not every block has a shortcut branch: the shortcut connection is present only when stride=1 and the input feature map has the same shape as the output feature map, as in the sketch below.
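For intuition, here is a minimal sketch of such an inverted residual block (an illustrative re-implementation, not code from the original post; the expansion factor expand_ratio controls how much the first 1×1 convolution widens the channels):
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride, expand_ratio):
        super().__init__()
        hidden = in_ch * expand_ratio
        # shortcut only when stride=1 and input/output shapes match
        self.use_shortcut = (stride == 1 and in_ch == out_ch)
        layers = []
        if expand_ratio != 1:
            # 1x1 pointwise convolution expands the channels
            layers += [nn.Conv2d(in_ch, hidden, 1, bias=False),
                       nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True)]
        layers += [
            # 3x3 depthwise convolution
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            # 1x1 pointwise projection back down, with a linear activation (linear bottleneck)
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        ]
        self.conv = nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv(x)
        return x + out if self.use_shortcut else out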
Network structure:
The bottleneck described in this network structure is the inverted residual block.
Below is an efficiency comparison between MobileNet V2 and MobileNet V1 on classification and object-detection tasks. There is a clear speed-up on CPU, which essentially makes real-time inference on mobile devices feasible.
MobileNet V3
The main updates in MobileNet V3:
- An updated block (bneck)
- Parameters searched with NAS (Neural Architecture Search)
- A redesign of the time-consuming layers
Updated block
Redesign of the time-consuming layers
Redesign of the activation function
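The redesigned activation is h-swish, built from the piecewise-linear h-sigmoid, which is cheap to compute on mobile hardware. A small sketch of both functions (PyTorch also ships them as nn.Hardsigmoid and nn.Hardswish):
import torch
import torch.nn.functional as F

def h_sigmoid(x):
    # hard sigmoid: ReLU6(x + 3) / 6
    return F.relu6(x + 3.0) / 6.0

def h_swish(x):
    # hard swish: x * h_sigmoid(x)
    return x * h_sigmoid(x)

print(h_swish(torch.linspace(-6, 6, 5)))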
SENet
Momenta, a Chinese autonomous-driving startup, won the ImageNet 2017 challenge with the SENet architecture (Squeeze-and-Excitation Networks); the paper's author is Hu Jie, a senior R&D engineer at Momenta.
Can a network be improved from another angle, for example by considering the relationships between feature channels? SENet does exactly that at the channel level: it learns the importance of each feature channel, and then uses this importance to boost useful features and suppress features that are of little use to the current task.
The SE module is shown in the figure below:
First comes the Squeeze operation, which compresses the features along the spatial dimensions, turning each 2-D feature channel into a single real number. This number has, in a sense, a global receptive field and represents the global distribution of responses on that feature channel.
Next is the Excitation operation, similar to the gating mechanism in recurrent neural networks. A weight is generated for each feature channel via parameters W, which are learned to explicitly model the correlations between channels (implemented here as two fully connected layers).
Finally comes the Reweight operation: the weights output by Excitation are treated as the importance of each feature channel after feature selection, and are multiplied channel-wise onto the earlier features, recalibrating the original features along the channel dimension.
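Written as formulas (following the SENet paper), for a channel feature map u_c of size H×W:
z_c = (1 / (H·W)) · Σ_i Σ_j u_c(i, j)    (Squeeze: global average pooling)
s = σ(W_2 · δ(W_1 · z))                  (Excitation: FC, ReLU δ, FC, sigmoid σ)
x̃_c = s_c · u_c                          (Reweight: channel-wise scaling)
where W_1 reduces the channel dimension by a ratio r (16 in the code below) and W_2 restores it.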
Below is the code for the BasicBlock:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
import torch.optim as optim
class BasicBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # When the shortcut's input and output dimensions differ, use a 1x1 convolution to match them
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels))
        # The two fully connected layers used in the excitation step
        self.fc1 = nn.Conv2d(out_channels, out_channels//16, kernel_size=1)
        self.fc2 = nn.Conv2d(out_channels//16, out_channels, kernel_size=1)

    # Define the forward pass
    def forward(self, x):
        # Two convolutions on the feature map
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Squeeze: global average pooling
        w = F.avg_pool2d(out, out.size(2))
        # Excitation: fc (reduce to 1/16) -- ReLU -- fc (restore previous dimension) -- sigmoid (keep outputs in [0, 1])
        w = F.relu(self.fc1(w))
        w = torch.sigmoid(self.fc2(w))
        # Reweight: multiply the convolved feature map by w
        out = out * w
        # Add the shallow (shortcut) feature map
        out += self.shortcut(x)
        # ReLU activation
        out = F.relu(out)
        return out
The SENet network structure:
# Build the SENet network
class SENet(nn.Module):
    def __init__(self):
        super(SENet, self).__init__()
        # Number of output classes
        self.num_classes = 10
        # Input depth to the residual layers is 64
        self.in_channels = 64
        # Start with 64 3x3 convolution kernels
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        # Residual layers built from BasicBlock
        # 2, 2, 2, 2 is the number of blocks in each layer
        self.layer1 = self._make_layer(BasicBlock, 64, 2, stride=1)
        self.layer2 = self._make_layer(BasicBlock, 128, 2, stride=2)
        self.layer3 = self._make_layer(BasicBlock, 256, 2, stride=2)
        self.layer4 = self._make_layer(BasicBlock, 512, 2, stride=2)
        # Fully connected classifier
        self.linear = nn.Linear(512, self.num_classes)

    # Build one residual layer
    # blocks is the number of residual blocks in the layer (ResNet-18 uses 2, 2, 2, 2)
    def _make_layer(self, block, out_channels, blocks, stride):
        strides = [stride] + [1]*(blocks-1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_channels, out_channels, stride))
            self.in_channels = out_channels
        return nn.Sequential(*layers)

    # Define the forward pass
    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out
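A quick sanity check of the structure (an illustrative snippet; it assumes 32×32 inputs such as CIFAR-10 so that the final 4×4 average pooling works out):
net = SENet()
x = torch.randn(2, 3, 32, 32)   # a batch of two 32x32 RGB images
y = net(x)
print(y.shape)                  # expected: torch.Size([2, 10])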
HybridSN for Hyperspectral Classification
S. K. Roy, G. Krishna, S. R. Dubey, B. B. Chaudhuri, "HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification," IEEE GRSL, 2020.
This paper builds a hybrid network for hyperspectral image classification: 3D convolutions first, then 2D convolutions. The code is relatively simple; a walkthrough follows.
1. Define the HybridSN class
The network structure of the model is shown in the figure below:
The 3D convolution part:
- conv1: input (1, 30, 25, 25), 8 kernels of size 7x3x3 ==> (8, 24, 23, 23)
- conv2: input (8, 24, 23, 23), 16 kernels of size 5x3x3 ==> (16, 20, 21, 21)
- conv3: input (16, 20, 21, 21), 32 kernels of size 3x3x3 ==> (32, 18, 19, 19)
To switch to 2D convolution, the 32*18 dimensions are reshaped into channels, giving (576, 19, 19).
2D convolution: (576, 19, 19) with 64 kernels of size 3x3 gives (64, 17, 17).
Next comes a flatten operation, producing an 18496-dimensional vector (64*17*17),
followed by fully connected layers with 256 and 128 nodes, each using Dropout with rate 0.4.
The final output has 16 nodes, the number of classes.
Below is the code for the HybridSN class:
class_num = 16
class HybridSN(nn.Module):
    def __init__(self):
        super(HybridSN, self).__init__()
        # conv1: (1, 30, 25, 25), 8 kernels of size 7x3x3 ==> (8, 24, 23, 23)
        self.conv1 = nn.Sequential(
            nn.Conv3d(in_channels=1, out_channels=8, kernel_size=(7, 3, 3), stride=1, padding=0),
            nn.BatchNorm3d(8),
            nn.ReLU(inplace=True)
        )
        # conv2: (8, 24, 23, 23), 16 kernels of size 5x3x3 ==> (16, 20, 21, 21)
        self.conv2 = nn.Sequential(
            nn.Conv3d(in_channels=8, out_channels=16, kernel_size=(5, 3, 3), stride=1, padding=0),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True)
        )
        # conv3: (16, 20, 21, 21), 32 kernels of size 3x3x3 ==> (32, 18, 19, 19)
        self.conv3 = nn.Sequential(
            nn.Conv3d(in_channels=16, out_channels=32, kernel_size=(3, 3, 3), stride=1, padding=0),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True)
        )
        # To apply a 2D convolution next, reshape 32*18 into the channel dimension, giving (576, 19, 19)
        # 2D convolution: (576, 19, 19) with 64 kernels of size 3x3 gives (64, 17, 17)
        self.conv4 = nn.Sequential(
            nn.Conv2d(in_channels=576, out_channels=64, kernel_size=(3, 3), stride=1, padding=0),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True)
        )
        # Flatten to an 18496-dimensional vector, where 18496 = 64*17*17
        # Then fully connected layers with 256 and 128 nodes, each with Dropout of rate 0.4
        # The final output has 16 nodes, the number of classes
        self.fc1 = nn.Linear(in_features=18496, out_features=256)
        self.fc2 = nn.Linear(in_features=256, out_features=128)
        self.fc3 = nn.Linear(in_features=128, out_features=class_num)
        self.drop = nn.Dropout(p=0.4)

    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.conv3(out)
        out = out.reshape(out.shape[0], -1, 19, 19)
        out = self.conv4(out)
        out = out.reshape(out.shape[0], -1)
        out = F.relu(self.drop(self.fc1(out)))
        out = F.relu(self.drop(self.fc2(out)))
        out = self.fc3(out)
        return out

# Feed a random input to check that the network runs end to end
# x = torch.randn(1, 1, 30, 25, 25)
# net = HybridSN()
# y = net(x)
# print(y.shape)
2. Create the dataset
First apply PCA to reduce the dimensionality of the hyperspectral data; then arrange the data into the cube format used by the original Keras reference implementation; finally, randomly take 10% of the samples as the training set and keep the rest as the test set.
First define the basic helper functions:
# Extra imports used in this section (scipy for loading the .mat files,
# scikit-learn for PCA, train/test splitting and the classification report)
import scipy.io as sio
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Apply a PCA transform to the hyperspectral data X
def applyPCA(X, numComponents):
    newX = np.reshape(X, (-1, X.shape[2]))
    pca = PCA(n_components=numComponents, whiten=True)
    newX = pca.fit_transform(newX)
    newX = np.reshape(newX, (X.shape[0], X.shape[1], numComponents))
    return newX

# When extracting a patch around each pixel, border pixels cannot be covered,
# so pad the image with zeros first
def padWithZeros(X, margin=2):
    newX = np.zeros((X.shape[0] + 2 * margin, X.shape[1] + 2 * margin, X.shape[2]))
    x_offset = margin
    y_offset = margin
    newX[x_offset:X.shape[0] + x_offset, y_offset:X.shape[1] + y_offset, :] = X
    return newX

# Extract a patch around every pixel and arrange the patches into an array
def createImageCubes(X, y, windowSize=5, removeZeroLabels=True):
    # Pad X
    margin = int((windowSize - 1) / 2)
    zeroPaddedX = padWithZeros(X, margin=margin)
    # Split into patches
    patchesData = np.zeros((X.shape[0] * X.shape[1], windowSize, windowSize, X.shape[2]))
    patchesLabels = np.zeros((X.shape[0] * X.shape[1]))
    patchIndex = 0
    for r in range(margin, zeroPaddedX.shape[0] - margin):
        for c in range(margin, zeroPaddedX.shape[1] - margin):
            patch = zeroPaddedX[r - margin:r + margin + 1, c - margin:c + margin + 1]
            patchesData[patchIndex, :, :, :] = patch
            patchesLabels[patchIndex] = y[r - margin, c - margin]
            patchIndex = patchIndex + 1
    if removeZeroLabels:
        patchesData = patchesData[patchesLabels > 0, :, :, :]
        patchesLabels = patchesLabels[patchesLabels > 0]
        patchesLabels -= 1
    return patchesData, patchesLabels

def splitTrainTestSet(X, y, testRatio, randomState=345):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=testRatio, random_state=randomState, stratify=y)
    return X_train, X_test, y_train, y_test
Now read the data and create the dataset:
# Number of land-cover classes
class_num = 16
X = sio.loadmat('Indian_pines_corrected.mat')['indian_pines_corrected']
y = sio.loadmat('Indian_pines_gt.mat')['indian_pines_gt']
# Fraction of samples used for testing
test_ratio = 0.90
# Size of the patch extracted around each pixel
patch_size = 25
# Number of principal components kept by PCA
pca_components = 30

print('Hyperspectral data shape: ', X.shape)
print('Label shape: ', y.shape)
print('\n... ... PCA transformation ... ...')
X_pca = applyPCA(X, numComponents=pca_components)
print('Data shape after PCA: ', X_pca.shape)
print('\n... ... create data cubes ... ...')
X_pca, y = createImageCubes(X_pca, y, windowSize=patch_size)
print('Data cube X shape: ', X_pca.shape)
print('Data cube y shape: ', y.shape)
print('\n... ... create train & test data ... ...')
Xtrain, Xtest, ytrain, ytest = splitTrainTestSet(X_pca, y, test_ratio)
print('Xtrain shape: ', Xtrain.shape)
print('Xtest shape: ', Xtest.shape)

# Reshape Xtrain and Xtest into the (N, H, W, C, 1) layout used by the Keras reference code
Xtrain = Xtrain.reshape(-1, patch_size, patch_size, pca_components, 1)
Xtest = Xtest.reshape(-1, patch_size, patch_size, pca_components, 1)
print('before transpose: Xtrain shape: ', Xtrain.shape)
print('before transpose: Xtest shape: ', Xtest.shape)

# Transpose the data to the (N, 1, C, H, W) layout expected by PyTorch
Xtrain = Xtrain.transpose(0, 4, 3, 1, 2)
Xtest = Xtest.transpose(0, 4, 3, 1, 2)
print('after transpose: Xtrain shape: ', Xtrain.shape)
print('after transpose: Xtest shape: ', Xtest.shape)
""" Training dataset"""
class TrainDS(torch.utils.data.Dataset):
def __init__(self):
self.len = Xtrain.shape[0]
self.x_data = torch.FloatTensor(Xtrain)
self.y_data = torch.LongTensor(ytrain)
def __getitem__(self, index):
# 根据索引返回数据和对应的标签
return self.x_data[index], self.y_data[index]
def __len__(self):
# 返回文件数据的数目
return self.len
""" Testing dataset"""
class TestDS(torch.utils.data.Dataset):
def __init__(self):
self.len = Xtest.shape[0]
self.x_data = torch.FloatTensor(Xtest)
self.y_data = torch.LongTensor(ytest)
def __getitem__(self, index):
# 根据索引返回数据和对应的标签
return self.x_data[index], self.y_data[index]
def __len__(self):
# 返回文件数据的数目
return self.len
# 创建 trainloader 和 testloader
trainset = TrainDS()
testset = TestDS()
train_loader = torch.utils.data.DataLoader(dataset=trainset, batch_size=128, shuffle=True, num_workers=2)
test_loader = torch.utils.data.DataLoader(dataset=testset, batch_size=128, shuffle=False, num_workers=2)
3. Start training
# Train on the GPU; in Colab this can be enabled under "Runtime" -> "Change runtime type"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Move the network to the GPU
net = HybridSN().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)

# Start training
total_loss = 0
for epoch in range(100):
    for i, (inputs, labels) in enumerate(train_loader):
        inputs = inputs.to(device)
        labels = labels.to(device)
        # Zero the optimizer gradients
        optimizer.zero_grad()
        # Forward pass + backward pass + optimization step
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print('[Epoch: %d] [loss avg: %.4f] [current loss: %.4f]' %(epoch + 1, total_loss/(epoch+1), loss.item()))

print('Finished Training')
Training output:
[Epoch: 1] [loss avg: 20.0778] [current loss: 2.0543]
[Epoch: 2] [loss avg: 16.4718] [current loss: 1.4499]
[Epoch: 3] [loss avg: 14.0356] [current loss: 0.9051]
[Epoch: 4] [loss avg: 12.1260] [current loss: 0.7068]
[Epoch: 5] [loss avg: 10.5903] [current loss: 0.4818]
[Epoch: 6] [loss avg: 9.3619] [current loss: 0.2623]
[Epoch: 7] [loss avg: 8.3675] [current loss: 0.1905]
[Epoch: 8] [loss avg: 7.5316] [current loss: 0.1921]
[Epoch: 9] [loss avg: 6.8536] [current loss: 0.0949]
[Epoch: 10] [loss avg: 6.2873] [current loss: 0.1509]
[Epoch: 11] [loss avg: 5.7947] [current loss: 0.1316]
[Epoch: 12] [loss avg: 5.3677] [current loss: 0.1210]
[Epoch: 13] [loss avg: 4.9912] [current loss: 0.0300]
[Epoch: 14] [loss avg: 4.6676] [current loss: 0.0594]
[Epoch: 15] [loss avg: 4.3908] [current loss: 0.1179]
[Epoch: 16] [loss avg: 4.1447] [current loss: 0.1044]
[Epoch: 17] [loss avg: 3.9206] [current loss: 0.0430]
[Epoch: 18] [loss avg: 3.7333] [current loss: 0.0263]
[Epoch: 19] [loss avg: 3.5731] [current loss: 0.1190]
[Epoch: 20] [loss avg: 3.4192] [current loss: 0.0483]
[Epoch: 21] [loss avg: 3.2751] [current loss: 0.0676]
[Epoch: 22] [loss avg: 3.1402] [current loss: 0.0514]
[Epoch: 23] [loss avg: 3.0141] [current loss: 0.0487]
[Epoch: 24] [loss avg: 2.8986] [current loss: 0.0501]
[Epoch: 25] [loss avg: 2.7922] [current loss: 0.0062]
[Epoch: 26] [loss avg: 2.6939] [current loss: 0.0276]
[Epoch: 27] [loss avg: 2.6039] [current loss: 0.0897]
[Epoch: 28] [loss avg: 2.5162] [current loss: 0.0429]
[Epoch: 29] [loss avg: 2.4366] [current loss: 0.0146]
[Epoch: 30] [loss avg: 2.3645] [current loss: 0.0153]
[Epoch: 31] [loss avg: 2.2966] [current loss: 0.0369]
[Epoch: 32] [loss avg: 2.2290] [current loss: 0.0122]
[Epoch: 33] [loss avg: 2.1718] [current loss: 0.0184]
[Epoch: 34] [loss avg: 2.1135] [current loss: 0.0162]
[Epoch: 35] [loss avg: 2.0582] [current loss: 0.0083]
[Epoch: 36] [loss avg: 2.0039] [current loss: 0.0149]
[Epoch: 37] [loss avg: 1.9525] [current loss: 0.0016]
[Epoch: 38] [loss avg: 1.9031] [current loss: 0.0155]
[Epoch: 39] [loss avg: 1.8560] [current loss: 0.0134]
[Epoch: 40] [loss avg: 1.8132] [current loss: 0.0051]
[Epoch: 41] [loss avg: 1.7739] [current loss: 0.0175]
[Epoch: 42] [loss avg: 1.7356] [current loss: 0.0083]
[Epoch: 43] [loss avg: 1.7029] [current loss: 0.0728]
[Epoch: 44] [loss avg: 1.6686] [current loss: 0.0515]
[Epoch: 45] [loss avg: 1.6422] [current loss: 0.3931]
[Epoch: 46] [loss avg: 1.6102] [current loss: 0.0074]
[Epoch: 47] [loss avg: 1.5781] [current loss: 0.0016]
[Epoch: 48] [loss avg: 1.5478] [current loss: 0.0232]
[Epoch: 49] [loss avg: 1.5180] [current loss: 0.0012]
[Epoch: 50] [loss avg: 1.4897] [current loss: 0.0173]
[Epoch: 51] [loss avg: 1.4636] [current loss: 0.0026]
[Epoch: 52] [loss avg: 1.4385] [current loss: 0.0601]
[Epoch: 53] [loss avg: 1.4140] [current loss: 0.0268]
[Epoch: 54] [loss avg: 1.3893] [current loss: 0.0192]
[Epoch: 55] [loss avg: 1.3659] [current loss: 0.0227]
[Epoch: 56] [loss avg: 1.3441] [current loss: 0.0163]
[Epoch: 57] [loss avg: 1.3233] [current loss: 0.0030]
[Epoch: 58] [loss avg: 1.3035] [current loss: 0.0128]
[Epoch: 59] [loss avg: 1.2844] [current loss: 0.0071]
[Epoch: 60] [loss avg: 1.2674] [current loss: 0.0045]
[Epoch: 61] [loss avg: 1.2490] [current loss: 0.0041]
[Epoch: 62] [loss avg: 1.2321] [current loss: 0.0788]
[Epoch: 63] [loss avg: 1.2144] [current loss: 0.0036]
[Epoch: 64] [loss avg: 1.1973] [current loss: 0.0017]
[Epoch: 65] [loss avg: 1.1810] [current loss: 0.0041]
[Epoch: 66] [loss avg: 1.1641] [current loss: 0.0057]
[Epoch: 67] [loss avg: 1.1478] [current loss: 0.0070]
[Epoch: 68] [loss avg: 1.1319] [current loss: 0.0168]
[Epoch: 69] [loss avg: 1.1172] [current loss: 0.0005]
[Epoch: 70] [loss avg: 1.1019] [current loss: 0.0044]
[Epoch: 71] [loss avg: 1.0876] [current loss: 0.0305]
[Epoch: 72] [loss avg: 1.0731] [current loss: 0.0088]
[Epoch: 73] [loss avg: 1.0591] [current loss: 0.0228]
[Epoch: 74] [loss avg: 1.0455] [current loss: 0.0045]
[Epoch: 75] [loss avg: 1.0323] [current loss: 0.0002]
[Epoch: 76] [loss avg: 1.0201] [current loss: 0.0168]
[Epoch: 77] [loss avg: 1.0076] [current loss: 0.0089]
[Epoch: 78] [loss avg: 0.9964] [current loss: 0.0316]
[Epoch: 79] [loss avg: 0.9858] [current loss: 0.0476]
[Epoch: 80] [loss avg: 0.9752] [current loss: 0.0033]
[Epoch: 81] [loss avg: 0.9658] [current loss: 0.0073]
[Epoch: 82] [loss avg: 0.9552] [current loss: 0.0047]
[Epoch: 83] [loss avg: 0.9456] [current loss: 0.0092]
[Epoch: 84] [loss avg: 0.9355] [current loss: 0.0004]
[Epoch: 85] [loss avg: 0.9255] [current loss: 0.0015]
[Epoch: 86] [loss avg: 0.9163] [current loss: 0.0214]
[Epoch: 87] [loss avg: 0.9080] [current loss: 0.0763]
[Epoch: 88] [loss avg: 0.8981] [current loss: 0.0092]
[Epoch: 89] [loss avg: 0.8907] [current loss: 0.1359]
[Epoch: 90] [loss avg: 0.8824] [current loss: 0.0028]
[Epoch: 91] [loss avg: 0.8759] [current loss: 0.0130]
[Epoch: 92] [loss avg: 0.8682] [current loss: 0.0057]
[Epoch: 93] [loss avg: 0.8600] [current loss: 0.0359]
[Epoch: 94] [loss avg: 0.8516] [current loss: 0.0235]
[Epoch: 95] [loss avg: 0.8456] [current loss: 0.0230]
[Epoch: 96] [loss avg: 0.8386] [current loss: 0.0044]
[Epoch: 97] [loss avg: 0.8312] [current loss: 0.0062]
[Epoch: 98] [loss avg: 0.8257] [current loss: 0.1492]
[Epoch: 99] [loss avg: 0.8187] [current loss: 0.0099]
[Epoch: 100] [loss avg: 0.8121] [current loss: 0.0025]
Finished Training
4. Model testing
count = 0
# Model testing
for inputs, _ in test_loader:
    inputs = inputs.to(device)
    outputs = net(inputs)
    outputs = np.argmax(outputs.detach().cpu().numpy(), axis=1)
    if count == 0:
        y_pred_test = outputs
        count = 1
    else:
        y_pred_test = np.concatenate((y_pred_test, outputs))

# Generate the classification report
classification = classification_report(ytest, y_pred_test, digits=4)
print(classification)
First test run:
Second test run:
Third test run:
Questions and Thoughts
What is the difference between 3D and 2D convolution?
The main difference is the number of spatial dimensions over which the filter slides. The advantage of 3D convolution is that it can describe relationships in 3D space, here the spectral dimension in addition to the two spatial ones.
Why is the classification result different every time?
The network uses dropout, and the test code above never switches the model out of training mode, so some neurons are still randomly dropped at inference time and the predictions vary from run to run; a minimal fix is sketched below.
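Switching the network to evaluation mode before testing disables dropout, so repeated runs give identical predictions (illustrative snippet reusing the objects defined above):
net.eval()                  # disables dropout (and puts any BatchNorm layers in inference mode)
with torch.no_grad():       # gradients are not needed for evaluation
    for inputs, _ in test_loader:
        outputs = net(inputs.to(device))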
How can the classification performance on hyperspectral images be further improved?
Keep tweaking the network, for example by making it deeper or adding further architectural changes.