[AI] NNDL Experiment 5: Feedforward Neural Networks (3): Iris Classification and a Deeper Look at the Iris Dataset

A deeper look at the Iris dataset

Plot the scatter distribution of the first two features of the 150 samples in the dataset:

# coding: utf-8

import torch
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

def init_Iris():
    # Read the data; the last column is the class label (1, 2 or 3).
    df = pd.read_csv('Iris.csv')
    data_array = df.to_numpy()
    X = data_array[:, :-1]
    labels = data_array[:, -1]
    # Count the samples in each of the three classes.
    lenth = [0, 0, 0]
    for i in range(len(labels)):
        if labels[i] == 1:
            lenth[0] += 1
        elif labels[i] == 2:
            lenth[1] += 1
        elif labels[i] == 3:
            lenth[2] += 1

    # Build one-hot labels. This reshape trick assumes the classes are stored
    # contiguously with equal counts, which holds for the full Iris file (50/50/50).
    y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
    X = torch.from_numpy(X.astype(np.float32))
    y = torch.from_numpy(y.astype(np.float32))
    return X, y

if __name__ == '__main__':
    X, y = init_Iris()
    x1 = X[:, 0].numpy()
    x2 = X[:, 1].numpy()
    y = y.numpy()
    plt.figure()
    # The one-hot rows double as RGB triples, so each class is drawn in a pure color.
    plt.scatter(x1, x2, c=y)
    plt.xlabel('SepalLength')
    plt.ylabel('SepalWidth')
    plt.show()

[Figure: scatter plot of SepalLength vs. SepalWidth for the 150 Iris samples]

See also: [Statistical Learning Methods] Binary classification of the Iris dataset with a perceptron

4.5 Practice: Iris classification with a feedforward neural network

We continue with the Iris classification task from Chapter 3, replacing the Softmax classifier with a feedforward neural network.

Loss function: cross-entropy loss.
Optimizer: stochastic gradient descent.
Evaluation metric: accuracy.
Unless stated otherwise, the experiments use the following third-party libraries by default:

import torch.utils.data
import torch
import torch.nn as nn
from torch import optim
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

4.5.1 Mini-batch gradient descent

To reduce the computational cost of each iteration, we can sample only a small subset of the data at each step, compute the gradient of the loss on that subset, and update the parameters with it (θ ← θ − η · g, where g is the average gradient over the mini-batch). This optimization scheme is called mini-batch gradient descent (Mini-Batch Gradient Descent, Mini-Batch GD).

To apply mini-batch gradient descent, we need to partition the data into random groups.

The common practice in machine learning is to build a data iterator that, on every iteration, fetches a batch of the specified size from the full dataset.
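
Conceptually, such an iterator just shuffles the sample indices once per epoch and then yields consecutive slices. A minimal sketch of the idea (my own illustration, not part of the lab code):

import torch

def iterate_minibatches(X, y, batch_size=20, shuffle=True):
    '''Yield (features, labels) mini-batches from the tensors X and y.'''
    idx = torch.randperm(len(y)) if shuffle else torch.arange(len(y))
    for start in range(0, len(y), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

torch.utils.data.DataLoader does the same job (plus multi-process loading and more), and is what we use below.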

We define a custom Dataset and pair it with a DataLoader to get multi-process loading and mini-batch gradient descent.
The custom Dataset class:

class MyIristDataSet(torch.utils.data.Dataset):
    def __init__(self):
        '''Build the Dataset from the data it needs.'''
        self.features, self.labels = self.init_Iris()

    def __getitem__(self, item):
        return self.features[item],self.labels[item]

    def __len__(self):
        return len(self.labels)

    def resetDataset(self):
        '''Re-read Iris.csv and reset the dataset to the full Iris data.'''
        self.features, self.labels = self.init_Iris()

    def setDataset(self,X,y):
        self.features=X
        self.labels=y

    def init_Iris(self):
        df = pd.read_csv('Iris.csv')
        data_array = df.to_numpy()
        X = data_array[:, :-1]
        labels = data_array[:, -1]
        lenth = [0, 0, 0]
        for i in range(len(labels)):
            if labels[i] == 1:
                lenth[0] += 1
            elif labels[i] == 2:
                lenth[1] += 1
            elif labels[i] == 3:
                lenth[2] += 1

        y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
        X = torch.from_numpy(X.astype(np.float32))
        y = torch.from_numpy(y.astype(np.float32))
        return X, y

Instantiate it in the main block:

if __name__ == '__main__':
    train_X,train_y=init_Iris()
    train_dataset=MyIristDataSet()
    train_dataloader=torch.utils.data.DataLoader(dataset=train_dataset,
                                                 batch_size=20,num_workers=3,drop_last=False,shuffle=True)  # 3 worker processes; keep the final partial batch; shuffle

4.5.2 Data processing

Initialize the Iris dataset:

def init_Iris():
    df = pd.read_csv('Iris.csv')
    data_array = df.to_numpy()
    X = data_array[:, :-1]
    labels = data_array[:, -1]
    lenth = [0, 0, 0]
    for i in range(len(labels)):
        if labels[i] == 1:
            lenth[0] += 1
        elif labels[i] == 2:
            lenth[1] += 1
        elif labels[i] == 3:
            lenth[2] += 1

    y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
    X = torch.from_numpy(X.astype(np.float32))
    y = torch.from_numpy(y.astype(np.float32))
    return X, y

if __name__ == '__main__':
    train_X,train_y=init_Iris()
    train_dataset=MyIristDataSet()
    train_dataloader=torch.utils.data.DataLoader(dataset=train_dataset,
                                                 batch_size=20,num_workers=3,drop_last=False,shuffle=True)  # 3 worker processes; keep the final partial batch; shuffle

Split the dataset into a training set and a test set:
torch and numpy are not especially convenient for splitting data, so we group with pandas and then convert back. You can follow this approach whenever you want to split a dataset.
But why was the Iris dataset not split in the earlier runs?
Not for lack of trying: Iris has only 150 samples, so splitting it makes the training data even smaller and the training results much worse, which is why the earlier runs skip the split.

if __name__ == '__main__':
    '''Randomly split the dataset'''
    train_X, train_y, test_X, test_y=devide_Irisdata('Iris.csv',2/3)
    train_dataset=MyIristDataSet()
    train_dataset.setDataset(train_X,train_y)
    test_dataset=MyIristDataSet()
    test_dataset.setDataset(test_X,test_y)
    train_dataloader=torch.utils.data.DataLoader(dataset=train_dataset,
                                                 batch_size=20,num_workers=3,drop_last=False,shuffle=True)  # 3 worker processes; keep the final partial batch; shuffle

The classes and functions involved:

class MyIristDataSet(torch.utils.data.Dataset):
    def __init__(self):
        '''Build the Dataset from the data it needs.'''
        self.features, self.labels = self.init_Iris()

    def __getitem__(self, item):
        return self.features[item],self.labels[item]

    def __len__(self):
        return len(self.labels)

    def resetDataset(self):
        '''Re-read Iris.csv and reset the dataset to the full Iris data.'''
        self.features, self.labels = self.init_Iris()

    def setDataset(self,X,y):
        self.features=X
        self.labels=y

    def init_Iris(self):
        df = pd.read_csv('Iris.csv')
        data_array = df.to_numpy()
        X = data_array[:, :-1]
        labels = data_array[:, -1]
        lenth = [0, 0, 0]
        for i in range(len(labels)):
            if labels[i] == 1:
                lenth[0] += 1
            elif labels[i] == 2:
                lenth[1] += 1
            elif labels[i] == 3:
                lenth[2] += 1

        y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
        X = torch.from_numpy(X.astype(np.float32))
        y = torch.from_numpy(y.astype(np.float32))
        return X, y

def devide_Irisdata(filename,rate):
    '''Randomly split the Iris dataset into a training set and a test set with ratio `rate`.'''
    df=pd.read_csv(filename)
    train_df = df.sample(frac=rate, random_state=None, axis=0, replace=False)
    test_df = df.drop(index=train_df.index)
    train_X,train_y=df_to_tensor(train_df)
    test_X,test_y=df_to_tensor(test_df)
    return train_X,train_y,test_X,test_y

def df_to_tensor(df):
    data_array = df.to_numpy()
    X = data_array[:, :-1]
    labels = data_array[:, -1]
    lenth = [0, 0, 0]
    for i in range(len(labels)):
        if labels[i] == 1:
            lenth[0] += 1
        elif labels[i] == 2:
            lenth[1] += 1
        elif labels[i] == 3:
            lenth[2] += 1
    # np.concatenate handles unequal class counts, which occur after a random split.
    y = np.concatenate((np.array([1, 0, 0] * lenth[0]), np.array([0, 1, 0] * lenth[1]), np.array([0, 0, 1] * lenth[2])))
    y=y.reshape(len(labels), 3)
    X = torch.from_numpy(X.astype(np.float32))
    y = torch.from_numpy(y.astype(np.float32))
    return X, y
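
Because df.sample draws completely at random, a dataset as small as Iris can easily end up with unbalanced classes in the two halves. A hedged alternative sketch (the function name and the groupby approach are my own, not part of the lab code) that samples each class separately:

def stratified_split(filename, rate):
    '''Split into train/test while preserving the class ratios.'''
    df = pd.read_csv(filename)
    # The label is the last column; sample `rate` of each class separately.
    train_df = df.groupby(df.iloc[:, -1], group_keys=False).apply(
        lambda g: g.sample(frac=rate))
    test_df = df.drop(index=train_df.index)
    return df_to_tensor(train_df) + df_to_tensor(test_df)

df_to_tensor returns an (X, y) tuple, so the + concatenates the two tuples into (train_X, train_y, test_X, test_y), the same order devide_Irisdata returns.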

4.5.3 Model construction

The input layer has 4 neurons, the output layer 3, and the hidden layer 6:

class Irismodel(nn.Module):
    def __init__(self):
        super(Irismodel, self).__init__()
        self.linear = nn.Linear(4, 6)
        self.hide = nn.Linear(6, 3)
        # Note: nn.CrossEntropyLoss (used later) already applies log-softmax
        # internally, so feeding it softmaxed outputs is redundant; it still
        # trains, but raw logits are the conventional input.
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        # There is no activation between the two linear layers yet; one is
        # added later in the experiment (LeakyReLU).
        x1 = self.linear(x)
        x2 = self.hide(x1)
        pre_y = self.softmax(x2)
        return pre_y

    def save_model(self, save_path):
        torch.save(self, save_path)

    def read_model(self, path):
        return torch.load(path)

4.5.4 Completing the Runner class
Add a Dataset_based_SoftmaxClassify method to the Runner:

# coding:utf-8
import torch.utils.data

import torch
import torch.nn as nn
from torch import optim
import matplotlib.pyplot as plt

class Runner_V3():

    def __init__(self,model,lossfunc,optim):
        '''Pass in the model, the loss function and the optimizer.'''
        self.model=model
        self.loss=lossfunc
        self.optim=optim

    def Dataset_based_SoftmaxClassify(self,train_X, train_y, test_X, test_y ,train_dataset,epoches=20):
        train_dataset.setDataset(train_X, train_y)
        train_dataloader = torch.utils.data.DataLoader(dataset=train_dataset,
                                                       batch_size=20, num_workers=3, drop_last=False,
                                                       shuffle=True)  # 3 worker processes; keep the final partial batch; shuffle
        net=self.model
        loss = self.loss
        optimizer = self.optim
        print('start training.')
        for j in range(epoches):
            print('epoches:', j)
            '''Mini-batch gradient descent via the DataLoader'''
            i = 0
            for features, labels in train_dataloader:
                X, y = features, labels
                pre_y = net(X)
                # print('pre_y:',pre_y)
                l = loss(pre_y, y)
                optimizer.zero_grad()  # clear accumulated gradients
                l.backward()
                optimizer.step()
                print('loss of the {}th batch of train data: {}'.format(i, l.item()))
                i += 1
            t_pre_y = net(test_X)
            t_loss = loss(t_pre_y, test_y)
            print('loss of train data:', loss(net(train_X), train_y).item())
            print('loss of test data:', t_loss.item())
            print('acc of train data:', self.SoftmaxClassify_acc(train_X, train_y) * 100, '%')
            print('acc of test data:', self.SoftmaxClassify_acc(test_X, test_y) * 100, '%')
        print('training ended.')

    def SoftmaxClassify_train(self,X,y,epoches=500):
        print('start training....')
        for i in range(epoches):
            loss = self.loss
            optimizer = self.optim
            pre_y = self.model(X)
            l = loss(pre_y, y)
            optimizer.zero_grad()  # clear accumulated gradients
            l.backward()
            optimizer.step()
            if i % 50 == 0:
                print('epoch %d, loss: %f' % (i, l.item()))
        print('training ended.')

    def Visible_LogisticClassification_train(self,X,y,epoches=500):
        print('start training....')
        net = self.model
        loss_list = []
        acc_list=[]
        for i in range(epoches):
            loss = self.loss
            optimizer = self.optim
            pre_y = net(X)
            l = loss(pre_y, y)
            optimizer.zero_grad()  # clear accumulated gradients
            l.backward()
            optimizer.step()
            loss_list.append(l.item())
            if i % 10 == 0:
                print('epoch %d, loss in train data: %f' % (i, l.item()))
                net.save_model('LNet.pt')
            acc_list.append(self.LogisticClassify_acc(X,y))
        x = range(epoches)
        plt.subplot(1,2,1)
        plt.plot(x,acc_list)
        plt.xlabel('epoches')
        plt.ylabel('acc(%)')
        plt.subplot(1,2,2)
        plt.plot(x, loss_list)
        plt.xlabel('epoches')
        plt.ylabel('loss')
        plt.show()
        print('training ended.')

    def LogisticClassify_train(self,X,y,epoches=500):
        print('start training....')
        for i in range(epoches):
            loss = self.loss
            optimizer = self.optim
            pre_y = self.model(X)
            l = loss(pre_y, y)
            optimizer.zero_grad()  # clear accumulated gradients
            l.backward()
            optimizer.step()
            if i % 50 == 0:
                print('epoch %d, loss: %f' % (i, l.item()))
        print('training ended.')

    def LSM_train(self,X,y,epoches=500):
        '''X, y: training feature and target tensors.'''
        print('start training....')
        model=self.model
        loss = self.loss
        optimizer = self.optim
        num_epochs = epoches
        for epoch in range(num_epochs):
            pre_y = model(X)
            l = loss(pre_y, y)
            optimizer.zero_grad()  # clear accumulated gradients
            l.backward()
            optimizer.step()
            print('epoch %d, loss: %f' % (epoch, l.item()))
        print('training ended.')

    def LSM_evaluate(self,X,y):
        '''Evaluate the model: X and y are the test feature and target tensors.'''
        l = self.loss(self.model(X), y)
        print('loss in test data:', l.item())

    def predict(self,X):
        '''Run the model on X and return its predictions.'''
        return self.model(X)

    def save_model(self, save_path):
        '''Save the whole Runner to a .pt file.'''
        torch.save(self, save_path)

    def read_model(self, path):
        '''Load a Runner from a .pt file.'''
        return torch.load(path)

    def LogisticClassify_acc(self, X, y):
        '''Threshold the sigmoid output at 0.5 to get the predicted class.'''
        ct = 0
        for i in range(len(y)):
            pre_y = self.model(X[i])
            if pre_y >= 0.5:
                pre_y = 1
            else:
                pre_y = 0
            if pre_y == y[i]:
                ct += 1
        return ct / y.shape[0]

    def SoftmaxClassify_acc(self, X, y):
        pre_y = self.model(X)
        max_pre_y = torch.argmax(pre_y, dim=1)
        max_y = torch.argmax(y, dim=1)
        return torch.nonzero(max_y.eq(max_pre_y)).shape[0] / y.shape[0]

4.5.5 Model training
Full code:

import torch.utils.data

from Runner_V2 import *
import pandas as pd
import numpy as np

class MyIristDataSet(torch.utils.data.Dataset):
    def __init__(self):
        '''Build the Dataset from the data it needs.'''
        self.features, self.labels = self.init_Iris()

    def __getitem__(self, item):
        return self.features[item],self.labels[item]

    def __len__(self):
        return len(self.labels)

    def resetDataset(self):
        '''Re-read Iris.csv and reset the dataset to the full Iris data.'''
        self.features, self.labels = self.init_Iris()

    def setDataset(self,X,y):
        self.features=X
        self.labels=y

    def init_Iris(self):
        df = pd.read_csv('Iris.csv')
        data_array = df.to_numpy()
        X = data_array[:, :-1]
        labels = data_array[:, -1]
        lenth = [0, 0, 0]
        for i in range(len(labels)):
            if labels[i] == 1:
                lenth[0] += 1
            elif labels[i] == 2:
                lenth[1] += 1
            elif labels[i] == 3:
                lenth[2] += 1

        y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
        X = torch.from_numpy(X.astype(np.float32))
        y = torch.from_numpy(y.astype(np.float32))
        return X, y


class Irismodel(nn.Module):
    def __init__(self):
        super(Irismodel, self).__init__()
        self.linear = nn.Linear(4, 6)
        self.hide=nn.Linear(6,3)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x1 = self.linear(x)
        x2=self.hide(x1)
        pre_y = self.softmax(x2)
        return pre_y

    def save_model(self, save_path):
        torch.save(self, save_path)

    def read_model(self, path):
        return torch.load(path)

def SoftmaxClassify_acc(model, X, y):
    pre_y = model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    max_y = torch.argmax(y, dim=1)
    return torch.nonzero(max_y.eq(max_pre_y)).shape[0] / y.shape[0]

def init_Iris():
    df = pd.read_csv('Iris.csv')
    data_array = df.to_numpy()
    X = data_array[:, :-1]
    labels = data_array[:, -1]
    lenth = [0, 0, 0]
    for i in range(len(labels)):
        if labels[i] == 1:
            lenth[0] += 1
        elif labels[i] == 2:
            lenth[1] += 1
        elif labels[i] == 3:
            lenth[2] += 1

    y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
    X = torch.from_numpy(X.astype(np.float32))
    y = torch.from_numpy(y.astype(np.float32))
    return X, y

if __name__ == '__main__':
    train_X,train_y=init_Iris()
    train_dataset=MyIristDataSet()
    train_dataloader=torch.utils.data.DataLoader(dataset=train_dataset,
                                                 batch_size=20,num_workers=3,drop_last=False,shuffle=True)  # 3 worker processes; keep the final partial batch; shuffle
    '''Create the network and train it'''
    net=Irismodel()
    epoches=30
    loss =nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
    loss_list=[]
    acc_list=[]
    print('start training.')
    for j in range(epoches):
        print('epoches:',j)
        '''Mini-batch gradient descent via the DataLoader'''
        i=0
        for features,labels in train_dataloader:
            X, y = features,labels
            pre_y = net(X)
            #print('pre_y:',pre_y)
            l = loss(pre_y, y)
            optimizer.zero_grad()  # clear accumulated gradients
            l.backward()
            optimizer.step()
            print('loss of the {}th batch of train data: {}'.format(i, l.item()))
            i+=1
        loss_list.append(loss(net(train_X),train_y).item())
        acc_list.append(SoftmaxClassify_acc(net, train_X,train_y) * 100)
        print('loss of train data:', loss(net(train_X),train_y).item())
        print('acc of train data:', SoftmaxClassify_acc(net, train_X,train_y) * 100, '%')
    print('training ended.')
    x=range(epoches)
    plt.figure()
    plt.subplot(1, 2, 1)
    plt.plot(x, acc_list)
    plt.xlabel('epoches')
    plt.ylabel('acc(%)')
    plt.subplot(1, 2, 2)
    plt.plot(x, loss_list)
    plt.xlabel('epoches')
    plt.ylabel('loss')
    plt.show()



4.5.6 Model evaluation

def SoftmaxClassify_acc(model, X, y):
    pre_y = model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    max_y = torch.argmax(y, dim=1)
    return torch.nonzero(max_y.eq(max_pre_y)).shape[0] / y.shape[0]

4.5.7 Model prediction

def predict(model,X):
    pre_y=model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    return max_pre_y.item()  # .item() assumes X is a single sample (batch of 1)

Results:

E:\...\pythonw.exe "C:/Users/.../BP_based_IrisClassification.py"
start training.
epoches: 0
loss of the 0th batch of train data: 0.6476609706878662
loss of the 1th batch of train data: 0.7843825221061707
loss of the 2th batch of train data: 0.7440940737724304
loss of the 3th batch of train data: 0.6563095450401306
loss of the 4th batch of train data: 0.6665217280387878
loss of the 5th batch of train data: 0.6551536321640015
loss of the 6th batch of train data: 0.6154247522354126
loss of the 7th batch of train data: 0.6839473843574524
loss of train data: 0.6389909386634827
acc of train data: 33.33333333333333 %
epoches: 1
loss of the 0th batch of train data: 0.614154577255249
loss of the 1th batch of train data: 0.6150603890419006
loss of the 2th batch of train data: 0.6659053564071655
loss of the 3th batch of train data: 0.6091535091400146
loss of the 4th batch of train data: 0.6125401258468628
loss of the 5th batch of train data: 0.6371644139289856
loss of the 6th batch of train data: 0.6106765866279602
loss of the 7th batch of train data: 0.5592544674873352
loss of train data: 0.5939202308654785
acc of train data: 33.33333333333333 %
epoches: 2
loss of the 0th batch of train data: 0.5883331298828125
loss of the 1th batch of train data: 0.583136796951294
loss of the 2th batch of train data: 0.5547168254852295
loss of the 3th batch of train data: 0.5671727061271667
loss of the 4th batch of train data: 0.5971918106079102
loss of the 5th batch of train data: 0.6008330583572388
loss of the 6th batch of train data: 0.5599240660667419
loss of the 7th batch of train data: 0.5810616612434387
loss of train data: 0.545295000076294
acc of train data: 65.33333333333333 %
...
epoches: 28
loss of the 0th batch of train data: 0.1630406677722931
loss of the 1th batch of train data: 0.10024014860391617
loss of the 2th batch of train data: 0.09794170409440994
loss of the 3th batch of train data: 0.1800483912229538
loss of the 4th batch of train data: 0.03714853525161743
loss of the 5th batch of train data: 0.09717585146427155
loss of the 6th batch of train data: 0.06517564505338669
loss of the 7th batch of train data: 0.09640061855316162
loss of train data: 0.1063510924577713
acc of train data: 94.66666666666667 %
epoches: 29
loss of the 0th batch of train data: 0.1744038462638855
loss of the 1th batch of train data: 0.06548900157213211
loss of the 2th batch of train data: 0.11274129897356033
loss of the 3th batch of train data: 0.08359776437282562
loss of the 4th batch of train data: 0.060643937438726425
loss of the 5th batch of train data: 0.13843710720539093
loss of the 6th batch of train data: 0.05225764214992523
loss of the 7th batch of train data: 0.15671654045581818
loss of train data: 0.09220920503139496
acc of train data: 98.0 %
training ended.

Process finished with exit code 0

[Figure: training accuracy and loss curves over the 30 epochs]
Both the accuracy and the loss look very good; the final training accuracy reaches 98%.

Discussion questions

1. Compare Softmax classification with feedforward neural network classification. (Required)

This is my first attempt at visualizing the classification result. The Iris dataset has four features and three classes, so the network we built takes 4 inputs, produces 3 outputs, and has one hidden layer of 6 neurons. For the visualization we only plot the first two features, but the model takes four inputs, so as a first try the last two inputs are filled with their mean values when predicting the decision boundary.
The boundary is drawn with matplotlib's filled-contour method contourf().

def draw_map(model,X,y):
    c=['b','g','r','y','gray']
    cmap=matplotlib.colors.ListedColormap(c[:len(y)])
    max_y=torch.argmax(y,dim=1)
    x1=X[:,0].numpy()
    x2=X[:,1].numpy()

    '''Build the grid of predicted classes ("heights")'''
    # Fill the last two model inputs with their mean values.
    x3=(X[:,2].sum()/len(X[:,2])).item()
    x4=(X[:,3].sum()/len(X[:,3])).item()
    xx1=np.arange(min(x1)-0.2,max(x1)+0.2,0.02)
    xx2 = np.arange(min(x2) - 0.2, max(x2) + 0.2, 0.02)
    hights = np.zeros([len(xx2),len(xx1)])
    for i,x1_ in enumerate(xx1):
        for j,x2_ in enumerate(xx2):
            hight=model(torch.Tensor([[x1_,x2_,x3,x4]]))
            max_pre_y = torch.argmax(hight ,dim=1)
            hights[j][i]=max_pre_y.item()
    plt.contourf(xx1, xx2, hights, cmap=cmap)
    plt.scatter(x1,x2,c=max_y)
    plt.xlim(min(x1) - 0.2, max(x1) + 0.2)
    plt.ylim(min(x2) - 0.2, max(x2) + 0.2)

The full script:

import matplotlib.colors
import torch.utils.data

from Runner_V2 import *
import pandas as pd
import numpy as np

class MyIristDataSet(torch.utils.data.Dataset):
    def __init__(self,X,y):
        '''Build the Dataset from the data it needs.'''
        self.features, self.labels = X,y

    def __getitem__(self, item):
        return self.features[item],self.labels[item]

    def __len__(self):
        return len(self.labels)

    def resetDataset(self):
        '''Re-read Iris.csv and reset the dataset to the full Iris data.'''
        self.features, self.labels = self.init_Iris()

    def setDataset(self,X,y):
        self.features=X
        self.labels=y

    def init_Iris(self):
        df = pd.read_csv('Iris.csv')
        data_array = df.to_numpy()
        X = data_array[:, :2]
        labels = data_array[:, -1]
        lenth = [0, 0, 0]
        for i in range(len(labels)):
            if labels[i] == 1:
                lenth[0] += 1
            elif labels[i] == 2:
                lenth[1] += 1
            elif labels[i] == 3:
                lenth[2] += 1

        y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
        X = torch.from_numpy(X.astype(np.float32))
        y = torch.from_numpy(y.astype(np.float32))
        return X, y

def init_Iris():
    df = pd.read_csv('Iris.csv')
    data_array = df.to_numpy()
    X = data_array[:, :-1]
    labels = data_array[:, -1]
    lenth = [0, 0, 0]
    for i in range(len(labels)):
        if labels[i] == 1:
            lenth[0] += 1
        elif labels[i] == 2:
            lenth[1] += 1
        elif labels[i] == 3:
            lenth[2] += 1

    y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
    X = torch.from_numpy(X.astype(np.float32))
    y = torch.from_numpy(y.astype(np.float32))
    return X, y

class Irismodel(nn.Module):
    def __init__(self):
        super(Irismodel, self).__init__()
        self.linear = nn.Linear(4, 6)
        self.hide=nn.Linear(6,3)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x1 = self.linear(x)
        x2=self.hide(x1)
        pre_y = self.softmax(x2)
        return pre_y

    def save_model(self, save_path):
        torch.save(self, save_path)

    def read_model(self, path):
        return torch.load(path)

def SoftmaxClassify_acc(model, X, y):
    pre_y = model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    max_y = torch.argmax(y, dim=1)
    return torch.nonzero(max_y.eq(max_pre_y)).shape[0] / y.shape[0]

def draw_map(model,X,y):
    c=['b','g','r','y','gray']
    cmap=matplotlib.colors.ListedColormap(c[:len(y)])
    max_y=torch.argmax(y,dim=1)
    x1=X[:,0].numpy()
    x2=X[:,1].numpy()

    '''Build the grid of predicted classes ("heights")'''
    # Fill the last two model inputs with their mean values.
    x3=(X[:,2].sum()/len(X[:,2])).item()
    x4=(X[:,3].sum()/len(X[:,3])).item()
    xx1=np.arange(min(x1)-0.2,max(x1)+0.2,0.02)
    xx2 = np.arange(min(x2) - 0.2, max(x2) + 0.2, 0.02)
    hights = np.zeros([len(xx2),len(xx1)])
    for i,x1_ in enumerate(xx1):
        for j,x2_ in enumerate(xx2):
            hight=model(torch.Tensor([[x1_,x2_,x3,x4]]))
            max_pre_y = torch.argmax(hight ,dim=1)
            hights[j][i]=max_pre_y.item()
    plt.contourf(xx1, xx2, hights, cmap=cmap)
    plt.scatter(x1,x2,c=max_y)
    plt.xlim(min(x1) - 0.2, max(x1) + 0.2)
    plt.ylim(min(x2) - 0.2, max(x2) + 0.2)


if __name__ == '__main__':
    train_X,train_y=init_Iris()
    train_dataset=MyIristDataSet(train_X,train_y)
    train_dataloader=torch.utils.data.DataLoader(dataset=train_dataset,
                                                 batch_size=20,num_workers=0,drop_last=False,shuffle=True)  # single-process loading (num_workers=0); keep the final partial batch; shuffle
    '''Create the network and train it'''
    net=Irismodel()
    epoches=6000
    plt.figure()
    i_=0
    loss = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
    print('start training.')
    for j in range(epoches):
        print('epoches:',j)
        '''Mini-batch gradient descent via the DataLoader'''
        i=0
        for features,labels in train_dataloader:
            X, y = features,labels
            pre_y = net(X)
            #print('pre_y:',pre_y)
            l = loss(pre_y, y)
            optimizer.zero_grad()  # clear accumulated gradients
            l.backward()
            optimizer.step()
            print('loss of the {}th batch of train data: {}'.format(i, l.item()))
            i+=1
        print('loss of train data:', loss(net(train_X),train_y).item())
        print('acc of train data:', SoftmaxClassify_acc(net, train_X,train_y) * 100, '%')
        print('training ended.')
        if (j+1)%1000==0:
            plt.subplot(2,3,i_+1)
            draw_map(net,train_X,train_y)
            plt.xlabel('SepalLength')
            plt.ylabel('SepalWidth')
            plt.title('epoches={}'.format(j))
            i_+=1
    plt.show()

Results:

E:\...\pythonw.exe "C:/Users/.../BP_based_IrisClassification.py"
start training.
epoches: 0
loss of the 0th batch of train data: 0.6476609706878662
loss of the 1th batch of train data: 0.7843825221061707
loss of the 2th batch of train data: 0.7440940737724304
loss of the 3th batch of train data: 0.6563095450401306
loss of the 4th batch of train data: 0.6665217280387878
loss of the 5th batch of train data: 0.6551536321640015
loss of the 6th batch of train data: 0.6154247522354126
loss of the 7th batch of train data: 0.6839473843574524
loss of train data: 0.6389909386634827
acc of train data: 33.33333333333333 %
epoches: 1
loss of the 0th batch of train data: 0.614154577255249
loss of the 1th batch of train data: 0.6150603890419006
loss of the 2th batch of train data: 0.6659053564071655
loss of the 3th batch of train data: 0.6091535091400146
loss of the 4th batch of train data: 0.6125401258468628
loss of the 5th batch of train data: 0.6371644139289856
loss of the 6th batch of train data: 0.6106765866279602
loss of the 7th batch of train data: 0.5592544674873352
loss of train data: 0.5939202308654785
acc of train data: 33.33333333333333 %
epoches: 2
loss of the 0th batch of train data: 0.5883331298828125
loss of the 1th batch of train data: 0.583136796951294
loss of the 2th batch of train data: 0.5547168254852295
loss of the 3th batch of train data: 0.5671727061271667
loss of the 4th batch of train data: 0.5971918106079102
loss of the 5th batch of train data: 0.6008330583572388
loss of the 6th batch of train data: 0.5599240660667419
loss of the 7th batch of train data: 0.5810616612434387
loss of train data: 0.545295000076294
acc of train data: 65.33333333333333 %
...
acc of train data: 98.0 %
training ended.
epoches: 5998
loss of the 0th batch of train data: 0.5514580011367798
loss of the 1th batch of train data: 0.5515455007553101
loss of the 2th batch of train data: 0.5881860256195068
loss of the 3th batch of train data: 0.5515824556350708
loss of the 4th batch of train data: 0.5565020442008972
loss of the 5th batch of train data: 0.5724121332168579
loss of the 6th batch of train data: 0.5514497756958008
loss of the 7th batch of train data: 0.7097947597503662
loss of train data: 0.5663188099861145
acc of train data: 98.66666666666667 %
training ended.
epoches: 5999
loss of the 0th batch of train data: 0.5590750575065613
loss of the 1th batch of train data: 0.5521644949913025
loss of the 2th batch of train data: 0.5725876688957214
loss of the 3th batch of train data: 0.5514986515045166
loss of the 4th batch of train data: 0.5842191576957703
loss of the 5th batch of train data: 0.551449179649353
loss of the 6th batch of train data: 0.5514755249023438
loss of the 7th batch of train data: 0.6517413854598999
loss of train data: 0.5685170292854309
acc of train data: 98.0 %
training ended.

Process finished with exit code 0

[Figure: decision boundaries at epochs 999 to 5999 with the last two inputs fixed at their means]
Clearly this boundary map is wrong, so the last two variables cannot simply be represented by their means. It also shows that, without changing the model's 4-feature input, it is hard to predict the result from only two variables; the only option is to reduce the number of inputs. Next I modify the model to take just the first two variables.
The fill colors were also too glaring, so I adjusted the boundary colors, mainly making them lighter.

import matplotlib.colors
import torch.utils.data

from Runner_V2 import *
import pandas as pd
import numpy as np

class MyIristDataSet(torch.utils.data.Dataset):
    def __init__(self):
        '''Build the Dataset from the data it needs.'''
        self.features, self.labels = self.init_Iris()

    def __getitem__(self, item):
        return self.features[item],self.labels[item]

    def __len__(self):
        return len(self.labels)

    def resetDataset(self):
        '''Re-read Iris.csv and reset the dataset to the full Iris data.'''
        self.features, self.labels = self.init_Iris()

    def setDataset(self,X,y):
        self.features=X
        self.labels=y

    def init_Iris(self):
        df = pd.read_csv('Iris.csv')
        data_array = df.to_numpy()
        X = data_array[:, :2]
        labels = data_array[:, -1]
        lenth = [0, 0, 0]
        for i in range(len(labels)):
            if labels[i] == 1:
                lenth[0] += 1
            elif labels[i] == 2:
                lenth[1] += 1
            elif labels[i] == 3:
                lenth[2] += 1

        y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
        X = torch.from_numpy(X.astype(np.float32))
        y = torch.from_numpy(y.astype(np.float32))
        return X, y

def init_Iris():
    df = pd.read_csv('Iris.csv')
    data_array = df.to_numpy()
    X = data_array[:, :2]
    labels = data_array[:, -1]
    lenth = [0, 0, 0]
    for i in range(len(labels)):
        if labels[i] == 1:
            lenth[0] += 1
        elif labels[i] == 2:
            lenth[1] += 1
        elif labels[i] == 3:
            lenth[2] += 1

    y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
    X = torch.from_numpy(X.astype(np.float32))
    y = torch.from_numpy(y.astype(np.float32))
    return X, y

class Irismodel(nn.Module):
    def __init__(self):
        super(Irismodel, self).__init__()
        self.hide = nn.Linear(2, 6)
        self.out=nn.Linear(6,3)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x1 = self.hide(x)
        x2=self.out(x1)
        pre_y = self.softmax(x2)
        return pre_y

    def save_model(self, save_path):
        torch.save(self, save_path)

    def read_model(self, path):
        return torch.load(path)

def SoftmaxClassify_acc(model, X, y):
    pre_y = model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    max_y = torch.argmax(y, dim=1)
    return torch.nonzero(max_y.eq(max_pre_y)).shape[0] / y.shape[0]

def draw_map(model,X,y):
    c=['b','g','r','y','gray']
    cmap=matplotlib.colors.ListedColormap(['#a1ffa5', '#ffa1a1', '#a1a3ff'])
    max_y=torch.argmax(y,dim=1)
    x1=X[:,0].numpy()
    x2=X[:,1].numpy()

    '''Build the grid of predicted classes'''
    xx1=np.arange(min(x1)-0.2,max(x1)+0.2,0.02)
    xx2 = np.arange(min(x2) - 0.2, max(x2) + 0.2, 0.02)
    hights = np.zeros([len(xx2),len(xx1)])
    for i,x1_ in enumerate(xx1):
        for j,x2_ in enumerate(xx2):
            hight=model(torch.Tensor([[x1_,x2_]]))
            max_pre_y = torch.argmax(hight ,dim=1)
            hights[j][i]=max_pre_y.item()
    plt.contourf(xx1, xx2, hights, cmap=cmap)
    plt.scatter(x1,x2,c=max_y)
    plt.xlim(min(x1) - 0.2, max(x1) + 0.2)
    plt.ylim(min(x2) - 0.2, max(x2) + 0.2)


if __name__ == '__main__':
    train_X,train_y=init_Iris()
    train_dataset=MyIristDataSet()
    train_dataloader=torch.utils.data.DataLoader(dataset=train_dataset,
                                                 batch_size=20,num_workers=0,drop_last=False,shuffle=True)  # single-process loading (num_workers=0); keep the final partial batch; shuffle
    '''Create the network and train it'''
    net=Irismodel()
    epoches=6000
    plt.figure()
    i_=0
    loss = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
    print('start training.')
    for j in range(epoches):
        print('epoches:',j)
        '''Mini-batch gradient descent via the DataLoader'''
        i=0
        for features,labels in train_dataloader:
            X, y = features,labels
            pre_y = net(X)
            #print('pre_y:',pre_y)
            l = loss(pre_y, y)
            optimizer.zero_grad()  # clear accumulated gradients
            l.backward()
            optimizer.step()
            print('loss of the {}th batch of train data: {}'.format(i, l.item()))
            i+=1
        print('loss of train data:', loss(net(train_X),train_y).item())
        print('acc of train data:', SoftmaxClassify_acc(net, train_X,train_y) * 100, '%')
        print('training ended.')
        if (j+1)%1000==0:
            plt.subplot(2,3,i_+1)
            draw_map(net,train_X,train_y)
            plt.xlabel('SepalLength')
            plt.ylabel('SepalWidth')
            plt.title('epoches={}'.format(j))
            i_+=1
    plt.show()




...
acc of train data: 81.33333333333333 %
training ended.
epoches: 5999
loss of the 0th batch of train data: 0.8350028991699219
loss of the 1th batch of train data: 0.6808366775512695
loss of the 2th batch of train data: 0.7580670714378357
loss of the 3th batch of train data: 0.6191402673721313
loss of the 4th batch of train data: 0.7778592705726624
loss of the 5th batch of train data: 0.6982424855232239
loss of the 6th batch of train data: 0.740178108215332
loss of the 7th batch of train data: 0.9860091209411621
loss of train data: 0.7369008660316467
acc of train data: 81.33333333333333 %
training ended.

[Figure: decision boundaries of the 2-input model at epochs 999 to 5999]

Now the decision boundary is quite reasonable. The green and yellow classes overlap a lot and are inherently hard to separate, so the boundary between them fluctuates strongly; that is normal.
Problem: the number of epochs is a bit low, so the classification has not reached its best.
Next we introduce the LeakyReLU() activation function, increase the number of epochs, and optimize parts of the code to speed up the run.

import matplotlib.colors
import torch.utils.data

from Runner_V2 import *
import pandas as pd
import numpy as np

class MyIristDataSet(torch.utils.data.Dataset):
    def __init__(self,X,y):
        '''Build the Dataset from the data it needs.'''
        self.features, self.labels = X,y

    def __getitem__(self, item):
        return self.features[item],self.labels[item]

    def __len__(self):
        return len(self.labels)

    def resetDataset(self):
        '''Re-read Iris.csv and reset the dataset to the full Iris data.'''
        self.features, self.labels = self.init_Iris()

    def setDataset(self,X,y):
        self.features=X
        self.labels=y

    def init_Iris(self):
        df = pd.read_csv('Iris.csv')
        data_array = df.to_numpy()
        X = data_array[:, :2]
        labels = data_array[:, -1]
        lenth = [0, 0, 0]
        for i in range(len(labels)):
            if labels[i] == 1:
                lenth[0] += 1
            elif labels[i] == 2:
                lenth[1] += 1
            elif labels[i] == 3:
                lenth[2] += 1

        y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
        X = torch.from_numpy(X.astype(np.float32))
        y = torch.from_numpy(y.astype(np.float32))
        return X, y

def init_Iris():
    df = pd.read_csv('Iris.csv')
    data_array = df.to_numpy()
    X = data_array[:, :2]
    labels = data_array[:, -1]
    lenth = [0, 0, 0]
    for i in range(len(labels)):
        if labels[i] == 1:
            lenth[0] += 1
        elif labels[i] == 2:
            lenth[1] += 1
        elif labels[i] == 3:
            lenth[2] += 1

    y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
    X = torch.from_numpy(X.astype(np.float32))
    y = torch.from_numpy(y.astype(np.float32))
    return X, y

class Irismodel(nn.Module):
    def __init__(self):
        super(Irismodel, self).__init__()
        self.hide = nn.Linear(2, 6)
        self.out=nn.Linear(6,3)
        self.softmax = nn.Softmax(dim=1)
        self.act=nn.LeakyReLU()

    def forward(self, x):
        x1 = self.hide(x)
        x1=self.act(x1)
        x2=self.out(x1)
        pre_y = self.softmax(x2)
        return pre_y

    def save_model(self, save_path):
        torch.save(self, save_path)

    def read_model(self, path):
        return torch.load(path)

def SoftmaxClassify_acc(model, X, y):
    pre_y = model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    max_y = torch.argmax(y, dim=1)
    return torch.nonzero(max_y.eq(max_pre_y)).shape[0] / y.shape[0]

def draw_map(model,X,y):
    c=['b','g','r','y','gray']
    cmap=matplotlib.colors.ListedColormap(['#a1ffa5', '#ffa1a1', '#a1a3ff'])
    max_y=torch.argmax(y,dim=1)
    x1=X[:,0].numpy()
    x2=X[:,1].numpy()

    '''Build the grid of predicted classes'''
    xx1=np.arange(min(x1)-0.2,max(x1)+0.2,0.02)
    xx2 = np.arange(min(x2) - 0.2, max(x2) + 0.2, 0.02)
    hights = np.zeros([len(xx2),len(xx1)])
    for i,x1_ in enumerate(xx1):
        for j,x2_ in enumerate(xx2):
            hight=predict(model,torch.Tensor([[x1_,x2_]]))
            hights[j][i]=hight
    plt.contourf(xx1, xx2, hights, cmap=cmap)
    plt.scatter(x1,x2,c=max_y)
    plt.xlim(min(x1) - 0.2, max(x1) + 0.2)
    plt.ylim(min(x2) - 0.2, max(x2) + 0.2)

def predict(model,X):
    pre_y=model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    return max_pre_y.item()

if __name__ == '__main__':
    train_X,train_y=init_Iris()
    train_dataset=MyIristDataSet(train_X,train_y)
    train_dataloader=torch.utils.data.DataLoader(dataset=train_dataset,
                                                 batch_size=20,num_workers=0,drop_last=False,shuffle=True)  # single-process loading (num_workers=0); keep the final partial batch; shuffle
    '''Create the network and train it'''
    net=Irismodel()
    epoches=6000
    plt.figure()
    i_=0

    loss = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
    print('start training.')
    for j in range(epoches):
        print('epoches:',j)
        '''Mini-batch gradient descent via the DataLoader'''
        i=0
        for features,labels in train_dataloader:
            X, y = features,labels
            pre_y = net(X)
            #print('pre_y:',pre_y)
            l = loss(pre_y, y)
            optimizer.zero_grad()  # clear accumulated gradients
            l.backward()
            optimizer.step()
            print('loss of the {}th batch of train data: {}'.format(i, l.item()))
            i+=1
        print('loss of train data:', loss(net(train_X),train_y).item())
        print('acc of train data:', SoftmaxClassify_acc(net, train_X,train_y) * 100, '%')
        print('training ended.')
        if (j+1)%1000==0:
            plt.subplot(3,3,i_+1)
            draw_map(net,train_X,train_y)
            plt.xlabel('SepalLength')
            plt.ylabel('SepalWidth')
            plt.title('epoches={}'.format(j))
            i_+=1
    plt.show()



With num_workers set to 3 the iterations were extremely slow; the time seems to go into re-entering the DataLoader, since every epoch calls into it again. Just 90 epochs already took a very long time.
600 epochs (about ten minutes):
[Figure: decision boundaries after 600 epochs]
The decision boundary basically matches expectations; the epoch count should still be raised.
After fiddling with several DataLoader arguments, I found that setting num_workers to 0 makes it instantly fast!
Activation LeakyReLU(), epoches=6000:
[Figures: decision boundaries with LeakyReLU, epoches=6000]
So all that waiting was for nothing? Clearly I don't understand DataLoader's num_workers well enough yet. I tried every num_workers value from 0 to 8: only 0 runs fast; with any other value, each epoch is followed by a pause of several seconds before the next one starts. In theory more workers should be faster, so for now the cause is unclear.
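
A plausible explanation (my guess, worth verifying): on Windows each epoch spawns the worker processes afresh, and for a 150-sample dataset that startup cost dwarfs the actual loading, so num_workers=0 wins. Since PyTorch 1.7 the DataLoader also accepts persistent_workers=True, which keeps the workers alive across epochs; a sketch:

train_dataloader = torch.utils.data.DataLoader(
    dataset=train_dataset, batch_size=20, shuffle=True,
    num_workers=3, persistent_workers=True)  # workers survive between epochs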
Next I tried a few other activation functions.
Activation ELU():
[Figure: decision boundaries with ELU]
Activation Sigmoid():
[Figure: decision boundaries with Sigmoid]
Activation Hardswish():
[Figure: decision boundaries with Hardswish]

2. Customize the number of hidden layers and the number of neurons in each hidden layer, and try to find the best hyperparameters for the multi-class task. (Optional)
import torch.utils.data

from Runner_V2 import *
import pandas as pd
import numpy as np

class MyIristDataSet(torch.utils.data.Dataset):
    def __init__(self):
        '''Build the Dataset from the data it needs.'''
        self.features, self.labels = self.init_Iris()

    def __getitem__(self, item):
        return self.features[item],self.labels[item]

    def __len__(self):
        return len(self.labels)

    def resetDataset(self):
        '''Re-read Iris.csv and reset the dataset to the full Iris data.'''
        self.features, self.labels = self.init_Iris()

    def setDataset(self,X,y):
        self.features=X
        self.labels=y

    def init_Iris(self):
        df = pd.read_csv('Iris.csv')
        data_array = df.to_numpy()
        X = data_array[:, :-1]
        labels = data_array[:, -1]
        lenth = [0, 0, 0]
        for i in range(len(labels)):
            if labels[i] == 1:
                lenth[0] += 1
            elif labels[i] == 2:
                lenth[1] += 1
            elif labels[i] == 3:
                lenth[2] += 1

        y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
        X = torch.from_numpy(X.astype(np.float32))
        y = torch.from_numpy(y.astype(np.float32))
        return X, y

class Irismodel1(nn.Module):
    def __init__(self):
        super(Irismodel1, self).__init__()
        self.hide = nn.Linear(4, 6)
        self.out=nn.Linear(6,3)
        self.sigmoid=nn.Sigmoid()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x1 = self.hide(x)
        x1=self.sigmoid(x1)
        x2=self.out(x1)
        pre_y = self.softmax(x2)
        return pre_y


class Irismodel2(nn.Module):
    def __init__(self):
        super(Irismodel2, self).__init__()
        self.hide = nn.Linear(4, 20)
        self.out=nn.Linear(20,3)
        self.sigmoid=nn.Sigmoid()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x1 = self.hide(x)
        x1=self.sigmoid(x1)
        x2=self.out(x1)
        pre_y = self.softmax(x2)
        return pre_y


class Irismodel3(nn.Module):
    def __init__(self):
        super(Irismodel3, self).__init__()
        self.hide = nn.Linear(4,50)
        self.out=nn.Linear(50,3)
        self.sigmoid=nn.Sigmoid()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x1 = self.hide(x)
        x1=self.sigmoid(x1)
        x2=self.out(x1)
        pre_y = self.softmax(x2)
        return pre_y


def SoftmaxClassify_acc(model, X, y):
    pre_y = model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    max_y = torch.argmax(y, dim=1)
    return torch.nonzero(max_y.eq(max_pre_y)).shape[0] / y.shape[0]

def init_Iris():
    df = pd.read_csv('Iris.csv')
    data_array = df.to_numpy()
    X = data_array[:, :-1]
    labels = data_array[:, -1]
    lenth = [0, 0, 0]
    for i in range(len(labels)):
        if labels[i] == 1:
            lenth[0] += 1
        elif labels[i] == 2:
            lenth[1] += 1
        elif labels[i] == 3:
            lenth[2] += 1

    y = np.array([[1, 0, 0] * lenth[0], [0, 1, 0] * lenth[1], [0, 0, 1] * lenth[2]]).reshape(len(labels), 3)
    X = torch.from_numpy(X.astype(np.float32))
    y = torch.from_numpy(y.astype(np.float32))
    return X, y

def predict(model,X):
    pre_y=model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    return max_pre_y.item()

if __name__ == '__main__':
    labels = ['1 hidden layer of 6 neurons', '1 hidden layer of 20 neurons', '1 hidden layer of 50 neurons']
    colors = ['#006463', '#00bcba', '#00fffc']
    train_X,train_y=init_Iris()
    train_dataset=MyIristDataSet()
    train_dataloader=torch.utils.data.DataLoader(dataset=train_dataset,
                                                 batch_size=20,num_workers=5,drop_last=False,shuffle=True)  # 5 worker processes; keep the final partial batch; shuffle
    '''Create the networks and train them'''
    epoches=50
    plt.figure()
    for n,model in enumerate([Irismodel1(),Irismodel2(),Irismodel3()]):
        net=model
        loss = nn.BCELoss()  # binary cross-entropy applied element-wise to the softmaxed one-hot outputs
        optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
        loss_list=[]
        acc_list=[]
        print('start training.')
        for j in range(epoches):
            print('epoches:',j)
            '''Mini-batch gradient descent via the DataLoader'''
            i=0
            for features,labels in train_dataloader:  # note: this rebinds the outer labels list; the plot legend uses __class__.__name__ instead
                X, y = features,labels
                pre_y = net(X)
                #print('pre_y:',pre_y)
                l = loss(pre_y, y)
                optimizer.zero_grad()  # clear accumulated gradients
                l.backward()
                optimizer.step()
                print('loss of the {}th batch of train data: {}'.format(i, l.item()))
                i+=1
            loss_list.append(loss(net(train_X),train_y).item())
            acc_list.append(SoftmaxClassify_acc(net, train_X,train_y) * 100)
            print('loss of train data:', loss(net(train_X),train_y).item())
            print('acc of train data:', SoftmaxClassify_acc(net, train_X,train_y) * 100, '%')
        print('training ended.')
        x=range(epoches)
        plt.subplot(1, 2, 1)
        plt.plot(x,acc_list,c=colors[n],label=net.__class__.__name__)
        plt.subplot(1, 2, 2)
        plt.plot(x,loss_list,c=colors[n],label=net.__class__.__name__)
    plt.subplot(1, 2, 1)
    plt.xlabel('epoches')
    plt.ylabel('acc(%)')
    plt.legend()
    plt.subplot(1, 2, 2)
    plt.xlabel('epoches')
    plt.ylabel('loss')
    plt.legend()
    plt.show()

[Figure: accuracy and loss curves for the three hidden-layer sizes]

From dark to light, the curves correspond to hidden layers of 6, 20 and 50 neurons.
The 50-neuron hidden layer does best.
Hyperparameter search was already explored in depth in an earlier experiment; here we just verify it on Iris.
The result confirms once again that the more neurons the model has, the faster it fits (a quick parameter count follows below).
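
A quick parameter count to back this up (my arithmetic, not in the original): a 4-h-3 network with biases has 4h weights plus h biases into the hidden layer and 3h weights plus 3 biases out of it, 8h + 3 parameters in total.

for h in (6, 20, 50):
    print(h, 'hidden neurons ->', 8 * h + 3, 'parameters')  # 51, 163, 403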

4. Compare the classification results of SVM and FNN and give your own view. (Optional)

SVM: classification using only the first two features.

import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.inspection import DecisionBoundaryDisplay
'''Font settings so that non-ASCII figure labels display correctly'''
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False

# import some data to play with
iris = datasets.load_iris()
# Take the first two features. We could avoid this by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
print(len(X))
print(len(y))
# we create an instance of SVM and fit out data. We do not scale our
# data since we want to plot the support vectors
C = 1.0  # SVM regularization parameter
models = (
    svm.SVC(kernel="linear", C=C),
    svm.LinearSVC(C=C, max_iter=10000),
    svm.SVC(kernel="rbf", gamma=0.7, C=C),
    svm.SVC(kernel="poly", degree=3, gamma="auto", C=C),
)
models = (clf.fit(X, y) for clf in models)

# title for the plots
titles = (
    "SVC线性核",
    "LinearSVC (linear kernel)",
    "SVC高斯核",
    "SVC多项式核",
)

# Set-up 2x2 grid for plotting.
fig, sub = plt.subplots(2, 2)
plt.subplots_adjust(wspace=0.4, hspace=0.4)

X0, X1 = X[:, 0], X[:, 1]

for clf, title, ax in zip(models, titles, sub.flatten()):
    disp = DecisionBoundaryDisplay.from_estimator(
        clf,
        X,
        response_method="predict",
        cmap=plt.cm.coolwarm,
        alpha=0.8,
        ax=ax,
        xlabel=iris.feature_names[0],
        ylabel=iris.feature_names[1],
    )
    ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors="k")
    ax.set_xticks(())
    ax.set_yticks(())
    ax.set_title(title)

plt.show()

[Figure: decision boundaries of the four SVM variants on the first two Iris features]
Comparison of computational complexity:
Computing the decision boundary with a support vector machine involves a great deal of mathematical derivation, but it is essentially solved in one shot, without a long training process. A feedforward network needs heavy computation only for the chain-rule derivatives in backpropagation, and that can be handled by PyTorch's automatic differentiation. The two cost profiles could hardly be more different.
Comparison on multi-class tasks:
SVMs handle multi-class problems in two main ways: one is to consider all classes at once, the other is to combine binary classifiers. The combination approach has two strategies, one-vs-one and one-vs-rest, and every additional class requires training more binary classifiers; the details are easy to find online, so I won't repeat them here.
A feedforward network usually realizes multi-class output with Softmax as the final activation layer; for a different number of classes, only the size of the output layer needs to change. A small scikit-learn sketch of the two SVM combination strategies follows below.
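
To make the one-vs-one / one-vs-rest distinction concrete, here is a small illustration (my own, using scikit-learn):

from sklearn import datasets, svm
from sklearn.multiclass import OneVsRestClassifier

iris = datasets.load_iris()
X, y = iris.data, iris.target

# sklearn's SVC is one-vs-one internally: K*(K-1)/2 = 3 binary classifiers for K=3 classes.
ovo = svm.SVC(kernel='linear').fit(X, y)
# The one-vs-rest wrapper trains K = 3 binary classifiers instead.
ovr = OneVsRestClassifier(svm.SVC(kernel='linear')).fit(X, y)
print(ovo.score(X, y), ovr.score(X, y))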
On the number of input variables:
Here the SVM was given only the first two features so that its boundary could be plotted in two dimensions; it is the plotting, not the SVM itself, that is limited to 2-D.
A neural network's input size is simply a constructor argument, so any number of variables can be used.
Where the two agree:
We can see that the classification result of our 2-input linear feedforward network is very similar to that of the SVM with a linear kernel.

5. Try to design a suitable feedforward neural network for the MNIST handwritten-digit dataset and reach at least 95% accuracy. (Optional)

A feedforward network is fully connected, while each MNIST image is 1×28×28, so the image has to be flattened into a one-dimensional vector before it can be fed to the network.

import torch.nn as nn
import torchvision
from matplotlib import pyplot as plt
from torch import optim
from torchvision import transforms
import torch.utils.data

# Data preprocessing
transform=transforms.Compose([transforms.ToTensor()])
# Download the MNIST data and define the loaders
train_dataset=torchvision.datasets.MNIST(root='./mnist',train=True,transform=transform,download=True)
train_dataloader=torch.utils.data.DataLoader(train_dataset,batch_size=5000,shuffle=True,num_workers=6)
test_dataset=torchvision.datasets.MNIST(root='./mnist',train=False,transform=transform,download=True)
test_dataloader=torch.utils.data.DataLoader(test_dataset,batch_size=10000,shuffle=True,num_workers=0)


class BPMNIST_net(torch.nn.Module):
    def __init__(self):
        super(BPMNIST_net, self).__init__()
        self.hide1=nn.Linear(28*28,2000)
        self.hide2=nn.Linear(2000,1000)
        self.out=nn.Linear(1000,10)
        self.softmax=nn.Softmax(dim=1)

    def forward(self, x):
        # Note: no nonlinear activation between the linear layers, so everything
        # before the softmax collapses to a single linear map.
        x1 = self.hide1(x)
        x2 = self.hide2(x1)
        x3 = self.out(x2)
        pre_y = self.softmax(x3)
        return pre_y

def reJust_y(y):
    '''One-hot encode the integer digit labels 0-9.'''
    return torch.eye(10)[y]

def SoftmaxClassify_acc(model, X, y):
    pre_y = model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    max_y = torch.argmax(y, dim=1)
    return torch.nonzero(max_y.eq(max_pre_y)).shape[0] / y.shape[0]

def predict(model,X):
    pre_y=model(X)
    max_pre_y = torch.argmax(pre_y, dim=1)
    return max_pre_y.item()

if __name__=='__main__':
    net=BPMNIST_net()
    epoches = 6000
    plt.figure()
    i_ = 0
    loss = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
    print('start training.')
    for j in range(epoches):
        print('epoches:', j)
        '''Mini-batch gradient descent via the DataLoader'''
        i = 0
        for features, labels in train_dataloader:
            X, y = features, labels
            X = X.reshape(-1, 28 * 28)  # flatten each 1x28x28 image to a 784-vector
            y = reJust_y(y)
            pre_y = net(X)
            # print('pre_y:',pre_y)
            l = loss(pre_y, y)
            optimizer.zero_grad()  # clear accumulated gradients
            l.backward()
            optimizer.step()
            print('loss of the {}th batch of train data: {}'.format(i, l.item()))
            i += 1
        print('training ended.')
        for features, labels in test_dataloader:
            X, y = features, labels
            X = X.reshape(-1, 28 * 28)
            y = reJust_y(y)
            print('loss of test data:', loss(net(X), y).item())
            print('acc of test data:', SoftmaxClassify_acc(net, X, y) * 100, '%')
Training log:

loss of test data: 1.6457164287567139
acc of test data: 83.63000000000001 %
epoches: 49
loss of the 0th batch of train data: 1.650238275527954
loss of the 1th batch of train data: 1.6496107578277588
loss of the 2th batch of train data: 1.651124358177185
loss of the 3th batch of train data: 1.6480847597122192
loss of the 4th batch of train data: 1.6432812213897705
loss of the 5th batch of train data: 1.6586047410964966
loss of the 6th batch of train data: 1.660575032234192
loss of the 7th batch of train data: 1.6548717021942139
loss of the 8th batch of train data: 1.6513876914978027
loss of the 9th batch of train data: 1.6533269882202148
loss of the 10th batch of train data: 1.662839651107788
loss of the 11th batch of train data: 1.6466742753982544
training ended.
loss of test data: 1.644776701927185
acc of test data: 83.66 %

After 50 epochs the accuracy on the test set reached 83.66%.
Training is very slow.

training ended.
loss of test data: 1.5716991424560547
acc of test data: 90.31 %
epoches: 149
loss of the 0th batch of train data: 1.57888662815094
loss of the 1th batch of train data: 1.576472520828247
loss of the 2th batch of train data: 1.5839595794677734
loss of the 3th batch of train data: 1.5735300779342651
loss of the 4th batch of train data: 1.5735070705413818
loss of the 5th batch of train data: 1.5692503452301025
loss of the 6th batch of train data: 1.5703377723693848
loss of the 7th batch of train data: 1.5782700777053833
loss of the 8th batch of train data: 1.5726937055587769
loss of the 9th batch of train data: 1.5701545476913452
loss of the 10th batch of train data: 1.576499104499817
loss of the 11th batch of train data: 1.568349838256836
training ended.
loss of test data: 1.571158528327942
acc of test data: 90.35 %

At 150 epochs it reached 90.35%.

loss of the 10th batch of train data: 1.5464723110198975
loss of the 11th batch of train data: 1.5495859384536743
training ended.
loss of test data: 1.5456901788711548
acc of test data: 92.16 %
epoches: 300
loss of the 0th batch of train data: 1.544075608253479
loss of the 1th batch of train data: 1.54786217212677
loss of the 2th batch of train data: 1.5447371006011963
loss of the 3th batch of train data: 1.5545169115066528
loss of the 4th batch of train data: 1.5403251647949219
loss of the 5th batch of train data: 1.5452499389648438
loss of the 6th batch of train data: 1.5497677326202393
loss of the 7th batch of train data: 1.5499095916748047
loss of the 8th batch of train data: 1.5415335893630981
loss of the 9th batch of train data: 1.5463827848434448
loss of the 10th batch of train data: 1.5467138290405273
loss of the 11th batch of train data: 1.5476607084274292
training ended.
loss of test data: 1.545613408088684
acc of test data: 92.17999999999999 %

At 300 epochs it reached 92.18%.

6. Summarize this experiment.

1) Learned to load data with multiple worker processes and implemented mini-batch gradient descent with Dataset and DataLoader.
2) Learned how to draw decision boundaries and how the filled-contour plot works. It can roughly depict multi-class boundaries, but only in two dimensions. The Iris data is four-dimensional; in principle we could first reduce it to two dimensions, for example with principal component analysis, and then plot (see the sketch after this list), but that was too much for this experiment; maybe another time.
3) In the SVM classification problem we likewise analyzed only two variables of Iris.
4) Deepened my understanding of the Iris dataset.
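
For completeness, a hedged sketch of the PCA idea from point 2 (my own, assuming scikit-learn is available):

from sklearn import datasets
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

iris = datasets.load_iris()
X2 = PCA(n_components=2).fit_transform(iris.data)  # project the 4-D features to 2-D
plt.scatter(X2[:, 0], X2[:, 1], c=iris.target)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()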

7. Give a comprehensive summary of feedforward neural networks and organize the knowledge points; drawing a mind map is recommended.

[Figure: mind map summarizing feedforward neural networks]

References:
SVM:
https://blog.csdn.net/xfChen2/article/details/79621396
https://blog.csdn.net/mm_bit/article/details/46988925
https://zhuanlan.zhihu.com/p/29862011
DataLoader:
https://blog.csdn.net/hxxjxw/article/details/119531239

If you enjoyed this article, please give it a like to show your support!
