
PyTorch in practice (October team learning, task 4)



Preface

Continuing with PyTorch: this task walks through a complete FashionMNIST classification example, from configuring hyperparameters and loading the data to defining, training, and evaluating a small convolutional network.


The sections below form the main body of this post; the code examples are provided for reference.

1. Defining hyperparameters

import os
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import torch.optim as optim

# Configure the GPU. There are two common approaches:
## Option 1: restrict the visible devices with os.environ
# os.environ['CUDA_VISIBLE_DEVICES'] = '0'
# Option 2: create a `device` object and move anything that should run on the GPU with .to(device)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

## Other hyperparameters: batch_size, num_workers, learning rate, and the total number of epochs
batch_size = 256
num_workers = 0
lr = 1e-4
epochs = 20
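
As a quick check (a sketch, not part of the original post), the chosen device can be inspected and a tensor moved onto it with .to(device); this is the same pattern used for the model and the data later on:

# Sketch: confirm which device was selected and move a tensor onto it
print(device)                     # e.g. "cuda" or "cpu"
x = torch.zeros(3, 3).to(device)  # option 2: move variables with .to(device)
print(x.device)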

2. Loading the data

from torchvision import transforms
from torchvision import datasets

image_size = 28
# The transforms module is typically used to preprocess the data
data_transform = transforms.Compose([
    transforms.Resize(image_size),  # resize the image
    transforms.ToTensor()           # convert the image to a tensor
])

train_data = datasets.FashionMNIST(root='./', train=True, download=True, transform=data_transform)
test_data = datasets.FashionMNIST(root='./', train=False, download=True, transform=data_transform)
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True, num_workers=num_workers, drop_last=True)
test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=False, num_workers=num_workers)
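
A useful sanity check (a minimal sketch, not in the original tutorial) is to pull one batch from train_loader and look at its shape; with the settings above each batch should contain 256 single-channel 28x28 images:

# Sketch: inspect one batch from the DataLoader (assumes the loaders above are defined)
images, labels = next(iter(train_loader))
print(images.shape)  # expected: torch.Size([256, 1, 28, 28])
print(labels.shape)  # expected: torch.Size([256])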

3. Defining the model

The code is as follows (example):

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Input: 1 x 28 x 28
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 5),        # -> 32 x 24 x 24
            nn.ReLU(),
            nn.MaxPool2d(2, stride=2),  # -> 32 x 12 x 12
            nn.Dropout(0.3),
            nn.Conv2d(32, 64, 5),       # -> 64 x 8 x 8
            nn.ReLU(),
            nn.MaxPool2d(2, stride=2),  # -> 64 x 4 x 4
            nn.Dropout(0.3)
        )
        self.fc = nn.Sequential(
            nn.Linear(64*4*4, 512),
            nn.ReLU(),
            nn.Linear(512, 10)          # 10 classes in FashionMNIST
        )

    def forward(self, x):
        x = self.conv(x)
        x = x.view(-1, 64*4*4)  # flatten the feature maps
        x = self.fc(x)
        # x = nn.functional.normalize(x)
        return x

model = Net()
model = model.to(device)
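
To confirm the 64*4*4 input size of the fully connected layer, one can (as a sketch outside the original tutorial) push a dummy batch through the network and check the output shape:

# Sketch: verify the model's output shape with a dummy input (not in the original tutorial)
dummy = torch.randn(2, 1, 28, 28, device=device)
out = model(dummy)
print(out.shape)  # expected: torch.Size([2, 10]), one logit per class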

4. Defining the loss function and optimizer, and training the model

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
# Per-class weighting would need a float tensor, e.g.:
# criterion = nn.CrossEntropyLoss(weight=torch.tensor([1,1,1,1,3,1,1,1,1,1], dtype=torch.float))
optimizer = optim.Adam(model.parameters(), lr=0.001)  # note: this fixed value overrides the lr defined above

def train(epoch):
    model.train()
    train_loss = 0
    for data, label in train_loader:
        data, label = data.to(device), label.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, label)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()*data.size(0)
    train_loss = train_loss/len(train_loader.dataset)
    print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch, train_loss))

def val(epoch):
    model.eval()
    val_loss = 0
    gt_labels = []
    pred_labels = []
    with torch.no_grad():
        for data, label in test_loader:
            data, label = data.to(device), label.to(device)
            output = model(data)
            preds = torch.argmax(output, 1)
            gt_labels.append(label.cpu().numpy())
            pred_labels.append(preds.cpu().numpy())
            loss = criterion(output, label)
            val_loss += loss.item()*data.size(0)
    val_loss = val_loss/len(test_loader.dataset)
    gt_labels, pred_labels = np.concatenate(gt_labels), np.concatenate(pred_labels)
    acc = np.sum(gt_labels==pred_labels)/len(pred_labels)
    print('Epoch: {} \tValidation Loss: {:.6f}, Accuracy: {:.6f}'.format(epoch, val_loss, acc))

for epoch in range(1, epochs+1):
    train(epoch)
    val(epoch)
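
After training, it is usually worth keeping the learned weights. A minimal sketch using PyTorch's standard state_dict mechanism (this step is not in the original code, and the file name FashionModel.pth is a placeholder):

# Sketch: save and reload the trained weights (file name is a placeholder)
save_path = "./FashionModel.pth"
torch.save(model.state_dict(), save_path)

# Later: rebuild the model and load the weights back
model2 = Net().to(device)
model2.load_state_dict(torch.load(save_path, map_location=device))
model2.eval()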

Results

/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Epoch: 1 	Training Loss: 0.677033
Epoch: 1 	Validation Loss: 0.495211, Accuracy: 0.818800
Epoch: 2 	Training Loss: 0.425615
Epoch: 2 	Validation Loss: 0.358788, Accuracy: 0.871400
Epoch: 3 	Training Loss: 0.362201
Epoch: 3 	Validation Loss: 0.326068, Accuracy: 0.881500
Epoch: 4 	Training Loss: 0.327386
Epoch: 4 	Validation Loss: 0.305909, Accuracy: 0.890500
Epoch: 5 	Training Loss: 0.305946
Epoch: 5 	Validation Loss: 0.285962, Accuracy: 0.897400
Epoch: 6 	Training Loss: 0.285503
Epoch: 6 	Validation Loss: 0.280432, Accuracy: 0.896500
Epoch: 7 	Training Loss: 0.274258
Epoch: 7 	Validation Loss: 0.275422, Accuracy: 0.898300
Epoch: 8 	Training Loss: 0.262215
Epoch: 8 	Validation Loss: 0.253080, Accuracy: 0.908600
Epoch: 9 	Training Loss: 0.254621
Epoch: 9 	Validation Loss: 0.257004, Accuracy: 0.905500
Epoch: 10 	Training Loss: 0.240819
Epoch: 10 	Validation Loss: 0.243566, Accuracy: 0.911500
Epoch: 11 	Training Loss: 0.234381
Epoch: 11 	Validation Loss: 0.250187, Accuracy: 0.908900
Epoch: 12 	Training Loss: 0.226367
Epoch: 12 	Validation Loss: 0.248466, Accuracy: 0.910400
Epoch: 13 	Training Loss: 0.220683
Epoch: 13 	Validation Loss: 0.237766, Accuracy: 0.912500
Epoch: 14 	Training Loss: 0.212676
Epoch: 14 	Validation Loss: 0.237252, Accuracy: 0.910600
Epoch: 15 	Training Loss: 0.204036
Epoch: 15 	Validation Loss: 0.233667, Accuracy: 0.915500
Epoch: 16 	Training Loss: 0.201117
Epoch: 16 	Validation Loss: 0.235281, Accuracy: 0.911800
Epoch: 17 	Training Loss: 0.192603
Epoch: 17 	Validation Loss: 0.224099, Accuracy: 0.917600
Epoch: 18 	Training Loss: 0.189722
Epoch: 18 	Validation Loss: 0.239020, Accuracy: 0.909800
Epoch: 19 	Training Loss: 0.186247
Epoch: 19 	Validation Loss: 0.229205, Accuracy: 0.917100
Epoch: 20 	Training Loss: 0.175355
Epoch: 20 	Validation Loss: 0.220682, Accuracy: 0.920900