This walkthrough covers the following steps: building the dataset, reading the dataset, initializing the parameters, defining the model, defining the loss function, defining the optimization algorithm, and training the model.
1. Build the dataset
We construct a dataset of 1000 samples from a linear model with added noise; each sample has 2 features drawn from a standard normal distribution.
We generate the dataset and its labels using the true parameters w = [2, -3.4]ᵀ, b = 4.2, and a noise term ε: y = Xw + b + ε.
First, import the libraries:
```python
import random
import torch

def create_data(w, b, nums_example):
    # Features: each row is one sample with len(w) features drawn from N(0, 1)
    X = torch.normal(0, 1, (nums_example, len(w)))
    y = torch.matmul(X, w) + b
    print("y_shape:", y.shape)
    # Add Gaussian noise with standard deviation 0.01
    y += torch.normal(0, 0.01, y.shape)
    return X, y.reshape(-1, 1)
```
torch.normal(a, b, c) generates a tensor with mean a, standard deviation b, and shape c.
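A quick sanity check of this call (a minimal sketch; the (3, 2) shape is an arbitrary choice, and torch is the module imported above):

```python
t = torch.normal(0, 0.01, (3, 2))  # mean 0, std 0.01, shape (3, 2)
print(t.shape)  # torch.Size([3, 2])
print(t)        # entries close to 0, since the std is small
```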
```python
true_w = torch.tensor([2, -3.4])
true_b = 4.2
features, labels = create_data(true_w, true_b, 1000)
```
features is the X built from the true parameters, and labels is the corresponding y.
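To spot-check the construction (continuing from the code above; the printed values vary from run to run):

```python
# The first feature row and its label; the label is about 2*x1 - 3.4*x2 + 4.2 plus noise
print(features[0], labels[0])
```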
2. Read the data
```python
def read_data(batch_size, features, labels):
    nums_example = len(features)
    indices = list(range(nums_example))
    # Shuffle the indices so samples are visited in random order
    random.shuffle(indices)
    for i in range(0, nums_example, batch_size):
        index_tensor = torch.tensor(indices[i: min(i + batch_size, nums_example)])
        yield features[index_tensor], labels[index_tensor]
```
```python
batch_size = 10
num = len(features) // batch_size  # number of minibatches
for X, y in read_data(batch_size, features, labels):
    print("X:", X, "\ny:", y, "\nnum:", num)
    num = num - 1
    if num == 0:
        break
```
(1) **range()** is commonly used in loops: range(start, end[, step]) includes start but excludes end; start defaults to 0 and step defaults to 1.
(2) **list()** turns the range() values into a list.
For example, print(list(range(1, 5, 2))) outputs [1, 3].
(3) **shuffle()** randomly reorders all elements of a sequence in place.
Usage: random.shuffle(lst), where lst can be a list (see the combined example below).
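Putting the three calls together (a minimal sketch; random is the module imported above, and the shuffled order shown is just one possibility):

```python
lst = list(range(1, 5, 2))  # [1, 3]
nums = list(range(5))       # [0, 1, 2, 3, 4]
random.shuffle(nums)        # shuffles in place, e.g. [3, 0, 4, 1, 2]
print(lst, nums)
```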
3. Initialize the parameters
Initialize w randomly and b to 0:
```python
w = torch.normal(0, 0.01, size=(2, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)
```
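Both tensors are created with requires_grad=True so that autograd will accumulate gradients into their .grad fields; a quick check (a sketch, continuing from the code above):

```python
print(w.requires_grad, b.requires_grad)  # True True
print(w.grad, b.grad)                    # None None (no backward pass has run yet)
```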
4. Define the model
```python
def net(X, w, b):
    # Linear regression: matmul gives shape (n, 1); b is broadcast over the rows
    return torch.matmul(X, w) + b
```
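A quick shape check on a hypothetical mini-input (X_demo is not part of the original code):

```python
X_demo = torch.normal(0, 1, (4, 2))  # hypothetical batch of 4 samples, 2 features
print(net(X_demo, w, b).shape)       # torch.Size([4, 1])
```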
5. Define the loss function
```python
def loss(y_hat, y):
    # Squared loss; reshape y so it matches y_hat's (n, 1) shape
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2
```
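The reshape matters because y_hat has shape (n, 1) while y may arrive as a flat vector; a tiny worked example (a sketch with made-up numbers):

```python
y_hat_demo = torch.tensor([[1.0], [2.0]])
y_demo = torch.tensor([0.5, 2.5])  # flat; reshaped to (2, 1) inside loss
print(loss(y_hat_demo, y_demo))    # tensor([[0.1250], [0.1250]])
```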
6. Define the optimization algorithm
```python
def sgd(params, batch_size, lr):
    # Disable gradient tracking while updating the parameters in place
    with torch.no_grad():
        for param in params:
            param -= lr * param.grad / batch_size
            # Reset the gradient so it does not accumulate across steps
            param.grad.zero_()
```
The division by batch_size in the update is, I think, because backward() is called on the sum of the per-example losses: the loss tensor holds batch_size scalars, so the accumulated gradient is the sum of batch_size per-example gradients, and dividing turns the step into an average.
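A tiny numeric check of that reasoning (a minimal sketch with made-up numbers):

```python
p = torch.tensor([1.0], requires_grad=True)
x = torch.tensor([1.0, 2.0, 3.0])  # a "batch" of 3 inputs
l = ((p * x) ** 2 / 2).sum()       # summed loss, as in the training loop below
l.backward()
print(p.grad)  # tensor([14.]): the sum of per-example gradients 1 + 4 + 9
```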
7. Train the model
```python
lr = 0.03
num_epochs = 3
for epoch in range(0, num_epochs):
    for X, y in read_data(batch_size, features, labels):
        f = loss(net(X, w, b), y)
        # Sum the per-example losses, then backpropagate
        f.sum().backward()
        sgd([w, b], batch_size, lr)
    # Evaluate the loss over the full dataset after each epoch
    with torch.no_grad():
        train_l = loss(net(features, w, b), labels)
        print(f'epoch {epoch + 1}, loss {float(train_l.mean()):f}')
print("w error ", true_w - w, "\nb error ", true_b - b)
```
The output is:

```
epoch 1, loss 0.045103
epoch 2, loss 0.000178
epoch 3, loss 0.000053
w error  tensor([[ 1.6487e-04, -5.3998e+00],
        [ 5.3994e+00, -6.4111e-04]], grad_fn=<SubBackward0>)
b error  tensor([-6.0081e-05], grad_fn=<RsubBackward1>)
```

Note that the w error prints as a 2×2 tensor: true_w has shape (2,) while w has shape (2, 1), so the subtraction broadcasts to shape (2, 2). The actual errors are the diagonal entries, roughly 1.6e-04 for w[0] and -6.4e-04 for w[1].
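To print the error with matching shapes, one option (a sketch, not part of the original code) is to reshape w before subtracting:

```python
print("w error:", true_w - w.reshape(true_w.shape))  # shape (2,), elementwise errors
print("b error:", true_b - b)
```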
The complete code is as follows:
```python
import random
import torch

def create_data(w, b, nums_example):
    X = torch.normal(0, 1, (nums_example, len(w)))
    y = torch.matmul(X, w) + b
    print("y_shape:", y.shape)
    y += torch.normal(0, 0.01, y.shape)  # add noise
    return X, y.reshape(-1, 1)

true_w = torch.tensor([2, -3.4])
true_b = 4.2
features, labels = create_data(true_w, true_b, 1000)

def read_data(batch_size, features, labels):
    nums_example = len(features)
    indices = list(range(nums_example))
    random.shuffle(indices)
    for i in range(0, nums_example, batch_size):
        index_tensor = torch.tensor(indices[i: min(i + batch_size, nums_example)])
        yield features[index_tensor], labels[index_tensor]

batch_size = 10
num = len(features) // batch_size
for X, y in read_data(batch_size, features, labels):
    print("X:", X, "\ny:", y, "\nnum:", num)
    num = num - 1
    if num == 0:
        break

w = torch.normal(0, 0.01, size=(2, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)

def net(X, w, b):
    return torch.matmul(X, w) + b

def loss(y_hat, y):
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2

def sgd(params, batch_size, lr):
    with torch.no_grad():
        for param in params:
            param -= lr * param.grad / batch_size
            param.grad.zero_()

lr = 0.03
num_epochs = 3
for epoch in range(0, num_epochs):
    for X, y in read_data(batch_size, features, labels):
        f = loss(net(X, w, b), y)
        f.sum().backward()
        sgd([w, b], batch_size, lr)
    with torch.no_grad():
        train_l = loss(net(features, w, b), labels)
        print(f'epoch {epoch + 1}, loss {float(train_l.mean()):f}')
print("w error ", true_w - w, "\nb error ", true_b - b)
```