Implementation source code
import torch
import numpy as np

pred = np.array([[-0.4089, -1.2471, 0.5907],
                 [-0.4897, -0.8267, -0.7349],
                 [0.5241, -0.1246, -0.4751]])
label = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 1]])
pred = torch.from_numpy(pred).float()
label = torch.from_numpy(label).float()

## Compute the loss directly from the raw logits with BCEWithLogitsLoss
criterion1 = torch.nn.BCEWithLogitsLoss()
loss1 = criterion1(pred, label)
print(loss1)

## MultiLabelSoftMarginLoss, also on the raw logits
criterion2 = torch.nn.MultiLabelSoftMarginLoss()
loss2 = criterion2(pred, label)
print(loss2)

## Compute the loss with BCELoss on the sigmoid-transformed values
criterion3 = torch.nn.BCELoss()
loss3 = criterion3(torch.sigmoid(pred), label)
print(loss3)
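All three calls print the same value. As a quick sanity check (my own addition, reusing pred and label from the snippet above), the same number can be computed by hand from the standard binary cross entropy formula:

# Manual check: all three losses above reduce to
#   -mean(y * log(sigmoid(x)) + (1 - y) * log(1 - sigmoid(x)))
p = torch.sigmoid(pred)
manual = -(label * torch.log(p) + (1 - label) * torch.log(1 - p)).mean()
print(manual)  # should match loss1, loss2 and loss3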
About BCEWithLogitsLoss
This is essentially the same thing as nn.BCELoss(); the only difference is that it applies a sigmoid to the logits before computing the BCE loss. Example:
import torch
import torch.nn as nn

label = torch.Tensor([1, 1, 0])
pred = torch.Tensor([3, 2, 1])
pred_sig = torch.sigmoid(pred)

loss = nn.BCELoss()
print(loss(pred_sig, label))   # sigmoid applied once: correct

loss = nn.BCEWithLogitsLoss()
print(loss(pred, label))       # sigmoid applied internally: correct

loss = nn.BCEWithLogitsLoss()
print(loss(pred_sig, label))   # sigmoid applied twice: wrong result
The outputs are:
tensor(0.4963)
tensor(0.4963)
tensor(0.5990)
As you can see, nn.BCEWithLogitsLoss() is equivalent to first applying a sigmoid to the prediction pred and then computing nn.BCELoss() as usual. This leads to a subtle bug: if the network already applies a sigmoid to its output and you then compute the loss with nn.BCEWithLogitsLoss(), the sigmoid is effectively applied twice, which can cause all sorts of strange problems, for example the network failing to converge.
Original article: https://blog.csdn.net/qq_40714949/article/details/120295651
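To make the correct pairing concrete, here is a minimal sketch (a hypothetical model, not from the original post):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 3)

    def forward(self, x):
        # Return raw logits; do NOT end with torch.sigmoid(...) when the
        # loss is BCEWithLogitsLoss
        return self.fc(x)

model = Net()
criterion = nn.BCEWithLogitsLoss()  # expects raw logits
x = torch.randn(2, 4)
y = torch.randint(0, 2, (2, 3)).float()
loss = criterion(model(x), y)
# If forward() applied a sigmoid, the loss would sigmoid the output a
# second time and training may fail to converge.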
MultiLabelSoftMarginLoss
I don't know why PyTorch chose this name: the loss formula involves no margin at all (perhaps one will be added later). As I understand it, this is simply a multi-label cross-entropy loss; after verification its output also matches BCEWithLogitsLoss (tested with torch 1.5.0).
Original article: https://blog.csdn.net/ltochange/article/details/118070885
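Per the PyTorch documentation, for C classes MultiLabelSoftMarginLoss computes

loss(x, y) = -(1/C) * Σ_i [ y_i · log(σ(x_i)) + (1 − y_i) · log(1 − σ(x_i)) ]

which is exactly mean-reduced binary cross entropy on the logits. The code below verifies this by hand: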
import torch
import torch.nn as nn
import math

def validate_loss(output, target, weight=None, pos_weight=None):
    output = torch.sigmoid(output)
    # pos_weight handles the positive/negative sample imbalance
    if pos_weight is None:
        label_size = output.size(1)
        pos_weight = torch.ones(label_size)
    # weight handles the imbalance across labels
    if weight is None:
        label_size = output.size(1)
        weight = torch.ones(label_size)
    val = 0
    for li_x, li_y in zip(output, target):
        for i, (x, y) in enumerate(zip(li_x, li_y)):
            loss_val = pos_weight[i] * y * math.log(x) + (1 - y) * math.log(1 - x)
            val += weight[i] * loss_val
    return -val / (output.size(0) * output.size(1))

weight = torch.Tensor([0.8, 1, 0.8])
loss = nn.MultiLabelSoftMarginLoss(weight=weight)
x = torch.Tensor([[0.8, 0.9, 0.3], [0.8, 0.9, 0.3], [0.8, 0.9, 0.3], [0.8, 0.9, 0.3]])
y = torch.Tensor([[1, 1, 0], [1, 1, 0], [1, 1, 0], [1, 1, 0]])
print(x.size())
print(y.size())
loss_val = loss(x, y)
print(loss_val.item())

manual_val = validate_loss(x, y, weight=weight)  # renamed so it doesn't shadow the function
print(manual_val.item())

loss = torch.nn.BCEWithLogitsLoss(weight=weight)
loss_val = loss(x, y)
print(loss_val.item())
# Output
torch.Size([4, 3])
torch.Size([4, 3])
0.4405062198638916
0.4405062198638916
0.440506249666214
BCELoss
Reference: loss函数之BCELoss - 简书 (jianshu.com)
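The linked article covers the details; for completeness, a minimal sketch of my own (assuming the standard definition from the PyTorch docs): nn.BCELoss() expects probabilities in [0, 1] rather than raw logits, and computes -mean(y * log(x) + (1 - y) * log(1 - x)).

import torch
import torch.nn as nn

# BCELoss takes probabilities (e.g. sigmoid outputs), not raw logits
probs = torch.tensor([0.9, 0.2, 0.4])
target = torch.tensor([1.0, 0.0, 1.0])
loss = nn.BCELoss()(probs, target)

# Equivalent manual computation
manual = -(target * torch.log(probs) + (1 - target) * torch.log(1 - probs)).mean()
print(loss.item(), manual.item())  # the two values should match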
Accuracy computation
Take a model whose output is [0.2, 0.6, 0.8] with ground truth [0, 0, 1]. How should accuracy be computed?
pred = torch.tensor([0.2, 0.6, 0.8])
y = torch.tensor([0, 0, 1])
accuracy = (pred.ge(0.5) == y).all().int().item()
accuracy
# output : 0
First, the ge function turns entries of pred that are >= 0.5 into True and the rest into False; the result is then compared against y (a sample only counts as correct if every label matches), and finally the boolean is converted to an integer. During training everything is computed per batch, so let's write a loop.
pred = torch.tensor([[0.2, 0.5, 0.8], [0.4, 0.7, 0.1]])
y = torch.tensor([[0, 0, 1], [0, 1, 0]])
accuracy = sum(row.all().int().item() for row in (pred.ge(0.5) == y))
accuracy
# output : 1
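Equivalently (a small addition of mine, continuing from the snippet above), the loop can be vectorized:

# all(dim=1) checks that every label in a row matches; sum counts correct rows
accuracy = (pred.ge(0.5) == y).all(dim=1).sum().item()
# output : 1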
Original article: https://blog.csdn.net/qsmx666/article/details/121718548