Understanding the core YOLOv5 code: the cross-grid anchor matching strategy, compute_loss(p, targets, model) and build_targets(p, targets, model)
This article walks through YOLOv5's cross-grid anchor matching strategy and the core of its loss computation. The network architecture itself is comparatively straightforward and is not covered here.
1. The YOLOv5 cross-grid matching strategy
The most important feature of YOLOv5 is cross-grid prediction: among the four neighboring cells (above, below, left and right of the current cell), the two closest to the target center are selected, and together with the current cell each target can be matched to up to three cells. This increases the number of positive samples and speeds up convergence. A small numeric example follows the code excerpt below.
# j, k: centers in the left/top half of their cell; l, m: centers in the right/bottom half (border cells excluded)
j, k = ((gxy % 1. < g) & (gxy > 1.)).T
l, m = ((gxy % 1. > (1 - g)) & (gxy < (gain[[2, 3]] - 1.))).T
# replicate the matched anchors/targets for the extra cells and record the grid offset of each copy
a, t = torch.cat((a, a[j], a[k], a[l], a[m]), 0), torch.cat((t, t[j], t[k], t[l], t[m]), 0)
offsets = torch.cat((z, z[j] + off[0], z[k] + off[1], z[l] + off[2], z[m] + off[3]), 0) * g
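To make the selection concrete, here is a minimal standalone sketch (the tensor names mirror the excerpt above, the values are made up, and the border checks `gxy > 1.` and `gxy < gain - 1.` are dropped for brevity) showing which cells a single target is assigned to when its center sits at grid coordinates (2.3, 5.8):

import torch

g = 0.5                                       # offset threshold used by the 'rect4' style
gxy = torch.tensor([[2.3, 5.8]])              # one target center, in grid units
off = torch.tensor([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])

j, k = (gxy % 1. < g).T                       # x / y fraction in the left / top half of the cell
l, m = (gxy % 1. > (1 - g)).T                 # x / y fraction in the right / bottom half of the cell
z = torch.zeros_like(gxy)
offsets = torch.cat((z, z[j] + off[0], z[k] + off[1], z[l] + off[2], z[m] + off[3]), 0) * g
t = torch.cat((gxy, gxy[j], gxy[k], gxy[l], gxy[m]), 0)
gij = (t - offsets).long()                    # assigned cells
print(gij.tolist())                           # [[2, 5], [1, 5], [2, 6]] -> current cell, left neighbor, cell below

The x fraction 0.3 is below 0.5, so the cell to the left is added; the y fraction 0.8 is above 0.5, so the cell below is added; the other two neighbors are skipped.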
YOLOv5 predicts a bbox with the following formulas:

bx = 2·σ(tx) − 0.5 + cx
by = 2·σ(ty) − 0.5 + cy
bw = pw · (2·σ(tw))²
bh = ph · (2·σ(th))²

- tx, ty, tw, th: the raw predicted coordinates
- bx, by, bw, bh: the final predicted box coordinates
- σ: the sigmoid function, which maps the raw outputs into 0~1
- cx, cy: the top-left corner coordinates of the grid cell containing the center
- pw, ph: the width and height of the matched anchor box
The corresponding code:
pxy = ps[:, :2].sigmoid() * 2. - 0.5                  # center offset relative to the cell, in (-0.5, 1.5)
pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]    # width/height, at most 4x the matched anchor
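As a hedged sketch of what this parameterization means (not YOLOv5's inference code; decode_box, the grid cell and the anchor values are made up for illustration), decoding one raw prediction into an absolute box on the feature-map grid looks like this:

import torch

def decode_box(t, cxcy, anchor_wh):
    # Apply the box parameterization above to raw outputs t = (tx, ty, tw, th)
    bxy = t[:2].sigmoid() * 2. - 0.5 + cxcy        # bx, by = 2*sigmoid(txy) - 0.5 + (cx, cy)
    bwh = (t[2:].sigmoid() * 2) ** 2 * anchor_wh   # bw, bh = (2*sigmoid(twh))**2 * (pw, ph)
    return torch.cat((bxy, bwh))                   # (bx, by, bw, bh) in grid units

# made-up numbers: cell top-left at (3, 4), anchor 10x13 in grid units
print(decode_box(torch.tensor([0.2, -0.1, 0.0, 0.3]),
                 torch.tensor([3., 4.]), torch.tensor([10., 13.])))

The 2·σ(·) − 0.5 center term ranges over (−0.5, 1.5), which is exactly what lets the neighboring cells selected in section 1 still reach the true center, and the (2·σ(·))² size term caps each prediction at 4× its anchor, consistent with the default anchor_t threshold of 4 used in build_targets.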
2. Understanding the core YOLOv5 code: compute_loss and build_targets
def compute_loss(p, targets, model):  # predictions, targets, model
    ft = torch.cuda.FloatTensor if p[0].is_cuda else torch.Tensor
    lcls, lbox, lobj = ft([0]), ft([0]), ft([0])
    tcls, tbox, indices, anchors = build_targets(p, targets, model)  # build targets per output layer
    h = model.hyp  # hyperparameters
    red = 'mean'  # loss reduction (sum or mean)

    # Define criteria
    BCEcls = nn.BCEWithLogitsLoss(pos_weight=ft([h['cls_pw']]), reduction=red)
    BCEobj = nn.BCEWithLogitsLoss(pos_weight=ft([h['obj_pw']]), reduction=red)

    # Class label smoothing targets (cp = positive, cn = negative)
    cp, cn = smooth_BCE(eps=0.0)

    # Focal loss
    g = h['fl_gamma']  # focal loss gamma
    if g > 0:
        BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)

    # Per output layer
    nt = 0  # number of targets
    for i, pi in enumerate(p):  # layer index, layer predictions
        b, a, gj, gi = indices[i]  # image, anchor, grid y, grid x
        tobj = torch.zeros_like(pi[..., 0])  # target objectness

        nb = b.shape[0]  # number of targets on this layer
        if nb:
            nt += nb  # cumulative targets
            ps = pi[b, a, gj, gi]  # prediction subset corresponding to targets

            # GIoU loss
            pxy = ps[:, :2].sigmoid() * 2. - 0.5
            pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i]
            pbox = torch.cat((pxy, pwh), 1)  # predicted box
            giou = bbox_iou(pbox.t(), tbox[i], x1y1x2y2=False, GIoU=True)  # giou(prediction, target)
            lbox += (1.0 - giou).sum() if red == 'sum' else (1.0 - giou).mean()

            # Objectness target: blend of 1 and the (clamped) GIoU, controlled by model.gr
            tobj[b, a, gj, gi] = (1.0 - model.gr) + model.gr * giou.detach().clamp(0).type(tobj.dtype)

            # Classification loss (only if multiple classes)
            if model.nc > 1:
                t = torch.full_like(ps[:, 5:], cn)  # negative targets
                t[range(nb), tcls[i]] = cp          # positive targets
                lcls += BCEcls(ps[:, 5:], t)        # BCE

        lobj += BCEobj(pi[..., 4], tobj)  # objectness loss over the whole feature map

    # Weight the three loss terms
    lbox *= h['giou']
    lobj *= h['obj']
    lcls *= h['cls']
    bs = tobj.shape[0]  # batch size
    if red == 'sum':
        g = 3.0  # loss gain
        lobj *= g / bs
        if nt:
            lcls *= g / nt / model.nc
            lbox *= g / nt

    loss = lbox + lobj + lcls
    return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach()
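For context, a hedged sketch of how compute_loss is typically called from a training step (the dataloader, optimizer and surrounding loop are placeholders, not YOLOv5's actual train.py):

for imgs, targets in dataloader:              # targets: (n, 6) rows of [image_idx, class, x, y, w, h], normalized to 0~1
    pred = model(imgs)                        # list of per-level feature maps (one per detection layer)
    loss, loss_items = compute_loss(pred, targets, model)
    loss.backward()                           # loss is already scaled by the batch size inside compute_loss
    optimizer.step()
    optimizer.zero_grad()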
def build_targets(p, targets, model):
    # Locate the Detect() module (handle DataParallel / DistributedDataParallel wrappers)
    det = model.module.model[-1] if type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel) \
        else model.model[-1]
    na, nt = det.na, targets.shape[0]  # number of anchors, number of targets
    tcls, tbox, indices, anch = [], [], [], []
    gain = torch.ones(6, device=targets.device)  # normalized-to-gridspace gain
    off = torch.tensor([[1, 0], [0, 1], [-1, 0], [0, -1]], device=targets.device).float()  # neighbor-cell offsets
    at = torch.arange(na).view(na, 1).repeat(1, nt)  # anchor index tensor, one row per anchor

    style = 'rect4'
    for i in range(det.nl):  # per detection layer
        anchors = det.anchors[i]
        gain[2:] = torch.tensor(p[i].shape)[[3, 2, 3, 2]]  # xyxy gain (grid size of this layer)

        # Match targets to anchors
        a, t, offsets = [], targets * gain, 0
        if nt:
            r = t[None, :, 4:6] / anchors[:, None]  # wh ratio between each target and each anchor
            j = torch.max(r, 1. / r).max(2)[0] < model.hyp['anchor_t']  # keep pairs within the ratio threshold
            a, t = at[j], t.repeat(na, 1, 1)[j]  # filtered anchor indices and targets

            gxy = t[:, 2:4]  # target centers in grid coordinates
            z = torch.zeros_like(gxy)
            # Pick out boxes whose center lies within g of the cell's left/top edge (j, k) or within g of the
            # right/bottom edge (l, m). When selecting gij (the cell a label is assigned to), each of these four
            # groups is shifted by the corresponding `off` entry, so a target is also assigned to neighboring cells.
            if style == 'rect2':
                g = 0.2
                j, k = ((gxy % 1. < g) & (gxy > 1.)).T
                a, t = torch.cat((a, a[j], a[k]), 0), torch.cat((t, t[j], t[k]), 0)
                offsets = torch.cat((z, z[j] + off[0], z[k] + off[1]), 0) * g
            elif style == 'rect4':
                g = 0.5
                j, k = ((gxy % 1. < g) & (gxy > 1.)).T
                l, m = ((gxy % 1. > (1 - g)) & (gxy < (gain[[2, 3]] - 1.))).T
                a, t = torch.cat((a, a[j], a[k], a[l], a[m]), 0), torch.cat((t, t[j], t[k], t[l], t[m]), 0)
                offsets = torch.cat((z, z[j] + off[0], z[k] + off[1], z[l] + off[2], z[m] + off[3]), 0) * g

        # For every bbox the positive-sample assignment is described by:
        #   a      - which anchor of this layer the bbox matches
        #   b      - which image in the batch the bbox belongs to
        #   c      - the bbox class
        #   gi, gj - the grid cell responsible for predicting the bbox
        #   gxy    - the bbox center in grid coordinates
        #   gwh    - the bbox width and height in grid coordinates
        b, c = t[:, :2].long().T  # image index, class
        gxy = t[:, 2:4]  # grid xy
        gwh = t[:, 4:6]  # grid wh
        gij = (gxy - offsets).long()  # assigned cell for each (possibly offset) copy
        gi, gj = gij.T  # grid x, y indices

        indices.append((b, a, gj, gi))  # image, anchor, grid y, grid x
        tbox.append(torch.cat((gxy - gij, gwh), 1))  # box target: xy offset within the cell, wh
        anch.append(anchors[a])  # matched anchors
        tcls.append(c)  # class

    return tcls, tbox, indices, anch
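The anchor filter at the top of the loop keeps an (anchor, target) pair only when both the width and the height ratio lie within a factor of hyp['anchor_t'] (4.0 by default in YOLOv5). A minimal numeric sketch of that check, with made-up anchor and target sizes:

import torch

anchor_t = 4.0                                                 # default hyp['anchor_t']
anchors = torch.tensor([[10., 13.], [16., 30.], [33., 23.]])   # one layer's anchors, in grid units
twh = torch.tensor([[12., 100.]])                              # one target's wh in grid units (made up)

r = twh[None] / anchors[:, None]                               # (na, nt, 2) wh ratios
j = torch.max(r, 1. / r).max(2)[0] < anchor_t                  # True where both ratios are within 4x
print(j.squeeze(1).tolist())                                   # [False, True, False]

Unlike YOLOv3's IoU-based assignment, this shape-ratio test lets one target match several anchors on the same layer, and together with the cross-grid offsets above it is what multiplies the number of positive samples.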
3. Summary
With this scheme YOLOv5 increases the number of positive samples, by up to a factor of three, which greatly accelerates model convergence. The heart of object detection can be understood as the anchor matching strategy; the currently popular anchor-free detectors merely swap in a different matching strategy. In my view, the real room for innovation today lies in better matching strategies. This is only my humble opinion; corrections and comments are welcome.