The loss and loss_single methods in mmdetection's anchor_head
The loss method
0. Inputs and outputs of the method
Inputs
- cls_scores (list[Tensor]): classification scores for the predicted boxes. Each tensor has shape (N, num_anchors * num_classes, H, W), where N is the batch size, num_anchors is the number of base anchors per location, num_classes is the number of classes, and H, W are the height and width of the feature map.
- bbox_preds (list[Tensor]): the box position outputs of the RPN (encoded deltas), from which the predicted boxes on the original image can be recovered. Each tensor has shape (N, num_anchors * 4, H, W).
- gt_bboxes (list[Tensor]): the ground-truth box positions. Each tensor has shape (num_gts, 4); each row gives the top-left and bottom-right corners of one box.
- gt_labels (list[Tensor]): the class of each ground-truth box.
- img_metas (list[dict]): meta information about the input images.
- gt_bboxes_ignore (None | list[Tensor]): annotations to be ignored; defaults to None.
Outputs
- dict[str, Tensor]: a loss dictionary containing the classification loss and the bounding-box regression loss.
1. Generate anchors on the right device from the feature-map sizes and the image meta information
featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
assert len(featmap_sizes) == self.anchor_generator.num_levels
device = cls_scores[0].device
anchor_list, valid_flag_list = self.get_anchors(featmap_sizes, img_metas, device=device)
- Get the spatial size of each feature map.
- Check that there is one feature map per anchor-generator level.
- Get the device the computation runs on.
- Call get_anchors() to obtain the corresponding anchor lists.
Here anchor_list is the list of generated anchors and valid_flag_list marks whether each anchor is valid.
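Both lists are nested first by image and then by feature level. Below is a minimal sketch of how the structure can be inspected (describe_anchor_lists is a hypothetical helper written only for illustration; the shapes follow the get_anchors convention):
from typing import List
import torch

def describe_anchor_lists(anchor_list: List[List[torch.Tensor]],
                          valid_flag_list: List[List[torch.Tensor]]) -> None:
    """Print the per-image / per-level layout produced by get_anchors()."""
    for img_id, (anchors_per_img, flags_per_img) in enumerate(
            zip(anchor_list, valid_flag_list)):
        for lvl, (anchors, flags) in enumerate(
                zip(anchors_per_img, flags_per_img)):
            # anchors: (H_lvl * W_lvl * num_base_anchors, 4) boxes in image coordinates
            # flags:   (H_lvl * W_lvl * num_base_anchors,) bool, True where the anchor
            #          lies inside the valid region of the (padded) image
            print(f'image {img_id}, level {lvl}: '
                  f'anchors {tuple(anchors.shape)}, valid flags {tuple(flags.shape)}')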
2. Build target tensors from the ground-truth labels
label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
cls_reg_targets = self.get_targets(
    anchor_list,
    valid_flag_list,
    gt_bboxes,
    img_metas,
    gt_bboxes_ignore_list=gt_bboxes_ignore,
    gt_labels_list=gt_labels,
    label_channels=label_channels)
if cls_reg_targets is None:
    return None
(labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets
num_total_samples = (num_total_pos + num_total_neg if self.sampling else num_total_pos)
The loss can only be computed between tensors of matching shapes, so get_targets converts the per-image ground truth into per-level target tensors (labels, label weights, box regression targets and box weights) that line up with the network predictions, together with the counts of positive and negative samples used to build the normalization factor num_total_samples, as sketched below.
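A rough picture of the unpacked tuple (a sketch of the shapes, not library code; anchors_lvl stands for H_lvl * W_lvl * num_base_anchors on one level):
# labels_list[lvl]        -> (N, anchors_lvl)      class index assigned to each anchor
# label_weights_list[lvl] -> (N, anchors_lvl)      per-anchor weight for the cls loss
# bbox_targets_list[lvl]  -> (N, anchors_lvl, 4)   regression targets (encoded deltas)
# bbox_weights_list[lvl]  -> (N, anchors_lvl, 4)   zero for anchors without a matched gt
# num_total_pos, num_total_neg                     ints summed over the whole batch
#
# num_total_samples normalizes both losses: with a sampler (self.sampling is True,
# e.g. the RPN of Faster R-CNN) it is pos + neg; without sampling (e.g. RetinaNet
# with focal loss) only the positives are counted.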
3. Rearrange the generated anchors so their layout matches the target tensors
num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]]
concat_anchor_list = []
for i in range(len(anchor_list)):
    concat_anchor_list.append(torch.cat(anchor_list[i]))
all_anchor_list = images_to_levels(concat_anchor_list, num_level_anchors)
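images_to_levels regroups the concatenated per-image anchors into per-level tensors so that all_anchor_list lines up with labels_list and the other target lists. A minimal sketch of the idea (written for illustration; not necessarily the exact mmdetection implementation):
from typing import List
import torch

def images_to_levels_sketch(per_image: List[torch.Tensor],
                            num_level_anchors: List[int]) -> List[torch.Tensor]:
    """Regroup [tensor per image] -> [tensor per feature level].

    per_image[i] has shape (sum(num_level_anchors), ...); entry lvl of the
    result has shape (num_images, num_level_anchors[lvl], ...).
    """
    stacked = torch.stack(per_image, dim=0)  # (num_images, total_anchors, ...)
    level_tensors, start = [], 0
    for n in num_level_anchors:
        level_tensors.append(stacked[:, start:start + n])
        start += n
    return level_tensors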
4. Compute the loss between the predictions and the targets
losses_cls, losses_bbox = multi_apply(
    self.loss_single,
    cls_scores,
    bbox_preds,
    all_anchor_list,
    labels_list,
    label_weights_list,
    bbox_targets_list,
    bbox_weights_list,
    num_total_samples=num_total_samples)
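multi_apply maps loss_single over the per-level inputs and transposes the results, so losses_cls and losses_bbox are lists with one loss tensor per feature level; the loss method then packs them into the returned loss dictionary (keys loss_cls and loss_bbox). A rough sketch of the helper's behaviour (for illustration only; the real utility lives in mmdet):
from functools import partial

def multi_apply_sketch(func, *args, **kwargs):
    """Apply func to each per-level group of arguments and regroup the outputs."""
    pfunc = partial(func, **kwargs) if kwargs else func
    results = map(pfunc, *args)             # one (loss_cls, loss_bbox) tuple per level
    return tuple(map(list, zip(*results)))  # -> (list of loss_cls, list of loss_bbox)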
The loss_single method
0. Inputs and outputs
Inputs
- cls_score (Tensor): Box scores for each scale level, with shape (N, num_anchors * num_classes, H, W).
- bbox_pred (Tensor): Box energies / deltas for each scale level with shape (N, num_anchors * 4, H, W).
- anchors (Tensor): Box reference for each scale level with shape (N, num_total_anchors, 4).
- labels (Tensor): Labels of each anchor with shape (N, num_total_anchors).
- label_weights (Tensor): Label weights of each anchor with shape (N, num_total_anchors)
- bbox_targets (Tensor): BBox regression targets of each anchor with shape (N, num_total_anchors, 4).
- bbox_weights (Tensor): BBox regression loss weights of each anchor with shape (N, num_total_anchors, 4).
- num_total_samples (int): If sampling, num total samples equal to the number of total anchors; Otherwise, it is the number of positive anchors.
Outputs
- tuple[Tensor, Tensor]: the classification loss and the bounding-box regression loss for a single feature level (multi_apply then collects these into the lists losses_cls and losses_bbox).
1. Compute the classification loss with cross-entropy
labels = labels.reshape(-1)
label_weights = label_weights.reshape(-1)
cls_score = cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels)
loss_cls = self.loss_cls(cls_score, labels, label_weights, avg_factor=num_total_samples)
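The permute and reshape turn the (N, num_anchors * num_classes, H, W) score map into a (N * H * W * num_anchors, cls_out_channels) matrix so that each row lines up with one entry of the flattened labels. A small self-contained shape check with toy numbers (assuming cls_out_channels equals num_classes, i.e. sigmoid classification):
import torch

N, num_anchors, num_classes, H, W = 2, 9, 80, 5, 5
cls_score = torch.randn(N, num_anchors * num_classes, H, W)
labels = torch.randint(0, num_classes + 1, (N, H * W * num_anchors))  # toy labels; index num_classes plays the role of background

cls_score = cls_score.permute(0, 2, 3, 1).reshape(-1, num_classes)
labels = labels.reshape(-1)
print(cls_score.shape, labels.shape)  # torch.Size([450, 80]) torch.Size([450])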
2. Compute the bounding-box regression loss with Smooth L1
bbox_targets = bbox_targets.reshape(-1, 4)
bbox_weights = bbox_weights.reshape(-1, 4)
bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4)
if self.reg_decoded_bbox:
    anchors = anchors.reshape(-1, 4)
    bbox_pred = self.bbox_coder.decode(anchors, bbox_pred)
loss_bbox = self.loss_bbox(
    bbox_pred,
    bbox_targets,
    bbox_weights,
    avg_factor=num_total_samples)
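Two notes on this step. First, when reg_decoded_bbox is True (typically when an IoU-style regression loss such as GIoU is configured), the deltas are first decoded into absolute boxes, so the loss is computed in box coordinates rather than in delta space. Second, for reference, here is a minimal stand-alone version of a weighted Smooth L1 of the kind used here (a sketch; beta = 1.0 for illustration, the configured value may differ):
import torch

def smooth_l1_sketch(pred: torch.Tensor, target: torch.Tensor,
                     weight: torch.Tensor, avg_factor: float,
                     beta: float = 1.0) -> torch.Tensor:
    """Element-wise Smooth L1, weighted per element and normalized by avg_factor."""
    diff = torch.abs(pred - target)
    loss = torch.where(diff < beta, 0.5 * diff * diff / beta, diff - 0.5 * beta)
    return (loss * weight).sum() / avg_factor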