Since the official GitHub repository of Yolov5_DeepSort_Pytorch (mikel-brostrom) was revamped to support multiple reid models, the pedestrian-tracking weights ckpt.t7 provided by ZQPei can no longer be used directly. The notes below record how to use the osnet reid models in the new version, and how to keep using ZQPei's ckpt.t7 model.
In my tests, the new Yolov5_DeepSort_Pytorch runs with both osnet_x1_0 and osnet_ain_x1_0. Tracking quality is about the same as with ZQPei's model, but it is slower: roughly 40 ms vs 20 ms per frame. A likely reason is crop size: osnet matches larger crops, 256x128 (h, w), while ZQPei's ckpt uses smaller 128x64 (h, w) crops.

So I tried to wrap the ZQPei model in the new reid interface. mikel-brostrom integrates the reid library (torchreid) provided by KaiyangZhou; it is used as follows.

How to import torchreid: clone KaiyangZhou's github repo into Yolov5_DeepSort_Pytorch/deep_sort/deep and rename the directory to reid, i.e. Yolov5_DeepSort_Pytorch/deep_sort/deep/reid. Assuming conda and a virtual environment are already set up and the modules required by Yolov5_DeepSort_Pytorch are installed, enter the reid directory and run
python setup.py develop
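As a quick sanity check that the install worked (a minimal sketch; show_avai_models() simply prints the model names registered in torchreid's factory):

import torchreid

print(torchreid.__version__)
torchreid.models.show_avai_models()   # should list 'osnet_x1_0', 'osnet_ain_x1_0', etc.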
With that, torchreid is installed and import torchreid can be used in your own code. From the Model zoo in KaiyangZhou's github, download a weight file, e.g. osnet_x1_0_imagenet.pth (matching the REID_CKPT path below), and put it in the checkpoint directory: Yolov5_DeepSort_Pytorch/deep_sort/deep/checkpoint.
(1) Modify deep_sort.yaml
DEEPSORT:
  MODEL_TYPE: "osnet_x1_0"
  REID_CKPT: '~/Yolov5_DeepSort_Pytorch/deep_sort/deep/checkpoint/osnet_x1_0_imagenet.pth'
  MAX_DIST: 0.1
  MAX_IOU_DISTANCE: 0.7
  MAX_AGE: 90
  N_INIT: 3
  NN_BUDGET: 100
  MIN_CONFIDENCE: 0.75
  NMS_MAX_OVERLAP: 1.0
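For reference, track.py reads these values through the repo's get_config helper, roughly as follows (a sketch; the yaml path and the exact location of the parser module depend on the repo version):

from deep_sort.utils.parser import get_config

cfg = get_config()
cfg.merge_from_file('deep_sort/configs/deep_sort.yaml')   # path is an assumption
print(cfg.DEEPSORT.MODEL_TYPE, cfg.DEEPSORT.REID_CKPT)    # the values configured above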
(2) In track.py, specify the reid model and add the checkpoint path.
parser.add_argument('--deep_sort_model', type=str, default='osnet_x1_0')

deepsort = DeepSort(deep_sort_model,
                    cfg.DEEPSORT.REID_CKPT,
                    device,
                    max_dist=cfg.DEEPSORT.MAX_DIST,
                    max_iou_distance=cfg.DEEPSORT.MAX_IOU_DISTANCE,
                    max_age=cfg.DEEPSORT.MAX_AGE,
                    n_init=cfg.DEEPSORT.N_INIT,
                    nn_budget=cfg.DEEPSORT.NN_BUDGET,
                    )
Giving the weight-file path in deep_sort.yaml skips the automatic download of the weights from the internet; they are loaded from local disk instead. With this, the deepsort tracker track.py can be run with the osnet reid.
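For example, assuming a test video test.mp4 and the default yolov5 detector weights (both are assumptions, not from the original notes):

python track.py --source test.mp4 --deep_sort_model osnet_x1_0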
Note: in the modifications above, the parts commented out with # are the variants for the ZQPei model.
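For concreteness, the ZQPei variant referred to by that note would look roughly like this in deep_sort.yaml, once the ZQP model from the next section is registered (a sketch; the ckpt.t7 path is an assumption):

DEEPSORT:
  MODEL_TYPE: "ZQP"
  REID_CKPT: '~/Yolov5_DeepSort_Pytorch/deep_sort/deep/checkpoint/ckpt.t7'
  # remaining keys (MAX_DIST, MAX_AGE, ...) unchanged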
How to add the ZQPei model to reid: model name: ZQP, model file name: model_ZQP.py.
(3) Modify model.py from the ZQPei github code: (a) add the following function to the file
def ZQP(num_classes=751, pretrained=True, loss='softmax', **kwargs):
    model = Net(
        num_classes=num_classes,
        pretrained=pretrained,
        loss='softmax',
        **kwargs
    )
    return model
(b) Rename the reid argument to pretrained. To avoid a name clash, rename the original model.py to model_ZQP.py.
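In ZQPei's model.py the reid flag is what makes forward() return L2-normalized features instead of classifier logits; the rename simply lets the ZQP() factory above and torchreid's build_model(..., pretrained=...) line up. A minimal stand-in illustrating the idea (not ZQPei's actual network):

import torch.nn as nn

class Net(nn.Module):
    # illustrative only: the flag formerly named 'reid' is now 'pretrained'
    def __init__(self, num_classes=751, pretrained=False, loss='softmax', **kwargs):
        super(Net, self).__init__()
        self.pretrained = pretrained
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten()
        )
        self.classifier = nn.Linear(8, num_classes)

    def forward(self, x):
        f = self.backbone(x)
        if self.pretrained:            # old behaviour of reid=True
            return f.div(f.norm(p=2, dim=1, keepdim=True))
        return self.classifier(f)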
(4) In deep_sort/deep/reid/torchreid/models/__init__.py add:
from .model_ZQP import *
and below the line

__model_factory = {

add an entry for the new model:

    'ZQP': ZQP,
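Assuming steps (3) and (4) went through, the registration can be checked from a Python shell. This sketch assumes Net in model_ZQP.py accepts the extra keyword arguments (e.g. use_gpu) that torchreid's build_model forwards:

import torchreid
from torchreid.models import build_model

torchreid.models.show_avai_models()                   # 'ZQP' should now appear in the list
model = build_model('ZQP', num_classes=751, pretrained=True)
print(sum(p.numel() for p in model.parameters()))     # parameter count of ZQPei's Net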
(5) In deep_sort/deep/reid/torchreid/utils/feature_extractor.py add
from deep_sort.deep.reid.torchreid.models.model_ZQP import Net
and in its __init__ function change image_size and num_classes (751 is the number of training identities in Market-1501, the dataset ZQPei's ckpt.t7 was trained on):
def __init__(
    self,
    model_name='',
    model_path='',
    image_size=(128, 64),
    pixel_mean=[0.485, 0.456, 0.406],
    pixel_std=[0.229, 0.224, 0.225],
    pixel_norm=True,
    device='cuda',
    verbose=True
):
    model = build_model(
        model_name,
        num_classes=751,
        pretrained=True,
        use_gpu=device.startswith('cuda')
    )
(6) Finally, the state_dict inside ckpt.t7 is stored under the key net_dict rather than the usual state_dict, so deep_sort/deep/reid/torchreid/utils/torchtools.py needs an extra branch:
checkpoint = load_checkpoint(weight_path)
if 'state_dict' in checkpoint:
    state_dict = checkpoint['state_dict']
elif 'net_dict' in checkpoint:
    state_dict = checkpoint['net_dict']
else:
    state_dict = checkpoint
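The need for the extra branch is easy to confirm by inspecting the checkpoint directly (a sketch; the path and any extra keys stored alongside the weights are assumptions):

import torch

ckpt = torch.load('deep_sort/deep/checkpoint/ckpt.t7', map_location='cpu')
print(list(ckpt.keys()))   # the weights sit under 'net_dict', not 'state_dict'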
With that, ckpt.t7 runs. The changes are scattered across several files, but feature_extractor.py stays compatible with the other reid models.
Alternatively, replace feature_extractor.py outright as below. This version works only with ckpt.t7; none of the other reid models can be used:
from __future__ import absolute_import

import cv2
import numpy as np
import torch
import torchvision.transforms as T

from torchreid.utils import check_isfile
from deep_sort.deep.reid.torchreid.models.model_ZQP import Net


class FeatureExtractor(object):
    def __init__(
        self,
        model_name='',
        model_path='',
        image_size=(64, 128),
        pixel_mean=[0.485, 0.456, 0.406],
        pixel_std=[0.229, 0.224, 0.225],
        pixel_norm=True,
        device='cuda'
    ):
        # build ZQPei's network directly; model_name is kept only for interface compatibility
        self.net = Net(pretrained=True)
        self.device = device if torch.cuda.is_available() else 'cpu'
        if model_path and check_isfile(model_path):
            # ckpt.t7 stores its weights under the key 'net_dict'
            state_dict = torch.load(
                model_path, map_location=torch.device(self.device)
            )['net_dict']
            self.net.load_state_dict(state_dict)
        self.net.eval()
        self.net.to(self.device)

        # ZQPei's model expects 64x128 (w, h) crops
        self.size = image_size
        self.norm = T.Compose([
            T.ToTensor(),
            T.Normalize(pixel_mean, pixel_std),
        ])

    def _preprocess(self, im_crops):
        def _resize(im, size):
            return cv2.resize(im.astype(np.float32) / 255., size)

        im_batch = torch.cat(
            [self.norm(_resize(im, self.size)).unsqueeze(0) for im in im_crops],
            dim=0
        ).float()
        return im_batch

    def __call__(self, im_crops):
        im_batch = self._preprocess(im_crops)
        with torch.no_grad():
            im_batch = im_batch.to(self.device)
            features = self.net(im_batch)
        return features
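A usage sketch for this ckpt.t7-only extractor (the checkpoint path and the dummy crops are assumptions, only meant to show the expected input format of HxWx3 uint8 crops):

import numpy as np

extractor = FeatureExtractor(
    model_path='deep_sort/deep/checkpoint/ckpt.t7',
    device='cuda'
)
crops = [np.random.randint(0, 255, (128, 64, 3), dtype=np.uint8) for _ in range(4)]
feats = extractor(crops)
print(feats.shape)   # one feature vector per crop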
Summary

Compatible approach:
(1) Modify deep_sort.yaml: set the weight-file path and the model name.
(2) Change the reid model name on the track.py command line.
(3) Create model_ZQP.py from ZQPei's model.py.
(4) Add the model name ZQP to __init__.py.
(5) In feature_extractor.py, change image_size and num_classes, and add the import of the reid model.
(6) In torchtools.py, map net_dict to state_dict.

Non-compatible approach:
(1) Modify deep_sort.yaml: set the weight-file path and the model name.
(2) Change the reid model name on the track.py command line.
(3) Create model_ZQP.py from ZQPei's model.py.
(4) Add the model name ZQP to __init__.py.
(5) Replace feature_extractor.py.