1. Installing PaddlePaddle
1.1 Creating and testing the paddle_env environment
conda create -n paddle_env python=3.9 -y
conda activate paddle_env
conda install paddlepaddle-gpu==2.2.2 cudatoolkit=10.2 --channel https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/Paddle/ -y
python -c 'import paddle; paddle.utils.run_check()'
python -c "import paddle; print(paddle.__version__)"
1.2 Testing PaddleDetection
git clone https://github.do/https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection
python setup.py install
python ppdet/modeling/tests/test_architectures.py
export CUDA_VISIBLE_DEVICES=0
python tools/infer.py -c configs/ppyolo/ppyolo_r50vd_dcn_1x_coco.yml -o use_gpu=true weights=https://paddledet.bj.bcebos.com/models/ppyolo_r50vd_dcn_1x_coco.pdparams --infer_img=demo/000000014439.jpg
2. Dataset Preparation
python dataset/voc/download_voc.py
"""
note: 通过 ariac2 下载的会提示 md5sum 不符合, 不知道是不是我的问题; 我是通过aistudio保存并下载的, 放置到了硬盘<扬帆起航: /LY/datasets/voc>
1. 首先下载数据集: https://aistudio.baidu.com/aistudio/datasetdetail/9837
2. 将3个压缩包放入: {PaddleDetection}/dataset/voc/
3. 修改代码: vim {anaconda3/envs/paddle_env}/lib/python3.9/site-packages/paddledet-2.3.0-py3.9.egg/ppdet/utils/download.py 的 395行附近, 在下面添加 `return fullname`
4. 运行 `python dataset/voc/download_voc.py `
"""
The full VOC dataset above is fairly large, so I use a smaller one instead:
python dataset/roadsign_voc/download_roadsign_voc.py
After downloading, the dataset layout is:
├── download_roadsign_voc.py
├── annotations
│   ├── road0.xml
│   ├── road1.xml
│   └── ...
├── images
│   ├── road0.png
│   ├── road1.png
│   └── ...
├── label_list.txt
├── train.txt
└── valid.txt
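The annotations follow the usual Pascal VOC XML layout. As a quick sanity check, the sketch below parses one annotation file with the standard library; the file path is just an example and the field names assume the standard VOC schema:

```python
import xml.etree.ElementTree as ET

# Print every labelled box in one roadsign annotation (Pascal VOC layout).
tree = ET.parse("dataset/roadsign_voc/annotations/road0.xml")
for obj in tree.getroot().iter("object"):
    name = obj.find("name").text
    box = obj.find("bndbox")
    xmin, ymin, xmax, ymax = (int(float(box.find(t).text))
                              for t in ("xmin", "ymin", "xmax", "ymax"))
    print(name, xmin, ymin, xmax, ymax)
```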
3. Config Files: Explanation and Modification Guide
configs/yolov3/yolov3_mobilenet_v1_roadsign.yml
The file contents are as follows:
_BASE_: [
  '../datasets/roadsign_voc.yml',
  '../runtime.yml',
  '_base_/optimizer_40e.yml',
  '_base_/yolov3_mobilenet_v1.yml',
  '_base_/yolov3_reader.yml',
]

pretrain_weights: https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_270e_coco.pdparams
weights: output/yolov3_mobilenet_v1_roadsign/model_final

YOLOv3Loss:
  ignore_thresh: 0.7
  label_smooth: true
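The files listed under `_BASE_` are merged first, and any key redefined in this file (such as `YOLOv3Loss`) overrides the inherited value. To inspect the merged result, something like the following should work (a sketch assuming the `load_config` helper in `ppdet.core.workspace`, which the tools/*.py scripts use; run it from the PaddleDetection root):

```python
from ppdet.core.workspace import load_config

# Merge the yml with everything listed under _BASE_ into a single config object.
cfg = load_config("configs/yolov3/yolov3_mobilenet_v1_roadsign.yml")
print(cfg["pretrain_weights"])
print(cfg["YOLOv3Loss"])  # should reflect the overridden ignore_thresh / label_smooth
```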
4. Training
Single-GPU training:
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml
Multi-GPU training:
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
python -m paddle.distributed.launch --gpus 0,1,2,3,4,5,6,7 tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml
Fine-tune from previously saved weights (overriding pretrain_weights):
export CUDA_VISIBLE_DEVICES=0
python -m paddle.distributed.launch --gpus 0 tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -o pretrain_weights=output/model_final
Resume training from a checkpoint (-r / --resume_checkpoint):
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -r output/faster_rcnn_r50_1x_coco/10000
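For reference, tools/train.py is a thin wrapper around the Trainer class; a simplified sketch of what it does under the hood (based on the PaddleDetection 2.x API, details may differ between releases):

```python
from ppdet.core.workspace import load_config
from ppdet.engine import Trainer

# Roughly what tools/train.py does: build the trainer from the merged config,
# load the pretrained weights, then run the training loop (optionally with eval).
cfg = load_config("configs/yolov3/yolov3_mobilenet_v1_roadsign.yml")
trainer = Trainer(cfg, mode="train")
trainer.load_weights(cfg.pretrain_weights)
trainer.train(validate=False)
```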
5. Evaluation
Evaluate with the officially released weights:
export CUDA_VISIBLE_DEVICES=0
python tools/eval.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -o weights=https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_roadsign.pdparams
Evaluate with locally trained weights:
export CUDA_VISIBLE_DEVICES=0
python tools/eval.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml -o weights=output/yolov3_mobilenet_v1_roadsign/model_final.pdparams
Evaluate while training (pass --eval to tools/train.py):
export CUDA_VISIBLE_DEVICES=0
python -m paddle.distributed.launch --gpus 0 tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml --eval
Evaluate from an existing result json (--json_eval):
export CUDA_VISIBLE_DEVICES=0
python tools/eval.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml \
    --json_eval \
    --output_eval evaluation/
6. Prediction
Predict with the officially released weights:
python tools/infer.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml --infer_img=demo/000000570688.jpg -o weights=https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_roadsign.pdparams
Predict with locally trained weights:
export CUDA_VISIBLE_DEVICES=0
python tools/infer.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml \
    --infer_img=demo/road554.png \
    --output_dir=infer_output/ \
    --draw_threshold=0.5 \
    -o weights=output/yolov3_mobilenet_v1_roadsign/model_final \
    --use_vdl=true
7. Training Visualization
- loss curve
- mAP curve
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml \
    --use_vdl=true \
    --vdl_log_dir=vdl_dir/scalar
Then start the VisualDL service to view the curves:
visualdl --logdir vdl_dir/scalar/
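The scalars recorded by --use_vdl are written with VisualDL's LogWriter API; a minimal standalone sketch of that logging pattern (dummy values, same log directory as above):

```python
from visualdl import LogWriter

# Write a toy scalar curve to the same directory that `visualdl --logdir` serves.
with LogWriter(logdir="vdl_dir/scalar") as writer:
    for step, loss in enumerate([2.3, 1.7, 1.2, 0.9]):
        writer.add_scalar(tag="train/loss", step=step, value=loss)
```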
note: A more complete list of parameters (https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/GETTING_STARTED_cn.md):
| FLAG | Supported scripts | Purpose | Default | Notes |
|---|---|---|---|---|
| `-c` | ALL | Specify the config file | None | Required, e.g. `-c configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml` |
| `-o` | ALL | Set or override parameters from the config file | None | Takes priority over the config given by `-c`, e.g. `-o use_gpu=False` |
| `--eval` | train | Evaluate while training | False | Just pass `--eval` |
| `-r` / `--resume_checkpoint` | train | Checkpoint path to resume training from | None | e.g. `-r output/faster_rcnn_r50_1x_coco/10000` |
| `--slim_config` | ALL | Config file of the model compression strategy | None | e.g. `--slim_config configs/slim/prune/yolov3_prune_l1_norm.yml` |
| `--use_vdl` | train/infer | Record data with VisualDL for display in the VisualDL dashboard | False | VisualDL requires Python >= 3.5 |
| `--vdl_log_dir` | train/infer | Directory where VisualDL stores its records | train: `vdl_log_dir/scalar`; infer: `vdl_log_dir/image` | VisualDL requires Python >= 3.5 |
| `--output_eval` | eval | Directory to save the evaluation json | None | e.g. `--output_eval=eval_output`; defaults to the current directory |
| `--json_eval` | eval | Evaluate from an existing bbox.json or mask.json | False | Just pass `--json_eval`; the json path is set via `--output_eval` |
| `--classwise` | eval | Evaluate per-class AP and draw per-class PR curves | False | Just pass `--classwise` |
| `--output_dir` | infer/export_model | Directory to save prediction results or the exported model | ./output | e.g. `--output_dir=output` |
| `--draw_threshold` | infer | Score threshold for visualization | 0.5 | e.g. `--draw_threshold=0.7` |
| `--infer_dir` | infer | Directory of images to run prediction on | None | At least one of `--infer_img` and `--infer_dir` must be set |
| `--infer_img` | infer | Path of a single image to predict | None | At least one of `--infer_img` and `--infer_dir` must be set; `--infer_img` takes priority |
| `--save_txt` | infer | Save per-image prediction results as text files in the output folder | False | Optional |
8. Model Export
The model files saved during training contain both the forward pass and the backpropagation graph. Industrial deployment does not need backpropagation, so the model has to be exported into the format required for deployment. PaddleDetection provides the tools/export_model.py script for this (https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md):
python tools/export_model.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml --output_dir=./inference_model -o weights=output/yolov3_mobilenet_v1_roadsign/best_model
The inference model is exported to the inference_model/yolov3_mobilenet_v1_roadsign directory as infer_cfg.yml, model.pdiparams, model.pdiparams.info, and model.pdmodel. If no output directory is specified, the model is exported to output_inference.
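Once exported, the model can be loaded with Paddle's native inference API. A minimal loading sketch assuming the export command above (the preprocessing described in infer_cfg.yml is omitted here; this only verifies that the graph loads):

```python
from paddle.inference import Config, create_predictor

# Point at the exported graph and weights from tools/export_model.py.
cfg = Config("inference_model/yolov3_mobilenet_v1_roadsign/model.pdmodel",
             "inference_model/yolov3_mobilenet_v1_roadsign/model.pdiparams")
cfg.enable_use_gpu(200, 0)  # 200 MB initial GPU memory pool on device 0
predictor = create_predictor(cfg)

# For YOLOv3 exports the inputs are typically image / im_shape / scale_factor,
# as described in infer_cfg.yml.
print(predictor.get_input_names())
```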
9. Model Compression
Docs: https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/configs/slim/README.md
pip install paddleslim -i https://pypi.tuna.tsinghua.edu.cn/simple
10. Inference Deployment
Docs: https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/python.md
python deploy/python/infer.py --model_dir=./output_inference/yolov3_mobilenet_v1_roadsign --image_file=demo/road554.png --device=GPU
References
@online{PaddlePaddle2022Mar,
  author       = {PaddlePaddle},
  title        = {{PaddleDetection}},
  organization = {GitHub},
  year         = {2022},
  month        = {3},
  date         = {2022-03-30},
  urldate      = {2022-03-30},
  note         = {[Online; accessed 30. Mar. 2022]},
  url          = {https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/docs/tutorials/GETTING_STARTED_cn.md}
}