1. Key Steps
- Import the image
docker load -i object-detection-v1.0-gpu-nginx.tar.xz
805802706667: Loading layer [==================================================>] 65.61MB/65.61MB
3fd9df553184: Loading layer [==================================================>] 15.87kB/15.87kB
7a694df0ad6c: Loading layer [==================================================>] 3.072kB/3.072kB
964ee116c0c0: Loading layer [==================================================>] 17.1MB/17.1MB
ef8330bcc944: Loading layer [==================================================>] 30.5MB/30.5MB
53194dce1444: Loading layer [==================================================>] 22.02kB/22.02kB
daf57e1d9792: Loading layer [==================================================>] 3.776GB/3.776GB
38482f47bc58: Loading layer [==================================================>] 432.2MB/432.2MB
d5f0eff44d91: Loading layer [==================================================>] 43.52kB/43.52kB
2ae012a1a57f: Loading layer [==================================================>] 136.2MB/136.2MB
c3c619e5af23: Loading layer [==================================================>] 13.05MB/13.05MB
35bd243339b0: Loading layer [==================================================>] 3.072kB/3.072kB
86be072aca6f: Loading layer [==================================================>] 1.082GB/1.082GB
a5e8cef3a916: Loading layer [==================================================>] 4.096kB/4.096kB
fc608a6d4b4a: Loading layer [==================================================>] 4.096kB/4.096kB
f92536afb0dd: Loading layer [==================================================>] 1.324GB/1.324GB
cef87c2af37e: Loading layer [==================================================>] 23.27MB/23.27MB
31b51f9f42d2: Loading layer [==================================================>] 11.26kB/11.26kB
dac31b13a735: Loading layer [==================================================>] 3.584kB/3.584kB
2086dcc27289: Loading layer [==================================================>] 4.608kB/4.608kB
9a0dcae8e4ee: Loading layer [==================================================>] 281.6MB/281.6MB
af0c7b90d1c5: Loading layer [==================================================>] 13.31kB/13.31kB
ef12689bc5cf: Loading layer [==================================================>] 3.584kB/3.584kB
a0c55190fbd9: Loading layer [==================================================>] 970.3MB/970.3MB
Loaded image: cmit/object_detection:v1.0-gpu-nginx
- Create the container
docker run -p 8000:3000 --rm --gpus all -it cmit/object_detection:v1.0-gpu-nginx /bin/bash
(--gpus all passes the host GPUs into the container and requires the NVIDIA Container Toolkit; see Section 2 below if this step fails.)
(TensorFlow ASCII-art startup banner)
WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u $(id -u):$(id -g) args...
(ASCII-art welcome banner)
This Container is used for
* AI能力 - 目标检测 (AI capability: object detection)
The custom path
* /app
Start service
* /app/start.sh
- Edit the model path in test.py
/app/Algorithm-Source-Code/automl-master-20210310/efficientdet/test.py
saved_model_path = 'saved_model_fineture/efficientdet-d7x_frozen.pb'
change to
saved_model_path = 'saved_model/efficientdet-d7x_frozen.pb'
- Run the model (a sketch of roughly what test.py does is shown after this list)
python /app/Algorithm-Source-Code/automl-master-20210310/efficientdet/test.py
[(0.90894175, 'dog', 405, 64, 591, 535), (0.86522406, 'dog', 27, 24, 286, 525), (0.8456877, 'person', 317, 1, 393, 537)]
- Model and test-image paths
/app/Algorithm-Source-Code/automl-master-20210310/efficientdet/saved_model
testdata/img1.jpg
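For reference, below is a minimal sketch of what a frozen-graph inference script such as test.py typically does. It is an illustration rather than the actual contents of test.py: the tensor names ('image_arrays:0', 'detections:0') follow the automl EfficientDet inference driver but should be treated as assumptions, as should the output layout. It assumes the image ships TensorFlow 2.x and uses the v1 compatibility API.

```python
# Illustrative only: roughly how a frozen EfficientDet graph is loaded and run.
# Tensor names and output layout are assumptions, not copied from test.py.
import numpy as np
import tensorflow.compat.v1 as tf
from PIL import Image

tf.disable_eager_execution()

saved_model_path = 'saved_model/efficientdet-d7x_frozen.pb'

graph_def = tf.GraphDef()
with tf.gfile.GFile(saved_model_path, 'rb') as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

# Batch of one image, as raw uint8 pixels.
img = np.array(Image.open('testdata/img1.jpg'))[None, ...]

with tf.Session(graph=graph) as sess:
    detections = sess.run('detections:0', feed_dict={'image_arrays:0': img})

print(detections)
```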
2. Possible Issues
tensorflow.python.framework.errors_impl.NotFoundError: saved_model_finetune/efficientdet-d7x_frozen.pb; No such file or directory
Cause:
The .pb path written in test.py is wrong.
Fix:
Correct the .pb path in test.py:
saved_model_path = 'saved_model/efficientdet-d7x_frozen.pb'
# saved_model_path = 'saved_model_finetune/efficientdet-d7x_frozen.pb'
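Before rerunning, a quick throwaway check (not part of test.py) confirms the corrected path actually resolves. Note that it is a relative path, so Python resolves it against the current working directory, which is why the script should be run from the efficientdet directory (or the path made absolute):

```python
import os

saved_model_path = 'saved_model/efficientdet-d7x_frozen.pb'
# Relative paths resolve against the current working directory, not the
# directory test.py lives in, so cd to .../efficientdet before running.
print(os.path.abspath(saved_model_path), os.path.exists(saved_model_path))
```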
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
Cause:
The current Docker setup does not support GPUs: either Docker is too old to understand the --gpus flag, or the NVIDIA Container Toolkit (nvidia-docker2) is not installed.
Fix:
1. Uninstall Docker
2. Reinstall a recent Docker version together with the NVIDIA Container Toolkit
2021-08-17 09:35:54.838342: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: UNKNOWN ERROR (-1)
2021-08-17 09:35:54.838362: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] no NVIDIA GPU device is present: /dev/nvidia0 does not exist
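This message usually means the container cannot see an NVIDIA GPU at all, for example because it was started without --gpus all or because the host driver is not available. A quick check from inside the container (assuming the image ships TensorFlow 2.x):

```python
import tensorflow as tf

# An empty list means no GPU was passed into the container
# (e.g. --gpus all was omitted) or the host driver is not loaded.
print(tf.config.list_physical_devices('GPU'))
```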
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: requirement error: unsatisfied condition: cuda>=11.0, please update your driver to a newer version, or use an earlier cuda container: unknown.
Cause:
My host had CUDA 10.0, while the image requires CUDA >= 11.0, so the versions do not match (as the error message indicates, the host NVIDIA driver is too old for a CUDA 11 container).
Fix:
Install CUDA 11.0 alongside the existing version (multi-version coexistence) and switch to it; the CUDA 11.0 install typically also brings a new enough driver.
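As an optional diagnostic (not from the original post), the installed driver version can be checked on the host before retrying; CUDA 11.x containers need a sufficiently new driver (roughly the 450 series or later for CUDA 11.0; see NVIDIA's compatibility table for exact numbers):

```python
# Run on the host: prints the installed NVIDIA driver version.
# The file exists only when the NVIDIA kernel driver is loaded.
with open('/proc/driver/nvidia/version') as f:
    print(f.read())
```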
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: No algorithm worked!
[[{{node efficientnet-b7/stem/conv2d/Conv2D}}]]
[[strided_slice_19/_35]]
(1) Not found: No algorithm worked!
[[{{node efficientnet-b7/stem/conv2d/Conv2D}}]]
0 successful operations.
0 derived errors ignored.
Reference: [Fixing "conda tensorflow failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED"](https://www.cnblogs.com/xbit/p/11336962.html)
Cause:
Not enough GPU memory; configure the GPU options so that memory is requested on demand instead of being allocated all at once.
Fix:
vi /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py
# Find BaseSession, the base class of tf.Session, and add the allow_growth line inside its __init__ where the config is set up:
if config is None:
  config = context.context().config
# Grow GPU memory as needed at the cost of fragmentation.
config.gpu_options.allow_growth = True
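Patching the installed session.py works but is global and will be lost if TensorFlow is reinstalled. An alternative (not from the original post) is to enable memory growth in the calling script itself, e.g. near the top of test.py, assuming TF 2.x:

```python
import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing it all up front;
# this must run before the first GPU op initializes the device.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```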