Xin Li committed · Commit 35793b2 · 1 Parent(s): 2a7e607

doc(ncnn): update README (#1519)

Files changed:
- demo/ncnn/README.md +8 -0
- demo/ncnn/cpp/README.md +29 -9
demo/ncnn/README.md
ADDED
@@ -0,0 +1,8 @@
# YOLOX-ncnn

Compile files of YOLOX object detection based on [ncnn](https://github.com/Tencent/ncnn).
YOLOX is now included in ncnn itself, so you could also try building it directly from ncnn, which is the better option.

## Acknowledgement

* [ncnn](https://github.com/Tencent/ncnn)
demo/ncnn/cpp/README.md
CHANGED
@@ -1,7 +1,6 @@
# YOLOX-CPP-ncnn

C++ compilation of YOLOX object detection based on [ncnn](https://github.com/Tencent/ncnn).
- YOLOX is included in ncnn now, you could also try building from ncnn, it's better.

## Tutorial
@@ -9,13 +8,13 @@ YOLOX is included in ncnn now, you could also try building from ncnn, it's bette
Clone [ncnn](https://github.com/Tencent/ncnn) first, then please follow the [build tutorial of ncnn](https://github.com/Tencent/ncnn/wiki/how-to-build) to build it on your own device.

### Step2
+ First, we try the original onnx2ncnn solution, using the provided tools to generate an onnx file.
For example, if you want to generate the onnx file of yolox-s, please run the following command:
```shell
cd <path of yolox>
python3 tools/export_onnx.py -n yolox-s
```
- Then
+ Then a yolox.onnx file is generated.

### Step3
Generate ncnn param and bin file.
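Step1 only points at the ncnn build wiki. As a concrete starting point, here is a minimal Linux/CMake sketch; the CMake options are assumptions based on current ncnn trees, so defer to the wiki for your platform. This build also produces the conversion tools used in Step3.

```shell
# Minimal ncnn build sketch (Linux + CMake); see the ncnn build wiki for other platforms.
git clone https://github.com/Tencent/ncnn.git
cd ncnn
git submodule update --init                 # pulls glslang and other bundled dependencies
mkdir -p build && cd build
cmake .. -DNCNN_BUILD_EXAMPLES=ON -DNCNN_BUILD_TOOLS=ON
make -j$(nproc)
```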
@@ -25,14 +24,15 @@ cd build/tools/ncnn
./onnx2ncnn yolox.onnx model.param model.bin
```

- Since Focus module is not supported in ncnn.
+ Since the Focus module is not supported in ncnn, you will see warnings like:
```shell
- Unsupported slice step
+ Unsupported slice step!
```
+ However, don't worry about this, as a C++ version of the Focus layer is already implemented in yolox.cpp.

### Step4
- Open **model.param**, and modify it.
+ Open **model.param** and modify it. For more information on the ncnn param and model file structure, please take a look at this [wiki](https://github.com/Tencent/ncnn/wiki/param-and-model-file-structure).

Before (just an example):
```
295 328
@@ -49,7 +49,7 @@ Crop Slice_39 1 1 677 682 -23309=1,1 -23310=1,214748
Concat Concat_40 4 1 652 672 662 682 683 0=0
...
```
- * Change first number for 295 to 295 - 9 = 286(since we will remove 10 layers and add 1 layers, total layers number should minus 9).
+ * Change the first number 295 to 295 - 9 = 286 (since we will remove 10 layers and add 1 layer, the total layer count should decrease by 9).
* Then remove the 10 lines from Split to Concat, but remember the second-to-last number on the Concat line: 683.
* Add a YoloV5Focus layer after the Input layer (reusing the previous number 683):
```
@@ -71,7 +71,7 @@ Use ncnn_optimize to generate new param and bin:
```

### Step6
- Copy or Move yolox.cpp file into ncnn/examples, modify the CMakeList.txt, then build
+ Copy or move the yolox.cpp file into ncnn/examples, modify the CMakeLists.txt to add our implementation, then build.

### Step7
Run inference on an image with the yolox executable and enjoy the detection result:
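Step6 gives no commands, so here is a rough sketch of what it amounts to, assuming ncnn was cloned and built as in Step1. The ncnn_add_example() helper and the NCNN_BUILD_EXAMPLES option exist in recent ncnn checkouts, but treat both as assumptions: check your examples/CMakeLists.txt and fall back to a manual add_executable/target_link_libraries pair if the macro is missing.

```shell
# Rough sketch of Step6; <path of ncnn> is the directory where ncnn was cloned in Step1.
cp yolox.cpp <path of ncnn>/examples/
# Register the new example; recent ncnn versions provide an ncnn_add_example() macro
# in examples/CMakeLists.txt, so one extra line there is usually enough.
echo 'ncnn_add_example(yolox)' >> <path of ncnn>/examples/CMakeLists.txt
cd <path of ncnn>/build
cmake .. -DNCNN_BUILD_EXAMPLES=ON
make -j$(nproc)
```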
@@ -79,6 +79,26 @@ Inference image with executable file yolox, enjoy the detect result:
./yolox demo.jpg
```

+ ### Bonus Solution
+ ncnn has released another model conversion tool, [pnnx](https://zhuanlan.zhihu.com/p/427620428), which finishes the pytorch-to-ncnn conversion directly via TorchScript, so we can also try that route.
+
+ ```shell
+ # take yolox-s as an example
+ python3 tools/export_torchscript.py -n yolox-s -c /path/to/your_checkpoint_files
+ ```
+ Then a `yolox.torchscript.pt` will be generated. Copy this file to your pnnx build directory (pnnx also provides pre-built packages [here](https://github.com/pnnx/pnnx/releases/tag/20220720)).
+
+ ```shell
+ # suppose you put the yolox.torchscript.pt in a separate folder
+ ./pnnx yolox/yolox.torchscript.pt inputshape=[1,3,640,640]
+ # for zsh users, please use inputshape='[1,3,640,640]'
+ ```
+ Still, since ncnn does not support the `slice` op, as mentioned in [Step3](https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/ncnn/cpp#step3), you will still see the same warnings during this process.
+
+ Then multiple pnnx-related files will be generated in your yolox folder. Use `yolox.torchscript.ncnn.param` and `yolox.torchscript.ncnn.bin` as your converted model.
+
+ Then we can go back to [Step4](https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/ncnn/cpp#step4) for the rest of the implementation.
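The two pnnx outputs are regular ncnn param/bin files, but the yolox example typically loads hard-coded file names. The names below are assumptions for illustration; check the load_param()/load_model() calls in your yolox.cpp and rename to match before running it:

```shell
# Hypothetical renaming step: adjust the target names to whatever your yolox.cpp
# actually passes to load_param()/load_model().
cp yolox/yolox.torchscript.ncnn.param yolox.param
cp yolox/yolox.torchscript.ncnn.bin yolox.bin
```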
## Acknowledgement

* [ncnn](https://github.com/Tencent/ncnn)