Songyang Zhang committed
Commit · b4604ea
Parent(s): 7e1e02f

build webpage of documentation (#352)
- .readthedocs.yaml +21 -0
- demo/MegEngine/cpp/README.md +110 -79
- demo/ONNXRuntime/README.md +2 -1
- demo/OpenVINO/cpp/README.md +1 -0
- demo/OpenVINO/python/README.md +2 -1
- demo/TensorRT/cpp/README.md +1 -1
- docs/.gitignore +1 -0
- docs/Makefile +19 -0
- docs/_static/css/custom.css +31 -0
- docs/conf.py +384 -0
- docs/demo/megengine_cpp_readme.md +1 -0
- docs/demo/megengine_py_readme.md +1 -0
- docs/demo/ncnn_android_readme.md +1 -0
- docs/demo/ncnn_cpp_readme.md +1 -0
- docs/demo/onnx_readme.md +1 -0
- docs/demo/openvino_cpp_readme.md +1 -0
- docs/demo/openvino_py_readme.md +1 -0
- docs/demo/trt_cpp_readme.md +1 -0
- docs/demo/trt_py_readme.md +1 -0
- docs/index.rst +32 -0
- docs/model_zoo.md +18 -0
- docs/quick_run.md +101 -0
- docs/requirements-doc.txt +8 -0
- docs/train_custom_data.md +10 -9
.readthedocs.yaml
ADDED
@@ -0,0 +1,21 @@
```yaml
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Optionally build your docs in additional formats such as PDF
formats:
  - pdf

# Optionally set the version of Python and requirements required to build your docs
python:
  version: "3.7"
  install:
    - requirements: docs/requirements-doc.txt
    - requirements: requirements.txt
```
demo/MegEngine/cpp/README.md
CHANGED
@@ -12,85 +12,108 @@ Cpp file compile of YOLOX object detection base on [MegEngine](https://github.co
### Step2: build MegEngine

```shell
git clone https://github.com/MegEngine/MegEngine.git

# then init third_party
export megengine_root="path of MegEngine"
cd $megengine_root && ./third_party/prepare.sh && ./third_party/install-mkl.sh

# build example:
# build host without cuda:
./scripts/cmake-build/host_build.sh
# or build host with cuda:
./scripts/cmake-build/host_build.sh -c
# or cross build for android aarch64:
./scripts/cmake-build/cross_build_android_arm_inference.sh
# or cross build for android aarch64 (with V8.2+fp16):
./scripts/cmake-build/cross_build_android_arm_inference.sh -f

# after building MegEngine, you need to export `MGE_INSTALL_PATH`
# host without cuda:
export MGE_INSTALL_PATH=${megengine_root}/build_dir/host/MGE_WITH_CUDA_OFF/MGE_INFERENCE_ONLY_ON/Release/install
# or host with cuda:
export MGE_INSTALL_PATH=${megengine_root}/build_dir/host/MGE_WITH_CUDA_ON/MGE_INFERENCE_ONLY_ON/Release/install
# or cross build for android aarch64:
export MGE_INSTALL_PATH=${megengine_root}/build_dir/android/arm64-v8a/Release/install
```
* you can refer to the [build tutorial of MegEngine](https://github.com/MegEngine/MegEngine/blob/master/scripts/cmake-build/BUILD_README.md) to build for other platforms, e.g. Windows/macOS etc.

### Step3: build OpenCV

```shell
git clone https://github.com/opencv/opencv.git

git checkout 3.4.15  # we tested with 3.4.15; other versions may need some build modifications
```

- patch diff for android:

```diff
diff --git a/CMakeLists.txt b/CMakeLists.txt
index f6a2da5310..10354312c9 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -643,7 +643,7 @@ if(UNIX)
   if(NOT APPLE)
     CHECK_INCLUDE_FILE(pthread.h HAVE_PTHREAD)
     if(ANDROID)
-      set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} dl m log)
+      set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} dl m log z)
     elseif(CMAKE_SYSTEM_NAME MATCHES "FreeBSD|NetBSD|DragonFly|OpenBSD|Haiku")
       set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} m pthread)
     elseif(EMSCRIPTEN)
```

- build for host

```shell
cd root_dir_of_opencv
mkdir -p build/install
cd build
cmake -DBUILD_JAVA=OFF -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=$PWD/install ..
make install -j32
```

* build for android-aarch64

```shell
cd root_dir_of_opencv
mkdir -p build_android/install
cd build_android

cmake -DCMAKE_TOOLCHAIN_FILE="$NDK_ROOT/build/cmake/android.toolchain.cmake" -DANDROID_NDK="$NDK_ROOT" -DANDROID_ABI=arm64-v8a -DANDROID_NATIVE_API_LEVEL=21 -DBUILD_JAVA=OFF -DBUILD_ANDROID_PROJECTS=OFF -DBUILD_ANDROID_EXAMPLES=OFF -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=$PWD/install ..

make install -j32
```

* after building OpenCV, you need to export `OPENCV_INSTALL_INCLUDE_PATH` and `OPENCV_INSTALL_LIB_PATH`

```shell
# host build:
export OPENCV_INSTALL_INCLUDE_PATH=${path of opencv}/build/install/include
export OPENCV_INSTALL_LIB_PATH=${path of opencv}/build/install/lib
# or cross build for android aarch64:
export OPENCV_INSTALL_INCLUDE_PATH=${path of opencv}/build_android/install/sdk/native/jni/include
export OPENCV_INSTALL_LIB_PATH=${path of opencv}/build_android/install/sdk/native/libs/arm64-v8a
```

### Step4: build test demo

```shell
# run build.sh
# if host:
export CXX=g++
./build.sh
# or cross android aarch64:
export CXX=aarch64-linux-android21-clang++
./build.sh
```

### Step5: run demo

@@ -99,30 +122,38 @@
> * reference to the python demo's `dump.py` script.
> * wget https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s.mge

```shell
# if host:
LD_LIBRARY_PATH=$MGE_INSTALL_PATH/lib/:$OPENCV_INSTALL_LIB_PATH ./yolox yolox_s.mge ../../../assets/dog.jpg cuda/cpu/multithread <warmup_count> <thread_number>

# or cross android
adb push/scp $MGE_INSTALL_PATH/lib/libmegengine.so android_phone
adb push/scp $OPENCV_INSTALL_LIB_PATH/*.so android_phone
adb push/scp ./yolox yolox_s.mge android_phone
adb push/scp ../../../assets/dog.jpg android_phone

# log in to the android phone by adb or ssh
# then run:
LD_LIBRARY_PATH=. ./yolox yolox_s.mge dog.jpg cpu/multithread <warmup_count> <thread_number> <use_fast_run> <use_weight_preprocess> <run_with_fp16>

# * <warmup_count> means warmup count, valid number >= 0
# * <thread_number> means thread number, valid number >= 1, only takes effect for the `multithread` device
# * <use_fast_run> if >= 1, will use fastrun to choose the best algo
# * <use_weight_preprocess> if >= 1, will handle weight preprocess before execution
# * <run_with_fp16> if >= 1, will run in fp16 mode
```

## Benchmark

* model info: yolox-s @ input(1,3,640,640)

* test devices

```
  * x86_64 -- Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
  * aarch64 -- Xiaomi Mi 9 phone
  * cuda -- 1080TI @ cuda-10.1-cudnn-v7.6.3-TensorRT-6.0.1.5.sh @ Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
```

| megengine @ tag1.4(fastrun + weight\_preprocess)/sec | 1 thread |
| ---------------------------------------------------- | -------- |
demo/ONNXRuntime/README.md
CHANGED
@@ -3,6 +3,7 @@
This doc introduces how to convert your PyTorch model into ONNX, and how to run an ONNXRuntime demo to verify your conversion.

### Download ONNX models.

| Model | Parameters | GFLOPs | Test Size | mAP | Weights |
|:------| :----: | :----: | :---: | :---: | :---: |
| YOLOX-Nano | 0.91M | 1.08 | 416x416 | 25.3 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EfAGwvevU-lNhW5OqFAyHbwBJdI_7EaKu5yU04fgF5BU7w?e=gvq4hf)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_nano.onnx) |

@@ -29,7 +30,7 @@ python3 tools/export_onnx.py --output-name yolox_s.onnx -n yolox-s -c yolox_s.pt
Notes:
* -n: specify a model name. The model name must be one of [yolox-s,m,l,x, yolox-nano, yolox-tiny, yolov3]
* -c: the model you have trained
* -o: opset version, default 11. **However, if you will further convert your onnx model to [OpenVINO](https://github.com/Megvii-BaseDetection/YOLOX/demo/OpenVINO/), please specify the opset version to 10.** (see the example below)
* --no-onnxsim: disable onnxsim
* To customize an input shape for the onnx model, modify the following code in tools/export.py:
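For example, an export intended for a later OpenVINO conversion would pass opset 10 (the checkpoint path is illustrative):

```shell
# export with opset 10 so the model can later be converted to OpenVINO
python3 tools/export_onnx.py --output-name yolox_s.onnx -n yolox-s -c yolox_s.pth -o 10
```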
demo/OpenVINO/cpp/README.md
CHANGED
@@ -3,6 +3,7 @@
This tutorial includes a C++ demo for OpenVINO, as well as some converted models.

### Download OpenVINO models.

| Model | Parameters | GFLOPs | Test Size | mAP | Weights |
|:------| :----: | :----: | :---: | :---: | :---: |
| [YOLOX-Nano](../../../exps/nano.py) | 0.91M | 1.08 | 416x416 | 25.3 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EeWY57o5wQZFtXYd1KJw6Z8B4vxZru649XxQHYIFgio3Qw?e=ZS81ce)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_nano_openvino.tar.gz) |
demo/OpenVINO/python/README.md
CHANGED
@@ -3,6 +3,7 @@
This tutorial includes a Python demo for OpenVINO, as well as some converted models.

### Download OpenVINO models.

| Model | Parameters | GFLOPs | Test Size | mAP | Weights |
|:------| :----: | :----: | :---: | :---: | :---: |
| [YOLOX-Nano](../../../exps/default/nano.py) | 0.91M | 1.08 | 416x416 | 25.3 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EeWY57o5wQZFtXYd1KJw6Z8B4vxZru649XxQHYIFgio3Qw?e=ZS81ce)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_nano_openvino.tar.gz) |

@@ -51,7 +52,7 @@ source ~/.bashrc

1. Export ONNX model

Please refer to the [ONNX tutorial](https://github.com/Megvii-BaseDetection/YOLOX/demo/ONNXRuntime). **Note that you should set --opset to 10, otherwise your next step will fail.**

2. Convert ONNX to OpenVINO
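As a rough sketch of this conversion step (the `mo.py` location, input shape, and output directory here are assumptions; the rest of this README documents the exact command used by the project):

```shell
# illustrative only: run OpenVINO's Model Optimizer on the exported ONNX model
python3 mo.py --input_model yolox_s.onnx --input_shape [1,3,640,640] --output_dir ./yolox_openvino
```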
demo/TensorRT/cpp/README.md
CHANGED
@@ -6,7 +6,7 @@ our C++ demo does not include the model converting or constructing like other te
## Step 1: Prepare serialized engine file

Follow the TensorRT [python demo README](https://github.com/Megvii-BaseDetection/YOLOX/demo/TensorRT/python/README.md) to convert and save the serialized engine file.

Check the `model_trt.engine` file generated from Step 1, which will be automatically saved in the current demo dir.
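As a rough illustration of Step 1 (the model name and checkpoint path are placeholders; the python demo README linked above has the authoritative command):

```shell
# illustrative: build and serialize a TensorRT engine with the python demo tooling
python3 tools/trt.py -n yolox-s -c yolox_s.pth
```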
docs/.gitignore
ADDED
@@ -0,0 +1 @@
_build
docs/Makefile
ADDED
@@ -0,0 +1,19 @@
```make
# Minimal makefile for Sphinx documentation
# Copyright (c) Facebook, Inc. and its affiliates.

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
```
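With this Makefile and the `docs/requirements-doc.txt` added below, a local documentation build looks roughly like this:

```shell
pip3 install -r docs/requirements-doc.txt
cd docs && make html    # HTML output is written to docs/_build/html
```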
docs/_static/css/custom.css
ADDED
@@ -0,0 +1,31 @@
```css
/*
 * Copyright (c) Facebook, Inc. and its affiliates.
 * some extra css to make markdown look similar between github/sphinx
 */

/*
 * Below is for install.md:
 */
.rst-content code {
  white-space: pre;
  border: 0px;
}

.rst-content th {
  border: 1px solid #e1e4e5;
}

.rst-content th p {
  /* otherwise will be default 24px for regular paragraph */
  margin-bottom: 0px;
}

.rst-content .line-block {
  /* otherwise will be 24px */
  margin-bottom: 0px;
}

div.section > details {
  padding-bottom: 1em;
}
```
docs/conf.py
ADDED
@@ -0,0 +1,384 @@
```python
# -*- coding: utf-8 -*-
# Code are based on
# https://github.com/facebookresearch/detectron2/blob/master/docs/conf.py
# Copyright (c) Facebook, Inc. and its affiliates.
# Copyright (c) Megvii, Inc. and its affiliates.

# flake8: noqa

# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/master/config

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
from unittest import mock
from sphinx.domains import Domain
from typing import Dict, List, Tuple

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
import sphinx_rtd_theme


class GithubURLDomain(Domain):
    """
    Resolve certain links in markdown files to github source.
    """

    name = "githuburl"
    ROOT = "https://github.com/Megvii-BaseDetection/YOLOX"
    # LINKED_DOC = ["tutorials/install", "tutorials/getting_started"]
    LINKED_DOC = ["tutorials/install",]

    def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode):
        github_url = None
        if not target.endswith("html") and target.startswith("../../"):
            url = target.replace("../", "")
            github_url = url
        if fromdocname in self.LINKED_DOC:
            # unresolved links in these docs are all github links
            github_url = target

        if github_url is not None:
            if github_url.endswith("MODEL_ZOO") or github_url.endswith("README"):
                # bug of recommonmark.
                # https://github.com/readthedocs/recommonmark/blob/ddd56e7717e9745f11300059e4268e204138a6b1/recommonmark/parser.py#L152-L155
                github_url += ".md"
            print("Ref {} resolved to github:{}".format(target, github_url))
            contnode["refuri"] = self.ROOT + github_url
            return [("githuburl:any", contnode)]
        else:
            return []


# to support markdown
from recommonmark.parser import CommonMarkParser

sys.path.insert(0, os.path.abspath("../"))
os.environ["_DOC_BUILDING"] = "True"
DEPLOY = os.environ.get("READTHEDOCS") == "True"


# -- Project information -----------------------------------------------------

# fmt: off
try:
    import torch  # noqa
except ImportError:
    for m in [
        "torch", "torchvision", "torch.nn", "torch.nn.parallel", "torch.distributed", "torch.multiprocessing", "torch.autograd",
        "torch.autograd.function", "torch.nn.modules", "torch.nn.modules.utils", "torch.utils", "torch.utils.data", "torch.onnx",
        "torchvision", "torchvision.ops",
    ]:
        sys.modules[m] = mock.Mock(name=m)
    sys.modules['torch'].__version__ = "1.7"  # fake version
    HAS_TORCH = False
else:
    try:
        torch.ops.yolox = mock.Mock(name="torch.ops.yolox")
    except:
        pass
    HAS_TORCH = True

for m in [
    "cv2", "scipy", "portalocker", "yolox._C",
    "pycocotools", "pycocotools.mask", "pycocotools.coco", "pycocotools.cocoeval",
    "google", "google.protobuf", "google.protobuf.internal", "onnx",
    "caffe2", "caffe2.proto", "caffe2.python", "caffe2.python.utils", "caffe2.python.onnx", "caffe2.python.onnx.backend",
]:
    sys.modules[m] = mock.Mock(name=m)
# fmt: on
sys.modules["cv2"].__version__ = "3.4"

import yolox  # isort: skip

# if HAS_TORCH:
#     from detectron2.utils.env import fixup_module_metadata

#     fixup_module_metadata("torch.nn", torch.nn.__dict__)
#     fixup_module_metadata("torch.utils.data", torch.utils.data.__dict__)


project = "YOLOX"
copyright = "2021-2021, YOLOX contributors"
author = "YOLOX contributors"

# The short X.Y version
version = yolox.__version__
# The full version, including alpha/beta/rc tags
release = version


# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
needs_sphinx = "3.0"

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    "recommonmark",
    "sphinx.ext.autodoc",
    "sphinx.ext.napoleon",
    "sphinx.ext.intersphinx",
    "sphinx.ext.todo",
    "sphinx.ext.coverage",
    "sphinx.ext.mathjax",
    "sphinx.ext.viewcode",
    "sphinx.ext.githubpages",
    'sphinx_markdown_tables',
]

# -- Configurations for plugins ------------
napoleon_google_docstring = True
napoleon_include_init_with_doc = True
napoleon_include_special_with_doc = True
napoleon_numpy_docstring = False
napoleon_use_rtype = False
autodoc_inherit_docstrings = False
autodoc_member_order = "bysource"

if DEPLOY:
    intersphinx_timeout = 10
else:
    # skip this when building locally
    intersphinx_timeout = 0.5
intersphinx_mapping = {
    "python": ("https://docs.python.org/3.6", None),
    "numpy": ("https://docs.scipy.org/doc/numpy/", None),
    "torch": ("https://pytorch.org/docs/master/", None),
}
# -------------------------


# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

source_suffix = [".rst", ".md"]

# The master toctree document.
master_doc = "index"

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "build", "README.md", "tutorials/README.md"]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"


# -- Options for HTML output -------------------------------------------------

html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
html_css_files = ["css/custom.css"]

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}


# -- Options for HTMLHelp output ---------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = "yoloxdoc"


# -- Options for LaTeX output ------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',
    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, "yolox.tex", "yolox Documentation", "yolox contributors", "manual")
]


# -- Options for manual page output ------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, "YOLOX", "YOLOX Documentation", [author], 1)]


# -- Options for Texinfo output ----------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (
        master_doc,
        "YOLOX",
        "YOLOX Documentation",
        author,
        "YOLOX",
        "One line description of project.",
        "Miscellaneous",
    )
]


# -- Options for todo extension ----------------------------------------------

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True


def autodoc_skip_member(app, what, name, obj, skip, options):
    # we hide something deliberately
    if getattr(obj, "__HIDE_SPHINX_DOC__", False):
        return True

    # Hide some that are deprecated or not intended to be used
    HIDDEN = {
        "ResNetBlockBase",
        "GroupedBatchSampler",
        "build_transform_gen",
        "export_caffe2_model",
        "export_onnx_model",
        "apply_transform_gens",
        "TransformGen",
        "apply_augmentations",
        "StandardAugInput",
        "build_batch_data_loader",
        "draw_panoptic_seg_predictions",
        "WarmupCosineLR",
        "WarmupMultiStepLR",
    }
    try:
        if name in HIDDEN or (
            hasattr(obj, "__doc__") and obj.__doc__.lower().strip().startswith("deprecated")
        ):
            print("Skipping deprecated object: {}".format(name))
            return True
    except:
        pass
    return skip


# _PAPER_DATA = {
#     "resnet": ("1512.03385", "Deep Residual Learning for Image Recognition"),
#     "fpn": ("1612.03144", "Feature Pyramid Networks for Object Detection"),
#     "mask r-cnn": ("1703.06870", "Mask R-CNN"),
#     "faster r-cnn": (
#         "1506.01497",
#         "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
#     ),
#     "deformconv": ("1703.06211", "Deformable Convolutional Networks"),
#     "deformconv2": ("1811.11168", "Deformable ConvNets v2: More Deformable, Better Results"),
#     "panopticfpn": ("1901.02446", "Panoptic Feature Pyramid Networks"),
#     "retinanet": ("1708.02002", "Focal Loss for Dense Object Detection"),
#     "cascade r-cnn": ("1712.00726", "Cascade R-CNN: Delving into High Quality Object Detection"),
#     "lvis": ("1908.03195", "LVIS: A Dataset for Large Vocabulary Instance Segmentation"),
#     "rrpn": ("1703.01086", "Arbitrary-Oriented Scene Text Detection via Rotation Proposals"),
#     "imagenet in 1h": ("1706.02677", "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour"),
#     "xception": ("1610.02357", "Xception: Deep Learning with Depthwise Separable Convolutions"),
#     "mobilenet": (
#         "1704.04861",
#         "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications",
#     ),
#     "deeplabv3+": (
#         "1802.02611",
#         "Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation",
#     ),
#     "dds": ("2003.13678", "Designing Network Design Spaces"),
#     "scaling": ("2103.06877", "Fast and Accurate Model Scaling"),
# }


# def paper_ref_role(
#     typ: str,
#     rawtext: str,
#     text: str,
#     lineno: int,
#     inliner,
#     options: Dict = {},
#     content: List[str] = [],
# ):
#     """
#     Parse :paper:`xxx`. Similar to the "extlinks" sphinx extension.
#     """
#     from docutils import nodes, utils
#     from sphinx.util.nodes import split_explicit_title

#     text = utils.unescape(text)
#     has_explicit_title, title, link = split_explicit_title(text)
#     link = link.lower()
#     if link not in _PAPER_DATA:
#         inliner.reporter.warning("Cannot find paper " + link)
#         paper_url, paper_title = "#", link
#     else:
#         paper_url, paper_title = _PAPER_DATA[link]
#         if "/" not in paper_url:
#             paper_url = "https://arxiv.org/abs/" + paper_url
#     if not has_explicit_title:
#         title = paper_title
#     pnode = nodes.reference(title, title, internal=False, refuri=paper_url)
#     return [pnode], []


def setup(app):
    from recommonmark.transform import AutoStructify

    app.add_domain(GithubURLDomain)
    app.connect("autodoc-skip-member", autodoc_skip_member)
    # app.add_role("paper", paper_ref_role)
    app.add_config_value(
        "recommonmark_config",
        {"enable_math": True, "enable_inline_math": True, "enable_eval_rst": True},
        True,
    )
    app.add_transform(AutoStructify)
```
docs/demo/megengine_cpp_readme.md
ADDED
@@ -0,0 +1 @@
../../demo/MegEngine/cpp/README.md
docs/demo/megengine_py_readme.md
ADDED
@@ -0,0 +1 @@
../../demo/MegEngine/python/README.md
docs/demo/ncnn_android_readme.md
ADDED
@@ -0,0 +1 @@
../../demo/ncnn/android/README.md
docs/demo/ncnn_cpp_readme.md
ADDED
@@ -0,0 +1 @@
../../demo/ncnn/cpp/README.md
docs/demo/onnx_readme.md
ADDED
@@ -0,0 +1 @@
../../demo/ONNXRuntime/README.md
docs/demo/openvino_cpp_readme.md
ADDED
@@ -0,0 +1 @@
../../demo/OpenVINO/cpp/README.md
docs/demo/openvino_py_readme.md
ADDED
@@ -0,0 +1 @@
../../demo/OpenVINO/python/README.md
docs/demo/trt_cpp_readme.md
ADDED
@@ -0,0 +1 @@
../../demo/TensorRT/cpp/README.md
docs/demo/trt_py_readme.md
ADDED
@@ -0,0 +1 @@
../../demo/TensorRT/python/README.md
docs/index.rst
ADDED
@@ -0,0 +1,32 @@
```rst
Welcome to YOLOX's documentation!
======================================

.. image:: ../assets/logo.png

.. toctree::
   :maxdepth: 2
   :caption: Quick Run

   quick_run
   model_zoo

.. toctree::
   :maxdepth: 2
   :caption: Tutorials

   train_custom_data

.. toctree::
   :maxdepth: 2
   :caption: Deployment

   demo/trt_py_readme
   demo/trt_cpp_readme
   demo/megengine_cpp_readme
   demo/megengine_py_readme
   demo/ncnn_android_readme
   demo/ncnn_cpp_readme
   demo/onnx_readme
   demo/openvino_py_readme
   demo/openvino_cpp_readme
```
docs/model_zoo.md
ADDED
@@ -0,0 +1,18 @@
# Model Zoo

## Standard Models.

|Model |size |mAP<sup>test<br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) |FLOPs<br>(G)| weights |
| ------ |:---: | :---: |:---: |:---: | :---: | :----: |
|[YOLOX-s](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_s.py) |640 |39.6 |9.8 |9.0 | 26.8 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EW62gmO2vnNNs5npxjzunVwB9p307qqygaCkXdTO88BLUg?e=NMTQYw)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s.pth) |
|[YOLOX-m](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_m.py) |640 |46.4 |12.3 |25.3 |73.8| [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ERMTP7VFqrVBrXKMU7Vl4TcBQs0SUeCT7kvc-JdIbej4tQ?e=1MDo9y)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_m.pth) |
|[YOLOX-l](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_l.py) |640 |50.0 |14.5 |54.2| 155.6 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EWA8w_IEOzBKvuueBqfaZh0BeoG5sVzR-XYbOJO4YlOkRw?e=wHWOBE)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_l.pth) |
|[YOLOX-x](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_x.py) |640 |**51.2** | 17.3 |99.1 |281.9 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdgVPHBziOVBtGAXHfeHI5kBza0q9yyueMGdT0wXZfI1rQ?e=tABO5u)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_x.pth) |
|[YOLOX-Darknet53](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolov3.py) |640 | 47.4 | 11.1 |63.7 | 185.3 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EZ-MV1r_fMFPkPrNjvbJEMoBLOLAnXH-XKEB77w8LhXL6Q?e=mf6wOc)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_darknet53.pth) |

## Light Models.

|Model |size |mAP<sup>val<br>0.5:0.95 | Params<br>(M) |FLOPs<br>(G)| weights |
| ------ |:---: | :---: |:---: |:---: | :---: |
|[YOLOX-Nano](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/nano.py) |416 |25.3 | 0.91 |1.08 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdcREey-krhLtdtSnxolxiUBjWMy6EFdiaO9bdOwZ5ygCQ?e=yQpdds)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_nano.pth) |
|[YOLOX-Tiny](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/default/yolox_tiny.py) |416 |32.8 | 5.06 |6.45 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EbZuinX5X1dJmNy8nqSRegABWspKw3QpXxuO82YSoFN1oQ?e=Q7V7XE)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_tiny_32dot8.pth) |
docs/quick_run.md
ADDED
@@ -0,0 +1,101 @@
# Get Started

## 1. Installation

Step1. Install YOLOX.
```shell
git clone git@github.com:Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e .  # or  python3 setup.py develop
```
Step2. Install [apex](https://github.com/NVIDIA/apex).

```shell
# skip this step if you don't want to train a model.
git clone https://github.com/NVIDIA/apex
cd apex
pip3 install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
Step3. Install [pycocotools](https://github.com/cocodataset/cocoapi).

```shell
pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
```

## 2. Demo

Step1. Download a pretrained model from the benchmark table.

Step2. Use either -n or -f to specify your detector's config. For example:

```shell
python tools/demo.py image -n yolox-s -c /path/to/your/yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
or
```shell
python tools/demo.py image -f exps/default/yolox_s.py -c /path/to/your/yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
Demo for video:
```shell
python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pth --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```

## 3. Reproduce our results on COCO

Step1. Prepare the COCO dataset
```shell
cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO
```

Step2. Reproduce our results on COCO by specifying -n:

```shell
python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o
                         yolox-m
                         yolox-l
                         yolox-x
```
* -d: number of gpu devices
* -b: total batch size, the recommended number for -b is num-gpu * 8
* --fp16: mixed precision training

**Multi Machine Training**

We also support multi-node training. Just add the following args (a sketch of a two-node launch follows this list):
* --num\_machines: num of your total training nodes
* --machine\_rank: specify the rank of each node
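For example, a two-node job might be launched roughly as follows (the batch size and any cluster-specific rendezvous arguments are assumptions; only --num\_machines and --machine\_rank come from the list above):

```shell
# on node 0
python tools/train.py -n yolox-s -d 8 -b 128 --fp16 -o --num_machines 2 --machine_rank 0
# on node 1
python tools/train.py -n yolox-s -d 8 -b 128 --fp16 -o --num_machines 2 --machine_rank 1
```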
When using -f, the above commands are equivalent to:

```shell
python tools/train.py -f exps/default/yolox-s.py -d 8 -b 64 --fp16 -o
                         exps/default/yolox-m.py
                         exps/default/yolox-l.py
                         exps/default/yolox-x.py
```

## 4. Evaluation

We support batch testing for fast evaluation:

```shell
python tools/eval.py -n yolox-s -c yolox_s.pth -b 64 -d 8 --conf 0.001 [--fp16] [--fuse]
                        yolox-m
                        yolox-l
                        yolox-x
```
* --fuse: fuse conv and bn
* -d: number of GPUs used for evaluation. DEFAULT: all available GPUs will be used.
* -b: total batch size across all GPUs

To reproduce the speed test, we use the following command:
```shell
python tools/eval.py -n yolox-s -c yolox_s.pth -b 1 -d 1 --conf 0.001 --fp16 --fuse
                        yolox-m
                        yolox-l
                        yolox-x
```
docs/requirements-doc.txt
ADDED
@@ -0,0 +1,8 @@
```
docutils==0.16
# https://github.com/sphinx-doc/sphinx/commit/7acd3ada3f38076af7b2b5c9f3b60bb9c2587a3d
sphinx==3.2.0
recommonmark==0.6.0
sphinx_rtd_theme
omegaconf>=2.1.0.dev24
hydra-core>=1.1.0.dev5
sphinx-markdown-tables==0.0.15
```
docs/train_custom_data.md
CHANGED
@@ -1,17 +1,18 @@
# Train Custom Data

This page explains how to train your own custom data with YOLOX.

We take an example of fine-tuning the YOLOX-S model on the VOC dataset to give a clearer guide.

## 0. Before you start
Clone this repo and follow the [README](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/README.md) to install YOLOX.

## 1. Create your own dataset
**Step 1** Prepare your own dataset with images and labels first. For labeling images, you can use tools like [Labelme](https://github.com/wkentaro/labelme) or [CVAT](https://github.com/openvinotoolkit/cvat).

**Step 2** Then, you should write the corresponding Dataset Class, which can load images and labels through the `__getitem__` method. We currently support the COCO format and the VOC format.

You can also write the Dataset on your own. Let's take the [VOC](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/data/datasets/voc.py#L151) Dataset file for example:
```python
@Dataset.resize_getitem
def __getitem__(self, index):
    # @@ -23,9 +24,9 @@ (unchanged lines omitted in this diff)
    return img, target, img_info, img_id
```

One more thing worth noting is that you should also implement the [pull_item](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/data/datasets/voc.py#L129) and [load_anno](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/data/datasets/voc.py#L121) methods for the `Mosaic` and `MixUp` augmentations.

**Step 3** Prepare the evaluator. We currently have a [COCO evaluator](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/evaluators/coco_evaluator.py) and a [VOC evaluator](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/evaluators/voc_evaluator.py).
If you have your own format data or evaluation metric, you can write your own evaluator.

**Step 4** Put your dataset under `$YOLOX_DIR/datasets`, for VOC:

@@ -40,9 +41,9 @@ ln -s /path/to/your/VOCdevkit ./datasets/VOCdevkit
## 2. Create your Exp file to control everything
We put everything involved in a model into one single Exp file, including model setting, training setting, and testing setting.

**A complete Exp file is at [yolox_base.py](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/exp/base_exp.py).** It may be too long to write for every exp, but you can inherit the base Exp file and only overwrite the changed part.

Let's take the [VOC Exp file](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/example/yolox_voc/yolox_voc_s.py) as an example.

We select the `YOLOX-S` model here, so we should change the network depth and width. VOC has only 20 classes, so we should also change the `num_classes`; a sketch of these overrides follows below.
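For illustration, those overrides look roughly like this (a sketch assuming the usual `from yolox.exp import Exp as MyExp` base class and the standard YOLOX-S depth/width factors; the real file is the linked `yolox_voc_s.py`):

```python
from yolox.exp import Exp as MyExp  # assumed import path


class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.num_classes = 20  # VOC has 20 classes
        self.depth = 0.33      # YOLOX-S depth factor (assumed)
        self.width = 0.50      # YOLOX-S width factor (assumed)
```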
@@ -59,12 +60,12 @@ class Exp(MyExp):

Besides, you should also overwrite the `dataset` and `evaluator`, prepared before training the model on your own data.

Please see [get_data_loader](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/example/yolox_voc/yolox_voc_s.py#L20), [get_eval_loader](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/example/yolox_voc/yolox_voc_s.py#L82), and [get_evaluator](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/exps/example/yolox_voc/yolox_voc_s.py#L113) for more details.

✧✧✧ You can also see the `exps/example/custom` directory for more details.

## 3. Train
Except for special cases, we always recommend using our [COCO pretrained weights](https://github.com/Megvii-BaseDetection/YOLOX/blob/main/README.md) for initializing the model.

Once you get the Exp file and the COCO pretrained weights we provided, you can train your own model with the following command:
```bash