tidalove committed
Commit 833e9a7 · verified · 1 Parent(s): 0125293

Update README.md

Files changed (1):
  1. README.md +7 -249

README.md CHANGED
@@ -1,249 +1,7 @@
- <div align="center"><img src="assets/logo.png" width="350"></div>
- <img src="assets/demo.png">
-
- ## Introduction
- YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities.
- For more details, please refer to our [report on arXiv](https://arxiv.org/abs/2107.08430).
-
- This repo is the PyTorch implementation of YOLOX; there is also a [MegEngine implementation](https://github.com/MegEngine/YOLOX).
-
- <img src="assets/git_fig.png" width="1000">
-
- ## Updates!!
- * 【2023/02/28】 We support an assignment visualization tool; see the doc [here](./docs/assignment_visualization.md).
- * 【2022/04/14】 We support JIT compilation of ops.
- * 【2021/08/19】 We optimize the training process for **2x** faster training and **~1%** higher performance! See the [notes](docs/updates_note.md) for more details.
- * 【2021/08/05】 We release the [MegEngine version of YOLOX](https://github.com/MegEngine/YOLOX).
- * 【2021/07/28】 We fix a fatal [memory leak](https://github.com/Megvii-BaseDetection/YOLOX/issues/103) error.
- * 【2021/07/26】 We now support [MegEngine](https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/MegEngine) deployment.
- * 【2021/07/20】 We have released our technical report on [arXiv](https://arxiv.org/abs/2107.08430).
-
- ## Benchmark
-
- #### Standard Models.
-
- |Model |size |mAP<sup>val<br>0.5:0.95 |mAP<sup>test<br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) |FLOPs<br>(G)| weights |
- | ------ |:---: | :---: | :---: |:---: |:---: | :---: | :----: |
- |[YOLOX-s](./exps/default/yolox_s.py) |640 |40.5 |40.5 |9.8 |9.0 | 26.8 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.pth) |
- |[YOLOX-m](./exps/default/yolox_m.py) |640 |46.9 |47.2 |12.3 |25.3 |73.8| [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_m.pth) |
- |[YOLOX-l](./exps/default/yolox_l.py) |640 |49.7 |50.1 |14.5 |54.2| 155.6 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l.pth) |
- |[YOLOX-x](./exps/default/yolox_x.py) |640 |51.1 |**51.5** | 17.3 |99.1 |281.9 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_x.pth) |
- |[YOLOX-Darknet53](./exps/default/yolov3.py) |640 | 47.7 | 48.0 | 11.1 |63.7 | 185.3 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_darknet.pth) |
-
- <details>
- <summary>Legacy models</summary>
-
- |Model |size |mAP<sup>test<br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) |FLOPs<br>(G)| weights |
- | ------ |:---: | :---: |:---: |:---: | :---: | :----: |
- |[YOLOX-s](./exps/default/yolox_s.py) |640 |39.6 |9.8 |9.0 | 26.8 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EW62gmO2vnNNs5npxjzunVwB9p307qqygaCkXdTO88BLUg?e=NMTQYw)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s.pth) |
- |[YOLOX-m](./exps/default/yolox_m.py) |640 |46.4 |12.3 |25.3 |73.8| [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ERMTP7VFqrVBrXKMU7Vl4TcBQs0SUeCT7kvc-JdIbej4tQ?e=1MDo9y)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_m.pth) |
- |[YOLOX-l](./exps/default/yolox_l.py) |640 |50.0 |14.5 |54.2| 155.6 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EWA8w_IEOzBKvuueBqfaZh0BeoG5sVzR-XYbOJO4YlOkRw?e=wHWOBE)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_l.pth) |
- |[YOLOX-x](./exps/default/yolox_x.py) |640 |**51.2** | 17.3 |99.1 |281.9 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdgVPHBziOVBtGAXHfeHI5kBza0q9yyueMGdT0wXZfI1rQ?e=tABO5u)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_x.pth) |
- |[YOLOX-Darknet53](./exps/default/yolov3.py) |640 | 47.4 | 11.1 |63.7 | 185.3 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EZ-MV1r_fMFPkPrNjvbJEMoBLOLAnXH-XKEB77w8LhXL6Q?e=mf6wOc)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_darknet53.pth) |
-
- </details>
-
- #### Light Models.
-
- |Model |size |mAP<sup>val<br>0.5:0.95 | Params<br>(M) |FLOPs<br>(G)| weights |
- | ------ |:---: | :---: |:---: |:---: | :---: |
- |[YOLOX-Nano](./exps/default/yolox_nano.py) |416 |25.8 | 0.91 |1.08 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_nano.pth) |
- |[YOLOX-Tiny](./exps/default/yolox_tiny.py) |416 |32.8 | 5.06 |6.45 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_tiny.pth) |
-
-
- <details>
- <summary>Legacy models</summary>
-
- |Model |size |mAP<sup>val<br>0.5:0.95 | Params<br>(M) |FLOPs<br>(G)| weights |
- | ------ |:---: | :---: |:---: |:---: | :---: |
- |[YOLOX-Nano](./exps/default/yolox_nano.py) |416 |25.3 | 0.91 |1.08 | [github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_nano.pth) |
- |[YOLOX-Tiny](./exps/default/yolox_tiny.py) |416 |32.8 | 5.06 |6.45 | [github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_tiny_32dot8.pth) |
-
- </details>
-
- ## Quick Start
-
- <details>
- <summary>Installation</summary>
-
- Step 1. Install YOLOX from source.
- ```shell
- git clone git@github.com:Megvii-BaseDetection/YOLOX.git
- cd YOLOX
- pip3 install -v -e .  # or python3 setup.py develop
- ```
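-
- To sanity-check the install, a quick import test can help (a minimal sketch; it assumes the editable install above succeeded and that the `yolox` package exposes `__version__`):
- ```shell
- python3 -c "import yolox; print(yolox.__version__)"
- ```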
-
- </details>
-
- <details>
- <summary>Demo</summary>
-
- Step 1. Download a pretrained model from the benchmark table.
-
- Step 2. Use either -n or -f to specify your detector's config. For example:
-
- ```shell
- python tools/demo.py image -n yolox-s -c /path/to/your/yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
- ```
- or
- ```shell
- python tools/demo.py image -f exps/default/yolox_s.py -c /path/to/your/yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
- ```
- Demo for video:
- ```shell
- python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pth --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
- ```
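-
- The demo script also has a webcam mode alongside image and video. A minimal sketch, assuming the webcam subcommand takes the same flags as the video mode (the `--camid` camera-index flag is an assumption here):
- ```shell
- python tools/demo.py webcam -n yolox-s -c /path/to/your/yolox_s.pth --camid 0 --conf 0.25 --nms 0.45 --tsize 640 --device gpu
- ```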
-
-
- </details>
-
- <details>
- <summary>Reproduce our results on COCO</summary>
-
- Step 1. Prepare the COCO dataset:
- ```shell
- cd <YOLOX_HOME>
- ln -s /path/to/your/COCO ./datasets/COCO
- ```
-
- Step 2. Reproduce our results on COCO by specifying -n:
-
- ```shell
- python -m yolox.tools.train -n yolox-s -d 8 -b 64 --fp16 -o [--cache]
-                                yolox-m
-                                yolox-l
-                                yolox-x
- ```
- * -d: number of GPU devices
- * -b: total batch size; the recommended value is num-gpu * 8
- * --fp16: mixed precision training
- * --cache: cache images into RAM to accelerate training; this requires a large amount of system RAM
-
- When using -f, the above commands are equivalent to:
- ```shell
- python -m yolox.tools.train -f exps/default/yolox_s.py -d 8 -b 64 --fp16 -o [--cache]
-                                exps/default/yolox_m.py
-                                exps/default/yolox_l.py
-                                exps/default/yolox_x.py
- ```
-
- **Multi-Machine Training**
-
- We also support multi-node training. Just add the following args:
- * --num\_machines: total number of training nodes
- * --machine\_rank: the rank of each node
-
- Suppose you want to train YOLOX on 2 machines, your master machine's IP is 123.123.123.123, and you use port 12312 over TCP.
-
- On the master machine, run
- ```shell
- python tools/train.py -n yolox-s -b 128 --dist-url tcp://123.123.123.123:12312 --num_machines 2 --machine_rank 0
- ```
- On the second machine, run
- ```shell
- python tools/train.py -n yolox-s -b 128 --dist-url tcp://123.123.123.123:12312 --num_machines 2 --machine_rank 1
- ```
-
- **Logging to Weights & Biases**
-
- To log metrics, predictions, and model checkpoints to [W&B](https://docs.wandb.ai/guides/integrations/other/yolox), use the command-line argument `--logger wandb` and the prefix "wandb-" to specify arguments for initializing the wandb run.
-
- ```shell
- python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o [--cache] --logger wandb wandb-project <project name>
-                          yolox-m
-                          yolox-l
-                          yolox-x
- ```
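-
- Key–value pairs after the "wandb-" prefix are forwarded when the run is initialized, so other init parameters can be set the same way. A minimal sketch (treating `wandb-name` and `wandb-entity` as pass-throughs to the standard `wandb.init` parameters; their support here is an assumption based on the prefix mechanism described above):
- ```shell
- python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o --logger wandb wandb-project <project name> wandb-name <run name> wandb-entity <team or username>
- ```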
-
- An example wandb dashboard is available [here](https://wandb.ai/manan-goel/yolox-nano/runs/3pzfeom0).
-
- **Others**
-
- See more information with the following command:
- ```shell
- python -m yolox.tools.train --help
- ```
-
- </details>
-
-
- <details>
- <summary>Evaluation</summary>
-
- We support batch testing for fast evaluation:
-
- ```shell
- python -m yolox.tools.eval -n yolox-s -c yolox_s.pth -b 64 -d 8 --conf 0.001 [--fp16] [--fuse]
-                               yolox-m
-                               yolox-l
-                               yolox-x
- ```
- * --fuse: fuse conv and bn
- * -d: number of GPUs used for evaluation. DEFAULT: all available GPUs will be used.
- * -b: total batch size across all GPUs
-
- To reproduce the speed test, we use the following command:
- ```shell
- python -m yolox.tools.eval -n yolox-s -c yolox_s.pth -b 1 -d 1 --conf 0.001 --fp16 --fuse
-                               yolox-m
-                               yolox-l
-                               yolox-x
- ```
-
- </details>
-
-
- <details>
- <summary>Tutorials</summary>
-
- * [Training on custom data](docs/train_custom_data.md)
- * [Caching for custom data](docs/cache.md)
- * [Manipulating training image size](docs/manipulate_training_image_size.md)
- * [Assignment visualization](docs/assignment_visualization.md)
- * [Freezing model](docs/freeze_module.md)
-
- </details>
-
- ## Deployment
-
- 1. [MegEngine in C++ and Python](./demo/MegEngine)
- 2. [ONNX export and ONNXRuntime demo](./demo/ONNXRuntime)
- 3. [TensorRT in C++ and Python](./demo/TensorRT)
- 4. [ncnn in C++ and Java](./demo/ncnn)
- 5. [OpenVINO in C++ and Python](./demo/OpenVINO)
- 6. [Accelerate YOLOX inference with nebullvm in Python](./demo/nebullvm)
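-
- The ONNXRuntime path starts from an exported model. A minimal export sketch, assuming the `tools/export_onnx.py` script and flags documented in [demo/ONNXRuntime](./demo/ONNXRuntime):
- ```shell
- python3 tools/export_onnx.py --output-name yolox_s.onnx -n yolox-s -c /path/to/your/yolox_s.pth
- ```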
-
- ## Third-party resources
- * YOLOX for streaming perception: [StreamYOLO (CVPR 2022 Oral)](https://github.com/yancie-yjr/StreamYOLO)
- * YOLOX-s and YOLOX-Nano are integrated into [ModelScope](https://www.modelscope.cn/home). Try the online demos for [YOLOX-s](https://www.modelscope.cn/models/damo/cv_cspnet_image-object-detection_yolox/summary) and [YOLOX-Nano](https://www.modelscope.cn/models/damo/cv_cspnet_image-object-detection_yolox_nano_coco/summary) 🚀.
- * Integrated into [Hugging Face Spaces 🤗](https://huggingface.co/spaces) using [Gradio](https://github.com/gradio-app/gradio). Try out the Web Demo: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Sultannn/YOLOX-Demo)
- * The ncnn Android app with video support: [ncnn-android-yolox](https://github.com/FeiGeChuanShu/ncnn-android-yolox) from [FeiGeChuanShu](https://github.com/FeiGeChuanShu)
- * YOLOX with Tengine support: [Tengine](https://github.com/OAID/Tengine/blob/tengine-lite/examples/tm_yolox.cpp) from [BUG1989](https://github.com/BUG1989)
- * YOLOX + ROS2 Foxy: [YOLOX-ROS](https://github.com/Ar-Ray-code/YOLOX-ROS) from [Ar-Ray](https://github.com/Ar-Ray-code)
- * YOLOX DeepStream deployment: [YOLOX-deepstream](https://github.com/nanmi/YOLOX-deepstream) from [nanmi](https://github.com/nanmi)
- * YOLOX MNN/TNN/ONNXRuntime: [YOLOX-MNN](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/mnn/cv/mnn_yolox.cpp), [YOLOX-TNN](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/tnn/cv/tnn_yolox.cpp) and [YOLOX-ONNXRuntime C++](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/ort/cv/yolox.cpp) from [DefTruth](https://github.com/DefTruth)
- * Converting darknet or yolov5 datasets to COCO format for YOLOX: [YOLO2COCO](https://github.com/RapidAI/YOLO2COCO) from [Daniel](https://github.com/znsoftm)
-
- ## Cite YOLOX
- If you use YOLOX in your research, please cite our work using the following BibTeX entry:
-
- ```latex
- @article{yolox2021,
-   title={YOLOX: Exceeding YOLO Series in 2021},
-   author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
-   journal={arXiv preprint arXiv:2107.08430},
-   year={2021}
- }
- ```
-
- ## In memory of Dr. Jian Sun
- Without the guidance of [Dr. Jian Sun](https://scholar.google.com/citations?user=ALVSZAYAAAAJ), YOLOX would not have been released and open sourced to the community.
- The passing of Dr. Sun is a huge loss to the computer vision field. We add this section here to express our remembrance of and condolences to our captain, Dr. Sun.
- It is hoped that every AI practitioner in the world will hold to the belief of "continuous innovation to expand cognitive boundaries, and extraordinary technology to achieve product value" and keep moving forward.
-
- <div align="center"><img src="assets/sunjian.png" width="200"></div>
- Without Dr. Jian Sun's guidance, YOLOX would not have come into being or been open sourced to the community.
- Dr. Sun's passing is a great loss to the CV field; we add this section especially to express our remembrance of and grief for our "captain", Dr. Sun.
- We hope every AI practitioner in the world will hold to the belief of "continuous innovation to expand cognitive boundaries, and extraordinary technology to achieve product value" and keep moving forward.

+ ---
+ title: YOLOX Auto Crop
+ sdk: gradio
+ emoji: 👁
+ colorFrom: yellow
+ colorTo: green
+ ---