CloudAnts committed on
Commit 5989ad1 · 1 Parent(s): e85a0a2
Files changed (1): README.md (+0, −181)

README.md DELETED (contents below):
# [YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458)


Official PyTorch implementation of **YOLOv10**. NeurIPS 2024.

<p align="center">
  <img src="figures/latency.svg" width=48%>
  <img src="figures/params.svg" width=48%> <br>
  Comparisons with others in terms of latency-accuracy (left) and size-accuracy (right) trade-offs.
</p>

[YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458).\
Ao Wang, Hui Chen, Lihao Liu, Kai Chen, Zijia Lin, Jungong Han, and Guiguang Ding\
[![arXiv](https://img.shields.io/badge/arXiv-2405.14458-b31b1b.svg)](https://arxiv.org/abs/2405.14458) <a href="https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov10-object-detection-on-custom-dataset.ipynb#scrollTo=SaKTSzSWnG7s"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/collections/jameslahm/yolov10-665b0d90b0b5bb85129460c2) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/jameslahm/YOLOv10) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/kadirnar/Yolov10) [![Transformers.js Demo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Transformers.js-blue)](https://huggingface.co/spaces/Xenova/yolov10-web) [![LearnOpenCV](https://img.shields.io/badge/BlogPost-blue?logo=data%3Aimage%2Fpng%3Bbase64%2CiVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAMAAAC67D%2BPAAAALVBMVEX%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F%2F6%2Bfn6%2Bvq3y%2BJ8rOFSne9Jm%2FQcOlr5DJ7GAAAAB3RSTlMAB2LM94H1yMxlvwAAADNJREFUCFtjZGAEAob%2FQMDIyAJl%2FmFkYmEGM%2F%2F%2BYWRmYWYCMv8BmSxYmUgKkLQhGYawAgApySgfFDPqowAAAABJRU5ErkJggg%3D%3D&logoColor=black&labelColor=gray)](https://learnopencv.com/yolov10/) [![Openbayes Demo](https://img.shields.io/static/v1?label=Demo&message=OpenBayes%E8%B4%9D%E5%BC%8F%E8%AE%A1%E7%AE%97&color=green)](https://openbayes.com/console/public/tutorials/im29uYrnIoz)

<details>
  <summary>
  <font size="+1">Abstract</font>
  </summary>
Over the past years, YOLOs have emerged as the predominant paradigm in the field of real-time object detection owing to their effective balance between computational cost and detection performance. Researchers have explored the architectural designs, optimization objectives, data augmentation strategies, and more for YOLOs, achieving notable progress. However, the reliance on non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs and adversely impacts inference latency. Besides, the design of various components in YOLOs lacks comprehensive and thorough inspection, resulting in noticeable computational redundancy and limiting the model's capability. This leads to suboptimal efficiency, along with considerable potential for performance improvements. In this work, we aim to further advance the performance-efficiency boundary of YOLOs from both the post-processing and the model architecture. To this end, we first present consistent dual assignments for NMS-free training of YOLOs, which brings competitive performance and low inference latency simultaneously. Moreover, we introduce a holistic efficiency-accuracy driven model design strategy for YOLOs. We comprehensively optimize various components of YOLOs from both the efficiency and accuracy perspectives, which greatly reduces the computational overhead and enhances the capability. The outcome of our effort is a new generation of YOLO series for real-time end-to-end object detection, dubbed YOLOv10. Extensive experiments show that YOLOv10 achieves state-of-the-art performance and efficiency across various model scales. For example, our YOLOv10-S is 1.8$\times$ faster than RT-DETR-R18 under a similar AP on COCO, while enjoying a 2.8$\times$ smaller number of parameters and FLOPs. Compared with YOLOv9-C, YOLOv10-B has 46\% less latency and 25\% fewer parameters for the same performance.
</details>

## Notes
- 2024/05/31: Please use the [exported format](https://github.com/THU-MIG/yolov10?tab=readme-ov-file#export) for benchmarking. In the non-exported format, e.g., PyTorch, the measured speed of YOLOv10 is biased because the unnecessary `cv2` and `cv3` operations in `v10Detect` are executed during inference; a timing sketch follows this list.
- 2024/05/30: We provide [some clarifications and suggestions](https://github.com/THU-MIG/yolov10/issues/136) for detecting smaller objects or objects in the distance with YOLOv10. Thanks to [SkalskiP](https://github.com/SkalskiP)!
- 2024/05/27: We have updated the [checkpoints](https://huggingface.co/collections/jameslahm/yolov10-665b0d90b0b5bb85129460c2) with class names, for ease of use.
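
To make the first note concrete, here is a minimal timing sketch for an exported model. It is an illustration only, not the project's official benchmark script: it assumes `onnxruntime` is installed and that a `yolov10n.onnx` file was produced by the Export section below; absolute numbers will differ from the TensorRT latencies reported in the table.

```python
import time

import numpy as np
import onnxruntime as ort

# Load the end-to-end exported model (file name is a placeholder; see Export below).
session = ort.InferenceSession("yolov10n.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy 640x640 input, matching the test size in the Performance table.
x = np.random.rand(1, 3, 640, 640).astype(np.float32)

for _ in range(5):  # warm-up runs
    session.run(None, {input_name: x})

n_runs = 50
start = time.perf_counter()
for _ in range(n_runs):
    session.run(None, {input_name: x})
print(f"mean latency: {(time.perf_counter() - start) / n_runs * 1000:.2f} ms")
```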

## UPDATES 🔥
- 2024/06/01: Thanks to [ErlanggaYudiPradana](https://github.com/rlggyp) for the integration with [C++ | OpenVINO | OpenCV](https://github.com/rlggyp/YOLOv10-OpenVINO-CPP-Inference)!
- 2024/06/01: Thanks to [NielsRogge](https://github.com/NielsRogge) and [AK](https://x.com/_akhaliq) for hosting the models on the HuggingFace Hub!
- 2024/05/31: Thanks to [youjiang](https://github.com/yuyoujiang) for building the [yolov10-jetson](https://github.com/Seeed-Projects/jetson-examples/blob/main/reComputer/scripts/yolov10/README.md) Docker image!
- 2024/05/31: Thanks to [mohamedsamirx](https://github.com/mohamedsamirx) for the integration with [BoTSORT, DeepOCSORT, OCSORT, HybridSORT, ByteTrack, StrongSORT using the BoxMOT library](https://colab.research.google.com/drive/1-QV2TNfqaMsh14w5VxieEyanugVBG14V?usp=sharing)!
- 2024/05/31: Thanks to [kaylorchen](https://github.com/kaylorchen) for the integration with [rk3588](https://github.com/kaylorchen/rk3588-yolo-demo)!
- 2024/05/30: Thanks to [eaidova](https://github.com/eaidova) for the integration with [OpenVINO™](https://github.com/openvinotoolkit/openvino_notebooks/blob/0ba3c0211bcd49aa860369feddffdf7273a73c64/notebooks/yolov10-optimization/yolov10-optimization.ipynb)!
- 2024/05/29: Add the gradio demo for running the models locally. Thanks to [AK](https://x.com/_akhaliq)!
- 2024/05/27: Thanks to [sujanshresstha](https://github.com/sujanshresstha) for the integration with [DeepSORT](https://github.com/sujanshresstha/YOLOv10_DeepSORT.git)!
- 2024/05/26: Thanks to [CVHub520](https://github.com/CVHub520) for the integration into [X-AnyLabeling](https://github.com/CVHub520/X-AnyLabeling)!
- 2024/05/26: Thanks to [DanielSarmiento04](https://github.com/DanielSarmiento04) for the integration with [C++ | ONNX | OpenCV](https://github.com/DanielSarmiento04/yolov10cpp)!
- 2024/05/25: Add the [Transformers.js demo](https://huggingface.co/spaces/Xenova/yolov10-web) and ONNX weights (yolov10[n](https://huggingface.co/onnx-community/yolov10n)/[s](https://huggingface.co/onnx-community/yolov10s)/[m](https://huggingface.co/onnx-community/yolov10m)/[b](https://huggingface.co/onnx-community/yolov10b)/[l](https://huggingface.co/onnx-community/yolov10l)/[x](https://huggingface.co/onnx-community/yolov10x)). Thanks to [xenova](https://github.com/xenova)!
- 2024/05/25: Add the [colab demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov10-object-detection-on-custom-dataset.ipynb#scrollTo=SaKTSzSWnG7s), [HuggingFace Demo](https://huggingface.co/spaces/kadirnar/Yolov10), and [HuggingFace Model Page](https://huggingface.co/kadirnar/Yolov10). Thanks to [SkalskiP](https://github.com/SkalskiP) and [kadirnar](https://github.com/kadirnar)!

## Performance
COCO

| Model | Test Size | #Params | FLOPs | AP<sup>val</sup> | Latency |
|:---------------|:----:|:---:|:--:|:--:|:--:|
| [YOLOv10-N](https://huggingface.co/jameslahm/yolov10n) | 640 | 2.3M | 6.7G | 38.5% | 1.84ms |
| [YOLOv10-S](https://huggingface.co/jameslahm/yolov10s) | 640 | 7.2M | 21.6G | 46.3% | 2.49ms |
| [YOLOv10-M](https://huggingface.co/jameslahm/yolov10m) | 640 | 15.4M | 59.1G | 51.1% | 4.74ms |
| [YOLOv10-B](https://huggingface.co/jameslahm/yolov10b) | 640 | 19.1M | 92.0G | 52.5% | 5.74ms |
| [YOLOv10-L](https://huggingface.co/jameslahm/yolov10l) | 640 | 24.4M | 120.3G | 53.2% | 7.28ms |
| [YOLOv10-X](https://huggingface.co/jameslahm/yolov10x) | 640 | 29.5M | 160.4G | 54.4% | 10.70ms |

## Installation
A `conda` virtual environment is recommended.
```
conda create -n yolov10 python=3.9
conda activate yolov10
pip install -r requirements.txt
pip install -e .
```
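
A quick sanity check after installation (a minimal sketch; it assumes the `yolov10n.pt` checkpoint from the release linked in the sections below has already been downloaded):

```python
from ultralytics import YOLOv10

# Load a released checkpoint (placeholder path; see the wget commands below).
model = YOLOv10('yolov10n.pt')
model.info()  # prints a layers/parameters/GFLOPs summary
```
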
## Demo
```
python app.py
# Please visit http://127.0.0.1:7860
```

## Validation
[`yolov10n`](https://huggingface.co/jameslahm/yolov10n) [`yolov10s`](https://huggingface.co/jameslahm/yolov10s) [`yolov10m`](https://huggingface.co/jameslahm/yolov10m) [`yolov10b`](https://huggingface.co/jameslahm/yolov10b) [`yolov10l`](https://huggingface.co/jameslahm/yolov10l) [`yolov10x`](https://huggingface.co/jameslahm/yolov10x)
```
yolo val model=jameslahm/yolov10{n/s/m/b/l/x} data=coco.yaml batch=256
```

Or
```python
from ultralytics import YOLOv10

model = YOLOv10.from_pretrained('jameslahm/yolov10{n/s/m/b/l/x}')
# or download the weights and load them locally:
# wget https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10{n/s/m/b/l/x}.pt
# model = YOLOv10('yolov10{n/s/m/b/l/x}.pt')

model.val(data='coco.yaml', batch=256)
```
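
The `val` call also returns a metrics object. As a hedged sketch (the attribute names are assumed from current ultralytics conventions and may change between versions), the headline COCO numbers can be read back like this:

```python
from ultralytics import YOLOv10

model = YOLOv10.from_pretrained('jameslahm/yolov10n')
metrics = model.val(data='coco.yaml', batch=256)

# Attribute names assumed from ultralytics' detection metrics interface:
print(f"mAP50-95: {metrics.box.map:.3f}")
print(f"mAP50:    {metrics.box.map50:.3f}")
```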

## Training
```
yolo detect train data=coco.yaml model=yolov10n/s/m/b/l/x.yaml epochs=500 batch=256 imgsz=640 device=0,1,2,3,4,5,6,7
```

Or
```python
from ultralytics import YOLOv10

model = YOLOv10()
# If you want to finetune the model with pretrained weights, you can load them
# as below:
# model = YOLOv10.from_pretrained('jameslahm/yolov10{n/s/m/b/l/x}')
# or
# wget https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10{n/s/m/b/l/x}.pt
# model = YOLOv10('yolov10{n/s/m/b/l/x}.pt')

model.train(data='coco.yaml', epochs=500, batch=256, imgsz=640)
```
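
Training on a custom dataset uses the same API. The following is a sketch under assumptions: `custom.yaml` is a hypothetical dataset config in the ultralytics format, and the class names and hyperparameters are illustrative.

```python
from ultralytics import YOLOv10

# custom.yaml (hypothetical) would follow the ultralytics dataset format, e.g.:
#   path: datasets/crops
#   train: images/train
#   val: images/val
#   names:
#     0: weed
#     1: crop
model = YOLOv10.from_pretrained('jameslahm/yolov10n')  # finetune from pretrained weights
model.train(data='custom.yaml', epochs=100, batch=32, imgsz=640)
```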

## Push to the 🤗 hub

Optionally, you can push your fine-tuned model to the [Hugging Face hub](https://huggingface.co/) as a public or private model:

```python
# let's say you have fine-tuned a model for crop detection
model.push_to_hub("<your-hf-username-or-organization>/yolov10-finetuned-crop-detection")

# you can also pass `private=True` if you don't want everyone to see your model
model.push_to_hub("<your-hf-username-or-organization>/yolov10-finetuned-crop-detection", private=True)
```
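
A pushed model can then be loaded back with the same `from_pretrained` API used throughout this README (the repository name below is the illustrative one from the snippet above):

```python
from ultralytics import YOLOv10

# Load the fine-tuned model back from the Hub (illustrative repo name).
model = YOLOv10.from_pretrained("<your-hf-username-or-organization>/yolov10-finetuned-crop-detection")
```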

## Prediction
Note that a smaller confidence threshold can be set to detect smaller objects or objects in the distance (see the sketch after the code blocks). Please refer to [here](https://github.com/THU-MIG/yolov10/issues/136) for details.
```
yolo predict model=jameslahm/yolov10{n/s/m/b/l/x}
```

Or
```python
from ultralytics import YOLOv10

model = YOLOv10.from_pretrained('jameslahm/yolov10{n/s/m/b/l/x}')
# or download the weights and load them locally:
# wget https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10{n/s/m/b/l/x}.pt
# model = YOLOv10('yolov10{n/s/m/b/l/x}.pt')

model.predict()
```
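
To apply the note above about small or distant objects, lower the confidence threshold at prediction time. A sketch, assuming the ultralytics-style `conf` argument and result attributes, with a placeholder image path:

```python
from ultralytics import YOLOv10

model = YOLOv10.from_pretrained('jameslahm/yolov10n')

# A lower conf (default 0.25) keeps low-confidence detections of small/distant objects.
results = model.predict('image.jpg', conf=0.1)
for result in results:
    # Boxes in xyxy format, with per-box confidence and class id.
    print(result.boxes.xyxy, result.boxes.conf, result.boxes.cls)
```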

## Export
```
# End-to-End ONNX
yolo export model=jameslahm/yolov10{n/s/m/b/l/x} format=onnx opset=13 simplify
# Predict with ONNX
yolo predict model=yolov10n/s/m/b/l/x.onnx

# End-to-End TensorRT
yolo export model=jameslahm/yolov10{n/s/m/b/l/x} format=engine half=True simplify opset=13 workspace=16
# or
trtexec --onnx=yolov10n/s/m/b/l/x.onnx --saveEngine=yolov10n/s/m/b/l/x.engine --fp16
# Predict with TensorRT
yolo predict model=yolov10n/s/m/b/l/x.engine
```

Or
```python
from ultralytics import YOLOv10

model = YOLOv10.from_pretrained('jameslahm/yolov10{n/s/m/b/l/x}')
# or download the weights and load them locally:
# wget https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10{n/s/m/b/l/x}.pt
# model = YOLOv10('yolov10{n/s/m/b/l/x}.pt')

model.export(...)
```
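
As a concrete instance of `model.export(...)`, the Python API accepts the same options as the CLI commands above; a sketch mirroring the ONNX export command:

```python
from ultralytics import YOLOv10

model = YOLOv10.from_pretrained('jameslahm/yolov10n')

# Mirrors the CLI: format=onnx opset=13 simplify
model.export(format='onnx', opset=13, simplify=True)
```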

## Acknowledgement

The code base is built with [ultralytics](https://github.com/ultralytics/ultralytics) and [RT-DETR](https://github.com/lyuwenyu/RT-DETR).

Thanks for the great implementations!

## Citation

If our code or models help your work, please cite our paper:
```BibTeX
@article{wang2024yolov10,
  title={YOLOv10: Real-Time End-to-End Object Detection},
  author={Wang, Ao and Chen, Hui and Liu, Lihao and Chen, Kai and Lin, Zijia and Han, Jungong and Ding, Guiguang},
  journal={arXiv preprint arXiv:2405.14458},
  year={2024}
}
```