🚑️ [Add] Back accidentally removed README
README.md

<table>
<tr><td>

## Task

These are simple examples. For more customization details, please refer to the [Notebooks](examples) and, for lower-level modifications, the **[HOWTO](docs/HOWTO.md)** guide.

## Training

To train YOLO on your machine/dataset:

1. Modify the configuration file `yolo/config/dataset/**.yaml` to point to your dataset (see the sketch after the code block below).
2. Run the training script:

```shell
python yolo/lazy.py task=train dataset=** use_wandb=True
python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c weight=False # or more args
```
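
For step 1, the value passed as `dataset=` selects a config file by name from `yolo/config/dataset/`. Below is a minimal sketch using a hypothetical `my_dataset.yaml`; the filename and the combined overrides are illustrative, not taken from the repo:

```shell
# Assuming a new config was added as yolo/config/dataset/my_dataset.yaml (hypothetical name),
# it is selected by its filename stem; the overrides shown above combine with it as usual.
python yolo/lazy.py task=train dataset=my_dataset use_wandb=True
python yolo/lazy.py task=train dataset=my_dataset task.data.batch_size=8 model=v9-c
```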

### Transfer Learning

To perform transfer learning with YOLOv9:

```shell
python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c dataset={dataset_config} device={cpu, mps, cuda}
```
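
A filled-in version of the command above, using the `toy` dataset config that also appears in the Validation section; treat it as a sketch, since the right dataset, device, and batch size depend on your setup:

```shell
# Fine-tune v9-c on the toy dataset on a CUDA device. weight=False (used in Training above
# to skip weights) is omitted here, so the default weight behaviour is kept.
python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c dataset=toy device=cuda
```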

### Inference

To run a model for object detection, use:

```shell
python yolo/lazy.py                                          # if cloned from GitHub
python yolo/lazy.py task=inference \                         # default task is inference
                    name=AnyNameYouWant \                    # any name you want
                    device=cpu \                             # hardware: cuda, cpu, mps
                    model=v9-s \                             # model version: v9-c, m, s
                    task.nms.min_confidence=0.1 \            # nms config
                    task.fast_inference=onnx \               # onnx, trt, deploy
                    task.data.source=data/toy/images/train \ # file, dir, webcam
                    +quite=True                              # quiet output
yolo task.data.source={Any Source}                           # if pip installed
yolo task=inference task.data.source={Any}
```
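
The same `key=value` overrides should also work through the pip-installed `yolo` entry point shown above; a sketch, with the model choice and source directory purely illustrative:

```shell
# Pip-installed equivalent of the long-form command above; data/toy/images/train is the
# directory source used earlier, so swap in your own file or directory.
yolo task=inference model=v9-s device=cpu task.nms.min_confidence=0.1 task.data.source=data/toy/images/train
```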

### Validation

To validate model performance, or to generate a JSON file in COCO format:

```shell
python yolo/lazy.py task=validation
python yolo/lazy.py task=validation dataset=toy
```
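
As with the other tasks, the dataset and model can be overridden together; a sketch reusing names that appear earlier in this README:

```shell
# Validate the v9-c variant on the toy dataset (see the note above about the COCO-format JSON output).
python yolo/lazy.py task=validation dataset=toy model=v9-c
```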

## Contributing

Contributions to the YOLO project are welcome! See [CONTRIBUTING](docs/CONTRIBUTING.md) for guidelines on how to contribute.

### TODO Diagrams

```mermaid
flowchart TB
    subgraph Features
        Taskv7-->Segmentation["#35 Segmentation"]
        Taskv7-->Classification["#34 Classification"]
        Taskv9-->Segmentation
        Taskv9-->Classification
        Trainv7
    end
    subgraph Model
        MODELv7-->v7-X
        MODELv7-->v7-E6
        MODELv7-->v7-E6E
        MODELv9-->v9-T
        MODELv9-->v9-S
        MODELv9-->v9-E
    end
    subgraph Bugs
        Fix-->Fix1["#12 mAP > 1"]
        Fix-->Fix2["v9 Gradient Bump"]
        Reply-->Reply1["#39"]
        Reply-->Reply2["#36"]
    end
```

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=WongKinYiu/YOLO&type=Date)](https://star-history.com/#WongKinYiu/YOLO&Date)

## Citations

```
@misc{wang2022yolov7,
      title={YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
      author={Chien-Yao Wang and Alexey Bochkovskiy and Hong-Yuan Mark Liao},
      year={2022},
      eprint={2207.02696},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{wang2024yolov9,
      title={YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information},
      author={Chien-Yao Wang and I-Hau Yeh and Hong-Yuan Mark Liao},
      year={2024},
      eprint={2402.13616},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```