YOLOv5 v5.0 Release (#2762)
Files changed:
- README.md (+25 -18)
- hubconf.py (+38 -54)
- utils/plots.py (+3 -3)
README.md
CHANGED
@@ -6,36 +6,43 @@
 
 This repository represents Ultralytics open-source research into future object detection methods, and incorporates lessons learned and best practices evolved over thousands of hours of training and evolution on anonymized client datasets. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk.
 
-<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/…"></p>
+<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313216-f0a5e100-9af5-11eb-8445-c682b60da2e3.png"></p>
+<details>
+<summary>YOLOv5-P5 640 Figure (click to expand)</summary>
+
+<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313219-f1d70e00-9af5-11eb-9973-52b1f98d321a.png"></p>
+</details>
 <details>
 <summary>Figure Notes (click to expand)</summary>
 
 * GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
 * EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
+* **Reproduce** by `python test.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
 </details>
 
+- **April 11, 2021**: [v5.0 release](https://github.com/ultralytics/yolov5/releases/tag/v5.0): YOLOv5-P6 1280 models, [AWS](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart), [Supervise.ly](https://github.com/ultralytics/yolov5/issues/2518) and [YouTube](https://github.com/ultralytics/yolov5/pull/2752) integrations.
 - **January 5, 2021**: [v4.0 release](https://github.com/ultralytics/yolov5/releases/tag/v4.0): nn.SiLU() activations, [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) logging, [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/) integration.
 - **August 13, 2020**: [v3.0 release](https://github.com/ultralytics/yolov5/releases/tag/v3.0): nn.Hardswish() activations, data autodownload, native AMP.
 - **July 23, 2020**: [v2.0 release](https://github.com/ultralytics/yolov5/releases/tag/v2.0): improved model definition, training and mAP.
-- **June 22, 2020**: [PANet](https://arxiv.org/abs/1803.01534) updates: new heads, reduced parameters, improved speed and mAP [364fcfd](https://github.com/ultralytics/yolov5/commit/364fcfd7dba53f46edd4f04c037a039c0a287972).
-- **June 19, 2020**: [FP16](https://pytorch.org/docs/stable/nn.html#torch.nn.Module.half) as new default for smaller checkpoints and faster inference [d4c6674](https://github.com/ultralytics/yolov5/commit/d4c6674c98e19df4c40e33a777610a18d1961145).
 
 
 ## Pretrained Checkpoints
 
-[13 lines: previous Pretrained Checkpoints table, content not preserved in this view]
+[assets]: https://github.com/ultralytics/yolov5/releases
+
+Model |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>test<br>0.5:0.95 |mAP<sup>val<br>0.5 |Speed<br><sup>V100 (ms) | |params<br><sup>(M) |FLOPS<br><sup>640 (B)
+--- |--- |--- |--- |--- |--- |---|--- |---
+[YOLOv5s][assets] |640 |36.7 |36.7 |55.4 |**2.0** | |7.3 |17.0
+[YOLOv5m][assets] |640 |44.5 |44.5 |63.3 |2.7 | |21.4 |51.3
+[YOLOv5l][assets] |640 |48.2 |48.2 |66.9 |3.8 | |47.0 |115.4
+[YOLOv5x][assets] |640 |**50.4** |**50.4** |**68.8** |6.1 | |87.7 |218.8
+| | | | | | || |
+[YOLOv5s6][assets] |1280 |43.3 |43.3 |61.9 |**4.3** | |12.7 |17.4
+[YOLOv5m6][assets] |1280 |50.5 |50.5 |68.7 |8.4 | |35.9 |52.4
+[YOLOv5l6][assets] |1280 |53.4 |53.4 |71.1 |12.3 | |77.2 |117.7
+[YOLOv5x6][assets] |1280 |**54.4** |**54.4** |**72.0** |22.4 | |141.8 |222.9
+| | | | | | || |
+[YOLOv5x6][assets] TTA |1280 |**55.0** |**55.0** |**72.0** |70.8 | |- |-
 
 <details>
 <summary>Table Notes (click to expand)</summary>
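The P6 1280 rows in the table above are new in this release and are exposed through the new hubconf.py entrypoints later in this diff. A minimal PyTorch Hub sketch of pulling one of them (the image filename is a placeholder; assumes the repo's dependencies are installed and the release assets are downloadable):

```python
import torch

# Load one of the new YOLOv5-P6 1280 checkpoints via PyTorch Hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5x6', pretrained=True)

# P6 models are trained at 1280 px, so infer at that size
results = model('image.jpg', size=1280)  # accepts file path, URL, PIL, OpenCV or numpy input
results.print()  # print a per-image summary of detections
```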
@@ -44,7 +51,7 @@ This repository represents Ultralytics open-source research into future object d
 * AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP** by `python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
 * Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes FP16 inference, postprocessing and NMS. **Reproduce speed** by `python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
 * All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
-* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale augmentation. **Reproduce TTA** by `python test.py --data coco.yaml --img …`
+* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale augmentation. **Reproduce TTA** by `python test.py --data coco.yaml --img 1536 --iou 0.7 --augment`
 </details>
 
 
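Beyond the `test.py` command above, TTA can also be run in-process on a Hub model. A sketch, assuming the autoshape wrapper forwards the `augment` flag (added around this release) and using a placeholder image:

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5x6', pretrained=True)

# Test Time Augmentation: flip and multi-scale inference, the in-code
# counterpart of `test.py --img 1536 --iou 0.7 --augment`.
# augment=True is assumed to be supported by the hub wrapper here.
results = model('image.jpg', size=1536, augment=True)
results.print()
```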
@@ -85,7 +92,7 @@ YOLOv5 may be run in any of the following up-to-date verified environments (with
 
 ## Inference
 
-detect.py runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
+`detect.py` runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
 ```bash
 $ python detect.py --source 0  # webcam
                             file.jpg  # image
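For the same behavior without the CLI, a PyTorch Hub sketch that mirrors `detect.py` on a single image (the filename is a placeholder; `results.save()` is assumed available on the Detections helper returned by autoshaped models):

```python
import torch

# Programmatic counterpart of `python detect.py --source file.jpg`
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
results = model('file.jpg')  # sources: path, URL, PIL, OpenCV, numpy
results.save()  # writes annotated copies, similar to detect.py's runs/detect output
```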
hubconf.py
CHANGED
@@ -55,84 +55,68 @@ def create(name, pretrained, channels, classes, autoshape):
     raise Exception(s) from e
 
 
-def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True):
-    """YOLOv5-small model from https://github.com/ultralytics/yolov5
+def custom(path_or_model='path/to/model.pt', autoshape=True):
+    """YOLOv5-custom model https://github.com/ultralytics/yolov5
 
-    Arguments:
-        pretrained (bool): load pretrained weights into the model, default=False
-        channels (int): number of input channels, default=3
-        classes (int): number of model classes, default=80
+    Arguments (3 options):
+        path_or_model (str): 'path/to/model.pt'
+        path_or_model (dict): torch.load('path/to/model.pt')
+        path_or_model (nn.Module): torch.load('path/to/model.pt')['model']
 
     Returns:
         pytorch model
     """
-    return create('yolov5s', pretrained, channels, classes, autoshape)
+    model = torch.load(path_or_model) if isinstance(path_or_model, str) else path_or_model  # load checkpoint
+    if isinstance(model, dict):
+        model = model['ema' if model.get('ema') else 'model']  # load model
 
+    hub_model = Model(model.yaml).to(next(model.parameters()).device)  # create
+    hub_model.load_state_dict(model.float().state_dict())  # load state_dict
+    hub_model.names = model.names  # class names
+    if autoshape:
+        hub_model = hub_model.autoshape()  # for file/URI/PIL/cv2/np inputs and NMS
+    device = select_device('0' if torch.cuda.is_available() else 'cpu')  # default to GPU if available
+    return hub_model.to(device)
 
-def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True):
-    """YOLOv5-medium model from https://github.com/ultralytics/yolov5
 
-    Arguments:
-        pretrained (bool): load pretrained weights into the model, default=False
-        channels (int): number of input channels, default=3
-        classes (int): number of model classes, default=80
-
-    Returns:
-        pytorch model
-    """
+def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True):
+    # YOLOv5-small model https://github.com/ultralytics/yolov5
+    return create('yolov5s', pretrained, channels, classes, autoshape)
+
+
+def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True):
+    # YOLOv5-medium model https://github.com/ultralytics/yolov5
     return create('yolov5m', pretrained, channels, classes, autoshape)
 
 
 def yolov5l(pretrained=True, channels=3, classes=80, autoshape=True):
-    """YOLOv5-large model from https://github.com/ultralytics/yolov5
-
-    Arguments:
-        pretrained (bool): load pretrained weights into the model, default=False
-        channels (int): number of input channels, default=3
-        classes (int): number of model classes, default=80
-
-    Returns:
-        pytorch model
-    """
+    # YOLOv5-large model https://github.com/ultralytics/yolov5
     return create('yolov5l', pretrained, channels, classes, autoshape)
 
 
 def yolov5x(pretrained=True, channels=3, classes=80, autoshape=True):
-    """YOLOv5-xlarge model from https://github.com/ultralytics/yolov5
-
-    Arguments:
-        pretrained (bool): load pretrained weights into the model, default=False
-        channels (int): number of input channels, default=3
-        classes (int): number of model classes, default=80
-
-    Returns:
-        pytorch model
-    """
-    return create('yolov5x', pretrained, channels, classes, autoshape)
+    # YOLOv5-xlarge model https://github.com/ultralytics/yolov5
+    return create('yolov5x', pretrained, channels, classes, autoshape)
 
 
-def custom(path_or_model='path/to/model.pt', autoshape=True):
-    """YOLOv5-custom model https://github.com/ultralytics/yolov5
-
-    Arguments (3 options):
-        path_or_model (str): 'path/to/model.pt'
-        path_or_model (dict): torch.load('path/to/model.pt')
-        path_or_model (nn.Module): torch.load('path/to/model.pt')['model']
-
-    Returns:
-        pytorch model
-    """
-    model = torch.load(path_or_model) if isinstance(path_or_model, str) else path_or_model  # load checkpoint
-    if isinstance(model, dict):
-        model = model['ema' if model.get('ema') else 'model']  # load model
-
-    hub_model = Model(model.yaml).to(next(model.parameters()).device)  # create
-    hub_model.load_state_dict(model.float().state_dict())  # load state_dict
-    hub_model.names = model.names  # class names
-    hub_model = hub_model.autoshape()  # for file/URI/PIL/cv2/np inputs and NMS
-    device = select_device('0' if torch.cuda.is_available() else 'cpu')  # default to GPU if available
-    return hub_model.to(device)
+def yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True):
+    # YOLOv5-small model https://github.com/ultralytics/yolov5
+    return create('yolov5s6', pretrained, channels, classes, autoshape)
+
+
+def yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True):
+    # YOLOv5-medium model https://github.com/ultralytics/yolov5
+    return create('yolov5m6', pretrained, channels, classes, autoshape)
+
+
+def yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True):
+    # YOLOv5-large model https://github.com/ultralytics/yolov5
+    return create('yolov5l6', pretrained, channels, classes, autoshape)
+
+
+def yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True):
+    # YOLOv5-xlarge model https://github.com/ultralytics/yolov5
+    return create('yolov5x6', pretrained, channels, classes, autoshape)
 
 
 if __name__ == '__main__':
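The relocated `custom()` entrypoint above is reachable through torch.hub just like the named models. A sketch with a placeholder checkpoint path (for example, weights written by train.py):

```python
import torch

# 'path/to/best.pt' is a placeholder for your own trained checkpoint;
# torch.hub forwards the keyword argument to the custom() entrypoint.
model = torch.hub.load('ultralytics/yolov5', 'custom', path_or_model='path/to/best.pt')
results = model('image.jpg')  # placeholder image
results.print()
```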
utils/plots.py
CHANGED
@@ -243,7 +243,7 @@ def plot_study_txt(path='', x=None):  # from utils.plots import *; plot_study_txt()
     # ax = ax.ravel()
 
     fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True)
-    # for f in [Path(path) / f'study_coco_{x}.txt' for x in ['…
+    # for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolov5s6', 'yolov5m6', 'yolov5l6', 'yolov5x6']]:
     for f in sorted(Path(path).glob('study*.txt')):
         y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T
         x = np.arange(y.shape[1]) if x is None else np.array(x)
@@ -253,7 +253,7 @@ def plot_study_txt(path='', x=None):  # from utils.plots import *; plot_study_txt()
         # ax[i].set_title(s[i])
 
         j = y[3].argmax() + 1
-        ax2.plot(y[6, :j], y[3, :j] * 1E2, '.-', linewidth=2, markersize=8,
+        ax2.plot(y[6, 1:j], y[3, 1:j] * 1E2, '.-', linewidth=2, markersize=8,
                  label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO'))
 
     ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5],
@@ -261,7 +261,7 @@ def plot_study_txt(path='', x=None):  # from utils.plots import *; plot_study_txt()
 
     ax2.grid(alpha=0.2)
     ax2.set_yticks(np.arange(20, 60, 5))
-    ax2.set_xlim(0, …
+    ax2.set_xlim(0, 57)
     ax2.set_ylim(30, 55)
     ax2.set_xlabel('GPU Speed (ms/img)')
     ax2.set_ylabel('COCO AP val')
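For context, `plot_study_txt()` consumes the study_*.txt files written by the `test.py --task study` sweep referenced in the README hunk above. A minimal sketch of the round trip, with paths assumed relative to the repo root:

```python
# Generate the study files first, e.g.:
#   python test.py --task study --data coco.yaml --iou 0.7 \
#       --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt
from utils.plots import plot_study_txt

# Globs study*.txt under the given path and plots the GPU-speed vs
# COCO AP trade-off curves shown in the README figures.
plot_study_txt(path='')
```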