README Update (#1207)
README.md (changed)
## Inference

detect.py runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `inference/output`.
```bash
$ python detect.py --source 0  # webcam
                            file.jpg  # image
                            http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8  # http stream
```
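
Under the hood, a detector CLI has to decide what kind of input the `--source` string refers to. The sketch below is a hypothetical illustration of that dispatch (the function name and rules are illustrative, not taken from detect.py): a bare integer selects a webcam, a URL selects a stream, and anything else is treated as a file or directory path.

```python
def classify_source(source: str) -> str:
    """Classify a --source argument the way a detector CLI might.
    Hypothetical helper, not part of detect.py."""
    if source.isdigit():
        return 'webcam'  # e.g. '0' selects the first camera
    if source.startswith(('http://', 'https://', 'rtsp://', 'rtmp://')):
        return 'stream'  # network video stream
    return 'file'        # image, video, or directory path

print(classify_source('0'))         # webcam
print(classify_source('file.jpg'))  # file
```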

To run inference on example images in `inference/images`:
```bash
$ python detect.py --source inference/images --weights yolov5s.pt --conf 0.25

Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', img_size=640, iou_thres=0.45, output='inference/output', save_conf=False, save_txt=False, source='inference/images', update=False, view_img=False, weights='yolov5s.pt')
Using CUDA device0 _CudaDeviceProperties(name='Tesla V100-SXM2-16GB', total_memory=16160MB)

Downloading https://github.com/ultralytics/yolov5/releases/download/v3.0/yolov5s.pt to yolov5s.pt... 100%|██████████████| 14.5M/14.5M [00:00<00:00, 21.3MB/s]

Fusing layers...
Model Summary: 140 layers, 7.45958e+06 parameters, 0 gradients
image 1/2 yolov5/inference/images/bus.jpg: 640x480 4 persons, 1 buss, 1 skateboards, Done. (0.013s)
image 2/2 yolov5/inference/images/zidane.jpg: 384x640 2 persons, 2 ties, Done. (0.013s)
Results saved to yolov5/inference/output
Done. (0.124s)
```
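
The per-image log lines above follow a fixed pattern, which makes them easy to post-process. A small hypothetical parser, with the format inferred from the output shown (not an official YOLOv5 API):

```python
import re

def parse_detect_line(line: str):
    """Parse a per-image result line into (image_path, class_counts, seconds).
    Hypothetical helper; the format is inferred from the log output above."""
    m = re.match(r'image \d+/\d+ (\S+): \d+x\d+ (.*), Done\. \(([\d.]+)s\)', line)
    if not m:
        return None
    path, objects, secs = m.groups()
    counts = {}
    for part in objects.split(', '):        # e.g. '2 persons'
        n, name = part.split(' ', 1)
        counts[name] = int(n)
    return path, counts, float(secs)

line = 'image 2/2 yolov5/inference/images/zidane.jpg: 384x640 2 persons, 2 ties, Done. (0.013s)'
print(parse_detect_line(line))
# → ('yolov5/inference/images/zidane.jpg', {'persons': 2, 'ties': 2}, 0.013)
```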

<img src="https://user-images.githubusercontent.com/26833433/97107365-685a8d80-16c7-11eb-8c2e-83aac701d8b9.jpeg" width="500">

### PyTorch Hub

To run **batched inference** with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36):
```python
import torch
from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True).fuse().eval()  # yolov5s.pt
model = model.autoshape()  # for autoshaping of PIL/cv2/np inputs and NMS

# Images
img1 = Image.open('zidane.jpg')
img2 = Image.open('bus.jpg')
imgs = [img1, img2]  # batched list of images

# Inference
prediction = model(imgs, size=640)  # includes NMS
```
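
Detections in YOLOv5-style output are conventionally rows of `[x1, y1, x2, y2, confidence, class_id]`, one list of rows per image in the batch. The sketch below assumes that layout on mock data (the numbers are illustrative, not real model output) to show a typical confidence-threshold filter:

```python
CONF_THRES = 0.25  # same threshold as the detect.py example above

# Mock batched output: one list of [x1, y1, x2, y2, conf, class_id] rows per image
batch = [
    [[50, 30, 200, 300, 0.91, 0], [10, 10, 40, 60, 0.12, 27]],  # image 1
    [[0, 0, 640, 480, 0.88, 5]],                                # image 2
]

# Keep only detections at or above the confidence threshold
filtered = [[d for d in dets if d[4] >= CONF_THRES] for dets in batch]
for i, dets in enumerate(filtered):
    print(f'image {i + 1}: {len(dets)} detection(s) kept')
```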
## Training