nehulagrawal committed on
Commit 58d399a
1 Parent(s): 4988c5e

add ultralytics model card

Files changed (1)
  1. README.md +71 -0
README.md ADDED
@@ -0,0 +1,71 @@
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch

library_name: ultralytics
library_version: 8.0.43
inference: false

model-index:
- name: foduucom/product-detection-in-shelf-yolov8
  results:
  - task:
      type: object-detection

    metrics:
      - type: precision  # since mAP@0.5 is not available on hf.co/metrics
        value: 0.91294  # min: 0.0 - max: 1.0
        name: mAP@0.5(box)
---

<div align="center">
<img width="640" alt="foduucom/product-detection-in-shelf-yolov8" src="https://huggingface.co/foduucom/product-detection-in-shelf-yolov8/resolve/main/thumbnail.jpg">
</div>

### Supported Labels

```
['empty', 'product']
```

### How to use

- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):

```bash
pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
```

- Load the model and perform prediction (a short sketch for reading the returned detections follows this snippet):

```python
from ultralyticsplus import YOLO, render_result

# load model
model = YOLO('foduucom/product-detection-in-shelf-yolov8')

# set model parameters
model.overrides['conf'] = 0.25  # NMS confidence threshold
model.overrides['iou'] = 0.45  # NMS IoU threshold
model.overrides['agnostic_nms'] = False  # NMS class-agnostic
model.overrides['max_det'] = 1000  # maximum number of detections per image

# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'

# perform inference
results = model.predict(image)

# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
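The `results` returned above follow the standard Ultralytics `Results` API, so the raw detections can be read off directly. Below is a minimal sketch that continues from the snippet above; it assumes the usual `boxes.cls` / `boxes.conf` / `boxes.xyxy` tensors and the `model.names` id-to-label mapping, with the `'product'` / `'empty'` keys simply mirroring the supported labels listed earlier:

```python
from collections import Counter

# tally detections per label; model.names maps class ids to label strings
counts = Counter(model.names[int(cls)] for cls in results[0].boxes.cls)
print(f"products: {counts.get('product', 0)}, empty slots: {counts.get('empty', 0)}")

# per-box label, confidence and pixel coordinates
for box in results[0].boxes:
    label = model.names[int(box.cls)]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{label}: conf={float(box.conf):.2f} bbox=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```

`render_result` returns a PIL image, so the annotated output can also be written to disk with `render.save('shelf_annotated.jpg')` instead of only calling `render.show()` (the filename here is just an example).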