---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- image-segmentation
- pytorch
library_name: ultralytics
library_version: 8.0.6
inference: false
datasets:
- keremberke/pothole-segmentation
model-index:
- name: keremberke/yolov8n-pothole-segmentation
  results:
  - task:
      type: image-segmentation
    dataset:
      type: keremberke/pothole-segmentation
      name: pothole-segmentation
      split: validation
    metrics:
    - type: precision  # since mAP@0.5 is not available on hf.co/metrics
      value: 0.00706  # min: 0.0 - max: 1.0
      name: mAP@0.5(box)
    - type: precision  # since mAP@0.5 is not available on hf.co/metrics
      value: 0.00456  # min: 0.0 - max: 1.0
      name: mAP@0.5(mask)
---
<div align="center">
<img width="640" alt="keremberke/yolov8n-pothole-segmentation" src="https://huggingface.co/keremberke/yolov8n-pothole-segmentation/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['pothole']
```
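The `class` field in each detection row indexes into this label list, so for this single-class model every detection maps to `pothole`. A minimal sketch of the lookup (the `LABELS` list here simply mirrors the supported labels above; the class id is a hypothetical value):

```python
# Supported labels for this model; the class id in a detection row
# indexes this list.
LABELS = ['pothole']

class_id = 0  # hypothetical class id taken from a detection row
print(LABELS[int(class_id)])  # -> pothole
```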
### How to use
- Install [ultralytics](https://github.com/ultralytics/ultralytics) and [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install -U ultralytics ultralyticsplus
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_model_output

# load model
model = YOLO('keremberke/yolov8n-pothole-segmentation')

# set model parameters
model.overrides['conf'] = 0.25  # NMS confidence threshold
model.overrides['iou'] = 0.45  # NMS IoU threshold
model.overrides['agnostic_nms'] = False  # class-agnostic NMS
model.overrides['max_det'] = 1000  # maximum number of detections per image

# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'

# perform inference
for result in model.predict(image, return_outputs=True):
    print(result["det"])  # [[x1, y1, x2, y2, conf, class]]
    print(result["segment"])  # [segmentation mask]
    render = render_model_output(model=model, image=image, model_output=result)
    render.show()
```
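If you want to post-process the raw detections, each row of `result["det"]` follows the `[x1, y1, x2, y2, conf, class]` layout shown above. A minimal, self-contained sketch (using a hypothetical detection array in place of live model output) that filters rows by a confidence threshold and counts the remaining potholes:

```python
import numpy as np

# Hypothetical detections in the [x1, y1, x2, y2, conf, class] layout;
# in practice this array would come from result["det"].
det = np.array([
    [ 34.0,  50.0, 120.0, 140.0, 0.91, 0.0],
    [200.0, 180.0, 260.0, 230.0, 0.48, 0.0],
    [ 15.0, 300.0,  90.0, 360.0, 0.22, 0.0],
])

# keep only detections whose confidence (column 4) is >= 0.5
keep = det[det[:, 4] >= 0.5]
print(f"potholes kept: {len(keep)}")  # 1 row survives the 0.5 threshold
```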