---
title: det-metrics
tags:
- evaluate
- metric
description: >-
  Modified cocoeval.py wrapped into torchmetrics' mAP metric, with a numpy dependency instead of torch.
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
emoji: 🕵️
---
# SEA-AI/det-metrics
This Hugging Face metric uses `seametrics.detection.PrecisionRecallF1Support` under the hood to compute COCO-like metrics for object detection tasks. It is a [modified cocoeval.py](https://github.com/SEA-AI/seametrics/blob/develop/seametrics/detection/cocoeval.py) wrapped inside [torchmetrics' mAP metric](https://lightning.ai/docs/torchmetrics/stable/detection/mean_average_precision.html), but with numpy arrays instead of torch tensors.
## Getting Started
To get started with det-metrics, make sure you have the necessary dependencies installed. This metric relies on the `evaluate` and `seametrics` libraries for metric calculation and integration with FiftyOne datasets.
### Installation
First, ensure you have Python 3.8 or later installed. Then, install det-metrics using pip:
```sh
pip install evaluate git+https://github.com/SEA-AI/seametrics@develop
```
### Basic Usage
Here's how to quickly evaluate your object detection models using SEA-AI/det-metrics:
```python
import evaluate
# Define your predictions and references (dict values can also be numpy arrays)
predictions = [
    {
        "boxes": [[449.3, 197.75390625, 6.25, 7.03125], [334.3, 181.58203125, 11.5625, 6.85546875]],
        "labels": [0, 0],
        "scores": [0.153076171875, 0.72314453125],
    }
]

references = [
    {
        "boxes": [[449.3, 197.75390625, 6.25, 7.03125], [334.3, 181.58203125, 11.5625, 6.85546875]],
        "labels": [0, 0],
        "area": [132.2, 83.8],
    }
]
# Load SEA-AI/det-metrics and evaluate
module = evaluate.load("SEA-AI/det-metrics")
module.add(prediction=predictions, reference=references)
results = module.compute()
print(results)
```
This will output the evaluation metrics for your detection model.
```console
{'all': {'range': [0, 10000000000.0],
         'iouThr': '0.00',
         'maxDets': 100,
         'tp': 2,
         'fp': 0,
         'fn': 0,
         'duplicates': 0,
         'precision': 1.0,
         'recall': 1.0,
         'f1': 1.0,
         'support': 2,
         'fpi': 0,
         'nImgs': 1}}
```
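Since `module.compute()` returns a plain dictionary keyed by area-range name, individual metrics can be read out directly. A minimal sketch using the example output above (the dict literal here just mirrors that output):

```python
# Example output as returned by module.compute() (copied from above)
results = {
    "all": {
        "range": [0, 10000000000.0],
        "iouThr": "0.00",
        "maxDets": 100,
        "tp": 2, "fp": 0, "fn": 0,
        "duplicates": 0,
        "precision": 1.0, "recall": 1.0, "f1": 1.0,
        "support": 2, "fpi": 0, "nImgs": 1,
    }
}

# Access individual metrics per area range
f1 = results["all"]["f1"]
support = results["all"]["support"]
print(f"f1={f1:.3f} over {support} ground-truth boxes")
```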
## FiftyOne Integration
Integrate SEA-AI/det-metrics with FiftyOne datasets for enhanced analysis and visualization:
```python
import evaluate
import logging
from seametrics.payload import PayloadProcessor
logging.basicConfig(level=logging.WARNING)
# Configure your dataset and model details
processor = PayloadProcessor(
    dataset_name="SAILING_DATASET_QA",
    gt_field="ground_truth_det",
    models=["yolov5n6_RGB_D2304-v1_9C"],
    sequence_list=["Trip_14_Seq_1"],
    data_type="rgb",
)
# Evaluate using SEA-AI/det-metrics
module = evaluate.load("SEA-AI/det-metrics")
module.add_payload(processor.payload)
results = module.compute()
print(results)
```
```console
{'all': {'range': [0, 10000000000.0],
         'iouThr': '0.00',
         'maxDets': 100,
         'tp': 89,
         'fp': 13,
         'fn': 15,
         'duplicates': 1,
         'precision': 0.8725490196078431,
         'recall': 0.8557692307692307,
         'f1': 0.8640776699029126,
         'support': 104,
         'fpi': 0,
         'nImgs': 22}}
```
## Metric Settings
Customize your evaluation by specifying various parameters when loading SEA-AI/det-metrics:
- **area_ranges_tuples**: Define different area ranges for metrics calculation.
- **bbox_format**: Set the bounding box format (e.g., `"xywh"`).
- **iou_thresholds**: Choose the IOU threshold(s) for determining correct detections.
- **class_agnostic**: Specify whether to calculate metrics disregarding class labels.
```python
area_ranges_tuples = [
    ("all", [0, 1e5**2]),
    ("small", [0**2, 6**2]),
    ("medium", [6**2, 12**2]),
    ("large", [12**2, 1e5**2]),
]

module = evaluate.load(
    "SEA-AI/det-metrics",
    iou_thresholds=[0.00001],
    area_ranges_tuples=area_ranges_tuples,
)
```
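The examples in this README use `xywh` boxes (top-left corner plus width and height). If your model emits `xyxy` corner coordinates instead, a small conversion helper can bridge the gap before the boxes reach the metric; `xyxy_to_xywh` below is a hypothetical helper, not part of `seametrics`:

```python
def xyxy_to_xywh(box):
    """Convert [x1, y1, x2, y2] corner coordinates to [x, y, w, h]."""
    x1, y1, x2, y2 = box
    return [x1, y1, x2 - x1, y2 - y1]

# A detection spanning (10, 20) to (30, 50) becomes x=10, y=20, w=20, h=30
print(xyxy_to_xywh([10.0, 20.0, 30.0, 50.0]))  # [10.0, 20.0, 20.0, 30.0]
```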
## Output Values
SEA-AI/det-metrics provides a detailed breakdown of performance metrics for each specified area range:
- **range**: The area range considered.
- **iouThr**: The IOU threshold applied.
- **maxDets**: The maximum number of detections evaluated.
- **tp/fp/fn**: Counts of true positives, false positives, and false negatives.
- **duplicates**: Number of duplicate detections.
- **precision/recall/f1**: Calculated precision, recall, and F1 score.
- **support**: Number of ground truth boxes considered.
- **fpi**: Number of images with predictions but no ground truths.
- **nImgs**: Total number of images evaluated.
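The aggregate scores follow the standard definitions built from the raw counts. Plugging in the counts from the FiftyOne example output above (tp=89, fp=13, fn=15) reproduces the reported values:

```python
tp, fp, fn = 89, 13, 15  # counts from the FiftyOne example output above

precision = tp / (tp + fp)  # fraction of predictions that are correct
recall = tp / (tp + fn)     # fraction of ground-truth boxes that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(precision)  # 0.8725490196078431
print(recall)     # 0.8557692307692307
print(f1)         # ≈ 0.8641
```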
## Further References
- **seametrics Library**: Explore the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics/tree/main) for more details on the underlying library.
- **Pycoco Tools**: SEA-AI/det-metrics calculations are based on [pycoco tools](https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools), a widely used library for COCO dataset evaluation.
- **Understanding Metrics**: For a deeper understanding of precision, recall, and other metrics, read [this comprehensive guide](https://www.analyticsvidhya.com/blog/2020/09/precision-recall-machine-learning/).
## Contribution
Your contributions are welcome! If you'd like to improve SEA-AI/det-metrics or add new features, please feel free to fork the repository, make your changes, and submit a pull request.