---
title: det-metrics
tags:
- evaluate
- metric
description: >-
  Modified cocoeval.py wrapped into torchmetrics' mAP metric, with a numpy
  instead of a torch dependency.
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
emoji: 🕵️
---

# SEA-AI/det-metrics

This Hugging Face metric uses `seametrics.detection.PrecisionRecallF1Support` under the hood to compute COCO-like metrics for object detection tasks. It is a [modified cocoeval.py](https://github.com/SEA-AI/seametrics/blob/develop/seametrics/detection/cocoeval.py) wrapped inside [torchmetrics' mAP metric](https://lightning.ai/docs/torchmetrics/stable/detection/mean_average_precision.html), but with numpy arrays instead of torch tensors.

## Getting Started

To get started with det-metrics, make sure you have the necessary dependencies installed. This metric relies on the `evaluate` and `seametrics` libraries for metric calculation and integration with FiftyOne datasets.

### Installation

First, ensure you have Python 3.8 or later installed. Then, install det-metrics with pip:

```sh
pip install evaluate git+https://github.com/SEA-AI/seametrics@develop
```

### Basic Usage

Here's how to quickly evaluate your object detection models using SEA-AI/det-metrics:

```python
import evaluate

# Define your predictions and references (dict values can also be numpy arrays)
predictions = [
    {
        "boxes": [[449.3, 197.75390625, 6.25, 7.03125], [334.3, 181.58203125, 11.5625, 6.85546875]],
        "labels": [0, 0],
        "scores": [0.153076171875, 0.72314453125],
    }
]

references = [
    {
        "boxes": [[449.3, 197.75390625, 6.25, 7.03125], [334.3, 181.58203125, 11.5625, 6.85546875]],
        "labels": [0, 0],
        "area": [132.2, 83.8],
    }
]

# Load SEA-AI/det-metrics and evaluate
module = evaluate.load("SEA-AI/det-metrics")
module.add(prediction=predictions, reference=references)
results = module.compute()

print(results)
```

This will output the evaluation metrics for your detection model:

```console
{'all': {'range': [0, 10000000000.0],
         'iouThr': '0.00',
         'maxDets': 100,
         'tp': 2,
         'fp': 0,
         'fn': 0,
         'duplicates': 0,
         'precision': 1.0,
         'recall': 1.0,
         'f1': 1.0,
         'support': 2,
         'fpi': 0,
         'nImgs': 1}}
```
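As noted in the code comment above, the dict values can also be numpy arrays instead of plain Python lists. A minimal sketch of the same predictions in array form (illustrative only):

```python
import numpy as np

# Same prediction structure as above, with numpy arrays in place of lists
predictions = [
    {
        "boxes": np.array(
            [[449.3, 197.75390625, 6.25, 7.03125],
             [334.3, 181.58203125, 11.5625, 6.85546875]]
        ),
        "labels": np.array([0, 0]),
        "scores": np.array([0.153076171875, 0.72314453125]),
    }
]

print(predictions[0]["boxes"].shape)  # one (2, 4) array of xywh boxes
```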

## FiftyOne Integration

Integrate SEA-AI/det-metrics with FiftyOne datasets for enhanced analysis and visualization:

```python
import logging

import evaluate
from seametrics.payload.processor import PayloadProcessor

logging.basicConfig(level=logging.WARNING)

# Configure your dataset and model details
processor = PayloadProcessor(
    dataset_name="SAILING_DATASET_QA",
    gt_field="ground_truth_det",
    models=["yolov5n6_RGB_D2304-v1_9C"],
    sequence_list=["Trip_14_Seq_1"],
    data_type="rgb",
)

# Evaluate using SEA-AI/det-metrics
module = evaluate.load("SEA-AI/det-metrics")
module.add_payload(processor.payload)
results = module.compute()

print(results)
```

```console
{'all': {'range': [0, 10000000000.0],
         'iouThr': '0.00',
         'maxDets': 100,
         'tp': 89,
         'fp': 13,
         'fn': 15,
         'duplicates': 1,
         'precision': 0.8725490196078431,
         'recall': 0.8557692307692307,
         'f1': 0.8640776699029126,
         'support': 104,
         'fpi': 0,
         'nImgs': 22}}
```

## Metric Settings

Customize your evaluation by specifying various parameters when loading SEA-AI/det-metrics:

- **area_ranges_tuples**: Define different area ranges for metrics calculation.
- **bbox_format**: Set the bounding box format (e.g., `"xywh"`).
- **iou_threshold**: Choose the IoU threshold for determining correct detections.
- **class_agnostic**: Specify whether to calculate metrics disregarding class labels.
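For reference, `"xywh"` denotes a box as top-left corner plus width and height. The small helper below (hypothetical, not part of `seametrics`) illustrates how it relates to the corner-based `"xyxy"` convention:

```python
def xywh_to_xyxy(box):
    """Convert [x, y, w, h] (top-left corner + size) to [x1, y1, x2, y2] corners."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

print(xywh_to_xyxy([449.3, 197.75390625, 6.25, 7.03125]))
```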

```python
area_ranges_tuples = [
    ("all", [0, 1e5**2]),
    ("small", [0**2, 6**2]),
    ("medium", [6**2, 12**2]),
    ("large", [12**2, 1e5**2]),
]

module = evaluate.load(
    "SEA-AI/det-metrics",
    iou_threshold=[0.00001],
    area_ranges_tuples=area_ranges_tuples,
)
```
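Each tuple names a bucket of box areas (in pixels²) over which metrics are reported separately; since `"all"` spans the full range, a box can fall into more than one bucket. A quick illustration of this bucketing (the helper is hypothetical; exact boundary handling is up to the underlying cocoeval implementation):

```python
area_ranges_tuples = [
    ("all", [0, 1e5**2]),
    ("small", [0**2, 6**2]),
    ("medium", [6**2, 12**2]),
    ("large", [12**2, 1e5**2]),
]

def matching_ranges(area, ranges):
    """Names of all ranges whose [lo, hi] interval contains the given box area."""
    return [name for name, (lo, hi) in ranges if lo <= area <= hi]

print(matching_ranges(50.0, area_ranges_tuples))  # a 50 px² box: ['all', 'medium']
```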

## Output Values

SEA-AI/det-metrics provides a detailed breakdown of performance metrics for each specified area range:

- **range**: The area range considered.
- **iouThr**: The IoU threshold applied.
- **maxDets**: The maximum number of detections evaluated.
- **tp/fp/fn**: Counts of true positives, false positives, and false negatives.
- **duplicates**: Number of duplicate detections.
- **precision/recall/f1**: Calculated precision, recall, and F1 score.
- **support**: Number of ground truth boxes considered.
- **fpi**: Number of images with predictions but no ground truths.
- **nImgs**: Total number of images evaluated.
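The precision, recall, and f1 values follow the standard definitions in terms of the tp/fp/fn counts; a quick sanity check against the FiftyOne example output above (illustrative, not library code):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Counts from the FiftyOne example: tp=89, fp=13, fn=15
p, r, f1 = precision_recall_f1(89, 13, 15)
print(p, r, f1)  # ≈ 0.8725, 0.8558, 0.8641
```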

## Further References

- **seametrics Library**: Explore the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics/tree/main) for more details on the underlying library.
- **Pycoco Tools**: SEA-AI/det-metrics calculations are based on [pycocotools](https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools), a widely used library for COCO dataset evaluation.
- **Understanding Metrics**: For a deeper understanding of precision, recall, and other metrics, read [this comprehensive guide](https://www.analyticsvidhya.com/blog/2020/09/precision-recall-machine-learning/).

## Contribution

Your contributions are welcome! If you'd like to improve SEA-AI/det-metrics or add new features, please feel free to fork the repository, make your changes, and submit a pull request.