Merge branch 'main' of https://huggingface.co/spaces/SEA-AI/det-metrics
- README.md (+58 -2)
- det-metrics.py (+141 -8)
README.md
CHANGED
@@ -61,10 +61,11 @@ results = module.compute()
 print(results)
 ```
 
-This will output the following dictionary containing metrics for the detection model. The key of the dictionary will be the model name, or "custom" if no model names are available, as in this case.
+This will output the following dictionary containing metrics for the detection model. The key of the dictionary will be the model name, or "custom" if no model names are available, as in this case. Additionally, there is a single key "classes" that maps the labels to the respective indices in the results; if the results are class-agnostic, the value of "classes" is None.
 
 ```json
 {
+  "classes": ...,
   "custom": {
     "metrics": ...,
     "eval": ...,
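To make the "custom" flow above concrete, here is a minimal sketch of computing metrics without a payload. The reference dict shape follows the dummy data in det-metrics.py; the `scores` key on predictions is an assumption based on the usual COCO-style detection format.

```python
import evaluate

# Minimal sketch: no payload, so results are keyed by "custom".
module = evaluate.load("SEA-AI/det-metrics")

# COCO-style detection dicts; the reference shape matches the dummy data in
# det-metrics.py, and the "scores" key on predictions is an assumption.
predictions = [{"boxes": [[1.0, 2.0, 3.0, 4.0]], "labels": [0], "scores": [0.9]}]
references = [{"boxes": [[1.0, 2.0, 3.0, 4.0]], "labels": [0], "area": [1.0]}]

module.add(prediction=predictions, reference=references)
results = module.compute()
print(results)  # {"classes": ..., "custom": {"metrics": ..., "eval": ..., "params": ...}}
```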
@@ -83,6 +84,8 @@ See [Output Values](#output-values) for more detailed information about the returned results structure.
 
 Integrate SEA-AI/det-metrics with FiftyOne datasets for enhanced analysis and visualization:
 
+### Class-agnostic Example
+
 ```python
 import evaluate
 import logging
@@ -97,6 +100,7 @@ processor = PayloadProcessor(
     models=["yolov5n6_RGB_D2304-v1_9C", "tf1zoo_ssd-mobilenet-v2_agnostic_D2207"],
     sequence_list=["Trip_14_Seq_1"],
     data_type="rgb",
+    slices=["rgb"]
 )
 
 # Evaluate using SEA-AI/det-metrics
@@ -127,6 +131,54 @@ This will output the following dictionary containing metrics for the detection model.
 
 See [Output Values](#output-values) for more detailed information about the returned results structure, which includes metrics, eval, and params fields for each model passed as input.
 
+### Class-specific Example
+```python
+import evaluate
+import logging
+from seametrics.payload.processor import PayloadProcessor
+
+logging.basicConfig(level=logging.WARNING)
+
+# Configure your dataset and model details
+processor = PayloadProcessor(
+    dataset_name="SAILING_DATASET_QA",
+    gt_field="ground_truth_det",
+    models=["yolov5n6_RGB_D2304-v1_9C", "tf1zoo_ssd-mobilenet-v2_agnostic_D2207"],
+    sequence_list=["Trip_14_Seq_1"],
+    data_type="rgb",
+    slices=["rgb"]
+)
+
+# Evaluate using SEA-AI/det-metrics
+module = evaluate.load("SEA-AI/det-metrics", payload=processor.payload, class_agnostic=False)
+print("Used labels: \n", module.label_mapping)
+results = module.compute()
+
+print("Results: \n", results)
+```
+
+```json
+Used labels:
+{
+    "SHIP": 0,
+    "FISHING_SHIP": 0,
+    "BOAT_WITHOUT_SAILS": 1,
+    ...
+}
+Results:
+{
+    "yolov5n6_RGB_D2304-v1_9C": {
+        "metrics": ...,  # metrics are arrays instead of single numbers, where the indices represent class 0, 1, etc. from the label mapping
+        "eval": ...,
+        "params": ...
+    },
+    "tf1zoo_ssd-mobilenet-v2_agnostic_D2207": {
+        "metrics": ...,
+        "eval": ...,
+        "params": ...
+    }
+}
+```
 
 ## Metric Settings
 
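To connect the printed label mapping with the per-class metric arrays, a small illustrative sketch; the values and the flat `tp` array are hypothetical, and in the real output the arrays sit inside the nested `metrics` structure:

```python
import numpy as np

# Hypothetical excerpts of the output above.
label_mapping = {"SHIP": 0, "FISHING_SHIP": 0, "BOAT_WITHOUT_SAILS": 1}
tp = np.array([10, 4])  # per-class true positives: index 0, then index 1

# Group the raw string labels by their merged numeric class index.
classes = {}
for name, idx in label_mapping.items():
    classes.setdefault(idx, []).append(name)

for idx, names in sorted(classes.items()):
    print(f"class {idx} ({', '.join(names)}): tp={tp[idx]}")
# class 0 (SHIP, FISHING_SHIP): tp=10
# class 1 (BOAT_WITHOUT_SAILS): tp=4
```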
@@ -136,6 +188,7 @@ Customize your evaluation by specifying various parameters when loading SEA-AI/det-metrics:
 - **bbox_format**: Set the bounding box format (e.g., `"xywh"`).
 - **iou_threshold**: Choose the IoU threshold for determining correct detections.
 - **class_agnostic**: Specify whether to calculate metrics disregarding class labels.
+- **label_mapping**: Provide an optional mapping of string labels to numeric labels as a dictionary (e.g., `{"SHIP": 0, "BOAT": 1}`). Defaults to the label mapping defined by the SEA.AI label merging map.
 
 ```python
 area_ranges_tuples = [
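A sketch of passing these settings at load time; the parameter names come from this README and det-metrics.py, while the two-class mapping is a hypothetical example:

```python
import evaluate

# Settings sketch; evaluate.load forwards keyword arguments to the metric.
module = evaluate.load(
    "SEA-AI/det-metrics",
    bbox_format="xywh",
    iou_threshold=0.5,
    class_agnostic=False,
    label_mapping={"SHIP": 0, "BOAT": 1},  # hypothetical custom merge
)
```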
@@ -190,6 +243,8 @@ The SEA-AI/det-metrics metrics dictionary provides a detailed breakdown of performance:
 - **fpi**: Number of images with predictions but no ground truths.
 - **nImgs**: Total number of images evaluated.
 
+If det-metrics is computed with `class_agnostic=False`, all counts (`tp`/`fp`/`fn`/`duplicates`/`support`/`fpi`) and scores (`precision`/`recall`/`f1`) are arrays instead of single numbers, indexed by class. For a label mapping of `{"SHIP": 0, "BOAT": 1}`, an exemplary array would be `tp=np.array([10, 4])`, meaning there are 10 true positive ships and 4 true positive boats.
+
 ### Eval
 
 The SEA-AI/det-metrics evaluation dictionary provides details about evaluation metrics and results. Below is a description of each field:
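Per-class scores then follow elementwise from those arrays; a quick sketch using the `tp` values above plus made-up `fp`/`fn` counts:

```python
import numpy as np

tp = np.array([10, 4])  # SHIP, BOAT — from the example above
fp = np.array([2, 6])   # made-up counts, purely for illustration
fn = np.array([5, 1])

precision = tp / (tp + fp)  # array([0.833..., 0.4])
recall = tp / (tp + fn)     # array([0.666..., 0.8])
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```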
@@ -243,7 +298,8 @@ The params return value of the COCO evaluation parameters in PyCOCO represents a
 - **areaRng**: Object area ranges for evaluation. This parameter defines the sizes of objects to evaluate. It is specified as a list of tuples, where each tuple represents a range of area in square pixels.
 - **maxDets**: List of thresholds on maximum detections per image for evaluation. By default, it evaluates with thresholds of 1, 10, and 100 detections per image.
 - **iouType**: Type of IoU calculation used for evaluation. It can be 'segm' (segmentation), 'bbox' (bounding box), or 'keypoints'.
-- **
+- **class_agnostic**: Boolean flag indicating whether class labels are ignored during evaluation (default is `True`).
+- **label_mapping**: Dict of `str: int` pairs mapping the payload's string labels to numeric labels (default is the label mapping defined by the class merging structure). Should be provided only if `class_agnostic=False`.
 
 > Note:
 > If useCats=0, category labels are ignored as in proposal scoring.
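As a concrete illustration of `areaRng`, a sketch of defining `area_ranges_tuples`; the `(label, [min, max])` tuple shape matches how det-metrics.py unpacks it, while the COCO-style boundaries are illustrative:

```python
import evaluate

# (label, [min_area, max_area]) in squared pixels; det-metrics.py unpacks this as
# area_ranges = [v for _, v in area_ranges_tuples].
area_ranges_tuples = [
    ("all", [0, 1e5 ** 2]),
    ("small", [0, 32 ** 2]),
    ("medium", [32 ** 2, 96 ** 2]),
    ("large", [96 ** 2, 1e5 ** 2]),
]
module = evaluate.load("SEA-AI/det-metrics", area_ranges_tuples=area_ranges_tuples)
```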
det-metrics.py
CHANGED
@@ -13,7 +13,7 @@
 # limitations under the License.
 """TODO: Add a description here."""
 
-from typing import List, Literal, Tuple
+from typing import List, Literal, Tuple, Dict
 
 import datasets
 import evaluate
@@ -23,6 +23,43 @@ from seametrics.detection import PrecisionRecallF1Support
 from seametrics.detection.utils import payload_to_det_metric
 from seametrics.payload import Payload
 
+LABEL_MAPPING = {
+    'SHIP': 0,
+    'BATTLE_SHIP': 0,
+    'FISHING_SHIP': 0,
+    'CONTAINER_SHIP': 0,
+    'CRUISE_SHIP': 0,
+    'BOAT_WITHOUT_SAILS': 1,
+    'MOTORBOAT': 1,
+    'MARITIME_VEHICLE': 1,
+    'BOAT': 1,
+    'SAILING_BOAT': 2,
+    'SAILING_BOAT_WITH_CLOSED_SAILS': 2,
+    'SAILING_BOAT_WITH_OPEN_SAILS': 2,
+    'LEISURE_VEHICLE': 3,
+    'WATER_SKI': 3,
+    'BUOY': 4,
+    'CONSTRUCTION': 4,
+    'FISHING_BUOY': 4,
+    'HARBOUR_BUOY': 4,
+    'FLOTSAM': 5,
+    'CONTAINER': 5,
+    'SEA_MINE': 5,
+    'WOODEN_LOG': 5,
+    'UNKNOWN': 5,
+    'HUMAN_IN_WATER': 5,
+    'FAR_AWAY_OBJECT': 6,
+    'MARITIME_ANIMAL': 7,
+    'ANIMAL': 7,
+    'FISH': 7,
+    'DOLPHIN': 7,
+    'MAMMAL': 7,
+    'WHALE': 7,
+    'AERIAL_ANIMAL': 8,
+    'SEAGULL': 8,
+    'BIRD': 8,
+}
+
 _CITATION = """\
 @InProceedings{coco:2020,
 title = {Microsoft {COCO:} Common Objects in Context},
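A small sketch of what this default mapping encodes, inverting it to list the raw labels merged into each class index:

```python
from collections import defaultdict

# Invert LABEL_MAPPING: merged class index -> raw labels collapsed into it.
groups = defaultdict(list)
for label, idx in LABEL_MAPPING.items():
    groups[idx].append(label)

for idx in sorted(groups):
    print(idx, groups[idx])
# 0 ['SHIP', 'BATTLE_SHIP', 'FISHING_SHIP', 'CONTAINER_SHIP', 'CRUISE_SHIP']
# 1 ['BOAT_WITHOUT_SAILS', 'MOTORBOAT', 'MARITIME_VEHICLE', 'BOAT']
# ...
```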
@@ -124,6 +161,7 @@ class DetectionMetric(evaluate.Metric):
         bbox_format: str = "xywh",
         iou_type: Literal["bbox", "segm"] = "bbox",
         payload: Payload = None,
+        label_mapping: Dict[str, int] = None,
         **kwargs,
     ):
         super().__init__(**kwargs)
@@ -131,15 +169,25 @@ class DetectionMetric(evaluate.Metric):
         # save parameters for later
         self.payload = payload
         self.model_names = payload.models if payload else ["custom"]
+        self.iou_threshold = iou_threshold
+        self.area_ranges_tuples = area_ranges_tuples
+        self.class_agnostic = class_agnostic
+        self.iou_type = iou_type
+        self.bbox_format = bbox_format
+        self.label_mapping = LABEL_MAPPING if not self.class_agnostic else None
+        if not class_agnostic:
+            if label_mapping:
+                print("WARNING: overwriting the default label mapping with the "
+                      "custom label mapping provided via `label_mapping`.")
+                self.label_mapping = label_mapping
+
+        # postprocess parameters
         self.iou_thresholds = (
             iou_threshold if isinstance(iou_threshold, list) else [iou_threshold]
         )
         self.area_ranges = [v for _, v in area_ranges_tuples]
         self.area_ranges_labels = [k for k, _ in area_ranges_tuples]
-
-        self.iou_type = iou_type
-        self.box_format = bbox_format
-
+
         # initialize coco_metrics
         self.coco_metric = PrecisionRecallF1Support(
             iou_thresholds=self.iou_thresholds,
@@ -147,7 +195,8 @@ class DetectionMetric(evaluate.Metric):
             area_ranges_labels=self.area_ranges_labels,
             class_agnostic=self.class_agnostic,
             iou_type=self.iou_type,
-            box_format=self.box_format,
+            box_format=self.bbox_format,
+            labels=sorted(set(self.label_mapping.values())) if self.label_mapping else None,
         )
 
         # initialize evaluation metric
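For reference, with the default LABEL_MAPPING the `labels` argument above reduces to the sorted merged class indices:

```python
# Equivalent to the expression passed as `labels=` above, for the default mapping.
labels = sorted(set(LABEL_MAPPING.values()))
print(labels)  # [0, 1, 2, 3, 4, 5, 6, 7, 8]
```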
@@ -233,6 +282,7 @@ class DetectionMetric(evaluate.Metric):
         """Called within the evaluate.Metric.compute() method"""
 
         results = {}
+        results["classes"] = self.label_mapping
         for model_name in self.model_names:
             print(f"\n##### {model_name} #####")
             # add payload if available (otherwise predictions and references must be added with add function)
@@ -248,7 +298,7 @@ class DetectionMetric(evaluate.Metric):
                 area_ranges_labels=self.area_ranges_labels,
                 class_agnostic=self.class_agnostic,
                 iou_type=self.iou_type,
-                box_format=self.box_format,
+                box_format=self.bbox_format,
             )
         return results
 
@@ -256,7 +306,7 @@ class DetectionMetric(evaluate.Metric):
         """Converts the payload to the format expected by the metric"""
         # import only if needed since fiftyone is not a direct dependency
 
-        predictions, references = payload_to_det_metric(payload, model_name)
+        predictions, references = payload_to_det_metric(payload, model_name, class_agnostic=self.class_agnostic, label_mapping=self.label_mapping)
         self.add(prediction=predictions, reference=references)
 
         return self
@@ -308,6 +358,11 @@ class DetectionMetric(evaluate.Metric):
         import plotly.graph_objects as go
         from seametrics.detection.utils import get_confidence_metric_vals
 
+        if not self.class_agnostic:
+            raise ValueError(
+                "This method is not yet implemented for `self.class_agnostic=False`."
+            )
+
         # Create traces
         fig = go.Figure()
         metrics = ["precision", "recall", "f1"]
@@ -373,6 +428,11 @@ class DetectionMetric(evaluate.Metric):
             wandb: To interact with the Weights and Biases platform.
             datetime: To generate a timestamp for run names.
         """
+        if not self.class_agnostic:
+            raise ValueError(
+                "This method is not yet implemented for `self.class_agnostic=False`."
+            )
+
         import os
         import wandb
         import datetime
@@ -414,3 +474,76 @@ class DetectionMetric(evaluate.Metric):
         references = [{"boxes": [[1.0, 2.0, 3.0, 4.0]], "labels": [0], "area": [1.0]}]
 
         return predictions, references
+
+
+    def compute_from_payload(self, payload: Payload, **kwargs):
+        """
+        Compute the metric from the payload.
+
+        Args:
+            payload (Payload): The payload to compute the metric from.
+            **kwargs: Additional keyword arguments.
+
+        Returns:
+            dict: The computed metric results with the following format:
+            {
+                "model_name": {
+                    "overall": {
+                        "all": {"tp": ..., "fp": ..., "fn": ..., "f1": ...},
+                        ...  # more area ranges
+                    },
+                    "per_sequence": {
+                        "sequence_name": {
+                            "all": {...},
+                            ...  # more area ranges
+                        },
+                        ...  # more sequences
+                    }
+                },
+                ...  # more models
+            }
+
+        Note:
+            - If the metric does not support area ranges, the results are stored under the `all` key.
+            - If area ranges are provided they are displayed in the output; if `area_ranges_tuples` is None, all area ranges are displayed.
+        """
+        results = {}
+
+        for model_name in payload.models:
+            results[model_name] = {"overall": {}, "per_sequence": {}}
+
+            # per-sequence loop
+            for seq_name, sequence in payload.sequences.items():
+                print(f"\n##### {seq_name} #####")
+                # create a new payload with only this sequence and model
+                sequence_payload = Payload(
+                    dataset=payload.dataset,
+                    gt_field_name=payload.gt_field_name,
+                    models=[model_name],
+                    sequences={seq_name: sequence},
+                )
+                module = DetectionMetric(
+                    area_ranges_tuples=kwargs["area_ranges_tuples"],
+                    iou_threshold=self.iou_threshold,
+                    class_agnostic=self.class_agnostic,
+                    bbox_format=self.bbox_format,
+                    iou_type=self.iou_type,
+                    payload=sequence_payload,
+                )
+                results[model_name]["per_sequence"][seq_name] = module.compute()[model_name]["metrics"]
+
+            # overall per-model loop
+            model_payload = Payload(
+                dataset=payload.dataset,
+                gt_field_name=payload.gt_field_name,
+                models=[model_name],
+                sequences=payload.sequences,
+            )
+            module = DetectionMetric(
+                area_ranges_tuples=kwargs["area_ranges_tuples"],
+                iou_threshold=self.iou_threshold,
+                class_agnostic=self.class_agnostic,
+                bbox_format=self.bbox_format,
+                iou_type=self.iou_type,
+                payload=model_payload,
+            )
+            results[model_name]["overall"] = module.compute()[model_name]["metrics"]
+        return results
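A usage sketch for the new method, assuming a `processor` built as in the README examples; note that `area_ranges_tuples` currently has to be supplied via kwargs, since the method reads `kwargs["area_ranges_tuples"]` directly:

```python
import evaluate

# `processor.payload` as produced by PayloadProcessor in the README examples.
module = evaluate.load("SEA-AI/det-metrics", payload=processor.payload)
results = module.compute_from_payload(
    processor.payload,
    area_ranges_tuples=[("all", [0, 1e5 ** 2])],
)
# e.g. results["yolov5n6_RGB_D2304-v1_9C"]["per_sequence"]["Trip_14_Seq_1"]
```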