Victoria Oberascher committed
Commit 99065cf · 2 Parent(s): 35fe85d 27234d1

Merge branch 'feature/confidence-curve'
README.md CHANGED
@@ -28,7 +28,7 @@ First, ensure you have Python 3.8 or later installed. Then, install det-metrics
28
  pip install evaluate git+https://github.com/SEA-AI/seametrics@develop
29
  ```
30
 
31
- ### Basic Usage
32
 
33
  Here's how to quickly evaluate your object detection models using SEA-AI/det-metrics:
34
 
@@ -60,22 +60,23 @@ results = module.compute()
60
  print(results)
61
  ```
62
 
63
- This will output the evaluation metrics for your detection model.
64
- ```
65
- {'all': {'range': [0, 10000000000.0],
66
- 'iouThr': '0.00',
67
- 'maxDets': 100,
68
- 'tp': 2,
69
- 'fp': 0,
70
- 'fn': 0,
71
- 'duplicates': 0,
72
- 'precision': 1.0,
73
- 'recall': 1.0,
74
- 'f1': 1.0,
75
- 'support': 2,
76
- 'fpi': 0,
77
- 'nImgs': 1}
78
  ```
79
 
80
  ## FiftyOne Integration
81
 
@@ -92,35 +93,40 @@ logging.basicConfig(level=logging.WARNING)
92
  processor = PayloadProcessor(
93
  dataset_name="SAILING_DATASET_QA",
94
  gt_field="ground_truth_det",
95
- models=["yolov5n6_RGB_D2304-v1_9C"],
96
  sequence_list=["Trip_14_Seq_1"],
97
  data_type="rgb",
98
  )
99
 
100
  # Evaluate using SEA-AI/det-metrics
101
- module = evaluate.load("SEA-AI/det-metrics")
102
- module.add_payload(processor.payload)
103
  results = module.compute()
104
 
105
  print(results)
106
  ```
107
-
108
- ```console
109
- {'all': {'range': [0, 10000000000.0],
110
- 'iouThr': '0.00',
111
- 'maxDets': 100,
112
- 'tp': 89,
113
- 'fp': 13,
114
- 'fn': 15,
115
- 'duplicates': 1,
116
- 'precision': 0.8725490196078431,
117
- 'recall': 0.8557692307692307,
118
- 'f1': 0.8640776699029126,
119
- 'support': 104,
120
- 'fpi': 0,
121
- 'nImgs': 22}}
122
  ```
123
124
  ## Metric Settings
125
 
126
  Customize your evaluation by specifying various parameters when loading SEA-AI/det-metrics:
@@ -146,8 +152,32 @@ module = evaluate.load(
146
  ```
147
 
148
  ## Output Values
149
 
150
- SEA-AI/det-metrics provides a detailed breakdown of performance metrics for each specified area range:
 
 
151
 
152
  - **range**: The area range considered.
153
  - **iouThr**: The IOU threshold applied.
@@ -159,6 +189,106 @@ SEA-AI/det-metrics provides a detailed breakdown of performance metrics for each
159
  - **fpi**: Number of images with predictions but no ground truths.
160
  - **nImgs**: Total number of images evaluated.
161
162
  ## Further References
163
 
164
  - **seametrics Library**: Explore the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics/tree/main) for more details on the underlying library.
 
28
  pip install evaluate git+https://github.com/SEA-AI/seametrics@develop
29
  ```
30
 
31
+ ## Basic Usage
32
 
33
  Here's how to quickly evaluate your object detection models using SEA-AI/det-metrics:
34
 
 
60
  print(results)
61
  ```
62
 
63
+ This will output the following dictionary containing metrics for the detection model. The dictionary key is the model name, or "custom" when no model name is available, as in this case.
64
+
65
+ ```json
66
+ {
67
+ "custom": {
68
+ "metrics": ...,
69
+ "eval": ...,
70
+ "params": ...
71
+ }
72
+ }
73
  ```
74
+ - `metrics`: A dictionary containing performance metrics for each area range
75
+ - `eval`: Output of COCOeval.accumulate()
76
+ - `params`: COCOeval parameters object
77
+
78
+ See [Output Values](#output-values) for more detailed information about the returned results structure, which includes metrics, eval, and params fields for each model passed as input.
79
+
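For example, the headline numbers can be read straight from the returned dictionary. This is a minimal sketch, not part of the documented API: it assumes the default area ranges, which include an `all` entry (see the Metrics section under Output Values).

```python
# Hedged sketch: pull the metrics for the default "custom" entry.
custom_results = results["custom"]
metrics = custom_results["metrics"]  # per-area-range performance numbers

# "all" is assumed to be present with the default area ranges
print(metrics["all"]["precision"], metrics["all"]["recall"], metrics["all"]["f1"])
```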
80
 
81
  ## FiftyOne Integration
82
 
 
93
  processor = PayloadProcessor(
94
  dataset_name="SAILING_DATASET_QA",
95
  gt_field="ground_truth_det",
96
+ models=["yolov5n6_RGB_D2304-v1_9C", "tf1zoo_ssd-mobilenet-v2_agnostic_D2207"],
97
  sequence_list=["Trip_14_Seq_1"],
98
  data_type="rgb",
99
  )
100
 
101
  # Evaluate using SEA-AI/det-metrics
102
+ module = evaluate.load("SEA-AI/det-metrics", payload=processor.payload)
 
103
  results = module.compute()
104
 
105
  print(results)
106
  ```
107
+ This will output the following dictionary containing metrics for each evaluated model. The dictionary keys are the model names.
108
+ ```json
109
+ {
110
+ "yolov5n6_RGB_D2304-v1_9C": {
111
+ "metrics": ...,
112
+ "eval": ...,
113
+ "params": ...
114
+ },
115
+ "tf1zoo_ssd-mobilenet-v2_agnostic_D2207": {
116
+ "metrics": ...,
117
+ "eval": ...,
118
+ "params": ...
119
+ }
120
+ }
 
121
  ```
122
 
123
+ - `metrics`: A dictionary containing performance metrics for each area range
124
+ - `eval`: Output of COCOeval.accumulate()
125
+ - `params`: COCOeval parameters object
126
+
127
+ See [Output Values](#output-values) for more detailed information about the returned results structure, which includes metrics, eval, and params fields for each model passed as input.
128
+
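Since the payload above contains two models, `results` has one entry per model. A small sketch for comparing them, again assuming the default area ranges with an `all` entry:

```python
# Hedged sketch: compare the evaluated models on the "all" area range.
for model_name, model_results in results.items():
    headline = model_results["metrics"]["all"]
    print(f"{model_name}: precision={headline['precision']:.3f} "
          f"recall={headline['recall']:.3f} f1={headline['f1']:.3f}")
```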
129
+
130
  ## Metric Settings
131
 
132
  Customize your evaluation by specifying various parameters when loading SEA-AI/det-metrics:
 
152
  ```
153
 
154
  ## Output Values
155
+ For every model passed as input, the results contain the `metrics`, `eval`, and `params` fields. If no model was passed (usage without a payload), the default model name "custom" is used.
156
+
157
+ ```json
158
+ {
159
+ "model_1": {
160
+ "metrics": ...,
161
+ "eval": ...,
162
+ "params": ...
163
+ },
164
+ "model_2": {
165
+ "metrics": ...,
166
+ "eval": ...,
167
+ "params": ...
168
+ },
169
+ "model_3": {
170
+ "metrics": ...,
171
+ "eval": ...,
172
+ "params": ...
173
+ },
174
+ ...
175
+ }
176
+ ```
177
 
178
+ ### Metrics
179
+
180
+ The SEA-AI/det-metrics `metrics` dictionary provides a detailed breakdown of performance metrics for each specified area range:
181
 
182
  - **range**: The area range considered.
183
  - **iouThr**: The IOU threshold applied.
 
189
  - **fpi**: Number of images with predictions but no ground truths.
190
  - **nImgs**: Total number of images evaluated.
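For illustration, a single area-range entry might look like the snippet below; the numbers are only an example and depend entirely on your data and settings.

```console
{'all': {'range': [0, 10000000000.0],
         'iouThr': '0.00',
         'maxDets': 100,
         'tp': 89,
         'fp': 13,
         'fn': 15,
         'duplicates': 1,
         'precision': 0.87,
         'recall': 0.86,
         'f1': 0.86,
         'support': 104,
         'fpi': 0,
         'nImgs': 22}}
```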
191
 
192
+ ### Eval
193
+
194
+ The SEA-AI/det-metrics `eval` dictionary provides details about the raw evaluation results. Below is a description of each field:
195
+
196
+ - **params**: Parameters used for evaluation, defining settings and conditions.
197
+
198
+ - **counts**: Dimensions of parameters used in evaluation, represented as a list [T, R, K, A, M]:
199
+ - T: IoU threshold (default: [1e-10])
200
+ - R: Recall threshold (not used)
201
+ - K: Class index (class-agnostic, so only 0)
202
+ - A: Area range (0=all, 1=valid_n, 2=valid_w, 3=tiny, 4=small, 5=medium, 6=large)
203
+ - M: Max detections (default: [100])
204
+
205
+ - **date**: The date when the evaluation was performed.
206
+
207
+ - **precision**: A multi-dimensional array [TxRxKxAxM] storing precision values for each evaluation setting.
208
+
209
+ - **recall**: A multi-dimensional array [TxKxAxM] storing maximum recall values for each evaluation setting.
210
+
211
+ - **scores**: Scores for each detection.
212
+
213
+ - **TP**: True Positives - correct detections matching ground truth.
214
+
215
+ - **FP**: False Positives - incorrect detections not matching ground truth.
216
+
217
+ - **FN**: False Negatives - ground truth objects not detected.
218
+
219
+ - **duplicates**: Duplicate detections of the same object.
220
+
221
+ - **support**: Number of ground truth objects for each category.
222
+
223
+ - **FPI**: False Positives per Image.
224
+
225
+ - **TPC**: True Positives per Category.
226
+
227
+ - **FPC**: False Positives per Category.
228
+
229
+ - **sorted_conf**: Confidence scores of detections sorted in descending order.
230
+
231
+ > Note:
232
+ > **precision** and **recall** are set to -1 for settings with no ground truth objects.
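A minimal sketch for inspecting these fields, assuming `eval` exposes the keys listed above and that the arrays convert cleanly to numpy (`model_1` is a placeholder key):

```python
import numpy as np

eval_data = results["model_1"]["eval"]

# counts holds the dimension sizes [T, R, K, A, M] described above
print("counts:", eval_data["counts"])

precision = np.asarray(eval_data["precision"])  # shape [T, R, K, A, M]
recall = np.asarray(eval_data["recall"])        # shape [T, K, A, M]
print("precision shape:", precision.shape)
print("recall shape:", recall.shape)

# entries are -1 wherever a setting has no ground truth objects
valid = precision[precision > -1]
print("mean precision over valid settings:", valid.mean() if valid.size else "n/a")
```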
233
+
234
+ ### Params
235
+
236
+ The `params` field holds the COCO evaluation parameters (as used in pycocotools), a set of evaluation settings that can be customized. Here's a breakdown of what each parameter means:
237
+
238
+ - **imgIds**: List of image IDs to use for evaluation. By default, it evaluates on all images.
239
+ - **catIds**: List of category IDs to use for evaluation. By default, it evaluates on all categories.
240
+ - **iouThrs**: List of IoU (Intersection over Union) thresholds for evaluation. By default, it uses thresholds from 0.5 to 0.95 with a step of 0.05 (i.e., [0.5, 0.55, …, 0.95]).
241
+ - **recThrs**: List of recall thresholds for evaluation. By default, it uses 101 thresholds from 0 to 1 with a step of 0.01 (i.e., [0, 0.01, …, 1]).
242
+ - **areaRng**: Object area ranges for evaluation. This parameter defines the sizes of objects to evaluate. It is specified as a list of tuples, where each tuple represents a range of area in square pixels.
243
+ - **maxDets**: List of thresholds on maximum detections per image for evaluation. By default, it evaluates with thresholds of 1, 10, and 100 detections per image.
244
+ - **iouType**: Type of IoU calculation used for evaluation. It can be ‘segm’ (segmentation), ‘bbox’ (bounding box), or ‘keypoints’.
245
+ - **useCats**: Boolean flag indicating whether to use category labels for evaluation (default is 1, meaning true).
246
+
247
+ > Note:
248
+ > If useCats=0, category labels are ignored, as in proposal scoring.
249
+ > Multiple areaRngs [Ax2] and maxDets [Mx1] can be specified.
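A short sketch for inspecting these settings, assuming `params` is a pycocotools-style parameters object with the attributes listed above (`model_1` is a placeholder key):

```python
params = results["model_1"]["params"]

print("IoU thresholds:   ", params.iouThrs)
print("Recall thresholds:", params.recThrs)
print("Area ranges:      ", params.areaRng)
print("Max detections:   ", params.maxDets)
print("IoU type:         ", params.iouType)
print("Use categories:   ", params.useCats)
```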
250
+
251
+ ## Confidence Curves
252
+ **module.generate_confidence_curves()** creates a graph that shows how metrics such as **precision, recall, and f1 score** change as the confidence threshold is varied. This makes the trade-off between precision (how accurate positive predictions are) and recall (how well the model finds all positive instances) visible at different confidence levels: as the threshold rises, models usually gain precision but find fewer positive instances.
253
+
254
+ ### Confidence Config
255
+ The `confidence_config` dictionary is set as `{"T": 0, "R": 0, "K": 0, "A": 0, "M": 0}`, where:
256
+ - `T = 0`: represents the IoU (Intersection over Union) threshold.
257
+ - `R = 0`: is the recall threshold, although it's currently not used.
258
+ - `K = 0`: indicates a class index for class-agnostic mean Average Precision (mAP), with only one class indexed at 0.
259
+ - `A = 0`: signifies that all object sizes are considered for evaluation. (0=all, 1=small, 2=medium, 3=large, ... depending on area ranges)
260
+ - `M = 0`: sets the default maximum detections (`maxDets`) to 100 in precision_recall_f1_support calculations.
261
+
262
+
263
+ ```python
264
+ import evaluate
265
+ import logging
266
+ from seametrics.payload.processor import PayloadProcessor
267
+
268
+ logging.basicConfig(level=logging.WARNING)
269
+
270
+ # Configure your dataset and model details
271
+ processor = PayloadProcessor(
272
+ dataset_name="SAILING_DATASET_QA",
273
+ gt_field="ground_truth_det",
274
+ models=["yolov5n6_RGB_D2304-v1_9C"],
275
+ sequence_list=["Trip_14_Seq_1"],
276
+ data_type="rgb",
277
+ )
278
+
279
+ # Evaluate using SEA-AI/det-metrics
280
+ module = evaluate.load("SEA-AI/det-metrics", payload=processor.payload)
281
+ results = module.compute()
282
+
283
+ # Plot confidence curves
284
+ confidence_config={"T": 0, "R": 0, "K": 0, "A": 0, "M": 0}
285
+ fig = module.generate_confidence_curves(results, confidence_config)
286
+ fig.show()
287
+ ```
288
+
289
+ ![Alt text](assets/example_confidence_curves.png)
290
+
291
+
292
  ## Further References
293
 
294
  - **seametrics Library**: Explore the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics/tree/main) for more details on the underlying library.
assets/example_confidence_curves.png ADDED
det-metrics.py CHANGED
@@ -13,14 +13,14 @@
13
  # limitations under the License.
14
  """TODO: Add a description here."""
15
 
16
- from typing import List, Tuple, Literal
17
- from deprecated import deprecated
18
 
19
- import evaluate
20
  import datasets
 
21
  import numpy as np
22
-
23
  from seametrics.detection import PrecisionRecallF1Support
 
24
  from seametrics.payload import Payload
25
 
26
  _CITATION = """\
@@ -92,7 +92,7 @@ Examples:
92
  >>> from seametrics.payload.processor import PayloadProcessor
93
  >>> payload = PayloadProcessor(...).payload
94
  >>> module = evaluate.load("SEA-AI/det-metrics", ...)
95
- >>> module.add_payload(payload)
96
  >>> result = module.compute()
97
  >>> print(result)
98
  {'all': {
@@ -123,20 +123,36 @@ class DetectionMetric(evaluate.Metric):
123
  class_agnostic: bool = True,
124
  bbox_format: str = "xywh",
125
  iou_type: Literal["bbox", "segm"] = "bbox",
126
- **kwargs
 
127
  ):
128
  super().__init__(**kwargs)
129
  self.coco_metric = PrecisionRecallF1Support(
130
- iou_thresholds=(
131
- iou_threshold if isinstance(iou_threshold, list) else [iou_threshold]
132
- ),
133
- area_ranges=[v for _, v in area_ranges_tuples],
134
- area_ranges_labels=[k for k, _ in area_ranges_tuples],
135
- class_agnostic=class_agnostic,
136
- iou_type=iou_type,
137
- box_format=bbox_format,
138
  )
139
 
 
 
 
140
  def _info(self):
141
  return evaluate.MetricInfo(
142
  # This is the description that will appear on the modules page.
@@ -186,29 +202,63 @@ class DetectionMetric(evaluate.Metric):
186
 
187
  self.coco_metric.update(prediction, reference)
188
  # does not impact the metric, but is required for the interface x_x
190
  super(evaluate.Metric, self).add(
191
- prediction=self._postprocess(prediction),
192
- references=self._postprocess(reference),
193
- **kwargs
194
  )
195
 
196
- @deprecated(reason="Use `module.add_payload` instead")
197
  def add_batch(self, payload: Payload, model_name: str = None):
198
  """Takes as input a payload and adds the batch to the metric"""
199
- self.add_payload(payload, model_name)
200
 
201
  def _compute(self, *, predictions, references, **kwargs):
202
  """Called within the evaluate.Metric.compute() method"""
203
- return self.coco_metric.compute()["metrics"]
204
 
205
- def add_payload(self, payload: Payload, model_name: str = None):
206
  """Converts the payload to the format expected by the metric"""
207
  # import only if needed since fiftyone is not a direct dependency
208
- from seametrics.detection.utils import payload_to_det_metric
209
 
210
  predictions, references = payload_to_det_metric(payload, model_name)
211
  self.add(prediction=predictions, reference=references)
 
212
  return self
213
 
214
  def _preprocess(self, list_of_dicts):
@@ -236,3 +286,79 @@ class DetectionMetric(evaluate.Metric):
236
  elif isinstance(v, list):
237
  d[k] = np.array(v)
238
  return d
13
  # limitations under the License.
14
  """TODO: Add a description here."""
15
 
16
+ from typing import List, Literal, Tuple
 
17
 
 
18
  import datasets
19
+ import evaluate
20
  import numpy as np
21
+ from deprecated import deprecated
22
  from seametrics.detection import PrecisionRecallF1Support
23
+ from seametrics.detection.utils import payload_to_det_metric
24
  from seametrics.payload import Payload
25
 
26
  _CITATION = """\
 
92
  >>> from seametrics.payload.processor import PayloadProcessor
93
  >>> payload = PayloadProcessor(...).payload
94
  >>> module = evaluate.load("SEA-AI/det-metrics", ...)
95
+ >>> module._add_payload(payload)
96
  >>> result = module.compute()
97
  >>> print(result)
98
  {'all': {
 
123
  class_agnostic: bool = True,
124
  bbox_format: str = "xywh",
125
  iou_type: Literal["bbox", "segm"] = "bbox",
126
+ payload: Payload = None,
127
+ **kwargs,
128
  ):
129
  super().__init__(**kwargs)
130
+
131
+ # save parameters for later
132
+ self.payload = payload
133
+ self.model_names = payload.models if payload else ["custom"]
134
+ self.iou_thresholds = (
135
+ iou_threshold if isinstance(iou_threshold, list) else [iou_threshold]
136
+ )
137
+ self.area_ranges = [v for _, v in area_ranges_tuples]
138
+ self.area_ranges_labels = [k for k, _ in area_ranges_tuples]
139
+ self.class_agnostic = class_agnostic
140
+ self.iou_type = iou_type
141
+ self.box_format = bbox_format
142
+
143
+ # initialize coco_metrics
144
  self.coco_metric = PrecisionRecallF1Support(
145
+ iou_thresholds=self.iou_thresholds,
146
+ area_ranges=self.area_ranges,
147
+ area_ranges_labels=self.area_ranges_labels,
148
+ class_agnostic=self.class_agnostic,
149
+ iou_type=self.iou_type,
150
+ box_format=self.box_format,
 
 
151
  )
152
 
153
+ # initialize evaluation metric
154
+ self._init_evaluation_metric()
155
+
156
  def _info(self):
157
  return evaluate.MetricInfo(
158
  # This is the description that will appear on the modules page.
 
202
 
203
  self.coco_metric.update(prediction, reference)
204
 
205
+ def _init_evaluation_metric(self, **kwargs):
206
+ """
207
+ Initializes the evaluation metric by generating sample data, preprocessing predictions and references,
208
+ and then adding the processed data to the metric using the super class method with additional keyword arguments.
209
+
210
+ Parameters:
211
+ **kwargs: Additional keyword arguments for the super class method.
212
+
213
+ Returns:
214
+ None
215
+ """
216
+ predictions, references = self._generate_sample_data()
217
+ predictions = self._preprocess(predictions)
218
+ references = self._preprocess(references)
219
+
220
  # does not impact the metric, but is required for the interface x_x
221
  super(evaluate.Metric, self).add(
222
+ prediction=self._postprocess(predictions),
223
+ references=self._postprocess(references),
224
+ **kwargs,
225
  )
226
 
227
+ @deprecated(reason="Use `module._add_payload` instead")
228
  def add_batch(self, payload: Payload, model_name: str = None):
229
  """Takes as input a payload and adds the batch to the metric"""
230
+ self._add_payload(payload, model_name)
231
 
232
  def _compute(self, *, predictions, references, **kwargs):
233
  """Called within the evaluate.Metric.compute() method"""
 
234
 
235
+ results = {}
236
+ for model_name in self.model_names:
237
+ print(f"\n##### {model_name} #####")
238
+ # add payload if available (otherwise predictions and references must be added with add function)
239
+ if self.payload:
240
+ self._add_payload(self.payload, model_name)
241
+
242
+ results[model_name] = self.coco_metric.compute()
243
+
244
+ # reset coco_metrics for next model
245
+ self.coco_metric = PrecisionRecallF1Support(
246
+ iou_thresholds=self.iou_thresholds,
247
+ area_ranges=self.area_ranges,
248
+ area_ranges_labels=self.area_ranges_labels,
249
+ class_agnostic=self.class_agnostic,
250
+ iou_type=self.iou_type,
251
+ box_format=self.box_format,
252
+ )
253
+ return results
254
+
255
+ def _add_payload(self, payload: Payload, model_name: str = None):
256
  """Converts the payload to the format expected by the metric"""
257
  # import only if needed since fiftyone is not a direct dependency
 
258
 
259
  predictions, references = payload_to_det_metric(payload, model_name)
260
  self.add(prediction=predictions, reference=references)
261
+
262
  return self
263
 
264
  def _preprocess(self, list_of_dicts):
 
286
  elif isinstance(v, list):
287
  d[k] = np.array(v)
288
  return d
289
+
290
+ def generate_confidence_curves(
291
+ self, results, confidence_config={"T": 0, "R": 0, "K": 0, "A": 0, "M": 0}
292
+ ):
293
+ """
294
+ Generate confidence curves based on results and confidence configuration.
295
+
296
+ Parameters:
297
+ results (dict): Results of the evaluation for different models.
298
+ confidence_config (dict): Configuration for confidence values. Defaults to {"T": 0, "R": 0, "K": 0, "A": 0, "M": 0}.
299
+ T: [1e-10] iou threshold
300
+ R: recall threshold (not used)
301
+ K: class index (class-agnostic mAP, so only 0)
302
+ A: 0=all, 1=small, 2=medium, 3=large, ... (depending on area ranges)
303
+ M: [100] maxDets default in precision_recall_f1_support
304
+
305
+ Returns:
306
+ fig (plotly.graph_objects.Figure): The plotly figure showing the confidence curves.
307
+ """
308
+ import plotly.graph_objects as go
309
+ from seametrics.detection.utils import get_confidence_metric_vals
310
+
311
+ # Create traces
312
+ fig = go.Figure()
313
+ metrics = ["precision", "recall", "f1"]
314
+ for model_name in self.model_names:
315
+ print(f"##### {model_name} #####")
316
+ plot_data = get_confidence_metric_vals(
317
+ cocoeval=results[model_name]["eval"],
318
+ T=confidence_config["T"],
319
+ R=confidence_config["R"],
320
+ K=confidence_config["K"],
321
+ A=confidence_config["A"],
322
+ M=confidence_config["M"],
323
+ )
324
+
325
+ for metric in metrics:
326
+ fig.add_trace(
327
+ go.Scatter(
328
+ x=plot_data["conf"],
329
+ y=plot_data[metric],
330
+ mode="lines",
331
+ name=f"{model_name} {metric}",
332
+ line=dict(dash=None if metric == "f1" else "dash"),
333
+ )
334
+ )
335
+
336
+ fig.update_layout(
337
+ title="Metric vs Confidence",
338
+ hovermode="x unified",
339
+ xaxis_title="Confidence",
340
+ yaxis_title="Metric value",
341
+ )
342
+ return fig
343
+
344
+ def _generate_sample_data(self):
345
+ """
346
+ Generates dummy sample data for predictions and references used for initialization.
347
+
348
+ Returns:
349
+ Tuple[List[Dict[str, List[Union[float, int]]]], List[Dict[str, List[Union[float, int]]]]]:
350
+ - predictions (List[Dict[str, List[Union[float, int]]]]): A list of dictionaries representing the predictions. Each dictionary contains the following keys:
351
+ - boxes (List[List[float]]): A list of bounding boxes in the format [x, y, w, h].
352
+ - labels (List[int]): A list of labels.
353
+ - scores (List[float]): A list of scores.
354
+ - references (List[Dict[str, List[Union[float, int]]]]): A list of dictionaries representing the references. Each dictionary contains the following keys:
355
+ - boxes (List[List[float]]): A list of bounding boxes in the format [x, y, w, h].
356
+ - labels (List[int]): A list of labels.
357
+ - area (List[float]): A list of areas.
358
+ """
359
+ predictions = [
360
+ {"boxes": [[1.0, 2.0, 3.0, 4.0]], "labels": [0], "scores": [1.0]}
361
+ ]
362
+ references = [{"boxes": [[1.0, 2.0, 3.0, 4.0]], "labels": [0], "area": [1.0]}]
363
+
364
+ return predictions, references
requirements.txt CHANGED
@@ -1,3 +1,4 @@
1
  git+https://github.com/huggingface/evaluate@main
2
  git+https://github.com/SEA-AI/seametrics@develop
3
- fiftyone
 
 
1
  git+https://github.com/huggingface/evaluate@main
2
  git+https://github.com/SEA-AI/seametrics@develop
3
+ fiftyone
4
+ plotly