Commit f12b919 · VascoDVRodrigues committed
Parent(s): 18cf4f9

changed readme to match standard payload

Files changed:
- README.md +51 -24
- mot-metrics.py +5 -63
README.md CHANGED

Old version (removed lines are marked with "-"):
@@ -1,22 +1,24 @@
---
-
-emoji: 📚
colorFrom: gray
colorTo: green
-
-
-
-
sdk: gradio
sdk_version: 3.19.1
-
-
---

# How to Use

-
-```python
>>> import numpy as np
>>> module = evaluate.load("SEA-AI/mot-metrics")
>>> predicted =[[1,1,10,20,30,40,0.85],[2,1,15,25,35,45,0.78],[2,2,55,65,75,85,0.95]]
@@ -32,22 +34,46 @@ The MOT metrics takes two numeric arrays as input corresponding to the predictio
'mota': 0.7, 'motp': 0.02981870229007634,
'num_transfer': 0, 'num_ascend': 0,
'num_migrate': 0}
-```

-## Input
-Each line of the **predictions** array is a list with the following format:
-```
-[frame ID, object ID, x, y, width, height, confidence]
-```

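For illustration only: the rows above use a top-left x/y plus width and height (MOT-style), not two corner points. A minimal sketch of converting corner-format detections `(frame, track_id, x1, y1, x2, y2, score)` into rows of this shape; the input tuples and the conversion helper are hypothetical, not part of the repository:

```python
# Hypothetical conversion into [frame ID, object ID, x, y, width, height, confidence],
# where (x, y) is assumed to be the top-left corner, MOT-style.
corner_dets = [
    (1, 1, 10, 20, 40, 60, 0.85),
    (2, 1, 15, 25, 50, 70, 0.78),
]

predictions = [
    [frame, obj_id, x1, y1, x2 - x1, y2 - y1, conf]
    for frame, obj_id, x1, y1, x2, y2, conf in corner_dets
]
print(predictions)
# [[1, 1, 10, 20, 30, 40, 0.85], [2, 1, 15, 25, 35, 45, 0.78]]
```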
-
-
-
```

-

## Output
The output is a dictionary containing the following metrics:

| Name | Description |
@@ -68,16 +94,16 @@ The output is a dictionary containing the following metrics:
| mota | Multiple object tracker accuracy. |
| motp | Multiple object tracker precision. |

-
## Citations
-
@InProceedings{huggingface:module,
title = {A great new module},
authors={huggingface, Inc.},
year={2020}}
```
-
-```bibtex
@article{milan2016mot16,
title={MOT16: A benchmark for multi-object tracking},
author={Milan, Anton and Leal-Taix{\'e}, Laura and Reid, Ian and Roth, Stefan and Schindler, Konrad},
@@ -86,4 +112,5 @@ year={2016}}
```

## Further References
- [Github Repository - py-motmetrics](https://github.com/cheind/py-motmetrics/tree/develop)
New version (added lines are marked with "+"):

---
+app_file: app.py
colorFrom: gray
colorTo: green
+description: 'TODO: add a description here'
+emoji: "\U0001F4DA"
+pinned: false
+runme:
+  id: 01HPS3ASFJXVQR88985QNSXVN1
+  version: v3
sdk: gradio
sdk_version: 3.19.1
+tags:
+- evaluate
+- metric
+title: Mot Metrics
---

# How to Use

+```python {"id":"01HPS3ASFHPCECERTYN7Z4Z7MN"}
>>> import numpy as np
>>> module = evaluate.load("SEA-AI/mot-metrics")
>>> predicted =[[1,1,10,20,30,40,0.85],[2,1,15,25,35,45,0.78],[2,2,55,65,75,85,0.95]]
'mota': 0.7, 'motp': 0.02981870229007634,
'num_transfer': 0, 'num_ascend': 0,
'num_migrate': 0}


+>>> import evaluate
+>>> from seametrics.fo_to_payload.utils import fo_to_payload
+>>> b = fo_to_payload(
+>>>     dataset="SENTRY_VIDEOS_DATASET_QA",
+>>>     gt_field="ground_truth_det",
+>>>     models=['volcanic-sweep-3_02_2023_N_LN1_ep288_TRACKER'],
+>>>     sequence_list=["Sentry_2022_11_PROACT_CELADON_7.5M_MOB_2022_11_25_12_12_39"],
+>>>     tracking_mode=True
+>>> )
+>>> module = evaluate.load("SEA-AI/mot-metrics")
+>>> res = module._calculate(b, max_iou=0.99)
+>>> print(res)
+{'Sentry_2022_11_PROACT_CELADON_7.5M_MOB_2022_11_25_12_12_39': {'volcanic-sweep-3_02_2023_N_LN1_ep288_TRACKER': {'idf1': 0.9543031226199543,
+'idp': 0.9804381846635368,
+'idr': 0.9295252225519288,
+'recall': 0.9436201780415431,
+'precision': 0.9953051643192489,
+'num_unique_objects': 2,
+'mostly_tracked': 1,
+'partially_tracked': 0,
+'mostly_lost': 1,
+'num_false_positives': 6,
+'num_misses': 76,
+'num_switches': 1,
+'num_fragmentations': 4,
+'mota': 0.9384272997032641,
+'motp': 0.5235835810268012,
+'num_transfer': 0,
+'num_ascend': 1,
+'num_migrate': 0}}}
```

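The payload-based call above returns a dictionary nested first by sequence name and then by model name. A short sketch of pulling individual metrics out of that structure, assuming `res` comes from the `module._calculate(b, max_iou=0.99)` call shown above:

```python
# Assumes `res` is the nested dict printed above.
seq = "Sentry_2022_11_PROACT_CELADON_7.5M_MOB_2022_11_25_12_12_39"
model = "volcanic-sweep-3_02_2023_N_LN1_ep288_TRACKER"

metrics = res[seq][model]
print(metrics["mota"], metrics["motp"])  # 0.9384... 0.5235...

# When a sequence/model pair had no predictions or no ground truth,
# the entry is a message string instead of a metrics dict (see mot-metrics.py),
# so guard with isinstance when iterating.
for seq_name, per_model in res.items():
    for model_name, m in per_model.items():
        if isinstance(m, dict):
            print(f"{seq_name} / {model_name}: MOTA={m['mota']:.3f}")
```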
+## Metric Settings
+
+The `max_iou` parameter is used to filter out bounding boxes whose IoU falls below the threshold. The default value is 0.5: if a ground-truth box and a predicted box overlap with an IoU of less than 0.5, the predicted box is not considered for association. The higher the `max_iou` value, the more predicted bounding boxes are considered for association.

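One way to see the effect described above is to recompute the metrics at several `max_iou` values. A minimal sketch, assuming `module` and the payload `b` from the example above are already defined; the threshold values are arbitrary:

```python
# Sweep the association threshold and compare per-sequence, per-model results.
for max_iou in (0.5, 0.9, 0.99):
    res = module._calculate(b, max_iou=max_iou)
    for seq_name, per_model in res.items():
        for model_name, m in per_model.items():
            if isinstance(m, dict):  # skip "no predictions"/"no ground truth" messages
                print(max_iou, seq_name, model_name, m["mota"], m["motp"])
```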
## Output
+
The output is a dictionary containing the following metrics:

| Name | Description |
...
| mota | Multiple object tracker accuracy. |
| motp | Multiple object tracker precision. |

## Citations
+
+```bibtex {"id":"01HPS3ASFJXVQR88985GKHAQRE"}
@InProceedings{huggingface:module,
title = {A great new module},
authors={huggingface, Inc.},
year={2020}}
```
+
+```bibtex {"id":"01HPS3ASFJXVQR88985KRT478N"}
@article{milan2016mot16,
title={MOT16: A benchmark for multi-object tracking},
author={Milan, Anton and Leal-Taix{\'e}, Laura and Reid, Ian and Roth, Stefan and Schindler, Konrad},
...
```

## Further References
+
- [Github Repository - py-motmetrics](https://github.com/cheind/py-motmetrics/tree/develop)
mot-metrics.py CHANGED

Old version (removed lines are marked with "-"):
@@ -49,64 +49,6 @@ Args:
    max_iou (`float`, *optional*):
        If specified, this is the minimum Intersection over Union (IoU) threshold to consider a detection as a true positive.
        Default is 0.5.
-Returns:
-    summary: pandas.DataFrame with the following columns:
-        - idf1 (IDF1 Score): The F1 score for the identity assignment, computed as 2 * (IDP * IDR) / (IDP + IDR).
-        - idp (ID Precision): Identity Precision, representing the ratio of correctly assigned identities to the total number of predicted identities.
-        - idr (ID Recall): Identity Recall, representing the ratio of correctly assigned identities to the total number of ground truth identities.
-        - recall: Recall, computed as the ratio of the number of correctly tracked objects to the total number of ground truth objects.
-        - precision: Precision, computed as the ratio of the number of correctly tracked objects to the total number of predicted objects.
-        - num_unique_objects: Total number of unique objects in the ground truth.
-        - mostly_tracked: Number of objects that are mostly tracked throughout the sequence.
-        - partially_tracked: Number of objects that are partially tracked but not mostly tracked.
-        - mostly_lost: Number of objects that are mostly lost throughout the sequence.
-        - num_false_positives: Number of false positive detections (predicted objects not present in the ground truth).
-        - num_misses: Number of missed detections (ground truth objects not detected in the predictions).
-        - num_switches: Number of identity switches.
-        - num_fragmentations: Number of fragmented objects (objects that are broken into multiple tracks).
-        - mota (MOTA - Multiple Object Tracking Accuracy): Overall tracking accuracy, computed as 1 - ((num_false_positives + num_misses + num_switches) / num_unique_objects).
-        - motp (MOTP - Multiple Object Tracking Precision): Average precision of the object localization, computed as the mean of the localization errors of correctly detected objects.
-        - num_transfer: Number of track transfers.
-        - num_ascend: Number of ascended track IDs.
-        - num_migrate: Number of track ID migrations.
-
-Examples:
-    >>> import numpy as np
-    >>> module = evaluate.load("bascobasculino/mot-metrics")
-
-    >>> predicted =[
-        [1,1,10,20,30,40,0.85],
-        [1,2,50,60,70,80,0.92],
-        [1,3,80,90,100,110,0.75],
-        [2,1,15,25,35,45,0.78],
-        [2,2,55,65,75,85,0.95],
-        [3,1,20,30,40,50,0.88],
-        [3,2,60,70,80,90,0.82],
-        [4,1,25,35,45,55,0.91],
-        [4,2,65,75,85,95,0.89]
-    ]
-
-    >>> ground_truth = [
-        [1, 1, 10, 20, 30, 40],
-        [1, 2, 50, 60, 70, 80],
-        [1, 3, 85, 95, 105, 115],
-        [2, 1, 15, 25, 35, 45],
-        [2, 2, 55, 65, 75, 85],
-        [3, 1, 20, 30, 40, 50],
-        [3, 2, 60, 70, 80, 90],
-        [4, 1, 25, 35, 45, 55],
-        [5, 1, 30, 40, 50, 60],
-        [5, 2, 70, 80, 90, 100]
-    ]
-    >>> predicted = [np.array(a) for a in predicted]
-    >>> ground_truth = [np.array(a) for a in ground_truth]
-
-    >>> results = module._compute(predictions=predicted, references=ground_truth, max_iou=0.5)
-    >>> print(results)
-    {'idf1': 0.8421052631578947, 'idp': 0.8888888888888888, 'idr': 0.8, 'recall': 0.8, 'precision': 0.8888888888888888,
-    'num_unique_objects': 3, 'mostly_tracked': 2, 'partially_tracked': 1, 'mostly_lost': 0, 'num_false_positives': 1,
-    'num_misses': 2, 'num_switches': 0, 'num_fragmentations': 0, 'mota': 0.7, 'motp': 0.02981870229007634,
-    'num_transfer': 0, 'num_ascend': 0, 'num_migrate': 0}
"""

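The removed docstring states that MOTA divides by `num_unique_objects`, but the example output above is only consistent with the usual py-motmetrics convention of dividing by the total number of ground-truth boxes. A quick arithmetic check against the example values (an assumption about the implementation, not taken from the repository):

```python
# MOTA sanity check against the docstring example:
# 10 ground-truth rows, 1 false positive, 2 misses, 0 identity switches.
num_gt_boxes = 10
mota = 1 - (1 + 2 + 0) / num_gt_boxes
print(mota)  # 0.7 -- matches 'mota' in the example output;
             # dividing by num_unique_objects (3) would give 0.0 instead
```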
@@ -202,12 +144,12 @@ def calculate_from_payload(payload: dict, max_iou: float = 0.5, debug: bool = Fa
for sequence in sequence_list:
    output[sequence] = {}
    frames = payload['sequences'][sequence][gt_field_name]
-
    for frame_id, frame in enumerate(frames):
        for detection in frame:
            id = detection['index']
            x, y, w, h = detection['bounding_box']
-

    for model in models:
        frames = payload['sequences'][sequence][model]
@@ -223,13 +165,13 @@ def calculate_from_payload(payload: dict, max_iou: float = 0.5, debug: bool = Fa
        if debug:
            print("sequence/model: ", sequence, model)
            print("formated_predictions: ", formated_predictions)
-            print("formated_references: ",
        if len(formated_predictions) == 0:
            output[sequence][model] = "Model had no predictions."
-        elif len(
            output[sequence][model] = "No ground truth."
        else:
-            output[sequence][model] = calculate(formated_predictions,
    return output


New version (added lines are marked with "+"):
    max_iou (`float`, *optional*):
        If specified, this is the minimum Intersection over Union (IoU) threshold to consider a detection as a true positive.
        Default is 0.5.
"""

...

for sequence in sequence_list:
    output[sequence] = {}
    frames = payload['sequences'][sequence][gt_field_name]
+    formated_references = []
    for frame_id, frame in enumerate(frames):
        for detection in frame:
            id = detection['index']
            x, y, w, h = detection['bounding_box']
+            formated_references.append([frame_id+1, id, x, y, w, h])

    for model in models:
        frames = payload['sequences'][sequence][model]
...
        if debug:
            print("sequence/model: ", sequence, model)
            print("formated_predictions: ", formated_predictions)
+            print("formated_references: ", formated_references)
        if len(formated_predictions) == 0:
            output[sequence][model] = "Model had no predictions."
+        elif len(formated_references) == 0:
            output[sequence][model] = "No ground truth."
        else:
+            output[sequence][model] = calculate(formated_predictions, formated_references, max_iou=max_iou)
    return output
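The loop above only relies on a small part of the payload: per-sequence lists of frames in which each detection carries an `index` and a `bounding_box`. A minimal sketch of the ground-truth formatting step; the sequence name, field value, and box numbers below are invented for illustration and are not taken from the repository:

```python
# Hypothetical minimal payload; only the keys read by the loop above are included.
payload = {
    "sequences": {
        "seq_0": {
            "ground_truth_det": [  # one list of detections per frame
                [{"index": 1, "bounding_box": (10, 20, 30, 40)}],  # frame 0
                [{"index": 1, "bounding_box": (15, 25, 35, 45)}],  # frame 1
            ],
        }
    }
}

formated_references = []
frames = payload["sequences"]["seq_0"]["ground_truth_det"]
for frame_id, frame in enumerate(frames):
    for detection in frame:
        x, y, w, h = detection["bounding_box"]
        # frame IDs are 1-based in the MOT format, hence frame_id + 1
        formated_references.append([frame_id + 1, detection["index"], x, y, w, h])

print(formated_references)
# [[1, 1, 10, 20, 30, 40], [2, 1, 15, 25, 35, 45]]
```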