hichem-abdellali committed on
Commit
4de29b8
·
verified ·
1 Parent(s): d70cd24

update readme

Files changed (1)
  1. README.md +146 -118
README.md CHANGED
@@ -3,7 +3,7 @@ app_file: app.py
  colorFrom: yellow
  colorTo: green
  description: 'TODO: add a description here'
- emoji: 🤑
+ emoji: 🐢
  pinned: false
  runme:
    id: 01HPS3ASFJXVQR88985QNSXVN1
@@ -13,7 +13,7 @@ sdk_version: 4.36.0
  tags:
    - evaluate
    - metric
- title: user-friendly-metrics
+ title: ref-metric
  ---

  # How to Use
@@ -22,135 +22,163 @@ title: user-friendly-metrics
  import evaluate
  from seametrics.payload.processor import PayloadProcessor

- payload = PayloadProcessor(
-     dataset_name="SENTRY_VIDEOS_DATASET_QA",
-     gt_field="ground_truth_det_fused_id",
-     models=["ahoy_IR_b2_engine_3_7_0_757_g8765b007_oversea"],
-     sequence_list=["Sentry_2023_02_08_PROACT_CELADON_@6m_MOB_2023_02_08_14_41_51"],
-     # tags=["GT_ID_FUSION"],
-     tracking_mode=True
- ).payload
-
- module = evaluate.load("SEA-AI/user-friendly-metrics")
+ payload = {}
+ module = evaluate.load("SEA-AI/ref-metric")
  res = module._compute(payload, max_iou=0.5, recognition_thresholds=[0.3, 0.5, 0.8])
  print(res)
  ```

+ ## Output
+
  ```json
- {
-   "global": {
-     "ahoy_IR_b2_engine_3_6_0_49_gd81d3b63_oversea": {
-       "all": {
-         "f1": 0.15967351103175614,
-         "fn": 2923.0,
-         "fp": 3666.0,
-         "num_gt_ids": 10,
-         "precision": 0.14585274930102515,
-         "recall": 0.1763877148492533,
-         "recognition_0.3": 0.1,
-         "recognition_0.5": 0.1,
-         "recognition_0.8": 0.1,
-         "recognized_0.3": 1,
-         "recognized_0.5": 1,
-         "recognized_0.8": 1,
-         "tp": 626.0
-       }
-     }
-   },
-   "per_sequence": {
-     "Sentry_2023_02_08_PROACT_CELADON_@6m_MOB_2023_02_08_12_51_49": {
-       "ahoy_IR_b2_engine_3_6_0_49_gd81d3b63_oversea": {
-         "all": {
-           "f1": 0.15967351103175614,
-           "fn": 2923.0,
-           "fp": 3666.0,
-           "num_gt_ids": 10,
-           "precision": 0.14585274930102515,
-           "recall": 0.1763877148492533,
-           "recognition_0.3": 0.1,
-           "recognition_0.5": 0.1,
-           "recognition_0.8": 0.1,
-           "recognized_0.3": 1,
-           "recognized_0.5": 1,
-           "recognized_0.8": 1,
-           "tp": 626.0
-         }
-       }
-     }
-   }
- }
+ {
+   "model_1": {
+     "overall": {
+       "all": {
+         "tp": 50,
+         "fp": 20,
+         "fn": 10,
+         "precision": 0.71,
+         "recall": 0.83,
+         "f1": 0.76
+       },
+       "small": {
+         "tp": 15,
+         "fp": 5,
+         "fn": 2,
+         "precision": 0.75,
+         "recall": 0.88,
+         "f1": 0.81
+       },
+       "medium": {
+         "tp": 25,
+         "fp": 10,
+         "fn": 5,
+         "precision": 0.71,
+         "recall": 0.83,
+         "f1": 0.76
+       },
+       "large": {
+         "tp": 10,
+         "fp": 5,
+         "fn": 3,
+         "precision": 0.67,
+         "recall": 0.77,
+         "f1": 0.71
+       }
+     },
+     "per_sequence": {
+       "sequence_1": {
+         "all": {
+           "tp": 30,
+           "fp": 15,
+           "fn": 7,
+           "precision": 0.67,
+           "recall": 0.81,
+           "f1": 0.73
+         },
+         "small": {
+           "tp": 10,
+           "fp": 3,
+           "fn": 1,
+           "precision": 0.77,
+           "recall": 0.91,
+           "f1": 0.83
+         },
+         "medium": {
+           "tp": 15,
+           "fp": 7,
+           "fn": 2,
+           "precision": 0.68,
+           "recall": 0.88,
+           "f1": 0.77
+         },
+         "large": {
+           "tp": 5,
+           "fp": 2,
+           "fn": 1,
+           "precision": 0.71,
+           "recall": 0.83,
+           "f1": 0.76
+         }
+       }
+     }
+   },
+   "model_2": {
+     "overall": {
+       "all": {
+         "tp": 60,
+         "fp": 25,
+         "fn": 15,
+         "precision": 0.71,
+         "recall": 0.80,
+         "f1": 0.75
+       },
+       "small": {
+         "tp": 20,
+         "fp": 6,
+         "fn": 3,
+         "precision": 0.77,
+         "recall": 0.87,
+         "f1": 0.82
+       },
+       "medium": {
+         "tp": 30,
+         "fp": 12,
+         "fn": 5,
+         "precision": 0.71,
+         "recall": 0.86,
+         "f1": 0.78
+       },
+       "large": {
+         "tp": 10,
+         "fp": 7,
+         "fn": 5,
+         "precision": 0.59,
+         "recall": 0.67,
+         "f1": 0.63
+       }
+     },
+     "per_sequence": {
+       "sequence_1": {
+         "all": {
+           "tp": 40,
+           "fp": 18,
+           "fn": 8,
+           "precision": 0.69,
+           "recall": 0.83,
+           "f1": 0.75
+         },
+         "small": {
+           "tp": 12,
+           "fp": 4,
+           "fn": 2,
+           "precision": 0.75,
+           "recall": 0.86,
+           "f1": 0.80
+         },
+         "medium": {
+           "tp": 20,
+           "fp": 8,
+           "fn": 3,
+           "precision": 0.71,
+           "recall": 0.87,
+           "f1": 0.78
+         },
+         "large": {
+           "tp": 8,
+           "fp": 6,
+           "fn": 3,
+           "precision": 0.57,
+           "recall": 0.73,
+           "f1": 0.64
+         }
+       }
+     }
+   }
+ }
  ```
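
To make the structure of the new example output concrete, here is a minimal, trimmed-down sketch of how the values above would be read back, assuming `res` holds the dictionary returned by `module._compute` (the keys `model_1`, `overall`, `per_sequence`, and `sequence_1` come from the example itself):

```python
# Trimmed-down copy of the example output above, for illustration only.
res = {
    "model_1": {
        "overall": {"all": {"f1": 0.76}},
        "per_sequence": {"sequence_1": {"small": {"recall": 0.91}}},
    }
}

# Metrics are nested as res[model][scope][size bucket][metric].
print(res["model_1"]["overall"]["all"]["f1"])                           # 0.76
print(res["model_1"]["per_sequence"]["sequence_1"]["small"]["recall"])  # 0.91
```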

- ## Metric Settings
-
- The `max_iou` parameter controls which predicted bounding boxes are considered for association: a ground-truth/prediction pair is only a candidate match if its IoU distance (1 − IoU) does not exceed `max_iou`. The default value is 0.5. So the higher the `max_iou` value, the more predicted bounding boxes are considered for association.
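
As an illustration of that filtering, here is a minimal sketch using `motmetrics.distances.iou_matrix`, assuming the module passes `max_iou` through to motmetrics (as the How it Works section below suggests); the box coordinates are made up:

```python
import numpy as np
import motmetrics as mm

# Boxes in (x, y, w, h) format; coordinates are made up for illustration.
gt_boxes = np.array([[10.0, 10.0, 20.0, 20.0]])
pred_boxes = np.array([[12.0, 12.0, 20.0, 20.0],     # large overlap with the GT box
                       [100.0, 100.0, 20.0, 20.0]])  # no overlap at all

# Entries are IoU distances (1 - IoU); pairs with distance > max_iou become NaN
# and are never considered for association.
dist = mm.distances.iou_matrix(gt_boxes, pred_boxes, max_iou=0.5)
print(dist)  # [[~0.32, nan]] -> only the first prediction remains a candidate
```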
-
- ## Output
-
- The output is a dictionary containing the following metrics:
-
- | Name | Description |
- | :------------------- | :--------------------------------------------------------------------------------- |
- | recall | Number of true positives over the number of ground truth objects. |
- | precision | Number of true positives over the sum of true positives and false positives. |
- | f1 | F1 score. |
- | num_gt_ids | Number of unique object ids in the ground truth. |
- | fn | Number of false negatives. |
- | fp | Number of false positives. |
- | tp | Number of true positives. |
- | recognized_th | Number of unique ground truth objects that were seen more than th% of the time. |
- | recognition_th | Number of unique ground truth objects seen more than th% of the time, over the total number of unique ground truth objects. |
-
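
The relationship between `recognized_th` and `recognition_th` can be checked against the example output above, where `num_gt_ids` is 10 and one object was recognized at every threshold:

```python
# recognition_th = recognized_th / num_gt_ids
num_gt_ids = 10
recognized_05 = 1  # objects seen in more than 50% of their frames
recognition_05 = recognized_05 / num_gt_ids
print(recognition_05)  # 0.1, matching "recognition_0.5" in the JSON above
```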
- ## How it Works
-
- We leverage one of the internal variables of the motmetrics `MOTAccumulator` class, `events`, which keeps track of detection hits and misses. These values are then processed via the `track_ratios` function, which computes the ratio of assigned to total appearance count per unique object id. We then define the `recognition` function, which counts how many objects have been seen more often than the desired threshold.
-
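
The following is a minimal sketch of that pipeline, not the module's exact code: `MOTAccumulator` and its `events` DataFrame are motmetrics API, while the ratio and threshold logic below merely mirrors the description above:

```python
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)
acc.update([1], [1], [[0.2]])  # frame 1: GT object 1 matched to hypothesis 1
acc.update([1], [], [[]])      # frame 2: GT object 1 appears but is missed

# Keep only the summary events (MATCH / MISS / FP / SWITCH).
events = acc.events
events = events[events.Type != "RAW"]

# Ratio of assigned to total appearance count per unique object id.
matched = events[events.Type == "MATCH"].groupby("OId").size()
seen = events[events.Type.isin(["MATCH", "MISS", "SWITCH"])].groupby("OId").size()
track_ratios = (matched / seen).fillna(0.0)

# Count how many objects were seen more often than the desired threshold.
recognized = int((track_ratios > 0.3).sum())
print(track_ratios.to_dict(), recognized)  # e.g. {1: 0.5} 1
```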
- ## W&B logging
-
- When you use **module.wandb()**, the user-friendly metrics values can be logged to Weights & Biases (W&B). The W&B key is stored as a Secret in this repository.
-
- ### Params
-
- - **wandb_project** - Name of the W&B project (Default: `'user_freindly_metrics'`)
- - **log_plots** (bool, optional): Generates categorized bar charts for global metrics. Defaults to `True`.
- - **debug** (bool, optional): Logs everything to the console and the W&B **Logs** page. Defaults to `False`.
-
- ```python
- import evaluate
- import logging
- from seametrics.payload.processor import PayloadProcessor
-
- logging.basicConfig(level=logging.WARNING)
-
- # Configure your dataset and model details
- payload = PayloadProcessor(
-     dataset_name="SENTRY_VIDEOS_DATASET_QA",
-     gt_field="ground_truth_det_fused_id",
-     models=["ahoy_IR_b2_engine_3_7_0_757_g8765b007_oversea"],
-     sequence_list=["Sentry_2023_02_08_PROACT_CELADON_@6m_MOB_2023_02_08_14_41_51"],
-     tracking_mode=True
- ).payload
-
- # Evaluate using SEA-AI/user-friendly-metrics
- module = evaluate.load("SEA-AI/user-friendly-metrics")
- res = module._compute(payload, max_iou=0.5, recognition_thresholds=[0.3, 0.5, 0.8])
-
- # Log the results to W&B
- module.wandb(res, log_plots=True, debug=True)
- ```
-
- - If `log_plots` is `True`, the W&B logging function generates four bar plots:
-   - **User_Friendly Metrics (mostly_tracked_score_%)**, mainly for non-dev users
-   - **User_Friendly Metrics (mostly_tracked_count_%)**, for devs
-   - **Evaluation Metrics** (F1, precision, recall)
-   - **Prediction Summary** (false negatives, false positives, true positives)
-
- - If `debug` is `True`, the function logs the global metrics plus the per-sequence evaluation metrics, in descending order of F1 score, under the **Logs** section of the run page.
-
- - If both `log_plots` and `debug` are `False`, the function logs the metrics to the **Summary** section.
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65ca2aafdc38a2858aa43f1e/RYEsFwt6K-jP0mp7_RIZv.png)


  ## Citations
 