hichem-abdellali committed
Commit
12e6370
1 Parent(s): 86f14cf

update readme with w&b guide

Files changed (1):
  1. README.md +8 -2
README.md CHANGED
@@ -140,13 +140,19 @@ res = module._compute(payload, max_iou=0.5, recognition_thresholds=[0.3, 0.5, 0.
  module.wandb(res,log_plots=True, debug=True)
  ```
 
+ - If `log_plots` is `True`, the W&B logging function generates four bar plots:
+   - **Recognition metrics**
+   - **Recognized metrics**
+   - **Evaluation metrics** (F1, precision, recall)
+   - **Confusion metrics** (false negatives, false positives, true positives)
+
+ - If `debug` is `True`, the function logs the global metrics plus the per-sequence evaluation metrics in descending order of F1 score under the **Logs** section of the run page.
 
+ - If both `log_plots` and `debug` are `False`, the function logs the metrics to the **Summary**.
 
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65ca2aafdc38a2858aa43f1e/RYEsFwt6K-jP0mp7_RIZv.png)
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65ca2aafdc38a2858aa43f1e/zVbKzrURGYV858rpnOt_Q.png)
-
  ## Citations
 
  ```bibtex {"id":"01HPS3ASFJXVQR88985GKHAQRE"}