julien-c (HF staff) committed
Commit 1a3c05b · Parent: 3e1789e

Add description to card metadata


This metric computes the area under the curve (AUC) for the Receiver Operating Characteristic (ROC) curve. The return values represent how well the model predicts the correct classes, based on the input data. A score of `0.5` means that the model is predicting exactly at chance, i.e. its predictions are correct at the same rate as if they were decided by the flip of a fair coin or the roll of a fair die. A score above `0.5` indicates that the model is doing better than chance, while a score below `0.5` indicates that it is doing worse than chance.

This metric has three separate use cases:
- binary: The case in which there are only two different label classes, and each example gets only one label. This is the default implementation.
- multiclass: The case in which there can be more than two different label classes, but each example still gets only one label.
- multilabel: The case in which there can be more than two different label classes, and each example can have more than one label.
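As context for the description above, here is a minimal usage sketch of the default binary case with the `evaluate` library. The call pattern (`evaluate.load("roc_auc")` followed by `compute(references=..., prediction_scores=...)`) follows the metric card's documented API; the labels and scores are made-up illustration:

```python
import evaluate

# Load the default (binary) configuration of the ROC AUC metric.
roc_auc = evaluate.load("roc_auc")

# Hypothetical ground-truth labels and predicted positive-class scores.
references = [1, 0, 1, 1, 0, 0]
prediction_scores = [0.9, 0.2, 0.8, 0.6, 0.4, 0.1]

results = roc_auc.compute(references=references, prediction_scores=prediction_scores)
# Every positive example here is scored above every negative one, so the AUC
# is 1.0; random scores would drive the value toward 0.5 (chance level).
print(results["roc_auc"])
```

The multiclass and multilabel cases are selected at load time, e.g. `evaluate.load("roc_auc", "multiclass")`, and take per-class `prediction_scores` rather than a single score per example.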

Files changed (1):
README.md (+18, -4)
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 title: ROC AUC
-emoji: 🤗
+emoji: 🤗
 colorFrom: blue
 colorTo: red
 sdk: gradio
@@ -8,10 +8,24 @@ sdk_version: 3.0.2
 app_file: app.py
 pinned: false
 tags:
-- evaluate
-- metric
+- evaluate
+- metric
+description: >-
+  This metric computes the area under the curve (AUC) for the Receiver Operating
+  Characteristic Curve (ROC). The return values represent how well the model
+  used is predicting the correct classes, based on the input data. A score of
+  `0.5` means that the model is predicting exactly at chance, i.e. the model's
+  predictions are correct at the same rate as if the predictions were being
+  decided by the flip of a fair coin or the roll of a fair die. A score above
+  `0.5` indicates that the model is doing better than chance, while a score
+  below `0.5` indicates that the model is doing worse than chance.
+
+
+  This metric has three separate use cases:
+  - binary: The case in which there are only two different label classes, and each example gets only one label. This is the default implementation.
+  - multiclass: The case in which there can be more than two different label classes, but each example still gets only one label.
+  - multilabel: The case in which there can be more than two different label classes, and each example can have more than one label.
 ---
-
 # Metric Card for ROC AUC