lvwerra (HF staff) committed
Commit 856c775 · 1 Parent(s): 0916dc1

Update Space (evaluate main: e179b5b8)

Files changed (1): README.md (+4 / -18)
README.md CHANGED
```diff
@@ -1,6 +1,6 @@
 ---
 title: ROC AUC
-emoji: 🤗
+emoji: 🤗
 colorFrom: blue
 colorTo: red
 sdk: gradio
@@ -8,24 +8,10 @@ sdk_version: 3.0.2
 app_file: app.py
 pinned: false
 tags:
-- evaluate
-- metric
-description: >-
-  This metric computes the area under the curve (AUC) for the Receiver Operating
-  Characteristic Curve (ROC). The return values represent how well the model
-  used is predicting the correct classes, based on the input data. A score of
-  `0.5` means that the model is predicting exactly at chance, i.e. the model's
-  predictions are correct at the same rate as if the predictions were being
-  decided by the flip of a fair coin or the roll of a fair die. A score above
-  `0.5` indicates that the model is doing better than chance, while a score
-  below `0.5` indicates that the model is doing worse than chance.
-
-
-  This metric has three separate use cases:
-  - binary: The case in which there are only two different label classes, and each example gets only one label. This is the default implementation.
-  - multiclass: The case in which there can be more than two different label classes, but each example still gets only one label.
-  - multilabel: The case in which there can be more than two different label classes, and each example can have more than one label.
+- evaluate
+- metric
 ---
+
 # Metric Card for ROC AUC
```
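For reference, the front-matter description removed above documents how the metric behaves. A minimal sketch of the default (binary) use case with the `evaluate` library might look like the following; the label and score values are illustrative, not from the source:

```python
# Minimal sketch of the metric this Space wraps, using the `evaluate`
# library's "roc_auc" metric (binary is the default configuration).
import evaluate

roc_auc = evaluate.load("roc_auc")

# Illustrative data: true labels and the model's predicted scores for the
# positive class. A ROC AUC of 0.5 would mean chance-level predictions.
results = roc_auc.compute(
    references=[0, 0, 1, 1],
    prediction_scores=[0.1, 0.4, 0.35, 0.8],
)
print(results)  # {'roc_auc': 0.75}

# The multiclass and multilabel cases are separate configurations:
# evaluate.load("roc_auc", "multiclass") expects per-class probability
# scores (plus a multi_class strategy such as "ovr"), and
# evaluate.load("roc_auc", "multilabel") expects one score per label
# per example.
```

With these illustrative values, the positive examples outscore the negative ones in three of the four positive/negative pairs, hence the 0.75.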