Update README.md
README.md (CHANGED):
````diff
@@ -25,8 +25,8 @@ Build a text report showing the main classification metrics that are accuracy, p
 At minimum, this metric requires predictions and references as inputs.
 
 ```python
->>>
->>> results =
+>>> classification_report_metric = evaluate.load("bstrai/classification_report")
+>>> results = classification_report_metric.compute(references=[0, 1], predictions=[0, 1])
 >>> print(results)
 {'0': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 1}, '1': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 1}, 'accuracy': 1.0, 'macro avg': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 2}, 'weighted avg': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 2}}
 ```
@@ -65,8 +65,8 @@ Output Example(s):
 
 Simple Example:
 ```python
->>>
->>> results =
+>>> classification_report_metric = evaluate.load("bstrai/classification_report")
+>>> results = classification_report_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0])
 >>> print(results)
 {'0': {'precision': 0.5, 'recall': 0.5, 'f1-score': 0.5, 'support': 2}, '1': {'precision': 0.6666666666666666, 'recall': 1.0, 'f1-score': 0.8, 'support': 2}, '2': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 2}, 'accuracy': 0.5, 'macro avg': {'precision': 0.38888888888888884, 'recall': 0.5, 'f1-score': 0.43333333333333335, 'support': 6}, 'weighted avg': {'precision': 0.38888888888888884, 'recall': 0.5, 'f1-score': 0.43333333333333335, 'support': 6}}
 ```
````
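For reference, a minimal runnable sketch of the updated example outside the doctest format, assuming the `evaluate` package is installed (plus scikit-learn, which this community metric appears to wrap, given that its output matches `sklearn.metrics.classification_report(..., output_dict=True)`):

```python
# Minimal runnable sketch of the example added in this commit.
# Assumes: pip install evaluate scikit-learn
import evaluate

# Load the community metric from the Hugging Face Hub.
classification_report_metric = evaluate.load("bstrai/classification_report")

# Returns a nested dict: per-class precision/recall/f1-score/support,
# plus overall accuracy and macro/weighted averages.
results = classification_report_metric.compute(
    references=[0, 1, 2, 0, 1, 2],
    predictions=[0, 1, 1, 2, 1, 0],
)
print(results)
```

The `compute(references=..., predictions=...)` call mirrors the README's doctest exactly; only the `import evaluate` line is added so the snippet runs standalone.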