Minor Fix
README.md CHANGED
@@ -28,7 +28,7 @@ This metric implements the evaluation harness for datasets translated with the BabelCode
 4. Run the metric on the `predictions`, `languages`, and `question_infos`.
 5. The result of the metric is a tuple where the first is a metric dict and the second value is the results for each prediction.

-```
+```Python
 import evaluate
 from datasets import load_dataset
 import os
@@ -46,7 +46,6 @@ for row in ds:
     # Replace this with however you generate and postprocess predictions.
     predictions.append(model.generate(row['signature_with_docstring']))

-
 metric = evaluate.load("bc_eval")
 metrics, results = metric.compute(
     predictions=predictions, languages=languages, question_dicts=question_infos, k=[1]
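For reference, the usage pattern these hunks are drawn from looks roughly like the sketch below. It is a minimal sketch, not the exact README code: the dataset identifier, the `language` column, the use of each row as its own question dict, and the `HF_ALLOW_CODE_EVAL` opt-in are assumptions, while `evaluate.load("bc_eval")`, `signature_with_docstring`, and the `compute(...)` call come from the diff itself.

```Python
# Minimal sketch of running the bc_eval metric end to end.
# Hypothetical pieces: the dataset id, the "language" column, treating each
# row as its question dict, and the HF_ALLOW_CODE_EVAL opt-in.
import os

import evaluate
from datasets import load_dataset

# Assumption: like other code-execution metrics in `evaluate`, an explicit
# opt-in to running untrusted code may be required.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

ds = load_dataset("your/babelcode-translated-dataset", split="test")  # hypothetical dataset id

predictions, languages, question_infos = [], [], []
for row in ds:
    languages.append(row["language"])  # assumed column holding the target language
    question_infos.append(row)         # assumed: each row doubles as its question dict
    # Replace this with however you generate and postprocess predictions, e.g.
    # predictions.append(model.generate(row["signature_with_docstring"]))
    predictions.append("")  # placeholder so the sketch runs without a model

metric = evaluate.load("bc_eval")
metrics, results = metric.compute(
    predictions=predictions, languages=languages, question_dicts=question_infos, k=[1]
)
print(metrics)  # aggregate metric dict (likely pass@k values for k=[1])
# `results` holds the per-prediction outcomes described in step 5 of the README.
```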