murinj committed · Commit f5f2008 · 1 Parent(s): 649b3ae

init7
README.md
CHANGED
@@ -55,7 +55,7 @@ At minimum, this metric requires predictions and references as inputs.
 >>> hter_metric = evaluate.load("murinj/hter")
 >>> results = hter_metric.compute(references=[0, 0], predictions=[0, 1])
 >>> print(results)
-{'HTER': 0.
+{'HTER': 0.25}
 ```
 
 
@@ -69,11 +69,11 @@ At minimum, this metric requires predictions and references as inputs.
 
 
 ### Output Values
-- **HTER**(`float` or `int`): HTER score. Minimum possible value is 0. Maximum possible value is 1.0
+- **HTER**(`float` or `int`): HTER score. Minimum possible value is 0. Maximum possible value is 1.0.
 
 Output Example(s):
 ```python
-{'HTER':
+{'HTER': 0.0}
 ```
 
 This metric outputs a dictionary, containing the HTER score.
hter.py
CHANGED
@@ -38,7 +38,7 @@ Args:
     references (`list` of `int`): Ground truth labels.
 
 Returns:
-    HTER (`float` or `int`): HTER score. Minimum possible value is 0. Maximum possible value is 1.0
+    HTER (`float` or `int`): HTER score. Minimum possible value is 0. Maximum possible value is 1.0.
 
 Examples:
 
@@ -46,7 +46,7 @@ Examples:
     >>> hter_metric = evaluate.load("murinj/hter")
     >>> results = hter_metric.compute(references=[0, 0], predictions=[0, 1])
    >>> print(results)
-    {'HTER': 0.
+    {'HTER': 0.25}
 
 """
 
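The expected values introduced by this commit (`{'HTER': 0.25}` for `references=[0, 0]`, `predictions=[0, 1]`, and `{'HTER': 0.0}` in the output example) are consistent with the common definition of HTER as the mean of the false acceptance rate and the false rejection rate. The sketch below is not the repository's implementation; it assumes label 0 marks bona fide samples, label 1 marks attacks, and that an error rate counts as 0 when its class has no samples.

```python
# Minimal sketch of the HTER arithmetic behind the updated doctest values.
# Assumptions (not taken from the repository): label 0 = bona fide, label 1 = attack,
# and an error rate is 0.0 when its class has no samples.
def hter(references, predictions):
    attacks = [p for r, p in zip(references, predictions) if r == 1]
    bona_fide = [p for r, p in zip(references, predictions) if r == 0]
    far = sum(p == 0 for p in attacks) / len(attacks) if attacks else 0.0        # attacks accepted as bona fide
    frr = sum(p == 1 for p in bona_fide) / len(bona_fide) if bona_fide else 0.0  # bona fide rejected as attacks
    return {"HTER": (far + frr) / 2}

print(hter([0, 0], [0, 1]))  # {'HTER': 0.25}
print(hter([0, 0], [0, 0]))  # {'HTER': 0.0}
```

Under these assumptions, the doctest input has no attack samples (FAR = 0) and one of two bona fide samples rejected (FRR = 0.5), giving (0 + 0.5) / 2 = 0.25, which matches the corrected expected output.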