---
title: HTER
emoji: 🤗
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: >-
  HTER (Half Total Error Rate) combines the False Accept Rate (FAR) and the
  False Reject Rate (FRR) into a single score: HTER = (FAR + FRR) / 2, where
  FAR = FP / (FP + TN) and FRR = FN / (FN + TP) (TP: true positives, TN: true
  negatives, FP: false positives, FN: false negatives).
---
# Metric Card for HTER
## Metric Description
HTER (Half Total Error Rate) combines the False Accept Rate (FAR) and the False Reject Rate (FRR) into a single score for evaluating binary classification or verification systems. It is computed as:

HTER = (FAR + FRR) / 2

where:
- FAR (False Accept Rate) = FP / (FP + TN)
- FRR (False Reject Rate) = FN / (FN + TP)

with TP = true positives, TN = true negatives, FP = false positives, and FN = false negatives.
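The formulas above can be sketched in plain Python. This is an illustrative reference implementation, not the actual code of this metric; in particular, treating an undefined rate (no positives or no negatives in the references) as 0.0 is an assumption made here so the sketch reproduces the quickstart example below.

```python
def hter(references, predictions):
    """Half Total Error Rate for binary labels (0 = negative, 1 = positive)."""
    tp = sum(1 for r, p in zip(references, predictions) if r == 1 and p == 1)
    tn = sum(1 for r, p in zip(references, predictions) if r == 0 and p == 0)
    fp = sum(1 for r, p in zip(references, predictions) if r == 0 and p == 1)
    fn = sum(1 for r, p in zip(references, predictions) if r == 1 and p == 0)
    # Assumption: a rate with a zero denominator is counted as 0.0.
    far = fp / (fp + tn) if (fp + tn) else 0.0  # False Accept Rate
    frr = fn / (fn + tp) if (fn + tp) else 0.0  # False Reject Rate
    return (far + frr) / 2

print(hter([0, 0], [0, 1]))  # 0.25: FAR = 1/2, FRR taken as 0
```

Here one of the two negative references is falsely accepted (FAR = 0.5) and there are no positive references, so under the zero-denominator assumption FRR = 0 and HTER = 0.25.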
## How to Use
At minimum, this metric requires predictions and references as inputs.
```python
>>> import evaluate
>>> hter_metric = evaluate.load("murinj/hter")
>>> results = hter_metric.compute(references=[0, 0], predictions=[0, 1])
>>> print(results)
{'HTER': 0.25}
```
### Inputs
- **predictions** (`list` of `int`): Predicted labels.
- **references** (`list` of `int`): Ground truth labels.
[//]: # (- **normalize** (`boolean`): If set to False, returns the number of correctly classified samples. Otherwise, returns the fraction of correctly classified samples. Defaults to True.)
[//]: # (- **sample_weight** (`list` of `float`): Sample weights. Defaults to None.)
### Output Values
- **HTER** (`float`): HTER score. Minimum possible value is 0.0 (no errors); maximum possible value is 1.0.
Output Example(s):
```python
{'HTER': 0.0}
```
This metric outputs a dictionary containing the HTER score.
[//]: # (## Citation(s))
[//]: # (```bibtex)
[//]: # ()
[//]: # (```)
## Further References