---
title: HTER
emoji: 🤗
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: >-
  HTER (Half Total Error Rate) is a metric that combines the False Accept Rate
  (FAR) and the False Reject Rate (FRR) into a single score: HTER = (FAR + FRR) / 2,
  where FAR = FP / (FP + TN), FRR = FN / (FN + TP), and TP, TN, FP, FN denote
  true positives, true negatives, false positives, and false negatives.
---
# Metric Card for HTER

## Metric Description

HTER (Half Total Error Rate) is a metric that combines the False Accept Rate (FAR) and the False Reject Rate (FRR) into a single score, weighting errors on negative and positive samples equally. It is computed as:

HTER = (FAR + FRR) / 2

Where:

- FAR (False Accept Rate) = FP / (FP + TN)
- FRR (False Reject Rate) = FN / (FN + TP)
- TP: True positives
- TN: True negatives
- FP: False positives
- FN: False negatives
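The definitions above can be sketched in plain Python. This is a minimal illustration; the helper name and the convention of treating an undefined rate (empty class) as 0 are assumptions, not part of this metric's API:

```python
def hter(tp, tn, fp, fn):
    """Half Total Error Rate from confusion-matrix counts."""
    # False Accept Rate: fraction of negatives wrongly accepted
    far = fp / (fp + tn) if (fp + tn) else 0.0
    # False Reject Rate: fraction of positives wrongly rejected
    frr = fn / (fn + tp) if (fn + tp) else 0.0
    return (far + frr) / 2

print(hter(tp=3, tn=5, fp=1, fn=1))  # (1/6 + 1/4) / 2 ≈ 0.208
```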
## How to Use

At minimum, this metric requires predictions and references as inputs.

```python
>>> import evaluate
>>> hter_metric = evaluate.load("murinj/hter")
>>> results = hter_metric.compute(references=[0, 0], predictions=[0, 1])
>>> print(results)
{'HTER': 0.25}
```
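Tracing the example: both references are negative (0), and the predictions yield one true negative and one false positive, so FAR = 1/2; there are no positive references, so FRR contributes 0 here, giving (0.5 + 0) / 2 = 0.25. The computation can be reproduced from label lists with a short sketch (the function name and the zero-division convention are illustrative assumptions):

```python
def hter_from_labels(references, predictions):
    # Tally confusion-matrix cells, treating label 1 as the positive class
    tp = sum(r == 1 and p == 1 for r, p in zip(references, predictions))
    tn = sum(r == 0 and p == 0 for r, p in zip(references, predictions))
    fp = sum(r == 0 and p == 1 for r, p in zip(references, predictions))
    fn = sum(r == 1 and p == 0 for r, p in zip(references, predictions))
    far = fp / (fp + tn) if (fp + tn) else 0.0  # False Accept Rate
    frr = fn / (fn + tp) if (fn + tp) else 0.0  # False Reject Rate
    return {"HTER": (far + frr) / 2}

print(hter_from_labels([0, 0], [0, 1]))  # {'HTER': 0.25}
```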
### Inputs

- **predictions** (`list` of `int`): Predicted labels.
- **references** (`list` of `int`): Ground truth labels.
### Output Values

- **HTER** (`float`): HTER score. Minimum possible value is 0.0; maximum possible value is 1.0.

This metric outputs a dictionary containing the HTER score. Output example:

```python
{'HTER': 0.0}
```
## Further References