---
title: CTC_Eval
datasets:
-
tags:
- evaluate
- metric
description: "This repo contains code of an automatic evaluation metric described in the paper
Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation"
sdk: gradio
sdk_version: 3.0.2
app_file: app.py
pinned: false
---
# Metric Card for CTC_Eval
## Metric Description
* Previous work on NLG evaluation has typically focused on a single task and developed individual evaluation metrics based on specific intuitions.
* In this work, we propose a unifying perspective based on the nature of information change in NLG tasks, including compression (e.g., summarization), transduction (e.g., text rewriting), and creation (e.g., dialog).
* A common concept underlying the three broad categories is information alignment, which we define as the extent to which the information in one generation component is grounded in another.
* We adopt contextualized language models to measure information alignment.
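The alignment idea can be illustrated with a deliberately simplified sketch. The actual metric uses contextualized language models to score alignment; the `toy_alignment` helper below is purely hypothetical and only approximates the concept with lexical overlap:

```python
def toy_alignment(prediction: str, reference: str) -> float:
    """Toy stand-in for information alignment: the fraction of
    prediction tokens that also appear in the reference.
    (The real CTC metric uses contextualized embeddings, not overlap.)
    """
    ref_tokens = set(reference.lower().split())
    pred_tokens = prediction.lower().split()
    if not pred_tokens:
        return 0.0
    grounded = sum(tok in ref_tokens for tok in pred_tokens)
    return grounded / len(pred_tokens)

# 'world' is grounded in the reference, 'hi' is not
print(toy_alignment("hi world", "hello world"))  # 0.5
```

In the real metric, this grounding notion is computed directionally between generation components (e.g., source document and output), which is what lets one formulation cover compression, transduction, and creation tasks.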
## How to Use
Example:
```python
>>> import evaluate
>>> ctc_score = evaluate.load("yzha/ctc_eval")
>>> results = ctc_score.compute(references=['hello world'], predictions=['hi world'])
>>> print(results)
{'ctc_score': 0.5211202502250671}
```
### Inputs
- `references` (list of `str`): the reference text(s), e.g. the document containing all the information the generation should be grounded in.
- `predictions` (list of `str`): the NLG-model-generated text(s) to evaluate.
### Output Values
A dictionary with a single key, `ctc_score` (`float`): the information-alignment score. Higher values indicate that the generated text is better grounded in the reference.
## Citation
```bibtex
@inproceedings{deng2021compression,
  title={Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation},
  author={Deng, Mingkai and Tan, Bowen and Liu, Zhengzhong and Xing, Eric and Hu, Zhiting},
  booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
  pages={7580--7605},
  year={2021}
}
```