---
title: GLEU
tags:
- evaluate
- metric
description: "Generalized Language Evaluation Understanding (GLEU) is a metric initially developed for Grammatical Error Correction (GEC) that builds upon BLEU by rewarding corrections while also correctly crediting unchanged source text."
sdk: gradio
sdk_version: 3.50
app_file: app.py
pinned: false
emoji: π
colorFrom: indigo
colorTo: green
---
|
|
|
# Metric Card for GLEU |
|
|
|
## Metric Description |
|
|
|
The GLEU metric can be used for any monolingual "translation" task; that is, it applies to Grammatical Error Correction (GEC) and other text-rewriting tasks. Like BLEU, GLEU computes n-gram precisions against the reference, but it assigns extra weight to n-grams that have been correctly changed from the source. As a result, GLEU rewards corrections while also correctly crediting unchanged source text.
|
|
|
## How to Use |
|
|
|
Follow the instructions in [the GitHub repository](https://github.com/cnap/gec-ranking) that accompanies the cited papers.
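To make the scoring idea concrete, here is a minimal, illustrative single-reference sketch of the computation described in "GLEU Without Tuning": n-gram matches with the reference are counted, but n-grams the hypothesis shares only with the erroneous source are penalized. The function name and simplifications are my own; the canonical implementation is the one in the linked repository.

```python
from collections import Counter
from math import exp, log


def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def sentence_gleu(source, reference, hypothesis, max_n=4):
    """Simplified single-reference GLEU (sketch, not the official scorer).

    For each n, precision counts hypothesis n-grams found in the reference,
    minus a penalty for n-grams kept from the source that the reference
    does not support. A BLEU-style brevity penalty is applied at the end.
    """
    src, ref, hyp = source.split(), reference.split(), hypothesis.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams, src_ngrams = ngrams(hyp, n), ngrams(ref, n), ngrams(src, n)
        total = sum(hyp_ngrams.values())
        if total == 0:
            continue  # hypothesis shorter than n
        matches = 0
        for g, c in hyp_ngrams.items():
            overlap_ref = min(c, ref_ngrams[g])          # credit for matching the reference
            penalty = max(0, min(c, src_ngrams[g]) - ref_ngrams[g])  # uncorrected source n-grams
            matches += max(0, overlap_ref - penalty)
        if matches == 0:
            return 0.0  # geometric mean collapses to zero
        log_precisions.append(log(matches / total))
    if not log_precisions:
        return 0.0
    bp = 1.0 if len(hyp) >= len(ref) else exp(1 - len(ref) / max(len(hyp), 1))
    return bp * exp(sum(log_precisions) / len(log_precisions))
```

With this sketch, a hypothesis identical to the reference scores 1.0, while leaving the source unchanged scores lower because the retained errors are penalized rather than merely unmatched.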
|
|
|
## Citation |
|
|
|
- [Ground Truth for Grammatical Error Correction Metrics](https://aclanthology.org/P15-2097) by Courtney Napoles, Keisuke Sakaguchi, Joel Tetreault, and Matt Post. *Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing*. |
|
- [GLEU Without Tuning](https://arxiv.org/abs/1605.02592) by Courtney Napoles, Keisuke Sakaguchi, Joel Tetreault, and Matt Post. arXiv. |
|
|