---
title: GLEU
tags:
- evaluate
- metric
description: "Generalized Language Evaluation Understanding (GLEU) is a metric initially developed for Grammatical Error Correction (GEC) that builds upon BLEU by rewarding corrections while also correctly crediting unchanged source text."
sdk: gradio
sdk_version: 3.50
app_file: app.py
pinned: false
emoji: π
colorFrom: indigo
colorTo: green
---
# Metric Card for GLEU
## Metric Description
The GLEU metric can be used for any monolingual "translation" task, i.e., Grammatical Error Correction and other text-rewriting tasks. Like BLEU, it computes n-gram precisions over the reference, but it assigns additional weight to n-grams that have been correctly changed from the source. GLEU thus rewards corrections while also correctly crediting unchanged source text.
## How to Use
Follow the instructions in [the GitHub repository](https://github.com/cnap/gec-ranking) that accompanies the cited papers.
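The official scorer lives in that repository. For intuition only, the sentence-level scoring idea from the 2016 paper can be sketched as follows; the function names, the smoothing floor, and the uniform n-gram weighting are illustrative assumptions, not the reference implementation:

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def gleu(source, reference, hypothesis, max_n=4):
    """Rough sentence-level GLEU sketch (after Napoles et al., 2016).

    For each n, precision counts hypothesis n-grams that match the
    reference, minus a penalty for n-grams carried over from the
    source that the reference changed. A BLEU-style brevity penalty
    scales the geometric mean of the precisions.
    """
    src, ref, hyp = source.split(), reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts, src_counts = ngrams(hyp, n), ngrams(ref, n), ngrams(src, n)
        total = sum(hyp_counts.values())
        if total == 0:  # hypothesis shorter than n tokens
            continue
        match = 0
        for ng, cnt in hyp_counts.items():
            overlap = min(cnt, ref_counts[ng])
            # penalize source n-grams the reference removed but the hypothesis kept
            penalty = max(0, min(cnt, src_counts[ng]) - ref_counts[ng])
            match += max(0, overlap - penalty)
        p_n = match / total
        log_precisions.append(log(p_n) if p_n > 0 else log(1e-16))  # floor keeps log defined
    brevity = 1.0 if len(hyp) >= len(ref) else exp(1 - len(ref) / len(hyp))
    return brevity * exp(sum(log_precisions) / len(log_precisions))
```

A hypothesis identical to the reference scores 1.0, while one that leaves the source's errors uncorrected scores much lower, which is the behavior that distinguishes GLEU from plain BLEU on GEC data.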
## Citation
- [Ground Truth for Grammatical Error Correction Metrics](https://aclanthology.org/P15-2097) by Courtney Napoles, Keisuke Sakaguchi, Joel Tetreault, and Matt Post. *Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing*.
- [GLEU Without Tuning](https://arxiv.org/abs/1605.02592) by Courtney Napoles, Keisuke Sakaguchi, Joel Tetreault, and Matt Post. arXiv.