---
datasets:
- jhu-clsp/jfleg
language:
- en
base_model:
- google-t5/t5-base
pipeline_tag: text2text-generation
library_name: transformers
tags:
- text-generation-inference
- grammar
---
This model is part of the GrammarCorrector tool: https://github.com/akhmat-s/GrammarCorrector

The FlanT5 model is fine-tuned on the JFLEG dataset. The primary objective of the experiment was to build a highly effective grammar-correction tool from a minimal amount of training data. To accomplish this, we apply one key strategy:

- Perplexity-based data pruning with small reference models.
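
The idea behind this strategy is to score every candidate training example with a small reference language model and keep only the examples the model finds most predictable (lowest perplexity), discarding noisy or malformed text. The sketch below is illustrative, not the tool's actual implementation: it uses a toy character-level unigram model as a stand-in for the small reference model (in practice a small pretrained LM would play that role), and all names are hypothetical.

```python
import math
from collections import Counter

class UnigramReferenceModel:
    """Toy stand-in for a small reference language model: a smoothed
    character-level unigram model trained on a small clean corpus."""

    def __init__(self, corpus):
        counts = Counter("".join(corpus))
        total = sum(counts.values())
        # Add-one smoothing so unseen characters get non-zero probability.
        self.vocab = set(counts) | {"<unk>"}
        self.probs = {
            c: (counts[c] + 1) / (total + len(self.vocab)) for c in self.vocab
        }

    def perplexity(self, text):
        if not text:
            return float("inf")
        log_p = sum(
            math.log(self.probs.get(c, self.probs["<unk>"])) for c in text
        )
        return math.exp(-log_p / len(text))

def prune_by_perplexity(examples, ref_model, keep_ratio=0.5):
    """Keep the fraction of examples the reference model scores as most
    predictable; drop the high-perplexity (likely noisy) remainder."""
    ranked = sorted(examples, key=ref_model.perplexity)
    return ranked[: max(1, int(len(ranked) * keep_ratio))]

reference_corpus = ["she goes to school every day", "the weather is nice today"]
ref = UnigramReferenceModel(reference_corpus)

candidates = ["he walks to the store", "xqzj vvkw 9#@!"]
# The garbled string gets a much higher perplexity and is pruned away.
print(prune_by_perplexity(candidates, ref, keep_ratio=0.5))
```

The same ranking-and-truncation step applies unchanged when the scorer is a small pretrained LM instead of this toy model; only the `perplexity` implementation differs.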