---
datasets:
- jhu-clsp/jfleg
language:
- en
base_model:
- google-t5/t5-base
pipeline_tag: text2text-generation
library_name: transformers
tags:
- text-generation-inference
- grammar
---
This model is part of the [GrammarCorrector](https://github.com/akhmat-s/GrammarCorrector) tool.
The FlanT5 model is fine-tuned on the [JFLEG](https://arxiv.org/abs/1702.04066) dataset. The primary objective of the experiment was to develop a highly effective grammar-correction tool using a minimal amount of training data.

To accomplish this goal, we apply one key strategy:

- [Perplexity-based data pruning](https://arxiv.org/abs/2405.20541) with small reference models.
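The idea behind perplexity-based pruning is to score each candidate training example with a small, cheap reference model and keep only the examples whose perplexity falls in the desired range. The sketch below is purely illustrative and is not the actual pruning code used for this model: it substitutes a toy add-one-smoothed unigram model for the small reference language model, and the function names (`unigram_perplexity`, `prune_by_perplexity`) and the `keep_fraction` parameter are hypothetical.

```python
import math
from collections import Counter

def unigram_perplexity(text, counts, total, vocab_size):
    # Add-one-smoothed unigram perplexity, standing in for a
    # small reference language model's perplexity score.
    tokens = text.split()
    log_prob = 0.0
    for tok in tokens:
        p = (counts.get(tok, 0) + 1) / (total + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(tokens), 1))

def prune_by_perplexity(examples, reference_corpus, keep_fraction=0.5):
    # Fit the toy reference model on a held-out corpus.
    ref_tokens = " ".join(reference_corpus).split()
    counts = Counter(ref_tokens)
    total, vocab = len(ref_tokens), len(counts)
    # Rank examples by perplexity and keep the lowest-scoring fraction.
    scored = sorted(
        examples,
        key=lambda s: unigram_perplexity(s, counts, total, vocab),
    )
    keep = max(1, int(len(scored) * keep_fraction))
    return scored[:keep]
```

In the actual experiments a small pretrained language model plays the role of the scorer, and the retained slice of the data (here simply the lowest-perplexity half) is what the larger model is fine-tuned on.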