---
library_name: transformers
tags:
- GEC
language:
- et
base_model:
- tartuNLP/Llammas-base
pipeline_tag: text-generation
---

# Llammas-base-p1-llama-errors-p2-GEC

A grammatical error correction (GEC) model for Estonian based on [tartuNLP/Llammas-base](https://huggingface.co/tartuNLP/Llammas-base), fine-tuned in two stages: 1) correcting 1M synthetic errors produced by our Llama-based error generation model, and 2) human GEC data.


For the training and inference code used in our paper, see our repository: [https://github.com/TartuNLP/gec-llm](https://github.com/TartuNLP/gec-llm).


### Usage for Inference
A simple example (the prompt templating is provided in `tokenizer.chat_template`):
````
from transformers import pipeline
import torch

gec_pipe = pipeline(
    "text-generation",
    model="tartuNLP/Llammas-base-p1-llama-errors-p2-GEC",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    do_sample=False, num_beams=4, temperature=None, top_p=None
)
gec_pipe.tokenizer.pad_token_id = gec_pipe.tokenizer.eos_token_id
gec_pipe.tokenizer.padding_side = "left"

### Input sentence here:
input_sentence = "Ma läheb koju"
gec_pipe([{"role": "user", "content": input_sentence}], max_new_tokens=300)[0]["generated_text"][-1]["content"]
````
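
Because the pipeline above is set up with left padding and a pad token, it can also correct several sentences in one call. A minimal sketch, assuming a recent `transformers` version that accepts a list of chats; the `sentences` list and `batch_size` value are illustrative:

````
# Hypothetical batched usage of the pipeline defined above; the example sentences are illustrative.
sentences = ["Ma läheb koju", "Me lähme homme kinno"]
outputs = gec_pipe(
    [[{"role": "user", "content": s}] for s in sentences],  # one chat per input sentence
    max_new_tokens=300,
    batch_size=2,
)
corrections = [out[0]["generated_text"][-1]["content"] for out in outputs]
````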

Alternatively, loading the model and tokenizer explicitly:
````
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

model = AutoModelForCausalLM.from_pretrained(
    "tartuNLP/Llammas-base-p1-llama-errors-p2-GEC",
    device_map="auto",
    return_dict=True,
    low_cpu_mem_usage=True,
    torch_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(
    "tartuNLP/Llammas-base-p1-llama-errors-p2-GEC",
    padding_side="left"
)
# Need to set the padding token to 0 or eos_token_id if batching is used
# (the model does not set it by default)
tokenizer.pad_token_id = tokenizer.eos_token_id

gec_pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, do_sample=False, num_beams=4, temperature=None, top_p=None
)


### Input sentence here
input_sentence = "Ma läheb koju"

# Two options:
# 1)
PROMPT = '### Instruction:\nReply with a corrected version of the input sentence in Estonian with all grammatical and spelling errors fixed. If there are no errors, reply with a copy of the original sentence.\n\n### Input:\n{input}\n\n### Response:\n'
example = PROMPT.format(input=input_sentence)
# 2) or use the chat template provided by us that does the same thing
example = tokenizer.apply_chat_template([{"role": "user", "content": input_sentence}], tokenize=False)

gec_pipe(example, max_new_tokens=300)[0]["generated_text"][len(example):]
````

#### Preprocessing

For Estonian, we used a detokenization script ([detokenize.py](https://github.com/TartuNLP/gec-llm/blob/main/scripts/gec/detokenize.py))
that also performs whitespace and quote normalization, so you may want to apply the same regex rules to your input.
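
As a rough illustration of that kind of normalization (the regexes below are only an approximation, not the actual rules from `detokenize.py`):

````
import re

def normalize(text: str) -> str:
    # Approximate whitespace and quote normalization; see detokenize.py for the real rules.
    text = re.sub(r"\s+", " ", text).strip()                    # collapse runs of whitespace
    text = text.replace("\u201e", '"')                          # low opening quote -> straight quote
    text = text.replace("\u201c", '"').replace("\u201d", '"')   # curly double quotes -> straight
    text = text.replace("\u2019", "'")                          # curly apostrophe -> straight
    return text

print(normalize(" Ma  läheb „koju” "))  # -> 'Ma läheb "koju"'
````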


## Citation

**BibTeX:**
````
@inproceedings{luhtaru-etal-2024-err,
    title = "To Err Is Human, but Llamas Can Learn It Too",
    author = "Luhtaru, Agnes  and
      Purason, Taido  and
      Vainikko, Martin  and
      Del, Maksym  and
      Fishel, Mark",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.727",
    doi = "10.18653/v1/2024.findings-emnlp.727",
    pages = "12466--12481",
    abstract = "This study explores enhancing grammatical error correction (GEC) through automatic error generation (AEG) using language models (LMs). Specifically, we fine-tune Llama 2 LMs for error generation and find that this approach yields synthetic errors akin to human errors. Next, we train GEC Llama models using these artificial errors and outperform previous state-of-the-art error correction models, with gains ranging between 0.8 and 6 F0.5 points across all tested languages (German, Ukrainian, and Estonian). Moreover, we demonstrate that generating errors by fine-tuning smaller sequence-to-sequence models and prompting large commercial LMs (GPT3.5 and GPT4) also results in synthetic errors beneficially affecting error generation models. We openly release trained models for error generation and correction as well as all the synthesized error datasets for the covered languages.",
}
````

arXiv link: [https://arxiv.org/abs/2403.05493](https://arxiv.org/abs/2403.05493)