farelzii committed
Commit
b39b096
1 Parent(s): fe7b370

Update README.md

Files changed (1):
  README.md +155 -24
README.md CHANGED
@@ -1,42 +1,173 @@
  ---
- license: apache-2.0
  ---

- ## Widget Inference

- You can try this model directly here:

- **(Replace `username/grammar-corrector-t5` with your model name)**

- import gradio as gr
  from transformers import pipeline

- # Load the model
- corrector = pipeline("text2text-generation", model="username/grammar-corrector-t5")

- # Build the Gradio interface
- def correct_grammar(text):
-     return corrector(text)[0]["generated_text"]

- iface = gr.Interface(
-     fn=correct_grammar,
-     inputs=gr.Textbox(lines=5, placeholder="Enter text here..."),
-     outputs=gr.Textbox(lines=5),
- )

- iface.launch()

- import gradio as gr

- def greet(name):
-     return "Hello, " + name + "!"

- demo = gr.Interface(
-     fn=greet,
-     inputs=gr.Textbox(label="Your Name"),
-     outputs=gr.Textbox(label="Result"),
  )

- demo.launch()
  ---
+ languages:
+ - en
+ license:
+ - cc-by-nc-sa-4.0
+ - apache-2.0
+ tags:
+ - grammar
+ - spelling
+ - punctuation
+ - error-correction
+ - grammar synthesis
+ - FLAN
+
+ datasets:
+ - jfleg
+ widget:
+ - text: "There car broke down so their hitching a ride to they're class."
+   example_title: "compound-1"
+ - text: "i can has cheezburger"
+   example_title: "cheezburger"
+ - text: "so em if we have an now so with fito ringina know how to estimate the tren given the ereafte mylite trend we can also em an estimate is nod s i again tort watfettering an we have estimated the trend an called wot to be called sthat of exty right now we can and look at wy this should not hare a trend i becan we just remove the trend an and we can we now estimate tesees ona effect of them exty"
+   example_title: "Transcribed Audio Example 2"
+ - text: "My coworker said he used a financial planner to help choose his stocks so he wouldn't loose money."
+   example_title: "incorrect word choice (context)"
+ - text: "good so hve on an tadley i'm not able to make it to the exla session on monday this week e which is why i am e recording pre recording an this excelleision and so to day i want e to talk about two things and first of all em i wont em wene give a summary er about ta ohow to remove trents in these nalitives from time series"
+   example_title: "lowercased audio transcription output"
+ - text: "Frustrated, the chairs took me forever to set up."
+   example_title: "dangling modifier"
+ - text: "I would like a peice of pie."
+   example_title: "miss-spelling"
+ - text: "Which part of Zurich was you going to go hiking in when we were there for the first time together? ! ?"
+   example_title: "chatbot on Zurich"
+ - text: "Most of the course is about semantic or content of language but there are also interesting topics to be learned from the servicefeatures except statistics in characters in documents. At this point, Elvthos introduces himself as his native English speaker and goes on to say that if you continue to work on social scnce,"
+   example_title: "social science ASR summary output"
+ - text: "they are somewhat nearby right yes please i'm not sure how the innish is tepen thut mayyouselect one that istatte lo variants in their property e ere interested and anyone basical e may be applyind reaching the browing approach were"
+   example_title: "medical course audio transcription"
+
+ parameters:
+   max_length: 128
+   min_length: 4
+   num_beams: 8
+   repetition_penalty: 1.21
+   length_penalty: 1
+   early_stopping: True
  ---

+ # grammar-synthesis-large: FLAN-t5
+
+ <a href="https://colab.research.google.com/gist/pszemraj/5dc89199a631a9c6cfd7e386011452a0/demo-flan-t5-large-grammar-synthesis.ipynb">
+   <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
+ </a>
+
+ A fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) for grammar correction on an expanded version of the [JFLEG](https://paperswithcode.com/dataset/jfleg) dataset. [Demo](https://huggingface.co/spaces/pszemraj/FLAN-grammar-correction) on HF Spaces.
+
+ ## Example
+
+ ![example](https://i.imgur.com/PIhrc7E.png)
+
+ Compare vs. the original [grammar-synthesis-large](https://huggingface.co/pszemraj/grammar-synthesis-large).
+
+ ---
+
+ ## Usage in Python
+
+ > There's a Colab notebook that already has this basic version implemented (_click the Open in Colab button above_).
+
+ After `pip install transformers`, run the following code:
+
+ ```python
  from transformers import pipeline

+ corrector = pipeline(
+     'text2text-generation',
+     'pszemraj/flan-t5-large-grammar-synthesis',
+ )
+ raw_text = 'i can has cheezburger'
+ results = corrector(raw_text)
+ print(results)
+ ```
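+
+ The `parameters` block in the front matter above lists the widget's generation settings; a minimal sketch of passing the same settings explicitly to the pipeline call (values copied from the front matter, tune as needed):
+
+ ```python
+ results = corrector(
+     raw_text,
+     max_length=128,
+     min_length=4,
+     num_beams=8,
+     repetition_penalty=1.21,
+     length_penalty=1.0,
+     early_stopping=True,
+ )
+ print(results[0]["generated_text"])
+ ```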
+
+ **For Batch Inference:** see [this discussion thread](https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis/discussions/1) for details, but essentially the dataset consists of several sentences at a time, so I'd recommend running inference **in the same fashion:** batches of roughly 64-96 tokens (or 2-3 sentences split with a regex).
+
+ - It is also helpful to **first** check whether or not a given sentence needs grammar correction before using the text2text model. You can do this with BERT-type models fine-tuned on CoLA like `textattack/roberta-base-CoLA`; a sketch combining both steps follows this list.
+ - I made a notebook demonstrating batch inference [here](https://colab.research.google.com/gist/pszemraj/6e961b08970f98479511bb1e17cdb4f0/batch-grammar-check-correct-demo.ipynb).
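+
+ A minimal sketch of that flow (regex split into ~3-sentence chunks, then a CoLA-based check before correcting). The chunk size and the label handling are assumptions; verify the label mapping on the `textattack/roberta-base-CoLA` model card:
+
+ ```python
+ import re
+
+ from transformers import pipeline
+
+ corrector = pipeline(
+     "text2text-generation",
+     "pszemraj/flan-t5-large-grammar-synthesis",
+ )
+ # CoLA-tuned classifier to flag chunks that actually need correction
+ checker = pipeline("text-classification", "textattack/roberta-base-CoLA")
+
+ text = (
+     "There car broke down so their hitching a ride to they're class. "
+     "i can has cheezburger. This sentence is perfectly fine."
+ )
+
+ # naive split on sentence-ending punctuation, grouped into chunks of 3 sentences
+ sentences = re.split(r"(?<=[.!?])\s+", text)
+ chunks = [" ".join(sentences[i : i + 3]) for i in range(0, len(sentences), 3)]
+
+ corrected = []
+ for chunk in chunks:
+     # assumption: LABEL_0 = "unacceptable" for this checkpoint - check before relying on it
+     needs_fix = checker(chunk)[0]["label"] == "LABEL_0"
+     corrected.append(corrector(chunk)[0]["generated_text"] if needs_fix else chunk)
+
+ print(" ".join(corrected))
+ ```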
+
+ ---
+
+ ## Model description
+
+ The intent is to create a text2text language model that successfully completes "single-shot grammar correction" on potentially grammatically incorrect text **that could have many mistakes**, with the important qualifier that **it does not semantically change text/information that IS grammatically correct**.
+
+ Compare some of the heavier-error examples on [other grammar correction models](https://huggingface.co/models?dataset=dataset:jfleg) to see the difference :)
+
+ ### ONNX Checkpoint
+
+ This model has been converted to ONNX and can be loaded/used with Hugging Face's `optimum` library.
+
+ You first need to [install optimum](https://huggingface.co/docs/optimum/installation):
+
+ ```bash
+ pip install optimum[onnxruntime]
+ # ^ if you want to use a different runtime, read their docs
+ ```
+
+ Then load it with the optimum `pipeline`:
+
+ ```python
+ from optimum.pipelines import pipeline
+
+ corrector_model_name = "pszemraj/flan-t5-large-grammar-synthesis"
+ corrector = pipeline(
+     "text2text-generation", model=corrector_model_name, accelerator="ort"
  )
+ # use as normal
+ ```
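+
+ Alternatively, the ONNX model can be loaded explicitly and paired with a tokenizer; a sketch of that pattern (assumes the repo ships the exported ONNX weights, as the card states):
+
+ ```python
+ from optimum.onnxruntime import ORTModelForSeq2SeqLM
+ from transformers import AutoTokenizer, pipeline
+
+ corrector_model_name = "pszemraj/flan-t5-large-grammar-synthesis"
+ model = ORTModelForSeq2SeqLM.from_pretrained(corrector_model_name)
+ tokenizer = AutoTokenizer.from_pretrained(corrector_model_name)
+ corrector = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
+ print(corrector("i can has cheezburger")[0]["generated_text"])
+ ```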
+
+ ### Other checkpoints
+
+ If trading a slight decrease in grammatical correction quality for faster inference speed makes sense for your use case, check out the **[base](https://huggingface.co/pszemraj/grammar-synthesis-base)** and **[small](https://huggingface.co/pszemraj/grammar-synthesis-small)** checkpoints, fine-tuned from the corresponding T5 checkpoints.
+
+ ## Limitations
+
+ - dataset: `cc-by-nc-sa-4.0`
+ - model: `apache-2.0`
+ - This is **still a work-in-progress**: while it is probably useful for "single-shot grammar correction" in many cases, **give the outputs a glance for correctness**.
+
+ ## Use Cases
+
+ Obviously, this section is quite general, as there are many things one can use "general single-shot grammar correction" for. Some ideas or use cases:
+
+ 1. Correcting highly error-prone LM outputs. Some examples would be audio transcription (ASR) (this is literally some of the examples above) or something like handwriting OCR.
+    - To be investigated further: depending on what model/system is used, it _might_ be worth it to apply this after OCR on typed characters.
+ 2. Correcting/infilling text generated by text generation models so it is cohesive and free of obvious errors that break conversational immersion. I use this on the outputs of [this OPT 2.7B chatbot-esque model of myself](https://huggingface.co/pszemraj/opt-peter-2.7B).
+
+ > An example of this model running on CPU with beam search:
+
+ ```
+ Original response:
+ ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to
+ synthesizing took 306.12 seconds
+ Final response in 1294.857 s:
+ I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak)
+ ```
+
+ _Note: I have some other logic that removes any periods at the end of the final sentence in this chatbot setting, [to avoid coming off as passive-aggressive](https://www.npr.org/2020/09/05/909969004/before-texting-your-kid-make-sure-to-double-check-your-punctuation)._
+
+ 3. Somewhat related to #2 above: fixing/correcting so-called [tortured phrases](https://arxiv.org/abs/2107.06751) that are dead giveaways that text was generated by a language model. _Note that **some** of these are not fixed, especially as they venture into domain-specific terminology (e.g., "irregular timberland" instead of "Random Forest")._
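+
+ Regarding the period-stripping note under #2, a minimal sketch of that post-processing step (a hypothetical helper, not part of this model):
+
+ ```python
+ def soften_reply(reply: str) -> str:
+     # drop a trailing period so the reply reads less curt (hypothetical helper)
+     reply = reply.rstrip()
+     return reply[:-1] if reply.endswith(".") else reply
+ ```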
+
+ ---
+
+ ## Citation info
+
+ If you find this fine-tuned model useful in your work, please consider citing it :)
+
+ ```
+ @misc{peter_szemraj_2022,
+     author    = { {Peter Szemraj} },
+     title     = { flan-t5-large-grammar-synthesis (Revision d0b5ae2) },
+     year      = 2022,
+     url       = { https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis },
+     doi       = { 10.57967/hf/0138 },
+     publisher = { Hugging Face }
+ }
+ ```