jordiclive committed
Commit 3b515a1
1 Parent(s): 48b8a20

Update README.md

Files changed (1):
  README.md +44 -1
README.md CHANGED
@@ -19,6 +19,9 @@ metrics:
 
 # Multi-purpose Summarizer (Fine-tuned 3B google/flan-t5-xl on several Summarization datasets)
 
+<a href="https://colab.research.google.com/gist/pszemraj/5dc89199a631a9c6cfd7e386011452a0/demo-flan-t5-large-grammar-synthesis.ipynb">
+  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
+</a>
 
 A fine-tuned version of [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) on various summarization datasets (xsum, wikihow, cnn_dailymail/3.0.0, samsum, scitldr/AIC, billsum, TLDR)
 
@@ -31,7 +34,47 @@ Goal: a model that can be used for a general-purpose summarizer for academic and
 
 ---
 
-# Usage - Basic
+## Usage
+
+Check the Colab notebook. **The model expects a prompt prepended to the source document to indicate the type of summary.** The prompts used to train the model are:
+
+```
+prompts = {
+    "article": "Produce an article summary of the following news article:",
+    "one_sentence": "Given the following news article, summarize the article in one sentence:",
+    "conversation": "Briefly summarize in third person the following conversation:",
+    "scitldr": "Given the following scientific article, provide a TL;DR summary:",
+    "bill": "Summarize the following proposed legislation (bill):",
+    "outlines": "Produce an article summary including outlines of each paragraph of the following article:",
+}
+```
+
+After `pip install transformers`, run the following code:
+
+```python
+import torch
+from transformers import pipeline
+
+summarizer = pipeline("summarization", "jordiclive/flan-t5-3b-summarizer", torch_dtype=torch.bfloat16)
+
+raw_document = 'You must be 18 years old to live or work in New York State...'
+prompt = "Produce an article summary of the following news article:"
+results = summarizer(
+    f"{prompt} {raw_document}",
+    num_beams=5,
+    min_length=5,
+    no_repeat_ngram_size=3,
+    truncation=True,
+    max_length=512,
+)
+```
+
+**For batch inference:** see [this discussion thread](https://huggingface.co/pszemraj/flan-t5-large-grammar-synthesis/discussions/1) for details, but essentially the dataset consists of several sentences at a time, so I'd recommend running inference in the same fashion: batches of roughly 64-96 tokens (or 2-3 sentences split with a regex).
+
+- It is also helpful to **first** check whether a given sentence needs grammar correction before using the text2text model. You can do this with BERT-type models fine-tuned on CoLA, such as `textattack/roberta-base-CoLA`.
+- I made a notebook demonstrating batch inference [here](https://colab.research.google.com/gist/pszemraj/6e961b08970f98479511bb1e17cdb4f0/batch-grammar-check-correct-demo.ipynb).
+
+---
 
 ## Training procedure
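The prompt-prepending convention the committed README describes can be sketched without loading the 3B model itself; the helper name `build_input` below is illustrative and not part of the model card:

```python
# Task prompts from the model card; the model expects one of these
# prepended to the source document.
prompts = {
    "article": "Produce an article summary of the following news article:",
    "one_sentence": "Given the following news article, summarize the article in one sentence:",
    "conversation": "Briefly summarize in third person the following conversation:",
    "scitldr": "Given the following scientific article, provide a TL;DR summary:",
    "bill": "Summarize the following proposed legislation (bill):",
    "outlines": "Produce an article summary including outlines of each paragraph of the following article:",
}

def build_input(task: str, document: str) -> str:
    """Prepend the task-specific prompt to the raw document, as the model expects."""
    return f"{prompts[task]} {document}"

text = build_input("one_sentence", "Storms battered the coast overnight.")
```

The resulting string is what gets passed to the summarization pipeline in place of the bare document.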
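The batch-inference tip (2-3 sentences split with a regex) can likewise be sketched; `sentence_batches` is a hypothetical helper, and the simple end-of-sentence regex here is an assumption that will mis-split around abbreviations:

```python
import re

def sentence_batches(text: str, per_batch: int = 3) -> list[str]:
    """Split text into sentences on end punctuation, then regroup
    into batches of `per_batch` sentences for batched inference."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    sentences = [s for s in sentences if s]
    return [" ".join(sentences[i:i + per_batch])
            for i in range(0, len(sentences), per_batch)]

batches = sentence_batches("One. Two. Three. Four. Five.", per_batch=2)
# → ["One. Two.", "Three. Four.", "Five."]
```

Each batch can then be prompt-prefixed and fed to the pipeline as a list for batched generation.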