---
license: mit

datasets:
- samsum

language:
- en

tags:
- summarization
- text-generation
- toxicity-reduction

widget:
- text: "Summarize the following Conversation: Kate: Good morning. Kai: Hi! How official! Kate: I wrote it at 4am Kai: I've noticed. Why? Kate: I had to get up early to catch the bus to the airport Kai: Where are you flying? Kate: To Antwerp! I'm fed up with Cambridge Kai: poor thing. Why? Kate: Just a stupid, elitist place without a soul. Or with a soul made of money. Kai: Try to rest a bit in Belgium, do not work too much. Kate: I have to work, but at least not in this soulless place. Kai: When are you coming back? Kate: I have to see my supervisor on Monday <unk> Kai: not too long a break Kate: Still better than nothing. Summary:"
  example_title: Summarization Example 1
- text: "Summarize the following Conversation: Dean: I feel sick Scott: hungover? Dean: no, like I ate something bad Scott: what did you eat yesterday? Dean: breakfast at Coffee Lovers' Scott: this is a rather safe place Dean: and Chinese from TaoTao for dinner Scott: now we have a suspect Summary:"
  example_title: Summarization Example 2

inference:
  parameters:
    max_new_tokens: 256
    repetition_penalty: 2.5
    top_p: 0.95
    top_k: 50
    temperature: 0.7
    num_beams: 3
    no_repeat_ngram_size: 2
    num_return_sequences: 1
    do_sample: True
---

# Flan-T5 (base-sized) Dialogue Summarization with reduced toxicity using RLAIF

This model is a [Flan-T5 model](https://huggingface.co/google/flan-t5-base) fine-tuned on the [SAMSUM](https://huggingface.co/datasets/samsum) dataset.
The base model (Flan-T5) builds on pre-trained T5 (Raffel et al., 2020) and is fine-tuned with instructions for better zero-shot and few-shot performance. <br>

Our model is fine-tuned on a single downstream task, dialogue summarization on the above-mentioned dataset, with the primary objective of reducing toxicity in the generated summaries.

## Model description

This model has the same architecture and number of parameters as its base model. Please refer to this [link](https://arxiv.org/abs/2210.11416) for more details about the model.

## Intended Use & Limitations

This model is intended to summarize a given dialogue while producing a less toxic summary, even when the input dialogue contains toxic words or phrases.<br>
The model was fine-tuned with the instruction `Summarize the following Conversation: ` prepended to each dialogue, followed by the keyword `Summary: ` at the end to mark where the summary should begin.

Note: Because the model is trained primarily with the objective of reducing toxicity in its outputs, summaries can be relatively short and may occasionally miss an important point in the dialogue while staying true to that primary goal.

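As an illustration, the instruction format described above can be wrapped in a small helper; `build_prompt` below is a hypothetical convenience function, not part of this repository:

```python
def build_prompt(dialogue: str) -> str:
    """Wrap a raw dialogue in the instruction format this model was fine-tuned on."""
    return f"Summarize the following Conversation: {dialogue} Summary:"

print(build_prompt("Dean: I feel sick Scott: hungover?"))
# -> Summarize the following Conversation: Dean: I feel sick Scott: hungover? Summary:
```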
## Usage

You can use this model directly to generate summaries:

```python
import torch

from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the PEFT config to find the pre-trained base checkpoint
peft_model_id = "DeathReaper0965/flan-t5-base-samsum-lora-RLAIF-toxicity"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base LLM and its tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, device_map="auto")  # optionally: load_in_8bit=True
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA adapter weights
model = PeftModel.from_pretrained(model, peft_model_id, device_map="auto")

input_ids = tokenizer.encode(
    "Summarize the following Conversation: Dean: I feel sick Scott: hungover? Dean: no, like I ate something bad Scott: what did you eat yesterday? Dean: breakfast at Coffee Lovers' Scott: this is a rather safe place Dean: and Chinese from TaoTao for dinner Scott: now we have a suspect Summary:",
    return_tensors="pt"
).to("cuda" if torch.cuda.is_available() else "cpu")

summary = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    repetition_penalty=2.5,
    top_p=0.95,
    top_k=50,
    temperature=0.7,
    no_repeat_ngram_size=2,
    num_return_sequences=1,
    do_sample=True
)

output = tokenizer.batch_decode(summary, skip_special_tokens=True)

###########OUTPUT###########
# "Dean ate breakfast at Coffee Lovers' yesterday and Chinese from TaoTao for dinner."
```

> Designed and Developed with <span style="color: #e25555;">♥</span> by [Praneet](https://deathreaper0965.github.io/) | [LinkedIn](http://linkedin.com/in/deathreaper0965) | [GitHub](https://github.com/DeathReaper0965/)