---
tags:
- Question(s) Generation
metrics:
- rouge
model-index:
- name: anshoomehra/question-generation-auto-hints-t5-v1-base-s-q-c
  results: []
---

# Auto Question Generation

The model is intended for auto and/or hint-enabled question generation tasks. It is expected to produce one or more questions from the provided context.

It achieves the following results on the evaluation set:
- Loss: 1.5165
- Rouge1: 0.5819
- Rouge2: 0.4231
- Rougel: 0.5487
- Rougelsum: 0.5491

[Live Demo: Question Generation](https://huggingface.co/spaces/anshoomehra/question_generation)

Including this one, five models were trained with different training sets; the demo compares them all in one go. You can also reach the individual projects at the links below:

[Auto Question Generation v1](https://huggingface.co/anshoomehra/question-generation-auto-t5-v1-base-s)

[Auto Question Generation v2](https://huggingface.co/anshoomehra/question-generation-auto-t5-v1-base-s-q)

[Auto Question Generation v3](https://huggingface.co/anshoomehra/question-generation-auto-t5-v1-base-s-q-c)

[Auto/Hints based Question Generation v1](https://huggingface.co/anshoomehra/question-generation-auto-hints-t5-v1-base-s-q)

This model can be used as below:

```python
import torch
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer
)

device = "cuda" if torch.cuda.is_available() else "cpu"

model_checkpoint = "anshoomehra/question-generation-auto-hints-t5-v1-base-s-q-c"

model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

## Input with prompt
context = "question_context: <context>"
encodings = tokenizer.encode(context, return_tensors='pt', truncation=True, padding='max_length').to(device)

## You can play with many hyperparameters to condition the output; see the demo
output = model.generate(encodings,
                        #max_length=300,
                        #min_length=20,
                        #length_penalty=2.0,
                        num_beams=4,
                        #early_stopping=True,
                        #do_sample=True,
                        #temperature=1.1
                        )

## Multiple questions are expected to be delimited by '?'. You can write a small wrapper to format them elegantly; see the demo.
questions = [tokenizer.decode(ids, clean_up_tokenization_spaces=False, skip_special_tokens=False) for ids in output]
```
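Since multiple questions come back as one '?'-delimited string, a small formatting wrapper can split them apart. The sketch below is a hypothetical helper (not part of the demo's code); the special-token names assumed here are the T5 tokenizer defaults:

```python
def format_questions(decoded: str) -> list:
    """Hypothetical helper: split a decoded sequence on '?' into clean questions."""
    # Strip special tokens left in the text when decoding with skip_special_tokens=False
    # (assumes T5's default special tokens)
    for token in ("<pad>", "</s>", "<unk>"):
        decoded = decoded.replace(token, "")
    # Questions are delimited by '?'; re-attach the delimiter after splitting
    return [q.strip() + "?" for q in decoded.split("?") if q.strip()]
```

Applied to each entry of `questions` above, this yields one clean string per generated question.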
## Training and evaluation data

Custom data.

### Training hyperparameters