Update README.md
README.md CHANGED
@@ -1,9 +1,15 @@
  ---
+ language:
+ - en
+ datasets:
+ - SAMSum
+ metrics:
+ - ROUGE-L
  tags:
- -
- -
- -
- -
+ - Question answering
+ - T5
+ - Declarative
+ - Text generation
  widget:
  - text: "q: do you have any hobbies that you like to do while you are at home ? a: watching online shows"
  - text: "q: do you watch a lot of comedy ? a: yes it will helpful the mind relaxation"

@@ -14,7 +20,9 @@ widget:
  Considering the question "Which drug did you take?" and the answer "Doliprane", the aim of this model is to derive the full answer "I took Doliprane".

  ## Model training
- We fine-tune T5 (Raffel et al.,2019), a pre-trained encoder-decoder model, on two datasets of (question, incomplete answer, full answer) triples, one for wh- and one for yes-no(YN) questions.
+ We fine-tune T5 (Raffel et al., 2019), a pre-trained encoder-decoder model, on two datasets of (question, incomplete answer, full answer) triples, one for wh- questions and one for yes-no (YN) questions.
+ For wh-questions, we use 3,300 entries of the dataset consisting of (question, answer, declarative answer sentence) triples gathered by Demszky et al. (2018) using Amazon Mechanical Turk workers.
+ For YN questions, we use the SAMSum corpus (Gliwa et al., 2019), which contains short dialogs in chit-chat format. We created
  1,100 (question, answer, full answer) triples by automatically extracting YN (question, answer) pairs from this corpus and manually associating them
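
The card does not spell out how the triples are serialized for fine-tuning. Below is a minimal, hypothetical preprocessing sketch, assuming the `q: ... a: ...` template seen in the widget examples above is used as the model input, with the full declarative answer as the target; the triples shown are illustrative, not drawn from either dataset.

```python
# Hypothetical sketch: turn (question, incomplete answer, full answer) triples
# into source/target pairs for T5 fine-tuning. The "q: ... a: ..." template is
# inferred from the widget examples; the authors' exact preprocessing may differ.
triples = [
    # (question, incomplete answer, full answer), illustrative examples only
    ("do you watch a lot of comedy ?", "yes", "Yes, I watch a lot of comedy."),
    ("Which drug did you take?", "Doliprane", "I took Doliprane."),
]

train_examples = [
    {"source": f"q: {q} a: {a}", "target": full_answer}
    for q, a, full_answer in triples
]
print(train_examples[1])
# {'source': 'q: Which drug did you take? a: Doliprane', 'target': 'I took Doliprane.'}
```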
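To query the fine-tuned model, the standard `transformers` seq2seq API should work; this is a sketch, assuming the widget's input format, and the model id below is a placeholder for this repository's actual name.

```python
# Minimal inference sketch with Hugging Face transformers.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "<this-model-repo>"  # placeholder: substitute the actual model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Encode a (question, incomplete answer) pair in the widget's format.
inputs = tokenizer("q: Which drug did you take? a: Doliprane", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Per the example above, the intended full answer is "I took Doliprane".
```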