**Transforming Question-Answer Pairs to Full Declarative Answer Form**
Given the question "Which drug did you take?" and the answer "Doliprane", the aim of this model is to derive the full declarative answer "I took Doliprane".
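As a minimal sketch of the task's input/output shape — the serialization below (a `question: … answer: …` prefix string) is an illustrative assumption, not necessarily the exact format the model was fine-tuned on:

```python
# Hypothetical serialization of a (question, short answer) pair into a
# single model input string; the actual fine-tuning format may differ.
def make_input(question: str, answer: str) -> str:
    return f"question: {question} answer: {answer}"

# Example triples in the (question, answer, full declarative answer) shape.
triples = [
    ("Which drug did you take?", "Doliprane", "I took Doliprane."),
    ("Did you sleep well?", "Yes", "I slept well."),
]

for question, answer, full_answer in triples:
    print(make_input(question, answer), "->", full_answer)
```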
We fine-tune T5 (Raffel et al., 2019), a pre-trained encoder-decoder model, on two datasets of (question, incomplete answer, full answer) triples, one for wh- questions and one for yes-no (YN) questions. For wh-questions, we use 3,300 entries of the (question, answer, declarative answer sentence) triples gathered by Demszky et al. (2018) with Amazon Mechanical Turk workers. For YN questions, we used the SAMSum corpus (Gliwa et al., 2019), which contains short dialogs in chit-chat format. We created 1,100 (question, answer, full answer) triples by automatically extracting YN (question, answer) pairs from this corpus and manually associating them with the corresponding declarative answers. The data was split into train and test sets (9:1), and the fine-tuned model achieved a 0.90 ROUGE-L score on the test set.
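The 9:1 split can be sketched as follows — the shuffle-based procedure and seed here are illustrative assumptions, since the original split method is not documented in this README:

```python
import random

def split_9_1(examples, seed=42):
    """Shuffle and split examples 9:1 into train and test sets.

    The seed value and shuffle-then-cut procedure are assumptions
    for illustration only.
    """
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = int(0.9 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

# With the 1,100 YN triples, this yields 990 train / 110 test examples.
train, test = split_9_1(range(1100))
print(len(train), len(test))
```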
In the paper "Exploring the Influence of Dialog Input Format for Unsupervised Clinical Questionnaire Filling", this model was applied to information-seeking dialogs to produce the declarative transform of the whole dialog.