{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:29.961725Z" }, "title": "Cascaded Span Extraction and Response Generation for Document-Grounded Dialog", "authors": [ { "first": "Nico", "middle": [], "last": "Daheim", "suffix": "", "affiliation": { "laboratory": "Human Language Technology and Pattern Recognition Group", "institution": "RWTH Aachen University", "location": { "country": "Germany" } }, "email": "" }, { "first": "David", "middle": [], "last": "Thulke", "suffix": "", "affiliation": { "laboratory": "Human Language Technology and Pattern Recognition Group", "institution": "RWTH Aachen University", "location": { "country": "Germany" } }, "email": "" }, { "first": "Christian", "middle": [], "last": "Dugast", "suffix": "", "affiliation": { "laboratory": "Human Language Technology and Pattern Recognition Group", "institution": "RWTH Aachen University", "location": { "country": "Germany" } }, "email": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "", "affiliation": { "laboratory": "Human Language Technology and Pattern Recognition Group", "institution": "RWTH Aachen University", "location": { "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper summarizes our entries to both subtasks of the first DialDoc shared task which focuses on the agent response prediction task in goal-oriented document-grounded dialogs. The task is split into two subtasks: predicting a span in a document that grounds an agent turn and generating an agent response based on a dialog and grounding document. In the first subtask, we restrict the set of valid spans to the ones defined in the dataset, use a biaffine classifier to model spans, and finally use an ensemble of different models. For the second subtask, we use a cascaded model which grounds the response prediction on the predicted span instead of the full document. With these approaches, we obtain significant improvements in both subtasks compared to the baseline.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper summarizes our entries to both subtasks of the first DialDoc shared task which focuses on the agent response prediction task in goal-oriented document-grounded dialogs. The task is split into two subtasks: predicting a span in a document that grounds an agent turn and generating an agent response based on a dialog and grounding document. In the first subtask, we restrict the set of valid spans to the ones defined in the dataset, use a biaffine classifier to model spans, and finally use an ensemble of different models. For the second subtask, we use a cascaded model which grounds the response prediction on the predicted span instead of the full document. With these approaches, we obtain significant improvements in both subtasks compared to the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Unstructured documents contain a vast amount of knowledge that can be useful information for responding to users in goal-oriented dialog systems. The shared task at the first DialDoc Workshop focuses on grounding and generating agent responses in such systems. Therefore, two subtasks are proposed: given a dialog extract the relevant information for the next agent turn from a document and generate a natural language agent response based on dialog context and grounding document. 
In this paper, we present our submissions to both subtasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the first subtask, we focus on modeling spans directly using a biaffine classifier and restricting the model's output to valid spans. We notice that replacing BERT with alternative language models results in significant improvements. For the second subtask, we notice that providing a generation model with an entire, possibly long, grounding document often leads to models struggling to generate factually correct output. Hence, we split the task into two subsequent stages, where first a grounding span is selected according to our method for the first subtask, which is then provided for generation. With these approaches, we report strong improvements over the baseline in both subtasks. Additionally, we experimented with marginalizing over all spans in order to account for the uncertainty of the span selection model during generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, multiple datasets and challenges concerning conversational question answering have been proposed. For example, Saeidi et al. (2018) introduced ShARC, a dataset containing ca. 32k utterances which include follow-up questions on user requests that cannot be answered directly based on the given dialog and grounding. Similarly, the CoQA dataset (Reddy et al., 2019) provides 127k questions with answers and grounding obtained from human conversations. More closely related to the DialDoc shared task, the first track of DSTC 9 (Kim et al., 2020) required generating agent responses based on relevant knowledge in task-oriented dialog. However, the considered knowledge has the form of FAQ documents, whose snippets are much shorter than the documents considered in this work.", "cite_spans": [ { "start": 121, "end": 141, "text": "Saeidi et al. (2018)", "ref_id": "BIBREF17" }, { "start": 355, "end": 375, "text": "(Reddy et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Pre-trained language models such as BART (Lewis et al., 2020a) or RoBERTa (Liu et al., 2019) have recently become a successful tool for different kinds of natural language understanding tasks, such as question answering (QA), where they obtain state-of-the-art results (Liu et al., 2019; Clark et al., 2020). 
Naturally, they have recently also found their way into task-oriented dialog systems (Lewis et al., 2020a), where they are either used as end-to-end systems (Budzianowski and Vuli\u0107, 2019; Ham et al., 2020) or as components for a specific subtask (He et al., 2021).", "cite_spans": [ { "start": 49, "end": 70, "text": "(Lewis et al., 2020a)", "ref_id": "BIBREF11" }, { "start": 82, "end": 100, "text": "(Liu et al., 2019)", "ref_id": null }, { "start": 277, "end": 295, "text": "(Liu et al., 2019;", "ref_id": null }, { "start": 296, "end": 315, "text": "Clark et al., 2020)", "ref_id": "BIBREF1" }, { "start": 403, "end": 424, "text": "(Lewis et al., 2020a)", "ref_id": "BIBREF11" }, { "start": 476, "end": 506, "text": "(Budzianowski and Vuli\u0107, 2019;", "ref_id": "BIBREF0" }, { "start": 507, "end": 524, "text": "Ham et al., 2020)", "ref_id": "BIBREF6" }, { "start": 565, "end": 582, "text": "(He et al., 2021)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The task of dialog systems is to generate an appropriate system response $u_{T+1}$ to a user turn $u_T$ and the preceding dialog context $u_1^{T-1} := u_1, ..., u_{T-1}$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "In a document-grounded setting, $u_{T+1}$ is based on knowledge from a set of relevant documents $D \\subseteq \\mathcal{D}$, where $\\mathcal{D}$ denotes the set of all knowledge documents. Feng et al. (2020) identify three tasks relevant to such systems, namely 1) user utterance understanding; 2) agent response prediction; and 3) relevant document identification. The shared task deals with the second task and assumes the result of the third task to be known. They further split this task into agent response grounding prediction and agent response generation. More specifically, one subtask focuses on identifying the grounding of $u_{T+1}$ and the second subtask on generating $u_{T+1}$. In both subtasks, exactly one document $d \\in D$ is given. Each document consists of multiple sections, whereby each section consists of a title and its content. In the doc2dial dataset, the latter is split into multiple subspans. In the following, we refer to these given subspans as phrases in order to avoid confusing them with arbitrary spans in the document.", "cite_spans": [ { "start": 142, "end": 160, "text": "Feng et al. (2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "The first subtask is to identify a span in a given document that grounds the agent response $u_{T+1}$. It is formulated as a span selection task where the aim is to return a tuple $(a_s, a_e)$ of start and end positions of the relevant span within the grounding document $d$, based on the dialog history $u_1^T$. In the context of the challenge, these spans always correspond to one of the given phrases in the documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Grounding Prediction", "sec_num": null }, 
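{ "text": "To make the input-output format concrete, consider the following purely illustrative example (hypothetical values, not taken from doc2dial): the span selection task maps a dialog history and a document to the start and end positions $(a_s, a_e)$ of the grounding phrase.
```python
# Purely illustrative sketch of the subtask 1 interface (hypothetical data).
document = 'You can renew your license online. A vision test is required for renewal.'
phrases = [(0, 34), (35, 73)]  # annotated phrase boundaries within the document
dialog_history = ['user: Do I need a vision test to renew my license?']
a_s, a_e = phrases[1]  # a perfect model selects the second phrase as grounding
print(document[a_s:a_e])  # -> 'A vision test is required for renewal.'
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Grounding Prediction", "sec_num": null }, 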
{ "text": "The goal of response generation is to provide the user with a system response $u_{T+1}$ that is based on the dialog context $u_1^T$ and document $d$ and fits naturally into the preceding dialog.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Generation", "sec_num": null }, { "text": "Agent Response Grounding Prediction For the first subtask, Feng et al. (2020) fine-tune BERT for question answering as proposed by Devlin et al. (2019). To this end, a start and an end score for each token are calculated by a linear projection from the last hidden states of the model. These scores are normalized using a softmax over all tokens to obtain probabilities for the start and end positions. In order to obtain the probability of a specific span, the probabilities of its start and end positions are multiplied. If the length of the document exceeds the maximum length supported by the model, a sliding window with stride is moved over the document and each window is passed to the model. In training, if the correct span is not included in the window, the span consisting only of the begin-of-sequence token is used as the target. In decoding, the scores of all windows are combined to find the best span.", "cite_spans": [ { "start": 131, "end": 151, "text": "Devlin et al. (2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.1" }, { "text": "Agent Response Generation The baseline provided for the shared task uses a pre-trained BART model (Lewis et al., 2020a) to generate agent responses. The model is fine-tuned on the task's training data by minimizing the cross-entropy of the reference tokens. As input, it is provided with the dialog context, the title of the document, and the grounding document, separated by special tokens. Inputs longer than the maximum sequence length supported by the model (1,024 tokens for BART) are truncated. Effectively, this means that parts of the document are removed that may include the information relevant to the response. An alternative to truncating the document would be to truncate the dialog context (i.e., removing the oldest turns, which may be less relevant than the document). We did not experiment with this approach in this work and always included the full dialog context in the input. For decoding, beam search with a beam size of 4 is used.", "cite_spans": [ { "start": 98, "end": 119, "text": "(Lewis et al., 2020a)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.1" }, { "text": "Phrase restriction In contrast to standard QA tasks, in this task, possible start and end positions of spans are restricted to phrases in the document. This motivated us to also restrict the possible outputs of the model to these positions. That is, instead of applying the softmax over all tokens, it is only applied over tokens corresponding to the start or end positions of a phrase, and thus only these positions are considered in training and decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Grounding Prediction", "sec_num": "4.2" }, 
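{ "text": "The following is a rough sketch (our own illustration under assumed shapes, not the shared-task code) of the baseline span scoring combined with this phrase restriction: start and end logits are masked to valid phrase boundaries before the softmax, and span probabilities are the product of start and end probabilities.
```python
import torch

def phrase_restricted_span_probs(start_logits, end_logits, phrase_starts, phrase_ends):
    # start_logits, end_logits: [seq_len] scores from a linear projection of
    # the encoder's last hidden states, as in the BERT QA baseline.
    # phrase_starts, phrase_ends: token indices of valid phrase boundaries.
    mask_s = torch.full_like(start_logits, float('-inf'))
    mask_e = torch.full_like(end_logits, float('-inf'))
    mask_s[phrase_starts] = 0.0
    mask_e[phrase_ends] = 0.0
    # Softmax only over valid positions; all other positions get probability 0.
    p_start = torch.softmax(start_logits + mask_s, dim=-1)
    p_end = torch.softmax(end_logits + mask_e, dim=-1)
    # Under the conditional-independence assumption of the QA objective, the
    # probability of a span is the product of its start and end probabilities
    # (in practice, (start, end) pairs are further constrained to one phrase).
    return p_start.unsqueeze(1) * p_end.unsqueeze(0)  # [seq_len, seq_len]

# Hypothetical example: 12 tokens, phrases starting at 0, 5, 9 and ending at 4, 8, 11.
span_probs = phrase_restricted_span_probs(torch.randn(12), torch.randn(12), [0, 5, 9], [4, 8, 11])
print((span_probs == span_probs.max()).nonzero()[0].tolist())  # best (start, end) pair
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Grounding Prediction", "sec_num": "4.2" }, 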
{ "text": "Span-based objective The training objective for QA assumes that the start and end positions of a span are conditionally independent. Previous work (Fajcik et al., 2020) indicates that directly modeling the joint probability of start and end position can improve performance. Hence, to model this joint probability, we use a biaffine classifier as proposed by Dozat and Manning (2017) for dependency parsing.", "cite_spans": [ { "start": 155, "end": 176, "text": "(Fajcik et al., 2020)", "ref_id": "BIBREF4" }, { "start": 367, "end": 391, "text": "Dozat and Manning (2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Agent Response Grounding Prediction", "sec_num": "4.2" }, { "text": "Ensembling In our submission, we use an ensemble of multiple models for the prediction of spans to capture their uncertainty. More precisely, we use Bayesian Model Averaging (Hoeting et al., 1999), where the probability of a span $a = (a_s, a_e)$ is obtained by marginalizing the joint probability of span and model over all models $H$ as:", "cite_spans": [ { "start": 174, "end": 196, "text": "(Hoeting et al., 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Agent Response Grounding Prediction", "sec_num": "4.2" }, { "text": "$p(a \\mid u_1^T, d) = \\sum_{h \\in H} p_h(a \\mid u_1^T, d) \\cdot p(h)$ (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Grounding Prediction", "sec_num": "4.2" }, { "text": "The model prior $p(h)$ is obtained by applying a softmax function over the logarithm of the F1 scores obtained on a validation set. Furthermore, we approximate the span posterior distribution $p_h(a \\mid u_1^T, d)$ by an n-best list of size 20.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Grounding Prediction", "sec_num": "4.2" }, 
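{ "text": "As a small sketch (our notation, hypothetical scores, not the submission code), Eq. (1) amounts to a prior-weighted sum of the per-model n-best span posteriors:
```python
from collections import defaultdict

def ensemble_spans(nbest_per_model, val_f1_per_model):
    # nbest_per_model: per model, a dict mapping span (a_s, a_e) -> p_h(a | u, d)
    # val_f1_per_model: validation F1 scores used to derive the model prior p(h);
    # a softmax over log F1 reduces to the normalized F1 scores themselves.
    prior = [f1 / sum(val_f1_per_model) for f1 in val_f1_per_model]
    combined = defaultdict(float)
    for nbest, p_h in zip(nbest_per_model, prior):
        for span, p in nbest.items():
            combined[span] += p * p_h  # marginalize over models, Eq. (1)
    return max(combined, key=combined.get)

# Hypothetical 20-best lists truncated to three entries for two models:
model_a = {(10, 25): 0.6, (30, 42): 0.3, (50, 61): 0.1}
model_b = {(30, 42): 0.5, (10, 25): 0.4, (50, 61): 0.1}
print(ensemble_spans([model_a, model_b], val_f1_per_model=[77.3, 76.1]))
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Grounding Prediction", "sec_num": "4.2" }, 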
{ "text": "Cascaded Response Generation One main issue with the baseline approach is that the model appears to be unable to identify the relevant knowledge when provided with long documents. Additionally, due to the truncation, the input of the model may not even contain the relevant parts of the document. To solve this issue, we propose to model the problem by cascading span selection and response generation. This way, we only have to provide the comparatively short grounding span to the model instead of the full document. This allows the model to focus on generating an appropriate utterance and less on identifying the relevant grounding information. Similar to the baseline, we fine-tune BART (Lewis et al., 2020a). In training, we provide the model with the dialog context $u_1^T$ concatenated with the document title and the reference span, each separated by a special token. In decoding, the reference span is not available, so we use the span predicted by our span selection model as input.", "cite_spans": [ { "start": 688, "end": 709, "text": "(Lewis et al., 2020a)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Agent Response Generation", "sec_num": "4.3" }, { "text": "Marginalization over Spans Conditioning on only the ground truth span creates a mismatch between training and inference time, since the ground truth span is not available at test time but has to be predicted. This leads to errors occurring in span selection being propagated to response generation. Further, the generation model is unable to take the uncertainty of the span selection model into account. Similar to Lewis et al. (2020b) and Thulke et al. (2021), we propose to marginalize over all spans $S$. We model the response generation as:", "cite_spans": [ { "start": 415, "end": 435, "text": "Lewis et al. (2020b)", "ref_id": "BIBREF12" }, { "start": 440, "end": 460, "text": "Thulke et al. (2021)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Agent Response Generation", "sec_num": "4.3" }, { "text": "$p(\\hat{u} = u_{T+1} \\mid u_1^T; d) = \\prod_{i=1}^{N} \\sum_{s \\in S} p(\\hat{u}_i, s \\mid \\hat{u}_1^{i-1}; u_1^T; d)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Generation", "sec_num": "4.3" }, { "text": "where $N$ denotes the length of the hypothesis $\\hat{u}$ and the joint probability may be factorized into a span selection model $p(s \\mid u_1^T; d)$ and a generation model $p(u_{T+1} \\mid u_1^T, s; d)$, corresponding to our models for each subtask. For efficiency, we approximate $S$ by the top 5 spans, which we renormalize to maintain a probability distribution. The generation model is then trained with cross-entropy using an n-best list obtained from the separately trained selection model. A potential extension, which we did not yet try, is to train both models jointly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Generation", "sec_num": "4.3" }, 
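{ "text": "A toy sketch (hypothetical shapes, not the shared-task code) of this marginalized training loss: per-token generator probabilities, computed once per candidate span, are mixed with the renormalized span posteriors before taking the logarithm.
```python
import torch

def marginal_nll(token_probs, span_scores, targets):
    # token_probs: [k, seq_len, vocab] generator output distributions, one
    #   decoded sequence per candidate span s in the approximated set S (k=5)
    # span_scores: [k] span posteriors p(s | u, d) of the selection model
    # targets: [seq_len] reference token ids of the agent response
    span_post = span_scores / span_scores.sum()  # renormalize over the top k
    idx = targets.view(1, -1, 1).expand(token_probs.size(0), -1, 1)
    per_span = token_probs.gather(2, idx).squeeze(2)      # [k, seq_len]
    marginal = (span_post.view(-1, 1) * per_span).sum(0)  # sum over spans
    return -torch.log(marginal).sum()                     # product over tokens

# Hypothetical example: 5 candidate spans, a 7-token response, vocab size 50.
probs = torch.softmax(torch.randn(5, 7, 50), dim=-1)
scores = torch.tensor([0.4, 0.3, 0.15, 0.1, 0.05])
print(marginal_nll(probs, scores, torch.randint(50, (7,))).item())
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Generation", "sec_num": "4.3" }, 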
{ "text": "The shared task uses the doc2dial dataset (Feng et al., 2020), which contains 4,793 annotated dialogs based on a total of 487 documents. All documents were obtained from public government service websites and stem from the four domains Social Security Administration (ssa), Department of Motor Vehicles (dmv), United States Department of Veterans Affairs (va), and Federal Student Aid (studentaid). In the shared task, each document is associated with exactly one domain and is annotated with sections and phrases. Each phrase is described by a start and an end index within the document and is associated with a specific section that has a title and text. Each dialog is based on one document and contains a set of turns. Turns are taken either by a user or an agent and are described by a dialog act and a list of grounding reference phrases in the document.", "cite_spans": [ { "start": 42, "end": 61, "text": "(Feng et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5" }, { "text": "The training set of the shared task contains 3,474 dialogs with a total of 44,149 turns. In addition to the training set, the shared task organizers provide a validation set with 661 dialogs and a testdev set with 198 dialogs, which include around 30% of the dialogs from the final test set. The final test set includes an additional domain of unseen documents and comprises a total of 787 dialogs. Documents are rather long, with a median length of 817.5 and an average length of 991 tokens (using the BART subword vocabulary). Thus, in many cases, truncation of the input is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5" }, { "text": "We base our implementation 1 on the provided baseline code of the shared task 2 . Furthermore, we use the workflow manager Sisyphus (Peter et al., 2018) to organize our experiments.", "cite_spans": [ { "start": 132, "end": 152, "text": "(Peter et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "For the first subtask, we use the base and large variants of RoBERTa (Liu et al., 2019) and ELECTRA (Clark et al., 2020) instead of BERT large uncased. In the second subtask, we use BART base instead of the large variant used in the baseline code, since even after reducing the batch size to one, we were not able to run the baseline with a maximum sequence length of 1,024 on our Nvidia GTX 1080 Ti and RTX 2080 Ti GPUs due to memory constraints. All models are fine-tuned with an initial learning rate of 3e-5. Base variants are trained for 10 epochs and large variants for 5 epochs.", "cite_spans": [ { "start": 69, "end": 87, "text": "(Liu et al., 2019)", "ref_id": null }, { "start": 101, "end": 121, "text": "(Clark et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We include agent follow-up turns in our training data, i.e., turns $u_t$ made by an agent where the preceding turn $u_{t-1}$ was already taken by the agent. Similar to other agent turns, i.e., those where the preceding turn was taken by the user, these turns are annotated with their grounding span and can be used as additional samples in both tasks. In the baseline implementation, these are excluded from training and evaluation. To maintain comparability, we do not include them in the validation or test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "For evaluation, we use the same metrics as proposed in the baseline. The first subtask is evaluated using exact match (EM), i.e., the percentage of exact matches between the predicted and reference span (after lowercasing and removing punctuation, articles, and whitespace), and the token-level F1 score. The second subtask is evaluated using SacreBLEU (Post, 2018).", "cite_spans": [ { "start": 355, "end": 366, "text": "(Post, 2018", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, 
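{ "text": "A rough sketch of these subtask 1 metrics follows (standard SQuAD-style scoring, which we assume matches the task's normalization):
```python
import string
from collections import Counter

def normalize(text):
    # lowercase, strip punctuation and articles, collapse whitespace
    text = ''.join(c for c in text.lower() if c not in string.punctuation)
    return ' '.join(t for t in text.split() if t not in ('a', 'an', 'the'))

def exact_match(pred, ref):
    return float(normalize(pred) == normalize(ref))

def token_f1(pred, ref):
    p, r = normalize(pred).split(), normalize(ref).split()
    overlap = sum((Counter(p) & Counter(r)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(r)
    return 2 * precision * recall / (precision + recall)

print(exact_match('The vision test', 'vision test'))                    # 1.0
print(round(token_f1('a vision test is required', 'vision test'), 3))   # 0.667
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, 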
{ "text": "1 Our code is made available at https://github.com/ndaheim/dialdoc-sharedtask-21", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "2 Baseline code is available at https://github.com/doc2dial/sharedtask-dialdoc2021", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Table 1 summarizes our main results and submission to the shared task. The first line shows the results obtained by reproducing the baseline provided by the organizers (using BART base for Subtask 2). We note that these results differ from the ones reported in Feng et al. (2020) due to slightly different data conditions in the shared task and their paper. The second line shows the results of our best single model. In Subtask 1, we obtained our best results by using RoBERTa large, trained additionally on agent follow-up turns, and by restricting the model to phrases occurring in the document. Using an ensemble of this model, an ELECTRA large model trained with the same approach, and a RoBERTa base model trained with the span-based objective, we achieve our best result. In the second subtask, our cascaded approach using this model and BART base significantly outperforms the baseline by over 10 BLEU points absolute. Using the results of the ensemble in Subtask 2 also translates to a significant improvement in BLEU, which indicates a strong influence of the agent response grounding prediction task. Table 3: Ablation analysis of our systems for subtask 2 on the validation set.", "cite_spans": [ { "start": 357, "end": 375, "text": "Feng et al. (2020)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1205, "end": 1212, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6.1" }, { "text": "Agent Response Grounding Prediction Table 2 gives an overview of our ablation analysis for the first subtask. In addition to F1 and EM, we report the EM@5, which we define as the percentage of turns where an exact match is part of the 5-best list predicted by the model. This metric gives an indication of the quality of the n-best list produced by the model. Both RoBERTa and ELECTRA large outperform BERT large in terms of F1 and EM, with RoBERTa large performing best. Removing agent follow-up turns in training consistently degrades the results for both models.", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 43, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Ablation Analysis", "sec_num": "6.2" }, { "text": "Restricting the predictions of the model to valid phrases during training and evaluation gives consistent improvements in the EM and EM@5 scores. Training RoBERTa base using the span-based objective, we observe degradations in F1 and EM but an improvement in EM@5, which indicates that it better models the distribution across phrases. Due to instabilities during training, we were not able to train a large model with the span-based objective. Additionally, we only did experiments with the biaffine classifier discussed in Section 4.2. It would be interesting to compare the results with other span-based objectives, such as the ones proposed by Fajcik et al. (2020). Table 3 shows an ablation study of our results in response generation. The results show that our cascaded approach outperforms the baseline by a large margin. Further experiments with additional context, such as the title of a section or a window of 10 tokens to each side of the span, do not give improvements. This indicates that the selected spans seem to be sufficient to generate suitable responses. Furthermore, marginalizing over multiple spans leads to degradations, which might be because training is based on an n-best list from an uncertain model. We observe our best results when using only the predicted span and a beam size of 6. Furthermore, we add a repetition penalty of 1.2 (Keskar et al., 2019) to discourage repetitions in generated responses.", "cite_spans": [ { "start": 647, "end": 667, "text": "Fajcik et al. (2020)", "ref_id": "BIBREF4" }, { "start": 1363, "end": 1384, "text": "(Keskar et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 670, "end": 677, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Ablation Analysis", "sec_num": "6.2" }, { "text": "Finally, the last line of the table reports the results of the cascaded method when using ground truth spans instead of the spans predicted by a model. That is, a perfect model for the first subtask would additionally improve the results by 4.7 points absolute in BLEU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agent Response Generation", "sec_num": null }, { "text": "In this paper, we have described our submissions to both subtasks of the first DialDoc shared task. In the first subtask, we have experimented with restricting the set of spans that can be predicted to valid phrases, which yields consistent improvements in terms of EM. 
Furthermore, we have employed a model to directly hypothesize entire spans and shown the benefits of combining multiple models using Bayesian Model Averaging. In the second subtask, we have shown how cascading span selection and response generation improves results when compared to providing an entire document in generation. We have compared marginalizing over spans to just using a single span for generation, with which we obtain our best results in the shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [ { "text": "This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 694537, project \"SEQCLAS\"). The work reflects only the authors' views and the European Research Council Executive Agency (ERCEA) is not responsible for any use that may be made of the information it contains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Hello, It's GPT-2 -How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems", "authors": [ { "first": "Pawe\u0142", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D19-5602" ] }, "num": null, "urls": [], "raw_text": "Pawe\u0142 Budzianowski and Ivan Vuli\u0107. 2019. Hello, It's GPT-2 -How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dia- logue Systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "ELECTRA: Pre-Training Text Encoders as Discriminators Rather Than Generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre- Training Text Encoders as Discriminators Rather Than Generators. In ICLR.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Deep Biaffine Attention for Neural Dependency Parsing", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher D Manning. 2017. Deep Biaffine Attention for Neural Dependency Parsing. In ICLR.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Santosh Kesiraju, and Pavel Smrz. 2020. Rethinking the Objectives of Extractive Question Answering", "authors": [ { "first": "Martin", "middle": [], "last": "Fajcik", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Jon", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Fajcik, Josef Jon, Santosh Kesiraju, and Pavel Smrz. 2020. Rethinking the Objectives of Extractive Question Answering.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "2020. doc2dial: A goal-oriented document-grounded dialogue dataset", "authors": [ { "first": "Song", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Chulaka", "middle": [], "last": "Gunasekara", "suffix": "" }, { "first": "Siva", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Sachindra", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Lastras", "suffix": "" } ], "year": null, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "8118--8128", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.652" ] }, "num": null, "urls": [], "raw_text": "Song Feng, Hui Wan, Chulaka Gunasekara, Siva Patel, Sachindra Joshi, and Luis Lastras. 2020. doc2dial: A goal-oriented document-grounded dia- logue dataset. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 8118-8128, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "End-to-End Neural Pipeline for Goal-Oriented Dialogue Systems using GPT-2", "authors": [ { "first": "Donghoon", "middle": [], "last": "Ham", "suffix": "" }, { "first": "Jeong-Gwan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Youngsoo", "middle": [], "last": "Jang", "suffix": "" }, { "first": "Kee-Eung", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "583--592", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.54" ] }, "num": null, "urls": [], "raw_text": "Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-End Neural Pipeline for Goal-Oriented Dialogue Systems using GPT-2. 
In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguis- tics, pages 583-592, Online. Association for Com- putational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning to select external knowledge with multi-scale negative sampling", "authors": [ { "first": "Hua", "middle": [], "last": "Huang He", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhengyu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Niu", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang He, Hua Lu, Siqi Bao, Fan Wang, Hua Wu, Zhengyu Niu, and Haifeng Wang. 2021. Learning to select external knowledge with multi-scale nega- tive sampling.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bayesian model averaging: A tutorial", "authors": [ { "first": "Jennifer", "middle": [ "A" ], "last": "Hoeting", "suffix": "" }, { "first": "David", "middle": [], "last": "Madigan", "suffix": "" }, { "first": "Adrian", "middle": [ "E" ], "last": "Raftery", "suffix": "" }, { "first": "Chris", "middle": [ "T" ], "last": "Volinsky", "suffix": "" } ], "year": 1999, "venue": "Statistical Science", "volume": "14", "issue": "4", "pages": "382--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jennifer A. Hoeting, David Madigan, Adrian E. Raftery, and Chris T. Volinsky. 1999. Bayesian model averaging: A tutorial. Statistical Science, 14(4):382-401.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "CTRL: A conditional transformer language model for controllable generation", "authors": [ { "first": "Bryan", "middle": [], "last": "Nitish Shirish Keskar", "suffix": "" }, { "first": "Lav", "middle": [ "R" ], "last": "Mccann", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Varshney", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Shirish Keskar, Bryan McCann, Lav R. Varsh- ney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Beyond domain APIs: Task-oriented conversational modeling with unstructured knowledge access", "authors": [ { "first": "Seokhwan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Gopalakrishnan", "suffix": "" }, { "first": "Behnam", "middle": [], "last": "Hedayatnia", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "278--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seokhwan Kim, Mihail Eric, Karthik Gopalakrishnan, Behnam Hedayatnia, Yang Liu, and Dilek Hakkani- Tur. 2020. 
Beyond domain APIs: Task-oriented con- versational modeling with unstructured knowledge access. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dia- logue, pages 278-289, 1st virtual meeting. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal ; Abdelrahman Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.703" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Tim Rockt\u00e4schel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Perez", "suffix": "" }, { "first": "Aleksandara", "middle": [], "last": "Piktus", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Heinrich", "middle": [], "last": "K\u00fcttler", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In NeurIPS.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Sisyphus, a Workflow Manager Designed for Machine Translation and Automatic Speech Recognition", "authors": [ { "first": "Jan-Thorsten", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Eugen", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "84--89", "other_ids": { "DOI": [ "10.18653/v1/D18-2015" ] }, "num": null, "urls": [], "raw_text": "Jan-Thorsten Peter, Eugen Beck, and Hermann Ney. 2018. 
Sisyphus, a Workflow Manager Designed for Machine Translation and Automatic Speech Recog- nition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 84-89, Brussels, Bel- gium. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "186--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "CoQA: A conversational question answering challenge", "authors": [ { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "249--266", "other_ids": { "DOI": [ "10.1162/tacl_a_00266" ] }, "num": null, "urls": [], "raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Com- putational Linguistics, 7:249-266.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Interpretation of natural language rules in conversational machine reading", "authors": [ { "first": "Marzieh", "middle": [], "last": "Saeidi", "suffix": "" }, { "first": "Max", "middle": [], "last": "Bartolo", "suffix": "" }, { "first": "S", "middle": [ "H" ], "last": "Patrick", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Sheldon", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Bouchard", "suffix": "" }, { "first": "", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2087--2097", "other_ids": { "DOI": [ "10.18653/v1/d18-1233" ] }, "num": null, "urls": [], "raw_text": "Marzieh Saeidi, Max Bartolo, Patrick S. H. Lewis, Sameer Singh, Tim Rockt\u00e4schel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversa- tional machine reading. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2087-2097. 
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Efficient Retrieval Augmented Generation from Unstructured Knowledge for Task-Oriented Dialog", "authors": [ { "first": "David", "middle": [], "last": "Thulke", "suffix": "" }, { "first": "Nico", "middle": [], "last": "Daheim", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Dugast", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2021, "venue": "AAAI-21 : 9th Dialog System Technology Challenge (DSTC-9) Workshop. 9th Dialog System Technology Challenge Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Thulke, Nico Daheim, Christian Dugast, and Hermann Ney. 2021. Efficient Retrieval Augmented Generation from Unstructured Knowledge for Task- Oriented Dialog. In AAAI-21 : 9th Dialog System Technology Challenge (DSTC-9) Workshop. 9th Di- alog System Technology Challenge Workshop, on- line, 8 Feb 2021 -9 Feb 2021.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "content": "
Subtask 1 (F1 / EM):
model    | test F1 | test EM | val F1 | val EM
baseline | 67.9    | 51.5    | 70.8   | 56.3
RoBERTa  | 73.2    | 58.3    | 77.3   | 65.6
ensemble | 75.9    | 63.5    | 78.8   | 68.4
Subtask 2 (BLEU):
model               | test BLEU | val BLEU
baseline (ours)     | 28.1      | 32.9
cascaded (RoBERTa)  | 39.1      | 39.6
cascaded (ensemble) | 40.4      | 41.5
", "type_str": "table", "num": null, "text": "Table 1: Results of our best system on test and validation set." }, "TABREF2": { "html": null, "content": "
Table 2: Ablation analysis of our systems for subtask 1 on the validation set. The best single model results are underlined.
", "type_str": "table", "num": null, "text": "Table 2: Ablation analysis of our systems for subtask 1 on the validation set. The best single model results are underlined." } } } }