{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:21.781167Z" }, "title": "CAiRE in DialDoc21: Data Augmentation for Information-Seeking Dialogue System", "authors": [ { "first": "Yan", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Etsuko", "middle": [], "last": "Ishii", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "eishii@connect.ust.hk" }, { "first": "Genta", "middle": [ "Indra" ], "last": "Winata", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "giwinata@connect.ust.hk" }, { "first": "Zhaojiang", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Zihan", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology",
"location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "pascale@ece.ust.hk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Information-seeking dialogue systems, including knowledge identification and response generation, aim to respond to users with fluent, coherent, and informative responses based on users' needs. To tackle this challenge, we utilize data augmentation methods and several training techniques with pre-trained language models to learn a general pattern of the task and thus achieve promising performance. In the DialDoc21 competition, our system achieved a 74.95 F1 score and a 60.74 Exact Match score in subtask 1, and a 37.72 SacreBLEU score in subtask 2. Empirical analysis is provided to explain the effectiveness of our approaches.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Information-seeking dialogue systems, including knowledge identification and response generation, aim to respond to users with fluent, coherent, and informative responses based on users' needs. To tackle this challenge, we utilize data augmentation methods and several training techniques with pre-trained language models to learn a general pattern of the task and thus achieve promising performance. In the DialDoc21 competition, our system achieved a 74.95 F1 score and a 60.74 Exact Match score in subtask 1, and a 37.72 SacreBLEU score in subtask 2.
Empirical analysis is provided to explain the effectiveness of our approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recent progress in research has opened up real-life applications of dialogue systems, of which information-seeking dialogue systems are one of the major types. The goal of such dialogue systems is to provide fluent and coherent responses with sufficient information to users based on their needs, retrieving information using the dialogue history. The performance of an information-seeking dialogue system can be evaluated from three aspects: (1) user utterance understanding, (2) relevant knowledge retrieval, and (3) agent response generation (Feng et al., 2020) .", "cite_spans": [ { "start": 545, "end": 564, "text": "(Feng et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper presents work on the DialDoc-21 Shared Task, whose goal is to teach a dialogue system to identify the most relevant knowledge in the associated document for generating agent responses in natural language. It is composed of two subtasks: Knowledge Identification (KI) to retrieve the knowledge from the document, and Response Generation (RG) to generate an agent utterance utilizing the retrieved knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To tackle this problem, we leverage the pre-trained language models from Liu et al. (2019a) and Lewis et al. (2020) and explore data augmentation methods with several training techniques so as to avoid over-fitting to the DialDoc datasets and to teach the model the general pattern of the task. Ensemble and post-processing are conducted to further improve the model performance.
Experimental results show that data augmentation is a simple but effective approach for knowledge identification in information-seeking dialogue systems (Madotto et al., 2020a) , while also improving response generation. In the DialDoc-21 competition, our system achieved a 74.95 F1 score and a 60.74 Exact Match score in subtask 1, and a 37.72 SacreBLEU score (Post, 2018) in subtask 2 1 .", "cite_spans": [ { "start": 72, "end": 90, "text": "Liu et al. (2019a)", "ref_id": "BIBREF14" }, { "start": 95, "end": 114, "text": "Lewis et al. (2020)", "ref_id": "BIBREF11" }, { "start": 532, "end": 555, "text": "(Madotto et al., 2020a)", "ref_id": "BIBREF16" }, { "start": 760, "end": 772, "text": "(Post, 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Doc2Dial dataset In this shared task, we mainly focus on the Doc2Dial dataset (Feng et al., 2020) . Doc2Dial addresses the challenge of modeling different dialogue scenes with documents and providing free-form responses while allowing follow-up questions from the agent. The shared task evaluation is divided into a testdev phase and a test phase. The main difference between these is that in the test phase, out-of-domain (OOD) data samples are included by selecting documents from a domain unseen in the training process. The testdev phase only covers 30% of the data samples in the final test phase.", "cite_spans": [ { "start": 78, "end": 97, "text": "(Feng et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "2" }, { "text": "Besides Doc2Dial, several other datasets are leveraged for augmentation, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "2" }, { "text": "MRQA 2019 Shared Task dataset is a collection of multiple reading comprehension datasets for evaluating the generalization ability of QA models.
Six datasets are assigned to the training split and are not included in the evaluation. Among them, SearchQA (Dunn et al., 2017) and TriviaQA (Joshi et al., 2017) differ from the others in their data sources and have the least generalization ability compared to the other four datasets, as reported in (Su et al., 2019) . In this shared task, we consider two settings when leveraging the MRQA dataset: MRQA and MRQA small , which excludes SearchQA and TriviaQA.", "cite_spans": [ { "start": 256, "end": 275, "text": "(Dunn et al., 2017)", "ref_id": "BIBREF4" }, { "start": 290, "end": 310, "text": "(Joshi et al., 2017)", "ref_id": "BIBREF8" }, { "start": 448, "end": 465, "text": "(Su et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "2" }, { "text": "Conversational QA (CQA) datasets We also introduce three CQA datasets, CoQA (Reddy et al., 2019) , QuAC (Choi et al., 2018) , and DoQA (Campos et al., 2020), in the shared task because their settings are similar to the KI process.", "cite_spans": [ { "start": 76, "end": 96, "text": "(Reddy et al., 2019)", "ref_id": "BIBREF19" }, { "start": 104, "end": 123, "text": "(Choi et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "2" }, { "text": "Wizard-of-Wikipedia (WoW) is a commonly used knowledge-grounded dialogue dataset (Dinan et al., 2018) . It aims at providing content-rich responses to user utterances based on Wikipedia documents.", "cite_spans": [ { "start": 80, "end": 100, "text": "(Dinan et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "2" }, { "text": "We utilize a series of data-augmentation approaches to enable the model to obtain better representations of both the dialogue context and the document context and to learn a general pattern of the task with less domain bias.
Namely, we adopt a two-stage training paradigm: the first step is pre-training (PT) to obtain a better model initialization, and the second step is fine-tuning (FT) to adapt to the DialDoc task. For each step, we can apply the multi-task learning (MTL) strategy when multiple datasets are available, by unifying the dataset formats and treating all samples equally. As reported in Fisch et al. (2019), a model trained on multiple datasets of similar tasks is expected to provide a better initialization for further fine-tuning and to generalize to data samples in other domains. Thus, we expect a model trained with MTL in the first step to offer a better initialization and in the second step to reduce the domain bias and avoid overfitting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "In the KI task, we conduct experiments on a large pre-trained model, RoBERTa-large (Liu et al., 2019a) , which has shown its effectiveness on many QA datasets (Ju et al., 2019) . The MRQA dataset and the three CQA datasets above are leveraged for data augmentation. The combinations of the experimental settings are considered as follows:", "cite_spans": [ { "start": 83, "end": 102, "text": "(Liu et al., 2019a)", "ref_id": "BIBREF14" }, { "start": 159, "end": 176, "text": "(Ju et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Identification", "sec_num": "3.1" }, { "text": "We consider using CQA datasets to enrich the data source. RoBERTa cqa is fine-tuned on Doc2Dial and the three CQA datasets using the MTL method.
RoBERTa f(cqa) leverages the pre-trained RoBERTa cqa model and is fine-tuned on the Doc2Dial dataset for better performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Identification", "sec_num": "3.1" }, { "text": "We train the RoBERTa model on the MRQA dataset and the MRQA small dataset described in \u00a7 2 using MTL, respectively (denoted as RoBERTa mrqa and RoBERTa mrqas ). These models could be further fine-tuned while providing a better initialization (Fisch et al., 2019) .", "cite_spans": [ { "start": 234, "end": 254, "text": "(Fisch et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Identification", "sec_num": "3.1" }, { "text": "RoBERTa f(mrqa) further fine-tunes RoBERTa mrqa on the Doc2Dial dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Identification", "sec_num": "3.1" }, { "text": "The corresponding settings are also applied to the RoBERTa f(mrqas) model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Identification", "sec_num": "3.1" }, { "text": "RoBERTa cqa(mrqa) is initialized with RoBERTa mrqa and fine-tuned on Doc2Dial and the three CQA datasets using MTL. RoBERTa cqa(mrqas) follows the same setting as the former model, but uses the RoBERTa mrqas model for initialization instead. RoBERTa f(cqa(mrqas)) further fine-tunes RoBERTa cqa(mrqas) on the Doc2Dial dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Identification", "sec_num": "3.1" }, { "text": "RoBERTa all is trained on Doc2Dial, the MRQA dataset, and the CQA datasets using the MTL method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Identification", "sec_num": "3.1" }, { "text": "For better readability, we summarize the model settings in Table 1 . We also explore more combinations of the experimental settings, such as other combinations of the datasets and other pre-trained language models.
However, these fail to bring as much improvement as the settings mentioned above.", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 66, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Knowledge Identification", "sec_num": "3.1" }, { "text": "Post-processing We further conduct post-processing on the model predictions based on our observation that the ground truths of the data samples are annotated by document splits which are provided together with the dataset. We consider including the whole split of the document once the prediction covers a fraction \u03bb of it, where \u03bb is set to 0.1. In addition, for better performance in the shared task, we also slightly extend the predictions when there is a "Yes" or "No" shown right in front of the predicted spans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Identification", "sec_num": "3.1" }, { "text": "Ensemble To further boost the model performance, we build an ensemble of our existing models. We consider one prediction containing the start position and the end position of the document as a unit and conduct voting over all the predictions of each data sample. The most frequent one will be selected as the final prediction. We denote the ensemble result as RoBERTa ensemble . ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Identification", "sec_num": "3.1" }, { "text": "To obtain natural and relevant responses, we take advantage of the evidence for the query identified in \u00a7 3.1 and focus on paraphrasing the corresponding knowledge sentences based on the dialogue context. We leverage the large pre-trained model BART large (Lewis et al., 2020) . 
The process of training and inference can be summarized as three steps:", "cite_spans": [ { "start": 260, "end": 280, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Response Generation", "sec_num": "3.2" }, { "text": "Pre-training on WoW dataset. We first pretrain the BART model on the WoW dataset for better initialization because of its similarity with the RG task. In the training process, the gold grounded knowledge sentences are concatenated with the dialogue context and fed into the model as the inputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Response Generation", "sec_num": "3.2" }, { "text": "Fine-tuning on Doc2Dial dataset. In the Doc2Dial dataset, the labels of the gold document splits are also provided in the training and validation set. The model is further fine-tuned on the Doc2Dial dataset using the same components for the input sequences in the first step. The model could be evaluated under two scenarios:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Response Generation", "sec_num": "3.2" }, { "text": "(1) Gold mode (BART gold ), leveraging the gold labels of the knowledge evidence in the dataset as the knowledge inputs;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Response Generation", "sec_num": "3.2" }, { "text": "(2) Prediction mode (BART pred ), leveraging the prediction of the KI process as the inputs. Inference with Knowledge Evidence. During the testdev and test phase, we leverage the predictions from the KI process as the knowledge evidence components for the dialogue queries. 
The model generates responses based on a concatenation of the knowledge evidence and the dialogue context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Response Generation", "sec_num": "3.2" }, { "text": "Post-processing To avoid serious information loss in the generations compared to the knowledge evidence for the OOD data samples, we compare the lengths of the knowledge evidence and the responses (denoted as L kn and L resp ). The generated response will be replaced by the raw knowledge evidence as the final output if L resp \u2264 \u03b1L kn , where \u03b1 is set to 0.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Response Generation", "sec_num": "3.2" }, { "text": "Hyper-parameter Settings We apply different settings to utilize the dialogue history for the two subtasks. For subtask 1, we leverage all previous turns and build the input sequence from them in reverse order. For subtask 2, we additionally leverage the last turn in temporal order and differentiate the speakers with special tokens. In Table 2 , we list the selected hyper-parameters utilized in the shared task. Ensemble Settings In subtask 1, we make an ensemble of all the checkpoints of the models listed in Table 1 except RoBERTa mrqa and RoBERTa mrqas . The details of the checkpoints can be found in Table 3.", "cite_spans": [], "ref_spans": [ { "start": 330, "end": 337, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 506, "end": 513, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Training Details", "sec_num": "4.1" }, { "text": "Metrics and Model Selection In subtask 1, the Exact Match (EM) and uni-gram F1 score are utilized as the criteria, while in subtask 2, we evaluate the generation by SacreBLEU. We select the models with the best EM and SacreBLEU scores on the validation set, respectively, for the two subtasks.
Specifically for subtask 2, the model is selected under the gold mode.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Details", "sec_num": "4.1" }, { "text": "The results are shown in Table 3 and Table 4 . For both subtasks, we observe gaps between the testdev phase and the test phase. For some of the models in subtask 1, multiple random seeds are applied in the training process. The performance gap may result from the domain shift of part of the data samples in the test phase, where the corresponding documents are unseen in the training set. In Table 3 , without post-processing on the predictions, the model performance consistently drops to a certain extent, which indicates that post-processing is suitable for the Doc2Dial scenario. Ensemble, which is a common strategy to improve performance, shows its effectiveness in this task. For subtask 2, the pre-training on the WoW dataset brings a substantial improvement to the model. Interestingly, by just using the knowledge evidence predicted by the subtask 1 RoBERTa ensemble model or the gold knowledge evidence labels, the performance can even exceed that of the generative model on SacreBLEU scores, while the responses from BART pred are more fluent and natural. This may be caused by the information loss when paraphrasing the knowledge evidence into dialogue responses.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 44, "text": "Table 3 and Table 4", "ref_id": "TABREF5" }, { "start": 398, "end": 405, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2.1" }, { "text": "In this task, we explore data augmentation methods and conduct two-stage training as an auxiliary training strategy for improvement.
Although resource- and time-consuming, this approach is easy to implement and effective at enabling the model to learn a more general ability on the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.2.2" }, { "text": "In our submitted system, one hyper-parameter, the maximum answer length, was left untuned, which hurts the QA model performance to some degree. With a maximum answer length of 100, the EM and F1 score on the testdev set improve by 2.53 and 1.08, respectively, while a 64.42 EM and 77.27 F1 score are achieved on the test set. With the improved prediction from subtask 1, we achieve a 39.88 SacreBLEU score in subtask 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-Challenge Improvements", "sec_num": "4.2.3" }, { "text": "Conversational QA is a type of reading comprehension task that requires understanding not only the question but also the previous conversation turns. Various datasets have been introduced in recent years, and many of them restrict answers to spans extracted from the reference document, while the others allow free-form responses (Choi et al., 2018; Reddy et al., 2019; Campos et al., 2020) .", "cite_spans": [ { "start": 338, "end": 357, "text": "(Choi et al., 2018;", "ref_id": "BIBREF2" }, { "start": 358, "end": 377, "text": "Reddy et al., 2019;", "ref_id": "BIBREF19" }, { "start": 378, "end": 398, "text": "Campos et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In addition to works that enrich the contents of open-domain conversations via controllable generation (Lin et al., 2020; Madotto et al., 2020b) , the knowledge-grounded dialogue task aims to offer more informative conversations by leveraging an external knowledge source (Dinan et al., 2018) . 
Relevant knowledge selection is the key to improving the whole system, and very recently, latent variable models have attracted increasing attention for this purpose (Lian et al., 2019; Liu et al., 2019b; Kim et al., 2020; Chen et al., 2020; Xu et al., 2021) .", "cite_spans": [ { "start": 104, "end": 122, "text": "(Lin et al., 2020;", "ref_id": "BIBREF13" }, { "start": 123, "end": 145, "text": "Madotto et al., 2020b)", "ref_id": "BIBREF17" }, { "start": 272, "end": 292, "text": "(Dinan et al., 2018;", "ref_id": "BIBREF3" }, { "start": 461, "end": 480, "text": "(Lian et al., 2019;", "ref_id": "BIBREF12" }, { "start": 481, "end": 499, "text": "Liu et al., 2019b;", "ref_id": "BIBREF15" }, { "start": 500, "end": 517, "text": "Kim et al., 2020;", "ref_id": "BIBREF10" }, { "start": 518, "end": 536, "text": "Chen et al., 2020;", "ref_id": "BIBREF1" }, { "start": 537, "end": 553, "text": "Xu et al., 2021)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper, we utilize data augmentation methods and several training techniques with pre-trained language models to tackle the challenge of the information-seeking dialogue task. The results indicate the effectiveness of our approaches.
Moreover, data augmentation methods are easy to implement, which makes them promising for practical use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The code is available at: https://github.com/HLTCHKUST/CAiRE_in_DialDoc21.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Doqa: accessing domain-specific faqs via conversational qa", "authors": [ { "first": "Jon", "middle": [], "last": "Ander Campos", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Otegi", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Soroa", "suffix": "" }, { "first": "Jan", "middle": [ "Milan" ], "last": "Deriu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Cieliebak", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "7302--7314", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jon Ander Campos, Arantxa Otegi, Aitor Soroa, Jan Milan Deriu, Mark Cieliebak, and Eneko Agirre. 2020. Doqa: accessing domain-specific faqs via conversational qa.
In Proceedings of the ACL, pages 7302-7314.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bridging the gap between prior and posterior knowledge selection for knowledge-grounded dialogue generation", "authors": [ { "first": "Xiuyi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Fandong", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Feilong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shuang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "3426--3437", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiuyi Chen, Fandong Meng, Peng Li, Feilong Chen, Shuang Xu, Bo Xu, and Jie Zhou. 2020. Bridging the gap between prior and posterior knowledge selection for knowledge-grounded dialogue generation. In Proceedings of the EMNLP, pages 3426-3437.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Quac: Question answering in context", "authors": [ { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Wentau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "2174--2184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer.
2018. Quac: Question answering in context. In Proceedings of the EMNLP, pages 2174-2184.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Wizard of wikipedia: Knowledge-powered conversational agents", "authors": [ { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2018, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. In ICLR.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Searchqa: A new q&a dataset augmented with context from a search engine", "authors": [ { "first": "Matthew", "middle": [], "last": "Dunn", "suffix": "" }, { "first": "Levent", "middle": [], "last": "Sagun", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "Volkan", "middle": [], "last": "Ugur Guney", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cirik", "suffix": "" }, { "first": "", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.05179" ] }, "num": null, "urls": [], "raw_text": "Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. 
arXiv preprint arXiv:1704.05179.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Doc2dial: a framework for dialogue composition grounded in documents", "authors": [ { "first": "Song", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Kshitij", "middle": [], "last": "Fadnis", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Luis", "middle": [ "A" ], "last": "Lastras", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI", "volume": "34", "issue": "", "pages": "13604--13605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Song Feng, Kshitij Fadnis, Q Vera Liao, and Luis A Lastras. 2020. Doc2dial: a framework for dialogue composition grounded in documents. In Proceedings of the AAAI, volume 34, pages 13604-13605.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Mrqa 2019 shared task: Evaluating generalization in reading comprehension", "authors": [ { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Machine Reading for Question Answering", "volume": "", "issue": "", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. Mrqa 2019 shared task: Evaluating generalization in reading comprehension.
In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 1-13.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Erica: An empathetic android companion for covid-19 quarantine", "authors": [ { "first": "Etsuko", "middle": [], "last": "Ishii", "suffix": "" }, { "first": "Genta", "middle": [ "Indra" ], "last": "Winata", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Cahyawijaya", "suffix": "" }, { "first": "Divesh", "middle": [], "last": "Lala", "suffix": "" }, { "first": "Tatsuya", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Etsuko Ishii, Genta Indra Winata, Samuel Cahyawijaya, Divesh Lala, Tatsuya Kawahara, and Pascale Fung. 2021. Erica: An empathetic android companion for covid-19 quarantine.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "1601--1611", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension.
In Proceedings of the ACL, pages 1601-1611.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Technical report on conversational question answering", "authors": [ { "first": "Ying", "middle": [], "last": "Ju", "suffix": "" }, { "first": "Fubang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Shijie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Xuefeng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yunfeng", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.10772" ] }, "num": null, "urls": [], "raw_text": "Ying Ju, Fubang Zhao, Shijie Chen, Bowen Zheng, Xuefeng Yang, and Yunfeng Liu. 2019. Technical report on conversational question answering. arXiv preprint arXiv:1909.10772.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sequential latent knowledge selection for knowledge-grounded dialogue", "authors": [ { "first": "Byeongchang", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jaewoo", "middle": [], "last": "Ahn", "suffix": "" }, { "first": "Gunhee", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2020, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Byeongchang Kim, Jaewoo Ahn, and Gunhee Kim. 2020. Sequential latent knowledge selection for knowledge-grounded dialogue. 
In ICLR.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the ACL, pages 7871-7880.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning to select knowledge for response generation in dialog systems", "authors": [ { "first": "Rongzhong", "middle": [], "last": "Lian", "suffix": "" }, { "first": "Min", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jinhua", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2019, "venue": "IJCAI International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rongzhong Lian, Min Xie, Fan Wang, Jinhua Peng, and Hua Wu. 2019. Learning to select knowledge for response generation in dialog systems.
In IJCAI International Joint Conference on Artificial Intelligence, page 5081.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Xpersona: Evaluating multilingual personalized chatbot", "authors": [ { "first": "Zhaojiang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Genta", "middle": [], "last": "Indra Winata", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Cahyawijaya", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Bang", "suffix": "" }, { "first": "Etsuko", "middle": [], "last": "Ishii", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.07568" ] }, "num": null, "urls": [], "raw_text": "Zhaojiang Lin, Zihan Liu, Genta Indra Winata, Samuel Cahyawijaya, Andrea Madotto, Yejin Bang, Etsuko Ishii, and Pascale Fung. 2020. Xpersona: Evaluating multilingual personalized chatbot.
arXiv preprint arXiv:2003.07568.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized bert pretraining approach.
arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Zero-shot cross-lingual dialogue systems with transferable latent variables", "authors": [ { "first": "Zihan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jamin", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Genta", "middle": [], "last": "Indra Winata", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1297--1303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zihan Liu, Jamin Shin, Yan Xu, Genta Indra Winata, Peng Xu, Andrea Madotto, and Pascale Fung. 2019b. Zero-shot cross-lingual dialogue systems with transferable latent variables.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1297-1303.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning knowledge bases with parameters for task-oriented dialogue systems", "authors": [ { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Cahyawijaya", "suffix": "" }, { "first": "Genta", "middle": [ "Indra" ], "last": "Winata", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zhaojiang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", "volume": "", "issue": "", "pages": "2372--2394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Madotto, Samuel Cahyawijaya, Genta Indra Winata, Yan Xu, Zihan Liu, Zhaojiang Lin, and Pascale Fung. 2020a. Learning knowledge bases with parameters for task-oriented dialogue systems.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2372-2394.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Plug-and-play conversational models", "authors": [ { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Etsuko", "middle": [], "last": "Ishii", "suffix": "" }, { "first": "Zhaojiang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Sumanth", "middle": [], "last": "Dathathri", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", "volume": "", "issue": "", "pages": "2422--2433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Madotto, Etsuko Ishii, Zhaojiang Lin, Sumanth Dathathri, and Pascale Fung. 2020b. Plug-and-play conversational models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2422-2433.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "186--191", "other_ids": { "DOI": [ "10.18653/v1/W18-6319" ] }, "num": null, "urls": [], "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores.
In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Coqa: A conversational question answering challenge", "authors": [ { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "249--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Generalizing question answering system with pretrained language model fine-tuning", "authors": [ { "first": "Dan", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Genta", "middle": [], "last": "Indra Winata", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hyeondey", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Machine Reading for Question Answering", "volume": "", "issue": "", "pages": "203--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Su, Yan Xu, Genta Indra Winata, Peng Xu, Hyeondey Kim, Zihan Liu, and Pascale Fung. 2019. Generalizing question answering system with pre-trained language model fine-tuning.
In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 203-211.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Nora: The well-being coach", "authors": [ { "first": "Genta", "middle": [ "Indra" ], "last": "Winata", "suffix": "" }, { "first": "Holy", "middle": [], "last": "Lovenia", "suffix": "" }, { "first": "Etsuko", "middle": [], "last": "Ishii", "suffix": "" }, { "first": "Farhad", "middle": [ "Bin" ], "last": "Siddique", "suffix": "" }, { "first": "Yongsheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2106.00410" ] }, "num": null, "urls": [], "raw_text": "Genta Indra Winata, Holy Lovenia, Etsuko Ishii, Farhad Bin Siddique, Yongsheng Yang, and Pascale Fung. 2021. Nora: The well-being coach. arXiv preprint arXiv:2106.00410.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Controllable story generation with external knowledge using large-scale language models", "authors": [ { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Mostofa", "middle": [], "last": "Patwary", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Shoeybi", "suffix": "" }, { "first": "Raul", "middle": [], "last": "Puri", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Animashree", "middle": [], "last": "Anandkumar", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Catanzaro", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2831--2845", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Animashree Anandkumar, and Bryan Catanzaro. 2020. Controllable story generation with external knowledge using large-scale language models.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2831-2845.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Retrieval-free knowledge-grounded dialogue response generation with adapters", "authors": [ { "first": "Yan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Etsuko", "middle": [], "last": "Ishii", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Genta", "middle": [], "last": "Indra Winata", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Su", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Madotto", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2105.06232" ] }, "num": null, "urls": [], "raw_text": "Yan Xu, Etsuko Ishii, Zihan Liu, Genta Indra Winata, Dan Su, Andrea Madotto, and Pascale Fung. 2021. Retrieval-free knowledge-grounded dialogue response generation with adapters. arXiv preprint arXiv:2105.06232.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "content": "
The combinations of the experimental settings
for the KI subtask. Two-stage training consists of two
stages: pre-training (PT) and fine-tuning (FT).
", "num": null, "text": "", "type_str": "table" }, "TABREF3": { "html": null, "content": "", "num": null, "text": "", "type_str": "table" }, "TABREF5": { "html": null, "content": "
", "num": null, "text": "The results of the selected models on the testdev and test phases of subtask 1 are listed. All the results are calculated with the corresponding predictions after post-processing, except those with specific notations. For the models trained with multiple random seeds, the average scores and standard deviations are presented. RoBERTa * ensemble denotes the results of the ensemble model on the test set.", "type_str": "table" }, "TABREF7": { "html": null, "content": "
", "num": null, "text": "The results of selected models on subtask 2 are listed. Gold denotes the gold knowledge evidence labels provided in the dataset. The model denoted with * is the final submission to the test phase.", "type_str": "table" } } } }