{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:41.086720Z" }, "title": "Technical Report on Shared Task in DialDoc21", "authors": [ { "first": "Jiapeng", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "settlement": "Harbin, Heilongjiang", "country": "China" } }, "email": "jpli@ir.hit.edu.cn" }, { "first": "Mingda", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "settlement": "Harbin, Heilongjiang", "country": "China" } }, "email": "mdli@ir.hit.edu.cn" }, { "first": "Longxuan", "middle": [], "last": "Ma", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "settlement": "Harbin, Heilongjiang", "country": "China" } }, "email": "lxma@ir.hit.edu.cn" }, { "first": "Weinan", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "settlement": "Harbin, Heilongjiang", "country": "China" } }, "email": "wnzhang@ir.hit.edu.cn" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "settlement": "Harbin, Heilongjiang", "country": "China" } }, "email": "tliu@ir.hit.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We participate in the DialDoc Shared Task subtask 1 (Knowledge Identification). The task requires identifying the grounding knowledge in form of a document span for the next dialogue turn. We employ two well-known pre-trained language models (RoBERTa and ELECTRA) to identify candidate document spans and propose a metric-based ensemble method for span selection. Our methods include data augmentation, model pre-training/fine-tuning, postprocessing, and ensemble. On the submission page, we rank 2nd based on the average of normalized F1 and EM scores used for the final evaluation. Specifically, we rank 2nd on EM and 3rd on F1.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We participate in the DialDoc Shared Task subtask 1 (Knowledge Identification). The task requires identifying the grounding knowledge in form of a document span for the next dialogue turn. We employ two well-known pre-trained language models (RoBERTa and ELECTRA) to identify candidate document spans and propose a metric-based ensemble method for span selection. Our methods include data augmentation, model pre-training/fine-tuning, postprocessing, and ensemble. On the submission page, we rank 2nd based on the average of normalized F1 and EM scores used for the final evaluation. Specifically, we rank 2nd on EM and 3rd on F1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Our team SCIR-DT participates in the DialDoc shared task in the Document-grounded Dialogue and Conversational QA Workshop at the ACL-IJCNLP 2021. There are two sub-tasks based on the Doc2Dial dataset (Feng et al., 2020) . The dataset contains goal-oriented conversations between a user and an assistive agent. Each dialogue turn is annotated with a dialogue scene, which includes role, dialogue act, and grounding in a document (or irrelevant to domain documents). The documents are from different domains, such as Social Security and Veterans Affairs. 
Sub-task 1 is Knowledge Identification, which requires identifying the grounding knowledge in the form of a document span for the next agent turn. The input is the dialogue history, the current user utterance, and the associated document. The output should be a text span. The evaluation metrics are Exact Match (EM) and F1 (Rajpurkar et al., 2016) . Sub-task 2 is text generation, which requires generating the next agent response in natural language. The input is the dialogue history and the grounding document span.
The Document-Grounded Dialogue (DGD) task maintains a dialogue pattern where the external knowledge used in dialogues can be obtained from the given document. Recently, some DGD datasets (Moghe et al., 2018; Dinan et al., 2019) have been released to exploit unstructured document information in open-domain dialogues. The Doc2Dial dataset is also a document-grounded dialogue dataset. However, the dialogue in Doc2Dial is goal-oriented, guiding users to access various forms of information according to their needs.", "cite_spans": [ { "start": 200, "end": 219, "text": "(Feng et al., 2020)", "ref_id": "BIBREF6" }, { "start": 857, "end": 881, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF12" }, { "start": 1167, "end": 1187, "text": "(Moghe et al., 2018;", "ref_id": "BIBREF8" }, { "start": 1188, "end": 1207, "text": "Dinan et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The CQA task (such as CoQA (Reddy et al., 2019) , QuAC (Choi et al., 2018) , and DoQA (Campos et al., 2020) ) is also based on a background document; it aims to understand a text passage and answer a series of interconnected questions that appear in a conversation. The difference between DGD and CQA is that the dialogue in DGD is more diversified (including chit-chat or recommendation) and not limited to QA. The Doc2Dial task is closely related to the CQA tasks. It shares their challenges and additionally introduces dialogue scenes where the agent asks questions when the user query is identified as under-specified or when additional verification is required to reach a resolution.", "cite_spans": [ { "start": 22, "end": 42, "text": "(Reddy et al., 2019)", "ref_id": "BIBREF13" }, { "start": 50, "end": 69, "text": "(Choi et al., 2018)", "ref_id": "BIBREF2" }, { "start": 74, "end": 100, "text": "DoQA (Campos et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditional word embeddings (Pennington et al., 2014) are fixed and context-independent; they cannot resolve the out-of-vocabulary (OOV) problem or the ambiguity of words in different contexts. To address these problems, Pre-trained Language Models (PLMs) such as BERT (Devlin et al., 2019) were introduced. BERT employed a masked language modeling (MLM) method that first masked out some tokens from the input sentences and then trained the model to predict the masked tokens from the rest of the tokens. Concurrently, there was research proposing different enhanced versions of MLM to further improve on BERT. Instead of static masking, RoBERTa (Liu et al., 2019) improved BERT by dynamic masking and abandoned the Next Sentence Prediction (NSP) loss. Instead of masking the input, ELECTRA (Clark et al., 2020) replaced some input tokens with plausible alternatives sampled from a small generator network and trained a discriminative model that predicted whether each token in the corrupted input was replaced by the generator or not. 
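To make the MLM corruption step concrete, here is a minimal illustrative sketch of whole word masking over WordPiece-style tokens (the variant used later for re-training in Section 3.3); it is a simplification that omits BERT's 80/10/10 mask/replace/keep rule:

```python
import random

def whole_word_mask(tokens, mask_rate=0.15, mask_token="[MASK]"):
    """Group WordPiece tokens into whole words (a piece starting with
    '##' continues the previous word) and mask every piece of a
    selected word together."""
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    masked, labels = list(tokens), {}
    for word in words:
        if random.random() < mask_rate:
            for i in word:
                labels[i] = masked[i]  # positions the model must predict
                masked[i] = mask_token
    return masked, labels

# "doc2dial" split into ["doc", "##2", "##dial"] is masked as one unit.
tokens = ["the", "doc", "##2", "##dial", "dataset", "is", "goal", "-", "oriented"]
print(whole_word_mask(tokens, mask_rate=0.3))
```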
When used for downstream tasks, these PLMs are first pre-trained on a large corpus and then fine-tuned on specific tasks. Contextualized embeddings have been proven better for downstream NLP tasks (Qiu et al., 2020) than traditional word embeddings. We adopt BERT, RoBERTa, and ELECTRA in this competition.", "cite_spans": [ { "start": 32, "end": 57, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF9" }, { "start": 277, "end": 298, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 653, "end": 671, "text": "(Liu et al., 2019)", "ref_id": "BIBREF7" }, { "start": 799, "end": 819, "text": "(Clark et al., 2020)", "ref_id": "BIBREF3" }, { "start": 1247, "end": 1265, "text": "(Qiu et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-trained Language Model (PLM)", "sec_num": "2.2" }, { "text": "We first use two data augmentation methods to obtain a 5-times larger augmented dataset. We use the augmented data to re-train BERT and RoBERTa with the whole word masking technique and fine-tune the BERT, RoBERTa, and ELECTRA models. We test several span post-processing methods and then propose an ensemble method with trainable parameters for the final text span selection. The pipeline we used in this competition is illustrated in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 427, "end": 435, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Our Method", "sec_num": "3" }, { "text": "In sub-task 1, we focus on selecting the correct text span as knowledge from a document. For each example, the model is given a conversational context C = [C_1, C_2, ..., C_{|C|}] and an associated document K divided into consecutive spans. The model learns to select a document span K_i for the response with probability P(K_i|K, C; \u0398), where \u0398 denotes the model's parameters. Specifically, our model adopts the BERT-QA (Chadha and Sood, 2019) method and predicts the start and end positions of a span. If the predicted positions are not the boundaries of an existing span, we use post-processing methods to move them to the nearest K_i . The selected span K_i is used in sub-task 2 to generate a response. The model structure is shown in Figure 2 . The input of the model is the sum of the positional/segment/word embeddings of the dialogue and the document. The output is a document span.", "cite_spans": [], "ref_spans": [ { "start": 645, "end": 653, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Problem Statement", "sec_num": "3.1" }, { "text": "The statistics of the Doc2Dial dataset are shown in Table 1 . The final test set has an unseen domain that is not included in the training set. Besides the final test page, the organizers provide a dev-test page that uses a small set for additional testing. We use back-translation and synonym substitution as data augmentation methods. We adopt the Google translation service 1 to translate the English data into other languages (such as Spanish/German/Japanese/French) and then back-translate them into English 2 . Finally, we obtain 5 times the original document+dialogue data to pre-train the PLMs. 
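To make the augmentation loop concrete, here is a minimal sketch; translate(text, src, tgt) is a hypothetical stand-in for the Google translation client, and the WordNet fallback implements footnote 2:

```python
import random
from nltk.corpus import wordnet  # requires nltk.download("wordnet")

def translate(text, src, tgt):
    """Hypothetical stand-in for the Google translation service;
    replace with any machine translation client."""
    raise NotImplementedError

def synonym_substitute(sentence, rate=0.2):
    """Replace some words with a random WordNet synonym (footnote 2)."""
    words = sentence.split()
    for i, w in enumerate(words):
        if random.random() < rate:
            synonyms = {l.name().replace("_", " ")
                        for syn in wordnet.synsets(w)
                        for l in syn.lemmas()} - {w}
            if synonyms:
                words[i] = random.choice(sorted(synonyms))
    return " ".join(words)

def augment(sentence, pivots=("es", "de", "ja", "fr")):
    """One back-translated copy per pivot language; together with the
    original sentence this yields the 5-times data described above."""
    copies = []
    for lang in pivots:
        aug = translate(translate(sentence, "en", lang), lang, "en")
        if aug == sentence:  # back-translation returned the input unchanged
            aug = synonym_substitute(sentence)
        copies.append(aug)
    return copies
```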
Then we pair the 5-times dialogue data with the documents translated from the different languages, which gives 25 times the original data for fine-tuning.", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 59, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data augmentation", "sec_num": "3.2" }, { "text": "We use the augmented data to pre-train two models: BERT and RoBERTa. We follow the masked language modeling method with the whole word masking technique. We do not pre-train the ELECTRA model because we hope our ensemble method can leverage the prediction results from RoBERTa and ELECTRA to achieve good performance on both seen and unseen domains. We pre-train RoBERTa on the augmented data to get good performance on the seen domains. Meanwhile, we hope that ELECTRA can give good predictions on the unseen domain. The unseen domain in the final-test set requires the knowledge packed in the parameters of the pre-trained model, and pre-training ELECTRA on task data would lose this knowledge. When fine-tuning these models (BERT, RoBERTa, and ELECTRA), the model structure and training objective are the same as the common method used in the span-extraction reading comprehension task. The training objective is defined as the sum of the negative log probabilities of the true start and end positions under the predicted distributions, averaged over all N examples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-training and Fine-tuning", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathcal{L} = -\\frac{1}{N}\\sum_{n=1}^{N}\\left[\\log P(S_n^{start}) + \\log P(S_n^{end})\\right],", "eq_num": "(1)" } ], "section": "Pre-training and Fine-tuning", "sec_num": "3.3" }, { "text": "where S_n^{start} and S_n^{end} are the ground-truth start and end positions of the span in the n-th example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-training and Fine-tuning", "sec_num": "3.3" }, { "text": "Since the document is divided into consecutive spans and the task requires identifying a single span, we propose two different post-processing methods to fix wrong predictions. The goal of these methods is to turn a predicted incomplete span into a complete one. The first method is to expand the predicted start/end to the boundaries of one standard span when the predicted positions are within it. The second is to move the predicted start/end to the boundaries of the nearest span when the predicted positions cross two spans (see the sketch below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post Processing", "sec_num": "3.4" }, { "text": "2 When the back-translated sentence is the same as the original sentence, we employ synonym substitution with WordNet (https://wordnet.princeton.edu/) to increase diversity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post Processing", "sec_num": "3.4" },
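{ "text": "Since the standard spans partition the document consecutively, both post-processing methods reduce to snapping a predicted (start, end) pair onto span boundaries. A minimal sketch follows, assuming spans are given as character-offset pairs and taking 'nearest' to mean the crossed span with the larger overlap (one plausible reading; the exact rule is not specified above):

```python
def snap_to_spans(pred_start, pred_end, spans):
    """Fix an incomplete prediction. `spans` lists the (start, end)
    character offsets of the consecutive standard spans."""
    def span_index(pos):
        # Index of the standard span containing position `pos`.
        for i, (s, e) in enumerate(spans):
            if s <= pos <= e:
                return i
        return None

    i, j = span_index(pred_start), span_index(pred_end)
    if i is None or j is None:
        return pred_start, pred_end      # outside any span: leave unchanged
    if i == j:
        return spans[i]                  # method 1: expand to the boundary
    # Method 2: the prediction crosses two spans; move it to the nearest one.
    overlap_i = spans[i][1] - pred_start
    overlap_j = pred_end - spans[j][0]
    return spans[i] if overlap_i >= overlap_j else spans[j]

# Example: with spans [(0, 99), (100, 199)], a prediction (80, 130) crosses
# both and is snapped to (100, 199), which it overlaps more.
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post Processing", "sec_num": "3.4" },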
{ "text": "Algorithm 1: Metric-based ensemble method.
 1: During training: Metric = F1 or EM;
 2: Input: S^R, S^E, S, W\u0303^R, W\u0303^E, S^gt.
 3: Output: Weight for each model.
 4: for p \u2208 range(start=0, stop=1, step=0.1) do
 5:   Score = 0
 6:   for k \u2208 {validation set} do
 7:     Initialize W: {W_i = 0, i = 1, 2, ..., T}
 8:     for i \u2208 [1, T] do
 9:       W_i = p \u2022 W\u0303^R_i + (1 \u2212 p) \u2022 W\u0303^E_i
10:     end for
11:     Score += Metric(S_argmax(W), S^gt_k)
12:   end for
13:   if Score is the best so far then p* = p
14: end for
15: During testing:
16: for k \u2208 {test set} do
17:   Initialize W: {W_i = 0, i = 1, 2, ..., T}
18:   for i \u2208 [1, T] do
19:     W_i = p* \u2022 W\u0303^R_i + (1 \u2212 p*) \u2022 W\u0303^E_i
20:   end for
21:   S_k = S_argmax(W)
22: end for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ensemble Method", "sec_num": "3.5" }, { "text": "We propose a simple but efficient ensemble method (Algorithm 1 shows the details) to utilize the advantages of different models. For each example, we calculate the top N span candidates from each model and sort them in descending order of model confidence. Each span is given a weight that is the reciprocal of its ranking number plus one. For example, the candidates from RoBERTa are S^R_j (j = 1, 2, ..., N), and the corresponding weight is W^R_j = 1/(j+1). Similarly, S^E_j and W^E_j for ELECTRA. Then we use these candidates to form a final candidate dictionary S_i (i = 1, 2, ..., T), N \u2264 T \u2264 2N, and the ensemble weight W_i of S_i is calculated by W_i = p \u2022 W\u0303^R_i + (1 \u2212 p) \u2022 W\u0303^E_i (i = 1, 2, ..., T). Here p is a hyper-parameter, and W\u0303^R_i = W^R_j if there is a j such that S^R_j \u2245 S_i, and 0 otherwise; \u2245 means exact match here, and W\u0303^E_i follows the same definition. Then we use a specific metric, such as F1 or EM, to learn the optimal p* with all examples in the validation set. When testing, we select one candidate as our final prediction using the learned weight 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ensemble Method", "sec_num": "3.5" }, { "text": "Our implementations of BERT, RoBERTa, and ELECTRA are based on the public PyTorch implementations from Transformers 4 . All models are the large-size variants. During pre-training, we follow the hyper-parameter settings of the original implementations. During fine-tuning, we truncate the dialogue context to 60 tokens and the maximum input length to 512 tokens. The maximum predicted span length is set to 90 words. The candidate span size N is set to 20. We use EM as the Metric in the ensemble method. We use a single Tesla V100S GPU with 32GB memory; pre-training takes around 48 hours and fine-tuning around 24 hours for each model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "In this competition, each team has five submission opportunities on the final test page 5 . 
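As a concrete companion to Algorithm 1, a minimal sketch of the metric-based ensemble follows: exact-string candidate merging, 1/(rank+1) weights, and a grid search for p* on the validation set (names are illustrative; this is a sketch rather than the exact implementation):

```python
def ensemble_select(cands_r, cands_e, p):
    """Merge the top-N candidates of RoBERTa and ELECTRA and return the
    span with the largest interpolated weight. cands_* are lists of span
    strings sorted by model confidence; the span ranked j (1-indexed)
    gets weight 1/(j+1), and exact string match merges candidates."""
    w_r = {s: 1.0 / (rank + 1) for rank, s in enumerate(cands_r, start=1)}
    w_e = {s: 1.0 / (rank + 1) for rank, s in enumerate(cands_e, start=1)}
    weights = {s: p * w_r.get(s, 0.0) + (1 - p) * w_e.get(s, 0.0)
               for s in set(w_r) | set(w_e)}  # candidate dictionary, N <= T <= 2N
    return max(weights, key=weights.get)

def learn_p(val_examples, metric):
    """Grid-search p in {0.0, 0.1, ..., 1.0} to maximize the summed
    metric (EM or F1) over (cands_r, cands_e, gold) validation triples."""
    grid = [i / 10 for i in range(11)]
    return max(grid, key=lambda p: sum(
        metric(ensemble_select(r, e, p), gold) for r, e, gold in val_examples))

# With EM: p_star = learn_p(val, metric=lambda pred, gold: float(pred == gold));
# at test time: prediction = ensemble_select(cands_r, cands_e, p_star).
```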
Table 2 shows the experimental results of different models on the dev-test/final-test sets. The baseline given by the organizer is a BERT-large model that is not pre-trained on Doc2Dial data; we fine-tune the baseline on the training set of Doc2Dial and get an F1 of 66.84 and an EM of 48.48 on the dev-test set. When using the augmented data to fine-tune the BERT-large model, we get 67.62 F1 and 50.01 EM. The results prove the effectiveness of dialogue data augmentation. We fine-tune RoBERTa and ELECTRA with the augmented data, and they both outperform BERT. We use the augmented data to pre-train the RoBERTa model before we fine-tune it. The F1 and EM increase to 72.08 and 60.10, respectively. This proves that pre-training on task data can further improve performance. We also find that post-processing helps ELECTRA on both F1 and EM. We employ PT/FT/PP on RoBERTa and get 72.37 F1 and 60.61 EM. Finally, we apply our ensemble method to the best-performing RoBERTa and ELECTRA models and achieve 74.09 F1 and 63.13 EM on the dev-test set. The last method also achieves our best F1 and EM on the final-test set; the ensemble results outperform the best single model (RoBERTa) by more than 4% on both F1 and EM. For EM, the contributions rank, from largest to smallest, as Ensemble > Pre-training > Data Augmentation > Post-processing. The ensemble method uses both a PLM (RoBERTa) that is pre-trained with the augmented data and a PLM (ELECTRA) that is not. In this way, we can leverage the knowledge packed in the parameters of ELECTRA for the unseen domain of the final-test data. ELECTRA (FT/PP) gets an EM of 55.65 on the final-test set and RoBERTa (PT/FT/PP) gets an EM of 59.09. The ensemble method increases the EM to 63.91, indicating that the two models differ substantially in their span choices and that our ensemble method leverages this difference to achieve a better result.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 100, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental Results and Analysis", "sec_num": "4.2" }, { "text": "We introduced our submission for the DialDoc Shared Task. In sub-task 1, our model is based on RoBERTa and ELECTRA. We proposed a simple but efficient ensemble method for knowledge selection in multi-turn dialogue. Our team SCIR-DT ranks 2nd on the final submission page. Apart from the methods we introduced, there are other methods that could further improve the performance of our model. For example, Feng et al. (2020) proved that dialogue act information is useful for sub-task 1; noisy data, such as empty responses in the dialogue data, could be filtered out during training; and employing machine reading comprehension datasets such as SQuAD (Rajpurkar et al., 2016) or CQA datasets such as CoQA (Reddy et al., 2019) for pre-training and fine-tuning may also be helpful. However, due to time limitations, we did not try all these methods during the competition. We hope these methods and experiences will be helpful for future contestants.", "cite_spans": [ { "start": 400, "end": 418, "text": "Feng et al. (2020)", "ref_id": "BIBREF6" },
(2020)", "ref_id": "BIBREF6" }, { "start": 653, "end": 677, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF12" }, { "start": 706, "end": 726, "text": "(Reddy et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://translate.google.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For example, a text span ranks 3rd in RoBERTa and ranks 4th in ELECTRA, p*=0.2, then the final weight to re-rank this span in S is 0.2*0.25+0.8*0.2 = 0.21.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/huggingface/transformers 5 Each team has 20 more submission opportunities after the competition to help finish their technical report.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the thoughtful suggestions from the reviewers. This paper is supported by the National Natural Science Foundation of China (No. 62076081, No. 61772153, and No. 61936010) and the Science and Technology Innovation 2030 Major Project of China (No. 2020AAA0108605).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Doqa -accessing domain-specific faqs via conversational QA", "authors": [ { "first": "Jon", "middle": [], "last": "Ander Campos", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Otegi", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Soroa", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Deriu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Cieliebak", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020", "volume": "", "issue": "", "pages": "7302--7314", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.652" ] }, "num": null, "urls": [], "raw_text": "Jon Ander Campos, Arantxa Otegi, Aitor Soroa, Jan Deriu, Mark Cieliebak, and Eneko Agirre. 2020. Doqa -accessing domain-specific faqs via conver- sational QA. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, ACL 2020, Online, July 5-10, 2020, pages 7302-7314. Association for Computational Linguis- tics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "BERTQA -attention on steroids", "authors": [ { "first": "Ankit", "middle": [], "last": "Chadha", "suffix": "" }, { "first": "Rewa", "middle": [], "last": "Sood", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankit Chadha and Rewa Sood. 2019. BERTQA -atten- tion on steroids. 
CoRR, abs/1912.10435.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "QuAC: Question answering in context", "authors": [ { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Wen-tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2174--2184", "other_ids": { "DOI": [ "10.18653/v1/d18-1241" ] }, "num": null, "urls": [], "raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2174-2184. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "ELECTRA: pre-training text encoders as discriminators rather than generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on Learning Representations", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pre-training text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/n19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Wizard of wikipedia: Knowledge-powered conversational agents", "authors": [ { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations, ICLR 2019", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "doc2dial: A goal-oriented document-grounded dialogue dataset", "authors": [ { "first": "Song", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Wan", "suffix": "" }, { "first": "R", "middle": [ "Chulaka" ], "last": "Gunasekara", "suffix": "" }, { "first": "Siva Sankalp", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Sachindra", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Luis", "middle": [ "A" ], "last": "Lastras", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "2020", "issue": "", "pages": "8118--8128", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.652" ] }, "num": null, "urls": [], "raw_text": "Song Feng, Hui Wan, R. Chulaka Gunasekara, Siva Sankalp Patel, Sachindra Joshi, and Luis A. Lastras. 2020. doc2dial: A goal-oriented document-grounded dialogue dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8118-8128. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Roberta: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. 
CoRR, abs/1907.11692.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Towards exploiting background knowledge for building conversation systems", "authors": [ { "first": "Nikita", "middle": [], "last": "Moghe", "suffix": "" }, { "first": "Siddhartha", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Suman", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Mitesh", "middle": [ "M" ], "last": "Khapra", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2322--2332", "other_ids": { "DOI": [ "10.18653/v1/d18-1255" ] }, "num": null, "urls": [], "raw_text": "Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. 2018. Towards exploiting background knowledge for building conversation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2322-2332. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/d14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532-1543. ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018", "volume": "", "issue": "", "pages": "186--191", "other_ids": { "DOI": [ "10.18653/v1/w18-6319" ] }, "num": null, "urls": [], "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018, Belgium, Brussels, October 31 - November 1, 2018, pages 186-191. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Pre-trained models for natural language processing: A survey", "authors": [ { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Tianxiang", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yige", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yunfan", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. 
CoRR, abs/2003.08271.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": { "DOI": [ "10.18653/v1/d16-1264" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383-2392. The Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "CoQA: A conversational question answering challenge", "authors": [ { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Trans. Assoc. Comput. Linguistics", "volume": "7", "issue": "", "pages": "249--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Trans. Assoc. Comput. Linguistics, 7:249-266.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "The pipeline of methods we used in the competition." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "The models we used in the competition." }, "TABREF1": { "type_str": "table", "text": "Doc2Dial dataset statistics.", "html": null, "num": null, "content": "
dataset     documents   dialogues   turns
Train       488         3474        44149
Validation  488         661         8539
dev-test    488         198         1353
final-test  573         787         5264
" }, "TABREF3": { "type_str": "table", "text": "", "html": null, "num": null, "content": "" } } } }