{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:58:12.346910Z" }, "title": "DeepBlueAI at TextGraphs 2021 Shared Task: Treating Multi-Hop Inference Explanation Regeneration as A Ranking Problem", "authors": [ { "first": "Chunguang", "middle": [], "last": "Pan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Bingyan", "middle": [], "last": "Song", "suffix": "", "affiliation": {}, "email": "songby@deepblueai.com" }, { "first": "Zhipeng", "middle": [], "last": "Luo", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the winning system for TextGraphs 2021 shared task: Multi-hop inference explanation regeneration. Given a question and its corresponding correct answer, this task aims to select the facts that can explain why the answer is correct for that question and answering (QA) from a large knowledge base. To address this problem and accelerate training as well, our strategy includes two steps. First, fine-tuning pre-trained language models (PLMs) with triplet loss to recall top-K relevant facts for each question and answer pair. Then, adopting the same architecture to train the re-ranking model to rank the top-K candidates. To further improve the performance, we average the results from models based on different PLMs (e.g., RoBERTa) and different parameter settings to make the final predictions. The official evaluation shows that, our system can outperform the second best system by 4.93 points, which proves the effectiveness of our system.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the winning system for TextGraphs 2021 shared task: Multi-hop inference explanation regeneration. Given a question and its corresponding correct answer, this task aims to select the facts that can explain why the answer is correct for that question and answering (QA) from a large knowledge base. To address this problem and accelerate training as well, our strategy includes two steps. First, fine-tuning pre-trained language models (PLMs) with triplet loss to recall top-K relevant facts for each question and answer pair. Then, adopting the same architecture to train the re-ranking model to rank the top-K candidates. To further improve the performance, we average the results from models based on different PLMs (e.g., RoBERTa) and different parameter settings to make the final predictions. The official evaluation shows that, our system can outperform the second best system by 4.93 points, which proves the effectiveness of our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multi-hop inference is the task of doing inference by combining more than one piece of information, such as question answering (Jansen and Ustalov, 2019) . The TextGraphs 2021 Shared Task on Multi-Hop Inference Explanation Regeneration focuses on the theme of determining relevance versus completeness in large multi-hop explanations, which asks participants to rank how likely table row sentences are to be a part of a given explanation. Concretely, given an elementary science question and its corresponding correct answer, the system need to perform the multi-hop inference and rank a set of explanatory facts that are expected to explain why the answer is correct from a large knowledge base. An example is shown in Figure 1 . 
A number of contemporary challenges exist in performing multi-hop inference for question answering (Thayaparan et al., 2020), including semantic drift, long inference chains, etc. Several multi-hop inference shared tasks have been conducted in the past few years (Jansen and Ustalov, 2019, 2020), and methods based on large pretrained language models (PLMs) such as BERT (Das et al., 2019; Chia et al., 2019), RoBERTa (Pawate et al., 2020) and ERNIE (Li et al., 2020) have been proposed.", "cite_spans": [ { "start": 127, "end": 153, "text": "(Jansen and Ustalov, 2019)", "ref_id": "BIBREF3" }, { "start": 830, "end": 855, "text": "(Thayaparan et al., 2020)", "ref_id": null }, { "start": 995, "end": 1015, "text": "(Jansen and Ustalov, 2019, 2020)", "ref_id": null }, { "start": 1092, "end": 1110, "text": "(Das et al., 2019;", "ref_id": "BIBREF2" }, { "start": 1111, "end": 1129, "text": "Chia et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 720, "end": 728, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we describe the system that we submitted to the TextGraphs 2021 shared task on Multi-Hop Inference Explanation Regeneration. Our system has two main parts. First, we use a pre-trained language model-based method to recall the top-K relevant explanations for each question. Second, we adopt the same model architecture to re-rank the top-K candidates and make the final prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "When determining whether an explanation sentence is relevant to the question, previous works (Das et al., 2019) constructed pairs of an explanation and the QA (question with its corresponding answer) sentence as the input to the PLMs. To reduce the amount of computation and accelerate training, instead of using all the explanations from the given table, we propose to fine-tune PLMs with a triplet loss (Schroff et al., 2015), a loss function in which a baseline (anchor) input is compared to a positive (true) input and a negative (false) input. For choosing negative samples, we design several strategies, which are introduced in Section 3. Experiments on the given dataset show the effectiveness of our model, and we rank first in this task.", "cite_spans": [ { "start": 95, "end": 113, "text": "(Das et al., 2019;", "ref_id": "BIBREF2" }, { "start": 403, "end": 425, "text": "(Schroff et al., 2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Task Definition The explanation regeneration task supplies models with questions, their correct answers, gold explanations authored by human annotators, as well as a knowledge base of explanations. From this, for a given question and its correct answer, the model must select a set of explanations from the knowledge base that explain why the answer is correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Dataset The data used in this shared task contains approximately 5,100 science exam questions drawn from the AI2 Reasoning Challenge (ARC) dataset (Clark et al., 2018), together with multi-fact explanations for their correct answers extracted from the WorldTree V2.1 explanation corpus (Xie et al., 2020; Jansen et al., 2018).
Different from the shared task in 2020 (Jansen and Ustalov, 2020), this year's dataset has been augmented with a new set of approximately 250k pre-release expert-generated relevancy ratings. The knowledge base supporting these questions and their explanations contains approximately 9,000 facts, a combination of scientific knowledge and commonsense/world knowledge.", "cite_spans": [ { "start": 147, "end": 167, "text": "(Clark et al., 2018)", "ref_id": "BIBREF1" }, { "start": 286, "end": 304, "text": "(Xie et al., 2020;", "ref_id": "BIBREF13" }, { "start": 305, "end": 325, "text": "Jansen et al., 2018)", "ref_id": "BIBREF5" }, { "start": 363, "end": 389, "text": "(Jansen and Ustalov, 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Evaluation As stated in the official evaluation procedure of TextGraphs 2021, participating systems are evaluated using Normalized Discounted Cumulative Gain (NDCG), a common measure of ranking quality. This inspires us to treat the task as a ranking problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Our system consists of two major components. The first is the retrieval procedure, which utilizes PLMs fine-tuned with the triplet loss to recall the top-K (K>100) relevant explanations. The second is the re-ranking procedure, which uses the same model architecture to rank the top-K candidates. The model architecture is shown in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 335, "end": 343, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Model Architectures", "sec_num": "3" }, { "text": "Inspired by the work of Schroff et al. (2015), we adopt the triplet loss in this task. The triplet loss minimizes the distance between an anchor and a positive, and maximizes the distance between the anchor and a negative. We treat the sentence formed by a question and its corresponding answer as the anchor, and the facts annotated with high relevance as positives. In both the retrieval and re-ranking procedures, we generate three different negative samples for each anchor-positive pair, as discussed in Section 3.3. After constructing a triplet (an anchor, a positive, and a negative), we feed each element into the PLMs (e.g., RoBERTa) to get its representation. These PLMs first tokenize the input sentences and then output the last-layer embedding of each token. We average the token embeddings to obtain the final representations of the positives, anchors and negatives, denoted by $e_p$, $e_a$ and $e_n$, respectively.
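To make this concrete, here is a minimal sketch of such a sentence encoder, assuming the Hugging Face transformers library and PyTorch; the checkpoint name and the attention-mask weighting are illustrative details the paper does not specify:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

def encode(sentences):
    # Tokenize a batch of sentences with padding and truncation.
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    hidden = model(**batch).last_hidden_state      # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)   # ignore padding tokens
    # Average the last-layer token embeddings into one vector per sentence.
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Anchor (question + answer) and positive (fact), from the Figure 1 example.
e_a = encode(["Where does the sound waves travel the fastest? Through the rock"])
e_p = encode(["Sound travels fastest through solid"])
```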
Then, the models can be fine-tuned by the triplet loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.1" }, { "text": "After obtaining the embeddings of the triplet (an anchor (a), a positive (p) and a negative (n)), the triplet loss can be calculated as follow, L(a, p, n) = max{d(e a , e p ) \u2212 d(e a , e n ) +\u03b1, 0}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Triplet loss", "sec_num": "3.2" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Triplet loss", "sec_num": "3.2" }, { "text": "d(x, y) = x \u2212 y 2 (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Triplet loss", "sec_num": "3.2" }, { "text": "\u03b1 is a margin that is enforced between positive and negative pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Triplet loss", "sec_num": "3.2" }, { "text": "Retrieval First, we use the model introduced above to recall top-K relevant facts. In this step, for each anchor and positive pair, the negative samples are selected by three ways: 1) a sample which comes from the same table file with the positive one and does not annotated as the relevant one with the anchor; 2) a sample within the same mini-batch of positives and does not annotated as the relevant one with the anchor 3) a sample selected randomly among the facts irrelevant to the anchors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training procedure", "sec_num": "3.3" }, { "text": "Re-ranking After obtaining the top-K relevant facts, we train the re-ranking model with the same model architecture, yet use the another three different ways to select negative samples: 1) a sample within the top-K candidates but is irrelevant to the anchors; 2) a sample within top-100 candidates but irrelevant to the anchors; 3) a sample within the same mini-batch of positives but irrelevant to the anchors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training procedure", "sec_num": "3.3" }, { "text": "Ensembling Finally, to further improve the performance, we average different results from models based on different PLMs and random seeds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training procedure", "sec_num": "3.3" }, { "text": "All models are implemented based on the open source transformers library of hugging face (Wolf et al., 2020) , which provides thousands of pretrained models that can be quickly download and fine-tuned on specific tasks. The PLMs we used in this task are RoBERTa (Liu et al., 2019) and Method NDCG within the same mini-batch 0.7597 randomly 0.7621 within the same file 0.7726 all the three above 0.771 . For all the experiments, we set the batch size as 48 and set 15 epochs for both retrieval and re-ranking procedure. 
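{ "text": "To summarize Sections 3.2-3.3 concretely, the following is a minimal sketch of one fine-tuning step, assuming PyTorch and the encode() helper sketched in Section 3.1; the margin value and optimizer are illustrative, not the paper's exact settings:

```python
import torch
import torch.nn.functional as F

def training_step(anchors, positives, negatives, optimizer, margin=1.0):
    # Embed each element of the batch of triplets (Section 3.1).
    e_a, e_p, e_n = encode(anchors), encode(positives), encode(negatives)
    # Eq. (1)-(2): hinge on the gap between the anchor-positive and
    # anchor-negative Euclidean distances.
    d_pos = F.pairwise_distance(e_a, e_p, p=2)
    d_neg = F.pairwise_distance(e_a, e_n, p=2)
    loss = torch.clamp(d_pos - d_neg + margin, min=0).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same step serves both the retrieval and the re-ranking model; only the negative sampling differs between the two.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training procedure", "sec_num": "3.3" },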
{ "text": "All models are implemented based on the open-source transformers library from Hugging Face (Wolf et al., 2020), which provides thousands of pretrained models that can be quickly downloaded and fine-tuned on specific tasks. The PLMs we used in this task are RoBERTa (Liu et al., 2019) and ERNIE 2.0 (Sun et al., 2020). For all the experiments, we set the batch size to 48 and train for 15 epochs in both the retrieval and re-ranking procedures. We use the Adam optimizer with a schedule in which the learning rate first increases linearly from 0 to the initial value (1e-5) during a warmup period and then decreases linearly back to 0.", "cite_spans": [ { "start": 89, "end": 108, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF12" }, { "start": 262, "end": 280, "text": "(Liu et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter settings", "sec_num": "4.1" }, { "text": "Retrieval Since we designed three different ways to choose negative samples during the retrieval procedure, we ran experiments on the validation set to test whether these mechanisms are valid. From Table 1, we find that the most effective way is to choose negative samples from the same table file as the positive one; facts in the same table file share the same pattern.", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 215, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Ablation studies", "sec_num": "4.2" }, { "text": "Since for each question-answer pair there are usually more than ten annotated relevant facts, we select the top-2000 ranked facts from the retrieval phase, and we find that the NDCG score can reach 97.56%, as shown in Table 2. Besides, although the TF-IDF method can quickly score all the facts, its NDCG score is very low compared with our retriever, which proves the effectiveness of our proposed method.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 230, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Ablation studies", "sec_num": "4.2" }, { "text": "Re-ranking To re-rank the top-K candidates, we adopt the same model architecture. We compare the results of the proposed ensemble re-ranker with the TF-IDF baseline and the proposed ensemble retriever on the test set, as shown in Table 3. We also use different values of K for calculating NDCG@K, including 100, 500, 1000, and 2000. From Table 3, we can see that re-ranking the top-K candidates improves performance. Besides, as K increases, the growth of NDCG gradually slows down.", "cite_spans": [], "ref_spans": [ { "start": 338, "end": 345, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Ablation studies", "sec_num": "4.2" }, { "text": "We submitted the scores predicted by the re-ranking model introduced above. The official ranking is presented in Table 4. We rank first in the task, 4.93 points higher than the second place, which verifies the validity of our system.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Official Ranking", "sec_num": "4.3" },
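{ "text": "For reference, a minimal sketch of the NDCG measure used for scoring; this is an illustrative linear-gain implementation, not the official scorer:

```python
import numpy as np

def ndcg_at_k(relevances, k=None):
    # relevances: gold relevancy ratings of the facts, listed in the
    # order the system ranked them.
    rels = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rels.size + 2))
    dcg = float((rels * discounts).sum())
    # Ideal DCG: the same ratings sorted from most to least relevant.
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts).sum())
    return dcg / idcg if idcg > 0 else 0.0

# A ranking that places highly rated facts first scores close to 1.
print(ndcg_at_k([3, 2, 0, 1], k=4))  # ~0.985
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Official Ranking", "sec_num": "4.3" },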
{ "text": "In this paper, we propose a top-performing approach for the task of multi-hop inference explanation regeneration. We fine-tune pre-trained language models with the triplet loss to accelerate training and design different negative sampling strategies. The same model architecture is used to recall the top-K candidates from all the facts and to re-rank these top-K relevant explanations for the final prediction. Experimental results show the effectiveness of the proposed method, and we won first place in the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Red dragon ai at textgraphs 2019 shared task: Language model assisted explanation generation", "authors": [ { "first": "Yew Ken", "middle": [], "last": "Chia", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Witteveen", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Andrews", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.08976" ] }, "num": null, "urls": [], "raw_text": "Yew Ken Chia, Sam Witteveen, and Martin Andrews. 2019. Red dragon ai at textgraphs 2019 shared task: Language model assisted explanation generation. arXiv preprint arXiv:1911.08976.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Think you have solved question answering? try arc, the ai2 reasoning challenge", "authors": [ { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Isaac", "middle": [], "last": "Cowhey", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Carissa", "middle": [], "last": "Schoenick", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.05457" ] }, "num": null, "urls": [], "raw_text": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Chains-of-reasoning at textgraphs 2019 shared task: Reasoning over chains of facts for explainable multi-hop inference", "authors": [ { "first": "Rajarshi", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ameya", "middle": [], "last": "Godbole", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Shehzaad", "middle": [], "last": "Dhuliawala", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)", "volume": "", "issue": "", "pages": "101--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajarshi Das, Ameya Godbole, Manzil Zaheer, Shehzaad Dhuliawala, and Andrew McCallum. 2019. Chains-of-reasoning at textgraphs 2019 shared task: Reasoning over chains of facts for explainable multi-hop inference.
In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 101-117.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Textgraphs 2019 shared task on multi-hop inference for explanation regeneration", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Ustalov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)", "volume": "", "issue": "", "pages": "63--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen and Dmitry Ustalov. 2019. Textgraphs 2019 shared task on multi-hop inference for explanation regeneration. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 63-77.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "TextGraphs 2020 shared task on multi-hop inference for explanation regeneration", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Ustalov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs)", "volume": "", "issue": "", "pages": "85--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen and Dmitry Ustalov. 2020. TextGraphs 2020 shared task on multi-hop inference for explanation regeneration. In Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs), pages 85-97, Barcelona, Spain (Online). Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "WorldTree: A corpus of explanation graphs for elementary science questions supporting multi-hop inference", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Wainwright", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Marmorstein", "suffix": "" }, { "first": "Clayton", "middle": [], "last": "Morrison", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Jansen, Elizabeth Wainwright, Steven Marmorstein, and Clayton Morrison. 2018. WorldTree: A corpus of explanation graphs for elementary science questions supporting multi-hop inference. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.
European Language Resources Association (ELRA).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Pgl at textgraphs 2020 shared task: Explanation regeneration using language and graph learning methods", "authors": [ { "first": "Weibin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Zhengjie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Weiyue", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jiaxiang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs)", "volume": "", "issue": "", "pages": "98--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weibin Li, Yuxiang Lu, Zhengjie Huang, Weiyue Su, Jiaxiang Liu, Shikun Feng, and Yu Sun. 2020. Pgl at textgraphs 2020 shared task: Explanation regeneration using language and graph learning methods. In Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs), pages 98-102.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Chisquarex at textgraphs 2020 shared task: Leveraging pretrained language models for explanation regeneration", "authors": [ { "first": "Aditya", "middle": [], "last": "Girish Pawate", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Madhavan", "suffix": "" }, { "first": "Devansh", "middle": [], "last": "Chandak", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs)", "volume": "", "issue": "", "pages": "103--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Girish Pawate, Varun Madhavan, and Devansh Chandak. 2020. Chisquarex at textgraphs 2020 shared task: Leveraging pretrained language models for explanation regeneration.
In Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs), pages 103-108.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Facenet: A unified embedding for face recognition and clustering", "authors": [ { "first": "Florian", "middle": [], "last": "Schroff", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Kalenichenko", "suffix": "" }, { "first": "James", "middle": [], "last": "Philbin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "815--823", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815-823.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Ernie 2.0: A continual pre-training framework for language understanding", "authors": [ { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Shuohuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "8968--8975", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. Ernie 2.0: A continual pre-training framework for language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8968-8975.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A survey on explainability in machine reading comprehension", "authors": [ { "first": "Mokanarangan", "middle": [], "last": "Thayaparan", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Valentino", "suffix": "" }, { "first": "Andr\u00e9", "middle": [], "last": "Freitas", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.00389" ] }, "num": null, "urls": [], "raw_text": "Mokanarangan Thayaparan, Marco Valentino, and Andr\u00e9 Freitas. 2020. A survey on explainability in machine reading comprehension.
arXiv preprint arXiv:2010.00389.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Julien Chaumond, Lysandre Debut, Victor Sanh, Clement Delangue, Anthony Moi, Pierric Cistac, Morgan Funtowicz, Joe Davison, Sam Shleifer, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "WorldTree v2: A corpus of science-domain structured explanations and inference patterns supporting multi-hop inference", "authors": [ { "first": "Zhengnan", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Thiem", "suffix": "" }, { "first": "Jaycie", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Wainwright", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Marmorstein", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "5456--5473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengnan Xie, Sebastian Thiem, Jaycie Martin, Elizabeth Wainwright, Steven Marmorstein, and Peter Jansen. 2020. WorldTree v2: A corpus of science-domain structured explanations and inference patterns supporting multi-hop inference. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5456-5473, Marseille, France. European Language Resources Association.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Q: Where does the sound waves travel the fastest? A: Through the rock. E1: Sound travels fastest through solid. E2: A rock is usually a solid. E3: Waves can travel through matter.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "A multi-hop inference example which can explain why the answer is correct for the question.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "The architecture of the proposed model.", "type_str": "figure" }, "TABREF0": { "content": "
Method | NDCG
within the same mini-batch | 0.7597
randomly | 0.7621
within the same file | 0.7726
all the three above | 0.771
", "num": null, "html": null, "type_str": "table", "text": "The comparison between different ways of selecting negative samples" }, "TABREF1": { "content": "
Methods | Recall
TF-IDF | 0.7001
Ensemble Retriever | 0.97562
", "num": null, "html": null, "type_str": "table", "text": "" }, "TABREF3": { "content": "", "num": null, "html": null, "type_str": "table", "text": "The final results compared with different models" }, "TABREF5": { "content": "
", "num": null, "html": null, "type_str": "table", "text": "Leaderboard" } } } }