{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:32.050994Z" }, "title": "Rethinking the Objectives of Extractive Question Answering", "authors": [ { "first": "Martin", "middle": [], "last": "Fajcik", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brno University of Technology", "location": {} }, "email": "ifajcik@fit.vutbr.cz" }, { "first": "Josef", "middle": [], "last": "Jon", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brno University of Technology", "location": {} }, "email": "ijon@fit.vutbr.cz" }, { "first": "Pavel", "middle": [], "last": "Smrz", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brno University of Technology", "location": {} }, "email": "smrz@fit.vutbr.cz" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This work demonstrates that using the objective with independence assumption for modelling the span probability P (a s , a e) = P (a s)P (a e) of span starting at position a s and ending at position a e has adverse effects. Therefore we propose multiple approaches to modelling joint probability P (a s , a e) directly. Among those, we propose a compound objective, composed from the joint probability while still keeping the objective with independence assumption as an auxiliary objective. We find that the compound objective is consistently superior or equal to other assumptions in exact match. Additionally, we identified common errors caused by the assumption of independence and manually checked the counterpart predictions, demonstrating the impact of the compound objective on the real examples. Our findings are supported via experiments with three extractive QA models (BIDAF, BERT, ALBERT) over six datasets and our code, individual results and manual analysis are available online 1 .", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This work demonstrates that using the objective with independence assumption for modelling the span probability P (a s , a e) = P (a s)P (a e) of span starting at position a s and ending at position a e has adverse effects. Therefore we propose multiple approaches to modelling joint probability P (a s , a e) directly. Among those, we propose a compound objective, composed from the joint probability while still keeping the objective with independence assumption as an auxiliary objective. We find that the compound objective is consistently superior or equal to other assumptions in exact match. Additionally, we identified common errors caused by the assumption of independence and manually checked the counterpart predictions, demonstrating the impact of the compound objective on the real examples. Our findings are supported via experiments with three extractive QA models (BIDAF, BERT, ALBERT) over six datasets and our code, individual results and manual analysis are available online 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The goal of extractive question answering (EQA) is to find the span boundaries -the start and the end of the span from text evidence, which answers a given question. Therefore, a natural choice of the objective to this problem is to model the probabilities of the span boundaries. In the last years, there was a lot of effort put into building better neural models underlying the desired probability distributions. 
However, there has been a little progress seen towards the change of the objective itself. For instance, the \"default\" choice of objective for modelling the probability over spans in SQuADv1.1 (Rajpurkar et al., 2016 ) -maximization of independent span boundary probabilities P (a s )P (a e ) for answer at position a s ,a e -has stayed the same over the course of years in many influential works (Xiong et al., 2017; Seo et al., 2017; Chen et al., 2017; Yu et al., 2018; Cheng et al., 2020) since the earliest work on this dataset -the submission of Wang and Jiang (2017) . Based on the myths of worse performance of different objectives, these works adopt the deeply rooted assumption of independence. However, this assumption may lead to obviously wrong predictions, as shown in Figure 1 .", "cite_spans": [ { "start": 608, "end": 631, "text": "(Rajpurkar et al., 2016", "ref_id": "BIBREF25" }, { "start": 812, "end": 832, "text": "(Xiong et al., 2017;", "ref_id": "BIBREF33" }, { "start": 833, "end": 850, "text": "Seo et al., 2017;", "ref_id": "BIBREF26" }, { "start": 851, "end": 869, "text": "Chen et al., 2017;", "ref_id": "BIBREF2" }, { "start": 870, "end": 886, "text": "Yu et al., 2018;", "ref_id": null }, { "start": 887, "end": 906, "text": "Cheng et al., 2020)", "ref_id": "BIBREF3" }, { "start": 966, "end": 987, "text": "Wang and Jiang (2017)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 1197, "end": 1205, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Question: What was the name of atom bomb dropped by USA on Hiroshima? Passage: ...The Allies issued orders for atomic bombs to be used on four Japanese cities were issued on July 25. on August 6, one of its b -29s dropped a little boy uranium guntype bomb on Hiroshima. three days later, on August 9, a fat man plutonium implosion-type bomb was dropped by another b -29 on Nagasaki...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Predictions from BERT-base 33.3 little boy uranium gun-type bomb on Hiroshima.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "three days later, on August 9, a fat man 32.15 little boy 23.51 fat man 3.60 a fat man 2.08 a little boy uranium gun -type bomb on hiroshima.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "three days later, on august 9, a fat man 1.03 a little boy Figure 1 : An example of an error which comes with an independence assumption. The model assigns high probability mass to boundaries around \"little boy\", and \"fat man\" answers. However, during decoding, the start of one and the end of another answer is picked up.", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 67, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "In addition, this assumption leads to degenerate distribution P (a s , a e ), as high probability mass is assigned to many trivially wrong 2 answers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "Some of the earlier work (Wang and Jiang, 2017; Weissenborn et al., 2017) and recent approaches including large language representation models (LRMs) like XLNet , ALBERT (Lan et al., 2020) or ELECTRA (Clark et al., 2020) started modelling the span probability via conditional probability factorization P (a e |a s )P (a s ). 
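To make the practical difference between the two factorizations concrete, the following is a minimal NumPy sketch of test-time decoding (the toy scores, the helper function and all names are ours and purely illustrative, not the authors' implementation): under the independence assumption, the argmax of P(a_s)P(a_e) can combine the start of one answer with the end of another, exactly as in Figure 1, whereas the conditional factorization re-scores the end positions for each candidate start and therefore requires a small beam search at test time.

```python
import numpy as np

# Toy distributions over L = 6 token positions for a passage containing
# two plausible answers: span (1, 2) and span (4, 5).
p_start = np.array([0.02, 0.47, 0.02, 0.02, 0.45, 0.02])  # P(a_s)
p_end   = np.array([0.02, 0.02, 0.45, 0.02, 0.02, 0.47])  # P(a_e)

# Independence assumption: argmax of P(a_s) * P(a_e) over valid spans.
joint = np.triu(np.outer(p_start, p_end))        # keep only spans with a_e >= a_s
s, e = np.unravel_index(joint.argmax(), joint.shape)
print('independent decoding:', (int(s), int(e)))  # -> (1, 5): start of one answer, end of the other

# Conditional factorization: P(a_s) * P(a_e | a_s); the end distribution is
# recomputed for every candidate start, hence a small beam search.
def p_end_given_start(start):                     # stand-in for a conditional end head
    probs = np.full(6, 0.01)
    probs[2 if start <= 2 else 5] = 0.95           # each start keeps the end of its own span
    return probs

starts = np.argsort(-p_start)[:2]                  # beam of top-2 starts
spans = [(int(s0), int(np.argmax(p_end_given_start(s0)))) for s0 in starts]
best = max(spans, key=lambda se: p_start[se[0]] * p_end_given_start(se[0])[se[1]])
print('conditional decoding:', best)               # -> (1, 2): one coherent span
```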
However, it is unknown whether this objective improves performance at all, as almost none of the recent works report results on its effect, or even describe its existence (except the ELECTRA paper). Additionally, this objective requires beam search, which slows down inference at test time. As an exception, Lee et al. (2016) proposed one way of modelling P (a s , a e ) directly, but the approach was only sparsely adopted (Khattab et al., 2020). This may be caused by the belief that enumerating all possible spans has a large complexity (Cheng et al., 2021). However, in practice we find the complexity to often be similar to that of the independence assumption when the objective is implemented efficiently. We continue the in-depth discussion of complexity in Appendix C.", "cite_spans": [ { "start": 25, "end": 47, "text": "(Wang and Jiang, 2017;", "ref_id": "BIBREF30" }, { "start": 48, "end": 73, "text": "Weissenborn et al., 2017)", "ref_id": "BIBREF31" }, { "start": 170, "end": 188, "text": "(Lan et al., 2020)", "ref_id": "BIBREF18" }, { "start": 200, "end": 220, "text": "(Clark et al., 2020)", "ref_id": "BIBREF6" }, { "start": 634, "end": 651, "text": "Lee et al. (2016)", "ref_id": "BIBREF20" }, { "start": 751, "end": 772, "text": "Khattab et al., 2020)", "ref_id": "BIBREF16" }, { "start": 868, "end": 888, "text": "(Cheng et al., 2021)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "In this work, we try to break the myths about the objectives that have been widely used previously. We experiment with the joint objective, and we also introduce a new compound objective, which models the joint probability P (a s , a e ) directly while keeping the traditional independent objective as an auxiliary objective. We experiment with 5 different realisations of the joint probability function and find that with current LRMs, a simple dot product works best. However, we show that this is not a rule, and for some models, other function realisations might be better. The conducted experiments demonstrate that the compound objective is superior to previously used objectives across various choices of models and datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "In summary, our contributions are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "\u2022 the introduction of the compound objective and its comparison with the traditional objectives based on the assumption of independence, conditional probability factorization, or direct joint probability,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "\u2022 a thorough evaluation comparing the different objectives on a wide spectrum of models and datasets, supported by statistical tests,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "\u2022 a manual analysis which provides a closer look at the different impacts of the independent and compound objectives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "P", "sec_num": null }, { "text": "This section first describes the common approach to EQA, with its independent modelling of the answer span start and end positions. Secondly, it defines an assumption based on the conditional factorization of the span probability. 
Finally a function family for computing joint span probability and a combination of independent and joint assumption we call the compound objective are proposed. The EQA can be defined as follows: Given a question q and a passage or a set of passages D, find a string a from D such that a answers the question q. This can be expressed by modelling a categorical probability mass function (PMF) that has its maximum in the answer start and end indices a = a s , a e from the passage D as P (a s , a e |q, D) for each question-passage-answer triplet (q, D, a) from dataset D. The parameters \u03b8 of such model can be estimated by minimizing maximum likelihood objective", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Assumptions for the Answer Span", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2212 (q,D,a)\u2208D log P \u03b8 (a s , a e |q, D).", "eq_num": "(1)" } ], "section": "Probabilistic Assumptions for the Answer Span", "sec_num": "2" }, { "text": "During inference, the most probable answer span a s , a e is predicted. Although there are works that were able to model the joint probability explicitly (Lee et al., 2016) , modelling it directly results in a number of categories quadratic to the passage's length. Optimizing such models may be seen challenging, as there are often more classes than the amount of data points within the current datasets. Therefore, state-of-the-art approaches resort to independence assumption P (a s , a e |q, D) = P \u03b8 (a s |q, D)P \u03b8 (a e |q, D). The factorized PMFs are usually computed by the model with shared parameters \u03b8, as introduced in Wang and Jiang (2017) . For most of the systems modelling the independent objective with neural networks, the final endpoint probabilities 3 are derived from start/end position passage representations computed via shared model H s ,H e \u2208 R d\u00d7L as shown for b \u2208 {s, e}.", "cite_spans": [ { "start": 154, "end": 172, "text": "(Lee et al., 2016)", "ref_id": "BIBREF20" }, { "start": 630, "end": 651, "text": "Wang and Jiang (2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Assumptions for the Answer Span", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P \u03b8 (a b ) = softmax(w b H b + b b )", "eq_num": "(2)" } ], "section": "Probabilistic Assumptions for the Answer Span", "sec_num": "2" }, { "text": "The passage representations H s ,H e are often presoftmax layer representations from neural network with passage and question at the input. Symbols d and L denote the model-specific dimension and the passage length, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Assumptions for the Answer Span", "sec_num": "2" }, { "text": "Occasionally, the conditional factorization P (a s , a e |q, D) = P \u03b8 (a s )P \u03b8 (a e |a s ) is considered instead. The probabilities of span's start and end are computed the same way as in equation 2. 
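For concreteness, the following is a minimal PyTorch sketch of equations (1) and (2) under the independence assumption (tensor names and sizes are ours and purely illustrative; real systems additionally handle batching and padding masks).

```python
import torch
import torch.nn.functional as F

L, d = 384, 768                                # passage length, model dimension
H = torch.randn(d, L)                          # passage representations (here H_s = H_e = H)
w_s, b_s = torch.randn(d), torch.zeros(1)      # start head of eq. (2)
w_e, b_e = torch.randn(d), torch.zeros(1)      # end head of eq. (2)

log_p_start = F.log_softmax(w_s @ H + b_s, dim=-1)   # log P(a_s), a length-L vector
log_p_end   = F.log_softmax(w_e @ H + b_e, dim=-1)   # log P(a_e)

# Eq. (1) with P(a_s, a_e) = P(a_s)P(a_e), for one (q, D, a) triplet
# with gold answer boundaries a_s = 17, a_e = 19:
a_s, a_e = 17, 19
loss = -(log_p_start[a_s] + log_p_end[a_e])            # -log P(a_s) - log P(a_e)
```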
The difference is in the end representations H e = f (a s ), which now must be a function of the span's start a s .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Assumptions for the Answer Span", "sec_num": "2" }, { "text": "However, one does not need to apply any simplifying assumption and can instead compute the joint probability directly. We define a family of joint probability functions P \u03b8 (a s , a e ) with an arbitrary vector-to-vector similarity function f sim used for obtaining each span score (e.g., the dot product H s H e ) 4 . P \u03b8 (a s , a e ) = softmax(vec(f sim (H s , H e )))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Assumptions", "sec_num": "2.1" }, { "text": "(3) Finally, we define a multi-task compound objective (4) composing the joint and independent probability formulations, computed via a shared model \u03b8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Assumptions", "sec_num": "2.1" }, { "text": "\u2212 (q,D,a)\u2208D log P \u03b8 (a s , a e )P \u03b8 (a s )P \u03b8 (a e ) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Assumptions", "sec_num": "2.1" }, { "text": "Here P (a s )P (a e ) can be seen as an auxiliary objective for the more complex joint objective P \u03b8 (a s , a e ) used for decoding at test time. Empirically, we found the compound objective to be superior or equal to the other assumptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Assumptions", "sec_num": "2.1" }, { "text": "We use Transformers (Wolf et al., 2019) for the language representation model (LRM) implementation. Our experiments were done on 16GB GPUs using PyTorch (Paszke et al., 2019). For experiments with LRMs, we used the Adam optimizer with decoupled weight decay (Loshchilov and Hutter, 2017). The hyperparameters were the same as the default SQuADv1.1 hyperparameters proposed by the respective LRM authors, across all our datasets. For BIDAF, we tuned hyperparameters using Hyperopt (Bergstra et al., 2013) separately for the independent and compound objectives 5 . See Appendix D for further details.", "cite_spans": [ { "start": 20, "end": 39, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF32" }, { "start": 149, "end": 170, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF22" }, { "start": 253, "end": 282, "text": "(Loshchilov and Hutter, 2017)", "ref_id": "BIBREF21" }, { "start": 478, "end": 501, "text": "(Bergstra et al., 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "In all our experiments, we apply length filtering (LF). That is, probabilities P (a s = i, a e = j) are set to 0 iff j \u2212 i > \u03b6, where \u03b6 is a length threshold. Following prior work, we set \u03b6 = 30 in all of our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "Here we sum up the definitions of the similarity functions presented in the paper. We experimented with 5 similarity functions. Each similarity function operates on a start representation h s \u2208 R d and an end representation h e \u2208 R d , both column vectors from the matrices of boundary vectors H s , H e \u2208 R d\u00d7L , respectively. Note that d here is the model-specific dimension, L is the passage length, \u2022 denotes elementwise multiplication and ; denotes concatenation. 
The similarity functions above these representations are defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "\u2022 A dot product:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f dot (h s , h e ) = h s h e", "eq_num": "(5)" } ], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "\u2022 A weighted dot product:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f wdot (h s , h e ) = w [h s \u2022 h e ]", "eq_num": "(6)" } ], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "\u2022 An additive similarity:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f add (h s , h e ) = w [h s ; h e ]", "eq_num": "(7)" } ], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "\u2022 An additive similarity combined with weighted product:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f madd (h s , h e ) = w [h s ; h e ; h s \u2022 h e ]", "eq_num": "(8)" } ], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "\u2022 A multi-layer perceptron (MLP) as proposed by :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "f M LP (h s , h e ) = w \u03c3(W [h s ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "h e ]+b)+b 2 (9) where \u03c3(x) = ln(relu(x)) and ln denotes layer normalization (Ba et al., 2016) .", "cite_spans": [ { "start": 77, "end": 94, "text": "(Ba et al., 2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Similarity Functions", "sec_num": "3.1" }, { "text": "Our experiments are based on three EQA models: BERT-base and ALBERTxxlarge (Lan et al., 2020) are LRMs based on the self-supervised pretraining objective. During fine-tuning, each model receives the concatenation of question and passage are given as input. Outputs H \u2208 R d\u00d7L corresponding to the passage inputs of length L are then reduced to boundary probabilities by two vectors w s , w e as P (a b ) = softmax(w b H + b b ) where b \u2208 {s, e}.", "cite_spans": [ { "start": 75, "end": 93, "text": "(Lan et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Applied Models", "sec_num": "3.2" }, { "text": "To compute joint probability P (a s , a e ), start representations are computed using W \u2208 R d\u00d7d and b \u2208 R d (broadcasted) as H s = W H + b and end representations as H e = H. 
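Before stating the resulting distribution formally, the following is a minimal PyTorch sketch of this joint dot-product head together with the compound loss of equation (4) (our illustrative code, not the released implementation; batching, padding and the \u03b6 = 30 length filter are omitted).

```python
import torch
import torch.nn.functional as F

L, d = 384, 768
H = torch.randn(d, L)                           # LRM outputs for the passage tokens
W, b = torch.randn(d, d), torch.randn(d, 1)

H_s = W @ H + b                                 # start representations, d x L
H_e = H                                         # end representations,   d x L

scores = H_s.t() @ H_e                          # f_dot score for every (a_s, a_e) pair, L x L
log_p_span = F.log_softmax(scores.reshape(-1), dim=-1).reshape(L, L)

# Independent heads over the same representations serve as the auxiliary objective.
log_p_start = F.log_softmax(torch.randn(d) @ H, dim=-1)   # stand-in for eq. (2)
log_p_end   = F.log_softmax(torch.randn(d) @ H, dim=-1)

# Compound objective for gold boundaries (a_s, a_e), cf. eq. (4):
a_s, a_e = 17, 19
loss = -(log_p_span[a_s, a_e] + log_p_start[a_s] + log_p_end[a_e])

# Test-time decoding: argmax over spans with a_e >= a_s.
valid = torch.triu(torch.ones(L, L, dtype=torch.bool))
flat = torch.where(valid, log_p_span, torch.tensor(float('-inf'))).argmax()
s, e = divmod(flat.item(), L)
```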
A dot product f dot is used as the similarity measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applied Models", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (a s , a e ) = softmax(vec(H s H e ))", "eq_num": "(10)" } ], "section": "Applied Models", "sec_num": "3.2" }, { "text": "For modelling conditional probability factorization objective, we follow the implementation from (Lan et al., 2020) , and provide exact details in the Appendix B.", "cite_spans": [ { "start": 97, "end": 115, "text": "(Lan et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Applied Models", "sec_num": "3.2" }, { "text": "BIDAF (Seo et al., 2017) dominated the state-ofthe-art systems in 2016 and motivated a lot of following research work (Clark and Gardner, 2018; Yu et al., 2018) . Question and passage inputs are represented via the fusion of word-level embeddings from GloVe (Pennington et al., 2014) and character-level word embeddings obtained via a convolutional neural network. Next, a recurrent layer is applied to both. Independently represented questions and passages are then combined into a common representation via two directions of attention over their similarity matrix S. The similarity matrix is computed via multiplicative-additive interaction (11) between each pair of question vector q i and passage vector p j , where ; denotes concatenation and \u2022 stands for the Hadamard product.", "cite_spans": [ { "start": 6, "end": 24, "text": "(Seo et al., 2017)", "ref_id": "BIBREF26" }, { "start": 118, "end": 143, "text": "(Clark and Gardner, 2018;", "ref_id": "BIBREF5" }, { "start": 144, "end": 160, "text": "Yu et al., 2018)", "ref_id": null }, { "start": 258, "end": 283, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Applied Models", "sec_num": "3.2" }, { "text": "S ij = f madd (q i , p j ) = w [q i ; p j ; q i \u2022 p j ] (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applied Models", "sec_num": "3.2" }, { "text": "Common representations are then concatenated together with document representations yielding G and passed towards two more recurrent layers producing M and M 2 -first to obtain answer-start representations H s = [G; M ] and second to obtain answer-end representations 6 H e = [G;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applied Models", "sec_num": "3.2" }, { "text": "M 2 ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applied Models", "sec_num": "3.2" }, { "text": "The joint probability P (a s , a e ) is then computed over scores from vectorized similarity matrix of H s and H e using the 2-layer feed-forward network f M LP as a similarity function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applied Models", "sec_num": "3.2" }, { "text": "We evaluate our approaches on a wide spectrum of datasets. We do not split development datasets, as we use fixed hyperparameters with fixed amount of steps and use last checkpoint for our LRM experiments. This also makes our results directly comparable to other works Lan et al., 2020) . The statistics to all datasets are 6 For details, see formulae 2 to 4 in Seo et al. (2017) . shown in Table 1 . 
To focus only on the extractive part of QA and to keep the format the same, we use curated versions of the last 3 datasets as released in MrQA shared task (Fisch et al., 2019) . SQuADv1.1 (Rajpurkar et al., 2016 ) is a popular dataset composed from question, paragraphs and answer span annotation collected from the subset of Wikipedia passages.", "cite_spans": [ { "start": 268, "end": 285, "text": "Lan et al., 2020)", "ref_id": "BIBREF18" }, { "start": 323, "end": 324, "text": "6", "ref_id": null }, { "start": 361, "end": 378, "text": "Seo et al. (2017)", "ref_id": "BIBREF26" }, { "start": 555, "end": 575, "text": "(Fisch et al., 2019)", "ref_id": "BIBREF12" }, { "start": 588, "end": 611, "text": "(Rajpurkar et al., 2016", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 390, "end": 397, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "3.3" }, { "text": "SQuADv2.0 (Rajpurkar et al., 2018) is an extension of SQuADv1.1 with additional 50k questions and passages, which are topically similar to the question, but do not contain an answer.", "cite_spans": [ { "start": 10, "end": 34, "text": "(Rajpurkar et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.3" }, { "text": "Adversarial SQuAD (Jia and Liang, 2017) tests, whether the system can answer questions about paragraphs that contain adversarially inserted sentences, which are automatically generated to distract computer systems without changing the correct answer or misleading humans. In particular, our system is evaluated in ADDSENT adversary setting, which runs the model as a black box for each question on several paragraphs containing different adversarial sentences and picks the worst answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.3" }, { "text": "Natural Questions (Kwiatkowski et al., 2019 ) dataset consists of real users queries obtained from Google search engine. Each example is accompanied by a relevant Wikipedia article found by the search engine, and human annotation for long/short answer. The long answer is typically the most relevant paragraph from the article, while short answer consists of one or multiple entities or short text spans. We only consider short answers in this work.", "cite_spans": [ { "start": 18, "end": 43, "text": "(Kwiatkowski et al., 2019", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.3" }, { "text": "NewsQA (Trischler et al., 2017 ) is a crowdsourced dataset based on CNN news articles. Answers are short text spans and the questions are designed such that they require reasoning and inference besides simple text matching.", "cite_spans": [ { "start": 7, "end": 30, "text": "(Trischler et al., 2017", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.3" }, { "text": "TriviaQA (Joshi et al., 2017) consists of question-answer pairs from 14 different trivia quiz websites and independent evidence passages col-lected using Bing search from various sources such as news, encyclopedias, blog posts and others. Additional evidence is obtained from Wikipedia through entity linker.", "cite_spans": [ { "start": 9, "end": 29, "text": "(Joshi et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.3" }, { "text": "To improve the soundness of the presented results, we use several statistical tests. 
An exact match (EM) metric can be viewed as an average of samples from a Bernoulli distribution. By the central limit theorem, it is therefore reasonable to assume that EM approximately follows a normal distribution. We train 10 models for each presented LRM result, obtaining a sample of 10 EMs. The Anderson-Darling normality test (Stephens, 1974) is used to check this assumption -whether the sample truly comes from a normal distribution. Then we use the one-tailed paired t-test to check whether an improvement is significant. The improvement is considered significant iff the p-value < 0.05. We use the reference implementation from Dror et al. (2018).", "cite_spans": [ { "start": 405, "end": 421, "text": "(Stephens, 1974)", "ref_id": "BIBREF27" }, { "start": 706, "end": 724, "text": "Dror et al. (2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Testing", "sec_num": "3.4" }, { "text": "We now show the effectiveness of the proposed approaches. Each of the presented results is averaged from 10 training runs. Similarity functions. We analyzed the effect of different similarity functions over all models in Table 3. We found different similarity functions to work better with different architectures. Namely, for BIDAF, most of the similarity functions work on par with or worse than the independent objective. The exception is f M LP , which works significantly better. This is especially surprising because we tuned the hyperparameters with the f madd function. For BERT, most of the similarity functions performed better than the independent objective, and the simple dot-product f dot brought the most significant improvement of all. We choose f M LP for BIDAF and f dot for our LRMs for the rest of the experiments.", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 223, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "Comparison of objectives. Our main results -the performance of the independent (I), joint (J), joint-conditional (JC) and compound (I+J) objectives -are shown in Table 2. We note that the largest improvements can be seen for the exact match (EM) performance metric. In fact, in some cases objectives modelling the joint PMF lead to a degradation of F1 while improving EM (e.g., on the SQuADv1.1 and NewsQA datasets for BERT). Upon manual analysis of BERT's predictions based on 200 differences between the independent and compound models on SQuADv1.1, we found that in 10 cases (5%) the independent model chooses a larger span encompassing multiple potential answers, thus obtaining a non-zero F1 score. In 9 out of 10 of these cases, we found the compound model to pick just one of these potential answers 7 , obtaining either a full match or no F1 score at all. We found no cases of the compound model encompassing multiple potential answers in the analyzed sample.", "cite_spans": [], "ref_spans": [ { "start": 155, "end": 162, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "EM", "sec_num": null }, { "text": "Next, we remark that the compound objective outperformed the others in most of our experiments. In the BERT case, the compound objective performed significantly better than the independent objective on 5 out of 6 datasets. In the ALBERT case, the compound objective performed significantly better than the independent objective in 5 out of 6 cases and was on par in the last case. 
Comparing compound to joint objective in BERT case, the two behave almost equally, with compound objective significantly outperforming joint objective on the two SQuAD datasets and no significant differences for the other 4 datasets. However, in ALBERT case, the compound objective significantly improves results over joint objective in all but one case and is on par in this last case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM", "sec_num": null }, { "text": "Conditional objective. Our implementation of the conditional objective performs even or worse than independent objective in most cases. Upon investigation we found the model tends to be overconfident about start predictions and underconfident about its end predictions, often assigning high probability to single answer-start. In Table 5 , we analyze the top-5 most probable samples from BERT on each example of SQuADv1.1 dev data. We found that on average the conditional model kept it top-1 start prediction in 90% of subsequent top-2 to top-5 less probable answers, but kept its top-1 end prediction only in 4% of top-2 to top-5 subsequent answers. We found this statistic to be on par for start/end prediction for different objectives. Interestingly, the table also reveals that independent objective contains less diverse start/end tokens than joint objectives. Table 5 : Proportion of samples, on which top-1 prediction start/end token was kept as start/end token also in top-2 to top-5 subsequent predictions.", "cite_spans": [], "ref_spans": [ { "start": 330, "end": 337, "text": "Table 5", "ref_id": null }, { "start": 867, "end": 874, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "EM", "sec_num": null }, { "text": "Large improvements and degradation. Upon closer inspection of results, we found possible reasons for result degradation of the compound model on SQuADv2.0, and also its large improvements gained on NQ dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "For SQuADv2.0, the accuracies of no-answer detection for independent/joint/compound objectives in case of BERT models are 79.89/78.12/79.32. We found the same trend for ALBERT. We hypothesize, that this inferior performance of joint and compound models may be caused by the model having to learn a more complex problem of K 2 classes of all possible spans over input document, which is often more (e.g. for K = 512) than the size of the datasets, leaving the less of \"model capacity\" to this another task. To confirm that compound model is better at answer extraction step, we run all 10 checkpoints trained on SQuADv2.0 data with an answer, while masking model's no-answer option. The results shown in Table 6 support this hypothesis.", "cite_spans": [], "ref_spans": [ { "start": 703, "end": 710, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "On the other side, we found the large improvements over NQ might be exaggerated by the evaluation approach of MRQA, wherein the case of multi-span answers, choosing one of the spans from multi-span answer counts as correct. Upon closer result inspection, we found that the independent model here was prone to select the start of one span from multi-span answer and end of different span from multi-span answer. 
To quantify this behavior, we annotated 100 random predictions with multi-span answers in original NaturalQuestions on whether they pick just one span from multi-span answer (which follows from the MRQA formulation) or they encompass multiple spans. For independent/compound objectives we found 59/77 cases of picking just one of the spans and 22/4 cases of encompassing multiple spans from multi-span answer for BERT model and 57/81 and 33/10 cases for ALBERT respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Length filtering heuristic. Additionally, we found the benefit from the commonly used length filtering (LF) heuristic is negligible for models trained via any joint objective, as shown in Table 4 . Therefore, we find it unnecessary to use the heuristic anymore. In this experiment, we also include our results with BIDAF, which show significant improvement of compound objective on SQuADv1.1 dataset from other approaches.", "cite_spans": [], "ref_spans": [ { "start": 188, "end": 195, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Apart from example in Figure 1 , we provide more examples of different predictions 8 between models trained with independent and compound objective in Table 7 . In general, by doing manual analysis of errors, we noticed three types of trivially wrong errors being fixed by the compound objective model in BERT:", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 30, "text": "Figure 1", "ref_id": null }, { "start": 151, "end": 158, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "1. Uncertainty of the model causes it to assign high probabilities to two different answer boundaries. During decoding the start/end boundaries of two different answers are picked up (fourth row in Table 7 ).", "cite_spans": [], "ref_spans": [ { "start": 198, "end": 205, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "2. The model assigns high probability to answer surrounded by the paired punctuation marks (e. g. quotes).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "It chooses the answer without respecting the symmetry between paired punctuation marks (third row of Table 7 ).", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 108, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "3. Uncertainty of the model causes it to assign high probabilities to two spans containing the same answer string. This is the special case of problem (1) -while the model often chooses the correct answer, the boundaries of two different spans are selected (first row of Table 7 ).", "cite_spans": [], "ref_spans": [ { "start": 271, "end": 278, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "To quantify an occurrence of these errors, we study our best BERT and ALBERT checkpoint predictions for SQuADv1.1 validation data. For BERT, we found the most frequently occurring is the error type (1), for which we manually annotated 200 random differences between independent and compound model predictions. We found 5% of them to be the case of this error of the independent, and no case of this error for the compound model. 
Interestingly, 4 out of 10 of these cases were questions clearly asking about single entity, while independent model answered multiple entities, e.g., Q:Which male child of Ghengis Khan and B\u00f6rte was born last? A:Chagatai (1187-1241), \u00d6gedei (1189-1241), and Tolui. For the error type (2), we filtered all prediction differences (more than 1300 for BERT and ALBERT) down to cases, where either independent or compound prediction contained non-alphanumeric paired punctuation marks, which resulted in less than 30 cases for each. For BERT, 37% independent predictions from these cases contained an error type (2), while again no paired punctuation marks errors were observed for compound objective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "For the error type (3), we filter prediction differences down to cases, where independent or compound prediction contained the same prefix and suffix of length at least 2 (only 9 and 5 cases for BERT and ALBERT). From these, error type (3) occurred in 3 cases for BERT and in 1 case for ALBERT in case of independent and again we found no case for the compound for both models. Note the error type (3) can be fully alleviated by marginalizing over probabilities of top-K answer spans during the inference, as in (Das et al., 2019; Cheng et al., 2020 ) (see Appendix E for details). Interestingly, for ALBERT, we found only negligible amount of errors of type (1) and (2) for both objectives 9 .", "cite_spans": [ { "start": 512, "end": 530, "text": "(Das et al., 2019;", "ref_id": "BIBREF8" }, { "start": 531, "end": 549, "text": "Cheng et al., 2020", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "During manual analysis, we observed that, an uncertain models with an independent objective are prone to pick large answer spans. To illustrate, that spans retrieved with approaches modelling joint probability differ, we took the top 20 most probable spans from each model and averaged their length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "This was done for each example in the SQuADv1.1 test data. The histogram of these averages is shown in Figure 2 . For a fair comparison, these predictions were filtered via length filtering. Table 7 : Examples of predictions from SQuADv1.1 using BERT trained with independent and compound objective. ", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 191, "end": 198, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "One of the earliest works in EQA from Wang and Jiang (2017) experimented with generative models based on index sequence generation via pointer networks (Vinyals et al., 2015) and now traditional boundary models that focus on the prediction of start/end of an answer span. 
Their work showed a substantial improvement of conditional-factorization boundary models over the index sequence generative models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Follow-up work on EQA (Seo et al., 2017; Chen et al., 2017; Clark and Gardner, 2018; Yu et al., 2018; Cheng et al., 2020) and others considered using the assumption of independence in their objectives. Xiong et al. (2017) explored an iterative boundary model. They used an RNN and a highway maxout network to decode the start/end of the span independently over multiple timesteps, each time feeding the RNN with predictions from the previous time step until the prediction stopped changing. In their following work, Xiong et al. (2018) combined their objective with a reinforcement learning approach, in which the decoded spans from each timestep were treated as a trajectory. They argued that cross-entropy does not reflect F1 performance well enough, and defined a reward function equal to the F1 score. Finally, they used policy gradients as their auxiliary objective, showing a 1% improvement in terms of F1 score.", "cite_spans": [ { "start": 21, "end": 39, "text": "(Seo et al., 2017;", "ref_id": "BIBREF26" }, { "start": 40, "end": 58, "text": "Chen et al., 2017;", "ref_id": "BIBREF2" }, { "start": 59, "end": 83, "text": "Clark and Gardner, 2018;", "ref_id": "BIBREF5" }, { "start": 84, "end": 100, "text": "Yu et al., 2018;", "ref_id": null }, { "start": 101, "end": 120, "text": "Cheng et al., 2020)", "ref_id": "BIBREF3" }, { "start": 201, "end": 220, "text": "Xiong et al. (2017)", "ref_id": "BIBREF33" }, { "start": 509, "end": 528, "text": "Xiong et al. (2018)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Authors of recent LRMs like XLNet, ALBERT (Lan et al., 2020) or ELECTRA (Clark et al., 2020) use the conditional probability factorization P (a e |a s )P (a s ) for answer extraction in some cases 10 . Although the objective is not described in the mentioned papers (except for ELECTRA), we follow the recipe for modelling the conditional probability from their implementations in this work. We believe this is the first official comparison of this objective w.r.t. the others.", "cite_spans": [ { "start": 43, "end": 61, "text": "(Lan et al., 2020)", "ref_id": "BIBREF18" }, { "start": 73, "end": 93, "text": "(Clark et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "The work most similar to ours is the RASOR system (Lee et al., 2016). In their work, the authors compared various objectives -binary answer classification of every input token, BIO sequence classification with a CRF layer on top of their model, and most importantly the joint objective, which turned out to work the best. However, in our experiments, training with the joint objective alone does not always perform that well. For BIDAF, we failed to find hyperparameters for which the model converged to results similar to the other approaches.", "cite_spans": [ { "start": 45, "end": 63, "text": "(Lee et al., 2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "This paper closely studies the objectives used within extractive question answering (EQA). 
It identifies the commonly used independent probability model as a source of trivially wrong answers. As a remedy, it experiments with various ways of learning the joint span probability. Finally, it shows how the compound objective -the combination of the independent and joint probabilities in one objective -improves statistical EQA systems across 6 datasets without using any additional data. Using the proposed approach, we were able to reach significant improvements across a wide spectrum of datasets, including +1.28 EM on Adversarial SQuAD and +2.07 EM on NaturalQuestions for BERT-base. We performed a thorough manual analysis to understand what happened to the trivially wrong answers, and we found that most of the cases disappear. We also found that independent models tend to \"overfit\" to the F1 metric by encompassing multiple possible answer spans, which would explain the effect of joint objectives improving the EM far more significantly than the F1. We showed that the samples from the joint model contain the greatest start/end token diversity. We further hypothesize that having diverse answers may be especially beneficial for the answer reranking step commonly used in QA (Fajcik et al., 2021; Iyer et al., 2021). In addition, we also identified the reason for the performance decrease with the compound objective on SQuADv2.0 -the no-answer classifier trained within the same model performs worse -and we leave the solution for this deficiency for future work.", "cite_spans": [ { "start": 1253, "end": 1274, "text": "(Fajcik et al., 2021;", "ref_id": "BIBREF11" }, { "start": 1275, "end": 1293, "text": "Iyer et al., 2021)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Predictions from BERT-base-compound", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confidence", "sec_num": null }, { "text": "Christ and His salvation 10.9 \"Christ and His salvation\" 4.7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "71.8", "sec_num": null }, { "text": "Christ and His salvation\" 4.6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "71.8", "sec_num": null }, { "text": "Luther's rediscovery of \"Christ and His salvation 3.1 \"Christ and His salvation 1.2 Luther's rediscovery of \"Christ and His salvation\" 0.8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "71.8", "sec_num": null }, { "text": "Luther's rediscovery Question: How many species of bird and mammals are there in the Amazon region? Passage: The region is home to about 2.5 million insect species, tens of thousands of plants, and some 2,000 birds and mammals. To date, at least 40,000 plant species, 2,200 fishes, 1,294 birds, 427 mammals, 428 amphibians, and 378 reptiles have been scientifically classified in the region. One in five of all the bird species in the world live in the rainforests of the Amazon, and one in five of the fish species live in Amazonian rivers and streams. Scientists have described between 96,660 and 128,843 invertebrate species in Brazil alone. Ground Truth: 2,000", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "71.8", "sec_num": null }, { "text": "Confidence Predictions from BERT-base 37.0 2,000 birds and mammals. To date, at least 40,000 plant species, 2,200 fishes, 1,294 birds, 427 34.6 427 27.7 2,000 0.2 1,294 birds, 427 0.2 427 mammals 0.1 2,000 birds 0.1 2,000 birds and mammals ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "71.8", "sec_num": null }, { "text": "Predictions from BERT-base-compound 71.7 427 21.5 2,000 5.1 2,000 birds and mammals. 
To date, at least 40,000 plant species, 2,200 fishes, 1,294 birds, 427 0.8 some 2,000 0.2 427 mammals 0.1 1,294 birds, 427 0.1 2,000 birds and mammals. To date, at least 40,000 plant species, 2,200 fishes, 1,294 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confidence", "sec_num": null }, { "text": "Some of the recent LRMs assume conditional factorization of span's PMF. For comparison with our joint objective, we reimplemented the conditional objective used in ALBERT (Lan et al., 2020) . First, the probabilities P (a s ) for the start position are computed in the same manner as for the independent objective -by applying a linear transformation layer on top of representations H \u2208 R d\u00d7L from the last layer of the LRM, where d is the model dimension and L denotes the input sequence length.", "cite_spans": [ { "start": 171, "end": 189, "text": "(Lan et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "B Conditional Objective", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (a s ) \u221d exp (w s H + b s )", "eq_num": "(12)" } ], "section": "B Conditional Objective", "sec_num": null }, { "text": "During the validation, top k (k = 10 in our experiments) start positions are selected from these probabilities, while in the training phase, we apply teacher forcing by only selecting the correct start position. Representation of i-th start position h i from the last layer of the LRM corresponding to the selected position is then concatenated with representations corresponding to all the other positions k = 0..L into matrix C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Conditional Objective", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 -[h 0 ; h i ] - -[h 1 ; h i ] - . . . -[h n ; h i ] - \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb", "eq_num": "(13)" } ], "section": "B Conditional Objective", "sec_num": null }, { "text": "Subsequently, a layer with tanh activation is applied on this matrix C, followed by a linear transformation to obtain the end probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Conditional Objective", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (a e |a s = i) \u221d exp (w c tanh (W C + b ) + b)", "eq_num": "(14)" } ], "section": "B Conditional Objective", "sec_num": null }, { "text": "For each start position we again select top k end positions, to obtain k 2 -best list of answer spans. In contrast to the official ALBERT implementation, we omitted a layer normalization after tanh layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Conditional Objective", "sec_num": null }, { "text": "One may ask what complexity joint modelling objectives come with independently of the underlying architecture. Given that L is the length of the input's passage and d is the model dimension, the independent objective contains only linear transformation and is in O dL for time and memory, assuming the multiplication and addition are constant operations. 
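To make this comparison concrete, the following small PyTorch sketch lists the score computations analysed in this appendix (tensor names are ours and purely illustrative; the timings quoted later in this appendix were measured on full models, not on this snippet).

```python
import torch

L, d = 384, 768
H_s, H_e = torch.randn(d, L), torch.randn(d, L)

# Independent objective: two linear projections over the passage, O(dL).
w_s, w_e = torch.randn(d), torch.randn(d)
start_scores = w_s @ H_s                       # (L,)
end_scores   = w_e @ H_e                       # (L,)

# Joint f_dot: a single (L x d)(d x L) matmul, O(dL^2) in theory, but one
# well-optimized matmul in practice.
dot_scores = H_s.t() @ H_e                     # (L, L)

# Joint f_wdot: reuse the same matmul after weighting the end vectors.
w = torch.randn(d)
wdot_scores = H_s.t() @ (w.unsqueeze(1) * H_e)                     # (L, L)

# Joint f_add: project first (O(dL)), then a broadcasted outer sum (O(L^2)).
w1, w2 = torch.randn(d), torch.randn(d)
add_scores = (w1 @ H_s).unsqueeze(1) + (w2 @ H_e).unsqueeze(0)     # (L, L)
```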
For the rest of this analysis, we will denote both time and memory complexities as just complexity, as they are the same for the analyzed cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Addressing the Complexity", "sec_num": null }, { "text": "The conditional objective increases the complexity for both only constantly, having an extra feed-forward network for end token representations. However, one may experience a significant computational slowdown, because of the beam search. Having a beam size k and a minibatch size b, the end probabilities cannot be computed in parallel with start probabilities, and have to be computed for the kb cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Addressing the Complexity", "sec_num": null }, { "text": "For the direct joint probability modelling, the complexity largely depends on the similarity function. The easiest case is f dot , where in theory the complexity rises to O dL 2 , but in practice the dot product is well optimized and has a barely noticeable impact on the speed or memory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Addressing the Complexity", "sec_num": null }, { "text": "For the f add the complexity is given by the linear projection H * w * being in O dL and outer summation of two vectors H s w 1 \u2295 H e w 2 , which is in O L 2 , where w = [w 1 , w 2 ] and H * \u2208 R n\u00d7d are the start/end representation matrices. Therefore the complexity is O dL + L 2 . We observed that in practice this approach is not very different from f dot , probably due to d being close to L.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Addressing the Complexity", "sec_num": null }, { "text": "Next, a weighted product f wdot can be efficiently implemented as H s (w \u2022 H e ), where w is broadcasted over every end representation in H e . In this case, the complexity stays the same as for f dot .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Addressing the Complexity", "sec_num": null }, { "text": "To demonstrate that in practice the speed and memory requirements between independent and joint approach are comparable, one BERT epoch on SQuADv1.1 took about 47 minutes and 4.2GB of memory with the same batch size 2 on 12GB 2080Ti GPU with both objective variants. We observed the same requirements for all direct joint probability modelling methods mentioned so far.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Addressing the Complexity", "sec_num": null }, { "text": "Finally, the most complex approach is clearly f M LP . While an a theoretical time and memory complexity of an efficient implementation 11 is in O d 2 L + dL 2 , the complexity of this approach can be improved by pruning down the number of possible spans (and the probability space). Assuming the maximum length of the span is k L, one can reduce the complexity to O d 2 L + dLk (an approach adopted in ). To illustrate this complexity, BERT model with the full probability space on SQuADv1.1 with batch size 2 took 76 minutes per epoch while allocating 8.2GB of GPU memory (we were unable to fit larger batch size to 12GB GPU).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Addressing the Complexity", "sec_num": null }, { "text": "The exact hyperparameters used in this work are documented in our code. We note that for BERT and ALBERT, we simply followed the hyperparameters proposed by the authors for SQuADv1.1. 
In case of LRM models, each input context is split into windows as proposed by . Each input sequence has maximum length 384, questions are truncated to 64 tokens and context is split with overlap stride 128. For SQuADv2.0, we follow the BERT's approach for computing the noanswer logit in test-time. Having the set of k windows W e for each example e, we compute the nullscore ns w = logitP (a s = 0) + logitP (a e = 0) for each window w \u2208 W e . For joint and compound objectives ns w = logitP (a s = 0, a e = 0). Defining that for each window w the best nonnull answer logit is a w , the no-answer logit is then given by the difference of lowest null-score and best-answer score \u0393 = min w\u2208We (ns w ) \u2212 max w\u2208We (a w ) among all windows of example e. The threshold for \u0393 is determined on the validation data via official script.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D Hyperparameters", "sec_num": null }, { "text": "To alleviate the error type (3) from section 5, we experimented with marginalizing over probabilities of top-100 answers (so-called surface form filtering). This is done via summing the probabilities into the most probable string occurrence, and setting the probability of the rest to 0. The results for all trained models averaged over 10 checkpoints are presented in Table 8 . Note this approach sometimes hurts performance, especially in the case of joint probability approaches, where this error type happens very rarely.", "cite_spans": [], "ref_spans": [ { "start": 369, "end": 376, "text": "Table 8", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "E Marginalizing Over the Same String Forms", "sec_num": null }, { "text": "https://github.com/KNOT-FIT-BUT/ JointSpanExtraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We define 'trivially wrong' as not resembling any string form human would answer, e.g., the first or the second last answer ofFigure 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For brevity, q, D dependencies are further omitted and bias terms are broadcasted along dimension L.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Here, we slightly abuse the notation for the sake of generality. See Subsection 3.2 for specific applications.5 We used f madd similarity during parameter tuning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For instance, inTable 7, row 4, column 3, we consider 2,000; 40,000; 2,200; 1,294 and 427 as potential answers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We chose to analyze the different predictions, as the model is usually more uncertain in these borderline cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The full difference of BERT's and ALBERT's predictions and manual analysis can be found in the supplementary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For instance, ALBERT uses conditional objective for SQuADv2.0, but not for SQuADv1.1. 
worse -and we leave the solution for this deficiency for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The linear transformation d \u00d7 2d can be applied to each start or end vector separately, and only then the start/end vectors have to be outer-summed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the Czech Ministry of Education, Youth and Sports, subprogram INTER-COST, project code: LTC18054. The computation used the infrastructure supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90140).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Hyperopt: A python library for optimizing the hyperparameters of machine learning algorithms", "authors": [ { "first": "James", "middle": [], "last": "Bergstra", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Yamins", "suffix": "" }, { "first": "D", "middle": [], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Cox", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 12th Python in science conference", "volume": "13", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Bergstra, Dan Yamins, David D Cox, et al. 2013. Hyperopt: A python library for optimizing the hyper- parameters of machine learning algorithms. In Pro- ceedings of the 12th Python in science conference, volume 13, page 20. Citeseer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Reading Wikipedia to answer opendomain questions", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1870--1879", "other_ids": { "DOI": [ "10.18653/v1/P17-1171" ] }, "num": null, "urls": [], "raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870- 1879, Vancouver, Canada. Association for Computa- tional Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Probabilistic assumptions matter: Improved models for distantlysupervised document-level question answering", "authors": [ { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5657--5667", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.501" ] }, "num": null, "urls": [], "raw_text": "Hao Cheng, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2020. Probabilistic assump- tions matter: Improved models for distantly- supervised document-level question answering. 
In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5657- 5667, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Posterior differential regularization with f-divergence for improving model robustness", "authors": [ { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lis", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Yaoliang", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1078--1089", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Cheng, Xiaodong Liu, Lis Pereira, Yaoliang Yu, and Jianfeng Gao. 2021. Posterior differential regu- larization with f-divergence for improving model ro- bustness. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 1078-1089, Online. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Simple and effective multi-paragraph reading comprehension", "authors": [ { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "845--855", "other_ids": { "DOI": [ "10.18653/v1/P18-1078" ] }, "num": null, "urls": [], "raw_text": "Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehen- sion. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 845-855, Melbourne, Australia. Association for Computational Linguis- tics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "ELECTRA: pretraining text encoders as discriminators rather than generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pre- training text encoders as discriminators rather than generators. In 8th International Conference on", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning Representations", "authors": [], "year": 2020, "venue": "", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. 
OpenReview.net.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Multi-step retrieverreader interaction for scalable open-domain question answering", "authors": [ { "first": "Rajarshi", "middle": [], "last": "Das", "suffix": "" }, { "first": "Shehzaad", "middle": [], "last": "Dhuliawala", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations, ICLR 2019", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retriever- reader interaction for scalable open-domain ques- tion answering. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The hitchhiker's guide to testing statistical significance in natural language processing", "authors": [ { "first": "Rotem", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Gili", "middle": [], "last": "Baumer", "suffix": "" }, { "first": "Segev", "middle": [], "last": "Shlomov", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1383--1392", "other_ids": { "DOI": [ "10.18653/v1/P18-1128" ] }, "num": null, "urls": [], "raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Re- ichart. 2018. The hitchhiker's guide to testing statis- tical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Aus- tralia. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "R2-d2: A modular baseline for open-domain question answering", "authors": [ { "first": "Martin", "middle": [], "last": "Fajcik", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Docekal", "suffix": "" }, { "first": "Karel", "middle": [], "last": "Ondrej", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Smrz", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2021", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Fajcik, Martin Docekal, Karel Ondrej, and Pavel Smrz. 2021. R2-d2: A modular baseline for open-domain question answering. In Findings of the Association for Computational Linguistics: EMNLP 2021, Punta Cana, Dominican Republic. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "MRQA 2019 shared task: Evaluating generalization in reading comprehension", "authors": [ { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Machine Reading for Question Answering", "volume": "", "issue": "", "pages": "1--13", "other_ids": { "DOI": [ "10.18653/v1/D19-5801" ] }, "num": null, "urls": [], "raw_text": "Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eu- nsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of the 2nd Work- shop on Machine Reading for Question Answering, pages 1-13, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "RECONSIDER: Improved reranking using span-focused cross-attention for open domain question answering", "authors": [ { "first": "Srinivasan", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1280--1287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srinivasan Iyer, Sewon Min, Yashar Mehdad, and Wen-tau Yih. 2021. RECONSIDER: Improved re- ranking using span-focused cross-attention for open domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1280-1287, On- line. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Adversarial examples for evaluating reading comprehension systems", "authors": [ { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2021--2031", "other_ids": { "DOI": [ "10.18653/v1/D17-1215" ] }, "num": null, "urls": [], "raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1601--1611", "other_ids": { "DOI": [ "10.18653/v1/P17-1147" ] }, "num": null, "urls": [], "raw_text": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale dis- tantly supervised challenge dataset for reading com- prehension. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Van- couver, Canada. Association for Computational Lin- guistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Relevance-guided supervision for openqa with colbert", "authors": [ { "first": "Omar", "middle": [], "last": "Khattab", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" }, { "first": "Matei", "middle": [], "last": "Zaharia", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.00814" ] }, "num": null, "urls": [], "raw_text": "Omar Khattab, Christopher Potts, and Matei Zaharia. 2020. Relevance-guided supervision for openqa with colbert. 
arXiv preprint arXiv:2007.00814.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Natural questions: A benchmark for question answering research", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Jennimaria", "middle": [], "last": "Palomaki", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Redfield", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Danielle", "middle": [], "last": "Epstein", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Kelcey", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Dai", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "452--466", "other_ids": { "DOI": [ "10.1162/tacl_a_00276" ] }, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Jacob De- vlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question an- swering research. Transactions of the Association for Computational Linguistics, 7:452-466.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on Learning Representations", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th Inter- national Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. 
OpenReview.net.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Latent retrieval for weakly supervised open domain question answering", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6086--6096", "other_ids": { "DOI": [ "10.18653/v1/P19-1612" ] }, "num": null, "urls": [], "raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 6086-6096, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning recurrent span representations for extractive question answering", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shimi", "middle": [], "last": "Salant", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.01436" ] }, "num": null, "urls": [], "raw_text": "Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan Berant. 2016. Learning recurrent span representations for ex- tractive question answering. arXiv preprint arXiv:1611.01436.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Fixing weight decay regularization in adam", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.05101" ] }, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. 
arXiv preprint arXiv:1711.05101.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "K\u00f6pf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Raison", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Sasank", "middle": [], "last": "Chilamkurthy", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Steiner", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In Advances in Neural Informa- tion Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Know what you don't know: Unanswerable questions for SQuAD", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "784--789", "other_ids": { "DOI": [ "10.18653/v1/P18-2124" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for SQuAD. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784- 789, Melbourne, Australia. Association for Compu- tational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": { "DOI": [ "10.18653/v1/D16-1264" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Bidirectional attention flow for machine comprehension", "authors": [ { "first": "Min Joon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Aniruddha", "middle": [], "last": "Kembhavi", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In 5th Inter- national Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Con- ference Track Proceedings. OpenReview.net.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Edf statistics for goodness of fit and some comparisons", "authors": [ { "first": "A", "middle": [], "last": "Michael", "suffix": "" }, { "first": "", "middle": [], "last": "Stephens", "suffix": "" } ], "year": 1974, "venue": "Journal of the American statistical Association", "volume": "69", "issue": "347", "pages": "730--737", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael A Stephens. 1974. Edf statistics for goodness of fit and some comparisons. 
Journal of the Ameri- can statistical Association, 69(347):730-737.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "NewsQA: A machine comprehension dataset", "authors": [ { "first": "Adam", "middle": [], "last": "Trischler", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xingdi", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Harris", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Bachman", "suffix": "" }, { "first": "Kaheer", "middle": [], "last": "Suleman", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "191--200", "other_ids": { "DOI": [ "10.18653/v1/W17-2623" ] }, "num": null, "urls": [], "raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. NewsQA: A machine compre- hension dataset. In Proceedings of the 2nd Work- shop on Representation Learning for NLP, pages 191-200, Vancouver, Canada. Association for Com- putational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Pointer networks", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Meire", "middle": [], "last": "Fortunato", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2692--2700", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems 28: Annual Con- ference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2692-2700.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Machine comprehension using match-lstm and answer pointer", "authors": [ { "first": "Shuohang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuohang Wang and Jing Jiang. 2017. Machine com- prehension using match-lstm and answer pointer. In 5th International Conference on Learning Repre- sentations, ICLR 2017, Toulon, France, April 24- 26, 2017, Conference Track Proceedings. OpenRe- view.net.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Making neural QA as simple as possible but not simpler", "authors": [ { "first": "Dirk", "middle": [], "last": "Weissenborn", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Wiese", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Seiffe", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "271--280", "other_ids": { "DOI": [ "10.18653/v1/K17-1028" ] }, "num": null, "urls": [], "raw_text": "Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. In Proceedings of the 21st Confer- ence on Computational Natural Language Learning (CoNLL 2017), pages 271-280, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Huggingface's transformers: Stateof-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, et al. 2019. Huggingface's transformers: State- of-the-art natural language processing. ArXiv, pages arXiv-1910.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Dynamic coattention networks for question answering", "authors": [ { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for ques- tion answering. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Pro- ceedings. OpenReview.net.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "DCN+: mixed objective and deep residual coattention for question answering", "authors": [ { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Caiming Xiong, Victor Zhong, and Richard Socher. 2018. DCN+: mixed objective and deep residual coattention for question answering. In 6th Inter- national Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. 
OpenRe- view.net.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5754--5764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Con- ference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancou- ver, BC, Canada, pages 5754-5764.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Qanet: Combining local convolution with global self-attention for reading comprehension", "authors": [ { "first": "Le", "middle": [], "last": "", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. In 6th International Conference on Learning Represen- tations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenReview.net.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Histograms of average character length of top-20 predicted answers from BERT trained with different objectives compared with character length of ground-truth answers.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Example of answer span distribution from model trained via independent objective.", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Example of answer span distribution from model trained via compound objective.", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "Example of answer span distribution from model trained via independent objective.", "num": null, "type_str": "figure" }, "FIGREF4": { "uris": null, "text": "Example of answer span distribution from model trained via compound objective.", "num": null, "type_str": "figure" }, "TABREF1": { "type_str": "table", "html": null, "content": "", "text": "Number of examples per each dataset used in this paper.", "num": null }, "TABREF2": { "type_str": "table", "html": null, "content": "
Objective | BIDAF EM/F1 | BERT EM/F1
I | 66.16/76.19 | 81.31/88.65
I+J (f_dot) | 64.30/73.84 | 81.83/88.52
I+J (f_add) | 66.04/75.10 | 81.52/88.47
I+J (f_wdot) | 66.10/75.16 | 81.35/88.29
I+J (f_madd) | 66.11/75.23 | 81.45/88.44
I+J (f_MLP) | 66.96/75.90 | 81.61/88.44
", "text": "", "num": null }, "TABREF3": { "type_str": "table", "html": null, "content": "", "text": "A comparison of similarity functions in the models trained via compound objective (I+J) and independent objective (I).", "num": null }, "TABREF4": { "type_str": "table", "html": null, "content": "
Model | Obj | SQ1 | SQ2 | AdvSQ | TriviaQA | NQ | NewsQA
BERT | I | 81.31/88.65 | 73.89/76.74 | 47.04/52.62 | 62.88/69.85 | 65.66/78.20 | 52.39/67.17
BERT | J | 81.33/88.13 | 72.66/75.04 | 48.10/53.54 | 63.93/69.90 | 67.75/78.70 | 52.73/66.41
BERT | JC | 81.22/88.29 | 71.51/74.38 | 46.07/51.35 | 62.82/69.94 | 66.48/77.34 | 52.39/67.05
BERT | I+J | 81.83/88.52 | 73.53/76.14 | 48.32/53.47 | 63.73/69.75 | 67.75/78.81 | 52.96/66.83
ALBERT | I | 88.55/94.62 | 87.07/90.02 | 68.12/73.54 | 74.7/80.33 | 70.78/83.42 | 59.95/75.0
ALBERT | J | 88.84/94.64 | 86.87/89.71 | 68.90/74.17 | 75.11/80.41 | 73.36/84.01 | 60.19/74.28
ALBERT | JC | 88.60/94.59 | 86.78/89.73 | 68.0/73.25 | - | 72.33/83.35 | 58.52/72.74
ALBERT | I+J | 89.02/94.77 | 87.13/89.98 | 69.57/74.76 | 75.31/80.43 | 73.32/84.08 | 60.41/74.46
", "text": "", "num": null }, "TABREF5": { "type_str": "table", "html": null, "content": "
Model | Filtering | I | J | JC | I+J
BIDAF | -/LF | 66.16/76.19 58.24/67.42 65.85/75.94 --- 66.95/75.89 66.96/75.90
BERT | - | 80.98/88.40 | 81.30/88.11 | 81.16/88.25 | 81.80/88.50
BERT | LF | 81.31/88.65 | 81.33/88.13 | 81.22/88.29 | 81.83/88.52
ALBERT | - | 88.39/94.51 | 88.82/94.64 | 88.57/94.57 | 89.01/94.77
ALBERT | LF | 88.55/94.63 | 88.84/94.64 | 88.60/94.59 | 89.02/94.77
", "text": "EM/F1 results of different objectives through the spectrum of datasets. Bold results mark best EM across the objectives. Italicised I+J results mark significant improvement over the independent objective.", "num": null }, "TABREF6": { "type_str": "table", "html": null, "content": "", "text": "SQuADv1.1 EM/F1 results with length filtering (LF) computed from the same set of checkpoints. Differences larger than 0.1 are in bold.", "num": null }, "TABREF9": { "type_str": "table", "html": null, "content": "
", "text": "Performance of SQuADv2.0 models on answerable examples of SQuADv2.0.", "num": null }, "TABREF10": { "type_str": "table", "html": null, "content": "
Question | Passage | Independent | Compound | Ground Truth
What company won a free advertisement due to the QuickBooks contest? | QuickBooks sponsored a \"Small Business Big Game\" contest, in which Death Wish Coffee had a 30-second commercial aired free of charge courtesy of QuickBooks. Death Wish Coffee beat out nine other contenders from across the United States for the free advertisement. | Death Wish Coffee had a 30-second commercial aired free of charge courtesy of QuickBooks. Death Wish Coffee | Death Wish Coffee | Death Wish Coffee
In what city's Marriott did the Panthers stay? | The Panthers used the San Jose State practice facility and stayed at the San Jose Marriott. The Broncos practiced at Stanford University and stayed at the Santa Clara Marriott. | San Jose State practice facility and stayed at the San Jose | San Jose | San Jose
What was the first point of the Reformation? | Luther's rediscovery of \"Christ and His salvation\" was the first of two points that became the foundation for the Reformation. His railing against the sale of indulgences was based on it. | Christ and His salvation\" | Christ and His salvation | Christ and His salvation
How many species of bird and mammals are there in the Amazon region? | The region is home to about 2.5 million insect species, tens of thousands of plants, and some 2,000 birds and mammals. To date, at least 40,000 plant species, 2,200 fishes, 1,294 birds, 427 mammals, 428 amphibians, and 378 reptiles have been scientifically classified in the region. One in five of all the bird species in the world live in the rainforests of the Amazon, and one in five of the fish species live in Amazonian rivers and streams. Scientists have described between 96,660 and 128,843 invertebrate species in Brazil alone. | 2,000 birds and mammals. To date, at least 40,000 plant species, 2,200 fishes, 1,294 birds, 427 | 427 | 2,000
What was found to be at fault for the fire in the cabin on Apollo 1 regarding the CM design? | NASA immediately convened an accident review board, overseen by both houses of Congress. While the determination of responsibility for the accident was complex, the review board concluded that \"deficiencies existed in Command Module design, workmanship and quality control.\" At the insistence of NASA Administrator Webb, North American removed Harrison Storms as Command Module program manager. Webb also reassigned Apollo Spacecraft Program Office (ASPO) Manager Joseph Francis Shea, replacing him with George Low. | deficiencies existed in Command Module design, workmanship and quality control.\" | Harrison Storms | deficiencies
", "text": "", "num": null }, "TABREF11": { "type_str": "table", "html": null, "content": "
Question: What was the first point of the
Reformation?
Passage: Luther's rediscovery of \"Christ and His
salvation\" was the first of two points that became
the foundation for the Reformation. His railing
against the sale of indulgences was based on it.
Ground Truth: Christ and His salvation
Confidence | Predictions from BERT-base
59.7 | Christ and His salvation\"
35.4 | Christ and His salvation
2.3 | Christ
1.3 | \"Christ and His salvation\"
0.8 | \"Christ and His salvation
0.1 | Christ and His salvation\" was
0.1 | \"Christ
", "text": "A Examples of Answer Span DistributionThis section provides a deeper insight towards most probable elements of answer span PMF.", "num": null }, "TABREF12": { "type_str": "table", "html": null, "content": "
Model | Filtering | I | J | I+J
BIDAF | LF | 66.16/76.19 | 58.24/67.42 | 66.96/75.90
BIDAF | SF | 66.20/76.21 | - | 66.99/75.90
BERT | LF | 81.31/88.65 | 81.33/88.13 | 81.83/88.52
BERT | SF | 81.38/88.68 | 81.23/87.97 | 81.65/88.36
ALBERT | LF | 88.55/94.63 | 88.84/94.64 | 89.02/94.77
ALBERT | SF | 88.53/94.00 | 88.28/94.10 | 88.68/94.49
", "text": "", "num": null }, "TABREF13": { "type_str": "table", "html": null, "content": "", "text": "SQuADv1.1 EM/F1 results with length filtering (LF) and LF + surface form filtering (SF).", "num": null } } } }