{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:42.416424Z"
},
"title": "Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap",
"authors": [
{
"first": "Kazutoshi",
"middle": [],
"last": "Shinoda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Saku",
"middle": [],
"last": "Sugawara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Informatics",
"location": {}
},
"email": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Aizawa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Question answering (QA) models for reading comprehension have been demonstrated to exploit unintended dataset biases such as question-context lexical overlap. This hinders QA models from generalizing to underrepresented samples such as questions with low lexical overlap. Question generation (QG), a method for augmenting QA datasets, can be a solution for such performance degradation if QG can properly debias QA datasets. However, we discover that recent neural QG models are biased towards generating questions with high lexical overlap, which can amplify the dataset bias. Moreover, our analysis reveals that data augmentation with these QG models frequently impairs the performance on questions with low lexical overlap, while improving that on questions with high lexical overlap. To address this problem, we use a synonym replacement-based approach to augment questions with low lexical overlap. We demonstrate that the proposed data augmentation approach is simple yet effective to mitigate the degradation problem with only 70k synthetic examples. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Question answering (QA) models for reading comprehension have been demonstrated to exploit unintended dataset biases such as question-context lexical overlap. This hinders QA models from generalizing to underrepresented samples such as questions with low lexical overlap. Question generation (QG), a method for augmenting QA datasets, can be a solution for such performance degradation if QG can properly debias QA datasets. However, we discover that recent neural QG models are biased towards generating questions with high lexical overlap, which can amplify the dataset bias. Moreover, our analysis reveals that data augmentation with these QG models frequently impairs the performance on questions with low lexical overlap, while improving that on questions with high lexical overlap. To address this problem, we use a synonym replacement-based approach to augment questions with low lexical overlap. We demonstrate that the proposed data augmentation approach is simple yet effective to mitigate the degradation problem with only 70k synthetic examples. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Question answering (QA) for machine reading comprehension is a central task in natural language understanding, which requires a model to answer questions given textual contexts. Pretrained language models have been successfully applied to QA and achieve scores higher than those of humans on benchmark datasets such as SQuAD (Rajpurkar et al., 2016) . However, QA models have been demonstrated to exploit unintended dataset biases instead of the intended solutions, and lack robustness to challenge test sets whose distributions are different from those of training sets (Jia and Liang, 2017; Sugawara et al., 2018; Gan and Ng, 2019; Ribeiro et al., 2019) , which could be a serious problem in real-world applications.",
"cite_spans": [
{
"start": 325,
"end": 349,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 571,
"end": 592,
"text": "(Jia and Liang, 2017;",
"ref_id": "BIBREF9"
},
{
"start": 593,
"end": 615,
"text": "Sugawara et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 616,
"end": 633,
"text": "Gan and Ng, 2019;",
"ref_id": "BIBREF5"
},
{
"start": 634,
"end": 655,
"text": "Ribeiro et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Question generation (QG) has also been extensively studied to augment QA datasets (Du et al., 2017; Du and Cardie, 2018) . It is demonstrated that QG can improve not only the in-domain generalization but also the out-of-distribution generalization capability of QA models (Zhang and Bansal, 2019; Shinoda et al., 2021) . In other areas, data augmentation techniques have been successfully used to reduce dataset biases and increase the performance of machine learning models on under-represented samples in vision (McLaughlin et al., 2015; Wong et al., 2016) and language (Zhao et al., 2018; Zhou and Bansal, 2020) . Thus, we assume that QG is useful to debias QA models and improve its robustness by augmenting QA datasets. However, it has not been fully studied whether existing QG models can contribute to debiasing QA models (i.e., improve the robustness of QA models to under-represented questions).",
"cite_spans": [
{
"start": 82,
"end": 99,
"text": "(Du et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 100,
"end": 120,
"text": "Du and Cardie, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 272,
"end": 296,
"text": "(Zhang and Bansal, 2019;",
"ref_id": "BIBREF30"
},
{
"start": 297,
"end": 318,
"text": "Shinoda et al., 2021)",
"ref_id": "BIBREF22"
},
{
"start": 514,
"end": 539,
"text": "(McLaughlin et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 540,
"end": 558,
"text": "Wong et al., 2016)",
"ref_id": "BIBREF27"
},
{
"start": 572,
"end": 591,
"text": "(Zhao et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 592,
"end": 614,
"text": "Zhou and Bansal, 2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we focus on question-context lexical overlap, inspired by the findings presented in Sugawara et al. (2018) . Their work revealed that questions having low lexical overlap with context tend to require reasoning skills rather than superficial word matching, and existing QA models are not robust to these questions (Table 1) . To see if data augmentation with recent neural QG models can improve the robustness to those questions, we analyze the performance of BERT (Devlin et al., 2019) trained on SQuAD v1.1 (Rajpurkar et al., 2016) augmented with them. Our analysis reveals that data augmentation with neural QG models frequently sacrifices the QA performance of the BERT-base model on questions with low lexical overlap, while improving that on questions with high lexical overlap. We conjecture that this is because neural QG models frequently generate questions with high lexical overlap as indicated in Table C Besides earning a reputation as a respected entertainment device, the iPod has also been accepted as a business device. Government departments, major institutions and international organisations have turned to the iPod line as a delivery mechanism for business communication and training, such as the Royal and Western Infirmaries in Glasgow, Scotland, where iPods are used to train new staff.",
"cite_spans": [
{
"start": 99,
"end": 121,
"text": "Sugawara et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 479,
"end": 500,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 523,
"end": 547,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 328,
"end": 337,
"text": "(Table 1)",
"ref_id": "TABREF1"
},
{
"start": 923,
"end": 931,
"text": "Table C",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Q A |Q\u2229C| |Q| C, Q \u2192 A C, A \u2192 Q |Q \u2229C| |Q |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Where is Royal and Western Infirmaries located?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Glasgow, Scotland Table 1 : Examples of ground-truth question-answer pairs and predictions of question answering (BERT-base (Devlin et al., 2019) ) and generation (SemanticQG (Zhang and Bansal, 2019) ) models. C: context, Q: question, A: answer, |Q\u2229C| |Q| : question-context lexical overlap, A : predicted answer, Q : generated question. Overlapping words in the questions are underlined.",
"cite_spans": [
{
"start": 124,
"end": 145,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 175,
"end": 199,
"text": "(Zhang and Bansal, 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. This behavior can be interpreted as a consequence of the recent QG models pursuing higher average BLEU scores on SQuAD, which inherently contains reference questions with high lexical overlap, by copying many words from contexts to generate questions. By doing so, QG models can amplify the lexical overlap bias in the original dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address the performance degradation, we use a simple data augmentation approach using synonym replacement to generate questions with low question-context lexical overlap. We found that the proposed approach not only debiases the dataset but also improves the QA performance on questions with low lexical overlap with only 70k synthetic examples, whereas conventional neural QG approaches use more than one million synthetic examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We found that not only QA but also QG models are biased in terms of question-context lexical overlap; that is, QG models fail to generate questions with low lexical overlap ( \u00a72).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We discovered that data augmentation using recent neural QG models does not contribute to debias QA datasets; rather, it frequently degrades the QA performance on questions with low lexical overlap, while improving that on questions with high lexical overlap ( \u00a74).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We demonstrated that the proposed simple data augmentation approach using synonym replacement ( \u00a73) for augmenting questions with low lexical overlap is effective to improve QA performance on questions with low lexical overlap with only 70k synthetic examples ( \u00a74), while preserving or slightly hurting the overall accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Revisiting the QA and QG Performance in Terms of Question-Context Lexical Overlap",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we denote question-context lexical overlap as QCLO. We define QCLO as the ratio of the overlapping words between question Q and context C to the total number of words in question. 2 Precisely, QCLO is calculated as",
"cite_spans": [
{
"start": 195,
"end": 196,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "QCLO = |Q \u2229 C| |Q| .",
"eq_num": "(1)"
}
],
"section": "Introduction",
"sec_num": "1"
},
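{
"text": "To make Equation (1) concrete, the following minimal sketch (our illustration, not the paper's released code) computes QCLO for a whitespace-tokenized question and context; the tokenizer is an assumption, and, per footnote 2, stop words are not excluded:\n\ndef qclo(question: str, context: str) -> float:\n    # Naive whitespace tokenization; the paper does not specify its tokenizer.\n    q_tokens = question.lower().split()\n    c_types = set(context.lower().split())\n    if not q_tokens:\n        return 0.0\n    # |Q \u2229 C|: question tokens that also appear in the context,\n    # normalized by the question length |Q|.\n    overlap = sum(1 for t in q_tokens if t in c_types)\n    return overlap / len(q_tokens)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},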
{
"text": "The second example in Experimental setups For QA, we use the finetuned BERT-base and -large models (Devlin et al., 2019) . For QG, we use SemanticQG (Zhang and Bansal, 2019) . 3 For the dataset, we use the SQuAD-Du dataset; the train, dev, and test split of SQuAD v1.1 (Rajpurkar et al., 2016) proposed by Du et al. (2017) , which we denote as SQuAD Du train , SQuAD Du dev , SQuAD Du test , respectively. This split Results We show the result in Figure 1 . This indicates that the performance of the BERT models on the questions with lower QCLO is degraded compared to the questions with higher QCLO. For QG, the BLEU-4 score (Papineni et al., 2002) is highly correlated with QCLO, which means that the model fails to generate questions with low QCLO accurately.",
"cite_spans": [
{
"start": 99,
"end": 120,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 149,
"end": 173,
"text": "(Zhang and Bansal, 2019)",
"ref_id": "BIBREF30"
},
{
"start": 176,
"end": 177,
"text": "3",
"ref_id": null
},
{
"start": 269,
"end": 293,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 306,
"end": 322,
"text": "Du et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 627,
"end": 650,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 447,
"end": 455,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
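{
"text": "The per-bin analysis behind Figure 1 can be reproduced with a short script. This is an illustrative sketch under our own assumptions (bin width 0.1; unnormalized exact match; predictions supplied by the caller as a dict from example id to answer string), not the authors' evaluation code:\n\nfrom collections import defaultdict\n\ndef em_by_qclo_bin(examples, predictions, width=0.1):\n    # examples: dicts with 'id', 'question', 'context', and a list 'answers'.\n    hits, totals = defaultdict(int), defaultdict(int)\n    for ex in examples:\n        overlap = qclo(ex[\"question\"], ex[\"context\"])  # defined above\n        b = min(int(overlap / width), int(1 / width) - 1)  # clamp QCLO = 1.0 into the last bin\n        totals[b] += 1\n        # Exact match against any gold answer (SQuAD-style normalization omitted).\n        if any(predictions[ex[\"id\"]].strip() == a.strip() for a in ex[\"answers\"]):\n            hits[b] += 1\n    return {b: hits[b] / totals[b] for b in sorted(totals)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},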
{
"text": "We also show the distributions in terms of QCLO of questions generated by recent neural QG models (HarvestingQG (Du and Cardie, 2018), Seman-ticQG (Zhang and Bansal, 2019) , InfoHCVAE (Lee et al., 2020), and VQAG (Shinoda et al., 2021)) in Figure 2 . This indicates that all the QG models are biased towards generating questions with higher QCLO than SQuAD Du train , which is used to train those QG models.",
"cite_spans": [
{
"start": 147,
"end": 171,
"text": "(Zhang and Bansal, 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 240,
"end": 248,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on the result, we suspect that when neural QG is used to augment a QA dataset, the degraded QG performance on questions with low QCLO could exacerbate the degraded QA performance. Our experiments in \u00a74 show that this is often true. We hypothesize that this is caused by the strong tendency of neural QG models to generate questions with high QCLO as shown in Figure 2 . The percentages of questions in the datasets, SQuAD-Du (Du et al., 2017) , HarvestingQG (Du and Cardie, 2018) , SemanticQG (Zhang and Bansal, 2019) , InfoHCVAE (Lee et al., 2020), VQAG (Shinoda et al., 2021) , and ours ( \u00a73), for each range of QCLO. While neural question generation models are biased towards generating questions with high QCLO, ours can generate questions with low QCLO.",
"cite_spans": [
{
"start": 431,
"end": 448,
"text": "(Du et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 464,
"end": 485,
"text": "(Du and Cardie, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 499,
"end": 523,
"text": "(Zhang and Bansal, 2019)",
"ref_id": "BIBREF30"
},
{
"start": 561,
"end": 583,
"text": "(Shinoda et al., 2021)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 365,
"end": 373,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We assume that if we augment questions with low QCLO unlike existing neural QG approaches, the robustness of QA models to questions with low QCLO can be improved. In this section, we describe the proposed method for generating questions with low QCLO. We extend the idea of synonym replacement used in (Wei and Zou, 2019) to reduce the lexical overlap. The proposed method is as follows:",
"cite_spans": [
{
"start": 302,
"end": 321,
"text": "(Wei and Zou, 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "1. List all the overlapping words between question and context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "2. Replace every word in the listed words other than predefined stop words with one of its synonyms chosen randomly from WordNet (Miller, 1995) , and obtain a synthetic question.",
"cite_spans": [
{
"start": 129,
"end": 143,
"text": "(Miller, 1995)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "3. If the lexical overlap decreases after synonym replacement, add the synthetic question to our dataset; if not, discard the question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
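{
"text": "A minimal sketch of the three steps above, using NLTK's WordNet interface. This is our illustrative implementation, not the released code: the stop-word list is an assumption (we substitute NLTK's English stop words), and qclo is the overlap function sketched after Equation (1):\n\nimport random\nfrom nltk.corpus import stopwords, wordnet\n\nSTOP = set(stopwords.words(\"english\"))\n\ndef augment_question(question: str, context: str):\n    c_types = set(context.lower().split())\n    new_tokens = []\n    for tok in question.split():\n        low = tok.lower()\n        # Steps 1-2: replace an overlapping non-stop word with a random WordNet synonym.\n        if low in c_types and low not in STOP:\n            lemmas = {l.name().replace(\"_\", \" \") for s in wordnet.synsets(low) for l in s.lemmas()}\n            lemmas.discard(low)\n            if lemmas:\n                tok = random.choice(sorted(lemmas))\n        new_tokens.append(tok)\n    candidate = \" \".join(new_tokens)\n    # Step 3: keep the synthetic question only if its lexical overlap decreased.\n    return candidate if qclo(candidate, context) < qclo(question, context) else None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},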
{
"text": "After repeating this procedure once for every ground-truth question in the training set, we obtain 70k synthetic questions with significantly lower lexical overlap, as indicated in Figure 2 (ours). For example, What is heresy mainly at odds with? is converted into What is heterodoxy mainly at odds with?, and How many documents remain classified? is converted into How many text file remain classified?. Because heterodoxy, text, and file do not appear in the contexts, the lexical overlap is reduced in each example.",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 189,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "It is worth mentioning a couple of limitations of our method. First, synonym replacement may slightly change the meaning of questions depending on the context. Second, our approach relies on the assumption that annotated questions are available, which makes it impossible to apply to unlabeled passages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "To determine the effect of data augmentation on improving the QA model robustness to questions with low QCLO, we conducted experiments with several QG approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Dataset We used the SQuAD-Du dataset as in \u00a72. Considering the QCLO statistics of SQuAD displayed in Figure 2 , we split SQuAD Du dev and SQuAD Du test into Easy and Hard subsets that contain questions with QCLO greater than 0.3, and the others, respectively. Our Easy and Hard subsets offered concise, yet sufficient, evaluation in terms of QCLO.",
"cite_spans": [
{
"start": 144,
"end": 146,
"text": "Du",
"ref_id": null
}
],
"ref_spans": [
{
"start": 101,
"end": 109,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
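{
"text": "A sketch of the Easy/Hard split described above (our paraphrase of the stated rule, reusing the qclo function sketched after Equation (1)):\n\ndef split_easy_hard(examples, threshold=0.3):\n    easy, hard = [], []\n    for ex in examples:\n        if qclo(ex[\"question\"], ex[\"context\"]) > threshold:\n            easy.append(ex)  # Easy: QCLO > 0.3\n        else:\n            hard.append(ex)  # Hard: QCLO <= 0.3\n    return easy, hard",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},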
{
"text": "Baselines We adopted the following four baselines that use neural QG models for data augmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 HarvestingQG (Du and Cardie, 2018) generates question-answer pairs from 10,000 top-ranking Wikipedia articles with neural answer extraction and question generation. 4 The size is 1.2 million.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 SemanticQG (Zhang and Bansal, 2019 ) is a QG model that uses reinforcement learning to generate semantically valid questions. Following this work, we generated questions using the publicly available model 5 from the same context-answer pairs as HarvestingQG. The size is 1.2 million.",
"cite_spans": [
{
"start": 13,
"end": 36,
"text": "(Zhang and Bansal, 2019",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 InfoHCVAE (Lee et al., 2020) is a questionanswer pair generation model based on conditional variational autoencoder with mutual information maximization. We trained this model on SQuAD Du train , and then generated 50 questions and answers from each context in SQuAD Du train . The size is 824k.",
"cite_spans": [
{
"start": 269,
"end": 271,
"text": "Du",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "4 https://github.com/xinyadu/ harvestingQA 5 https://github.com/ZhangShiyue/ QGforQA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 VQAG (Shinoda et al., 2021 ) is a questionanswer pair generation model based on conditional variational autoencoder with explicit KL control. We used the publicly available dataset. 6 The size is 432k.",
"cite_spans": [
{
"start": 7,
"end": 28,
"text": "(Shinoda et al., 2021",
"ref_id": "BIBREF22"
},
{
"start": 184,
"end": 185,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The distributions of the lexical overlap of these datasets are presented in Figure 2 . We indicate that these methods are more biased towards high lexical overlap than SQuAD Du train , which was used as the training set for these QG models.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 84,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Experimental Setups As in our previous experiment ( \u00a72), we used BERT-base and -large models, whose total number of parameters are 110M and 340M, respectively. Dhingra et al. (2018) proposed to pretrain a QA model using synthetic data composed of cloze-style questions and then finetune it on the ground-truth data. We adopted the pretrain-and-fine-tune approach for the neural QG approaches, which generated over 1.2 million questions. However, as discussed by Zhang and Bansal (2019), we observed that when the size of the synthetic data was small or similar to the ground-truth data, a performance gain could not be obtained by the pretrain-and-fine-tune approach. Thus, for the proposed approach, which generated 70k questions, we fine-tuned QA models on the ground-truth data randomly mixed with the generated data. We used the Hugging Face's implementation of BERT (Wolf et al., 2019) . We use the Adam (Kingma and Ba, 2014) optimizer with epsilon set to 1e-8. The batch size was 32 for all the settings. In both the pretraining and fine-tuning procedure, the learning rate decreased linearly from 3e-5 to zero. We train the QA models for one epoch for pretraining with synthetic data and two epochs for fine-tuning with SQuAD Du train .",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "Dhingra et al. (2018)",
"ref_id": "BIBREF1"
},
{
"start": 871,
"end": 890,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
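{
"text": "A sketch of the fine-tuning recipe for the proposed approach (synthetic questions randomly mixed with the ground truth, no separate pretraining stage). The optimizer and scheduler calls follow PyTorch and Hugging Face transformers; the helper itself and its argument names are our assumptions for illustration:\n\nimport random\nfrom torch.optim import Adam\nfrom transformers import get_linear_schedule_with_warmup\n\ndef build_training(model, ground_truth, synthetic, steps_per_epoch, epochs=2):\n    # Randomly mix the 70k synthetic examples into SQuAD-Du train.\n    data = ground_truth + synthetic\n    random.shuffle(data)\n    # Adam with epsilon 1e-8; batch size 32 in all settings (handled by the loader).\n    optimizer = Adam(model.parameters(), lr=3e-5, eps=1e-8)\n    # Learning rate decays linearly from 3e-5 to zero over training.\n    scheduler = get_linear_schedule_with_warmup(\n        optimizer, num_warmup_steps=0, num_training_steps=steps_per_epoch * epochs)\n    return data, optimizer, scheduler",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},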
{
"text": "Results The results of the data augmentation are displayed in Table 2 . In all the settings, the proposed approach achieved the best EM score on the Hard subset. Notably, the proposed method significantly improved the performance by 2.72 (EM) / 1.50 (F1) points using BERT-base on the Hard subset in the test set, while maintaining the overall scores compared to the no data augmentation baseline. This improvement indicates that the proposed approach for debiasing the dataset in terms of QCLO is helpful for addressing the performance degradation. However, the proposed approach degraded the scores on the Easy subsets when using BERT-large. Addressing the trade-off between the scores in the Hard and Easy subsets using BERTlarge is future work. When using BERT-base, the neural QG baselines except for HarvestingQG improved the scores on the Easy subset; however, the baselines except for InfoHCVAE often degraded the scores on the Hard subset. This could be due to the tendency to generate questions with high QCLO (Figure 2) .",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 69,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 1020,
"end": 1030,
"text": "(Figure 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "When using BERT-large, the QG approaches often fail to improve the scores in both the Hard and Easy subsets. Generating useful examples for a larger model is more challenging than for a smaller one according to these results. Utilizing pretrained language models for QG may be useful given the fact that only RNNs are used in all the baseline QG methods in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "HarvesingQG was not effective in almost all the settings. Comparing its scores with those of Se-manticQG, which used the same context-answer pairs as HarvestingQG, some feature of generated questions other than lexical overlap appeared to be critical in improving the QA scores on the Easy subset, because the distributions of QCLO of two synthetic datasets were similar to each other (see Figure 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 390,
"end": 398,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For further boosting the overall average score, we can make an ensemble prediction using the best performing models in the Easy and Hard subsets, although improving the overall scores is not the main focus in this paper. The performance gains were positive but not very significant in our case. We leave utilizing the ensemble prediction to address the performance trade-off to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
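{
"text": "One simple way to realize such an ensemble, sketched under our own assumptions (two already fine-tuned models and the 0.3 QCLO threshold used for the Easy/Hard split), is to route each question by its QCLO, reusing the qclo function sketched earlier:\n\ndef ensemble_predict(example, easy_model, hard_model, threshold=0.3):\n    # Route high-overlap questions to the model that performs best on the Easy\n    # subset and low-overlap questions to the model best on the Hard subset.\n    overlap = qclo(example[\"question\"], example[\"context\"])\n    model = easy_model if overlap > threshold else hard_model\n    return model(example)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},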
{
"text": "To demonstrate the effect of the baseline QG models and proposed method qualitatively, we present examples in both the Hard (QCLO \u2264 0.3) and Easy (QCLO > 0.3) subsets in Table 3 . The first two examples show that only the QA model trained with the proposed method could correctly answer the questions. Answering the questions in these examples required a knowledge of synonyms, such as \"recreational\" vs. \"entertainment,\" \"besides\" vs. \"aside from,\" \"employees\" vs. \"workers,\" and \"kill oneself\" vs. \"commit suicide.\" These examples imply that the proposed data augmentation method based on synonym replacement enabled the QA model to acquire knowledge regarding synonyms. This kind of reasoning beyond superficial word matching is indispensable for QA systems to achieve human-level language understanding.",
"cite_spans": [],
"ref_spans": [
{
"start": 170,
"end": 177,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5"
},
{
"text": "The third example in Table 3 displays an example where data augmentation using the neural QG models made the original prediction incorrect. This example implies that current QG models may harm the robustness of QA models to questions with low QCLO. As Geirhos et al. (2020) discussed, if QG models just amplify the dataset bias, QA models could learn dataset-specific solutions (i.e., shortcuts) and fail to generalize to challenge test sets.",
"cite_spans": [
{
"start": 252,
"end": 273,
"text": "Geirhos et al. (2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5"
},
{
"text": "In contrast, the fourth and fifth examples in Table 3 display examples in the Easy subset where Besides earning a reputation as a respected entertainment (Original, InfoHCVAE) device, the iPod has also been accepted as a business (Ours) device. Government departments, major institutions and international organisations have turned to the iPod line as a delivery mechanism for business communication and training, such as the Royal and Western Infirmaries (HarvestingQG, SemanticQG, VQAG) in Glasgow, Scotland, where iPods are used to train new staff. In 2010 (Ours), a number of workers committed suicide at a Foxconn operations in China. Apple, HP, and others stated that they were investigating the situation. Foxconn guards have been videotaped beating employees. Another employee killed himself in 2009 (Original, HarvestingQG, SemanticQG, VQAG) when an Apple prototype went missing, and claimed in messages to friends, that he had been beaten and interrogated.",
"cite_spans": [
{
"start": 456,
"end": 488,
"text": "(HarvestingQG, SemanticQG, VQAG)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5"
},
{
"text": "-In what year did Chinese Foxconn emplyees* kill themselves? (*: annotator's typo) (QCLO: 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5"
},
{
"text": "The BBC began its own regular television programming from the basement of Broadcasting House, London, on 22 August 1932 (HarvestingQG, SemanticQG). The studio moved to larger quarters in 16 Portland Place, London, in February 1934 (Original, Ours) , and continued broadcasting the 30-line images, carried by telephone line to the medium wave transmitter at Brookmans Park, until 11 September 1935, by which time advances in all-electronic television systems made the electromechanical broadcasts obsolete.",
"cite_spans": [
{
"start": 206,
"end": 247,
"text": "London, in February 1934 (Original, Ours)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5"
},
{
"text": "-When did the BBC first change studios? (QCLO: 0.25 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5"
},
{
"text": "Peyton Manning (VQAG) became the first quarterback ever to lead two different teams to multiple Super Bowls. He is also the oldest quarterback ever to play in a Super Bowl at age 39 (Original, Ours Despite being relatively unaffected by the embargo (Original, HarvestingQG, VQAG, Ours) , the UK nonetheless faced an oil crisis of its own -a series of strikes by coal miners and railroad workers (SemanticQG, InfoHCVAE) over the winter of 1973-74 became a major factor in the change of government. Heath asked the British to heat only one room in their houses over the winter. The UK, Germany, Italy, Switzerland and Norway banned flying, driving and boating on Sundays. Sweden rationed gasoline and heating oil. The Netherlands imposed prison sentences for those who used more than their ration of electricity.",
"cite_spans": [
{
"start": 249,
"end": 285,
"text": "(Original, HarvestingQG, VQAG, Ours)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5"
},
{
"text": "-What caused UK to have an oil crisis in its own country? (QCLO: 0.62 ) data augmentation with neural QG models is beneficial, while the original and proposed models fail to answer them correctly. These examples require multiple-sentence reasoning, i.e., one has to read and understand multiple sentences to answer these questions. This observation implies that some under-represented features (e.g., multiple-sentence reasoning (Rajpurkar et al., 2016) ) exist even in the Easy subset, and the existing neural QG models might amplify such features (possibly by copying many words from multiple sentences to formulate questions) and make it easy to capture them. Investigating what kind of features are learned by using data augmentation with neural QG models in more detail is future work.",
"cite_spans": [
{
"start": 429,
"end": 453,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5"
},
{
"text": "The Robustness of QA models Pretrained language models such as BERT (Devlin et al., 2019) have surpassed the human score on the SQuAD leaderboard. 7 However, such powerful QA models have been shown to exhibit the lack of robustness. A QA model that is trained on SQuAD is not robust to paraphrased questions (Gan and Ng, 2019), implications derived from SQuAD (Ribeiro et al., 2019), questions with low lexical overlap (Sugawara et al., 2018) , and other QA datasets (Yogatama et al., 2019; Talmor and Berant, 2019; Sen and Saffari, 2020) . Ko et al. (2020) showed that extractive QA model can suffer from positional bias and fail to generalize to different answer positions. The lack of robustness demonstrated in these studies can be explained by shortcut learning of deep neural networks (Geirhos et al., 2020) . A high score on an in-distribution test set can be achieved by just exploiting unintended dataset biases (Levesque, 2014) . Therefore, evaluating QA models only on an in-distribution test set is not enough to evaluate the robustness of the QA models.",
"cite_spans": [
{
"start": 68,
"end": 89,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 419,
"end": 442,
"text": "(Sugawara et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 467,
"end": 490,
"text": "(Yogatama et al., 2019;",
"ref_id": "BIBREF29"
},
{
"start": 491,
"end": 515,
"text": "Talmor and Berant, 2019;",
"ref_id": "BIBREF24"
},
{
"start": 516,
"end": 538,
"text": "Sen and Saffari, 2020)",
"ref_id": "BIBREF20"
},
{
"start": 541,
"end": 557,
"text": "Ko et al. (2020)",
"ref_id": "BIBREF11"
},
{
"start": 791,
"end": 813,
"text": "(Geirhos et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 921,
"end": 937,
"text": "(Levesque, 2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Question Generation for Question Answering QG has been studied extensively in order to augment QA datasets and boost the QA performance, which has been evaluated primarily on SQuAD (Du et al., 2017; Zhou et al., 2018; Yang et al., 2017; Zhang and Bansal, 2019 ). Question answer pair generation, which consists of answer candidate extraction and QG, has been also received attention because question-worthy answers for the input of QG are not freely available (Du and Cardie, 2018; Shinoda et al., 2021) . The de facto standard of QG models is to utilize a copy mechanism (Gu et al., 2016; Gulcehre et al., 2016) . The tendency of QG models to copy words from textual contexts as indicated in Figure 2 is partially due to this copy mechanism. While the existing QG works have increased the BLEU scores on SQuAD 8 and successfully generated fluent questions in terms of human scores, the bias regarding lexical overlap in QG has not received sufficient attention.",
"cite_spans": [
{
"start": 181,
"end": 198,
"text": "(Du et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 199,
"end": 217,
"text": "Zhou et al., 2018;",
"ref_id": "BIBREF32"
},
{
"start": 218,
"end": 236,
"text": "Yang et al., 2017;",
"ref_id": "BIBREF28"
},
{
"start": 237,
"end": 259,
"text": "Zhang and Bansal, 2019",
"ref_id": "BIBREF30"
},
{
"start": 460,
"end": 481,
"text": "(Du and Cardie, 2018;",
"ref_id": "BIBREF2"
},
{
"start": 482,
"end": 503,
"text": "Shinoda et al., 2021)",
"ref_id": "BIBREF22"
},
{
"start": 572,
"end": 589,
"text": "(Gu et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 590,
"end": 612,
"text": "Gulcehre et al., 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 693,
"end": 701,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Data Augmentation and Dataset Bias Data augmentation has been widely used in other domains to reduce dataset biases such as the background bias in person re-identification (McLaughlin et al., 2015) , the gender bias in coreference resolution (Zhao et al., 2018) , and the lexical bias in natural language inference (Zhou and Bansal, 2020) . These works repeated training examples or added synthetic data to increase under-represented samples and reduce the imbalance in a training set. Our proposed approach has the same motivation as these works.",
"cite_spans": [
{
"start": 172,
"end": 197,
"text": "(McLaughlin et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 242,
"end": 261,
"text": "(Zhao et al., 2018)",
"ref_id": "BIBREF31"
},
{
"start": 315,
"end": 338,
"text": "(Zhou and Bansal, 2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "On the other hand, data augmentation can unintentionally introduce or amplify dataset bias. Backtranslation (Sennrich et al., 2016) , which is the common data augmentation approach for machine translation, can introduce the translationese bias. That is, machine translation systems trained with back-translation, compared to ones without backtranslation, can enhance the BLEU scores when the input is translationese (i.e., human-translated texts) but harm the BLEU scores when the input is naturally occurring texts (Edunov et al., 2020; Marie et al., 2020) . This phenomenon is analogous to the observation in our work, where we demonstrated that SQuAD QG models are biased towards generating questions with high QCLO, and this tendency can harm the QA performance on questions with low QCLO while improving that on questions with high QCLO.",
"cite_spans": [
{
"start": 108,
"end": 131,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 516,
"end": 537,
"text": "(Edunov et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 538,
"end": 557,
"text": "Marie et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We demonstrated that not only QA models but also QG models are biased in terms of the questioncontext lexical overlap. To determine the influence of the bias, we analyzed the QA performance with data augmentation using the recent QG models. We demonstrated that they frequently degraded the QA performance on questions with low lexical overlap, while improving that on questions with high lexical overlap when using BERT-base. To address this problem, we designed a simple approach using synonym replacement to debias a QA dataset. We demonstrated that the proposed approach improved the QA performance on questions with low lexical overlap while maintaining or slightly degrading the overall scores with only 70k synthetic examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our results suggest that future research in QG for data augmentation should exercise caution to prevent the amplification of dataset bias in terms of lexical overlap. In addition, what features are learned by data augmentation with neural QG models is worth to be explored in more detail to clarify what is improved and what is not improved by QG. It is also worth investigating whether our findings still hold in other QA datasets where annotated questions have lower lexical overlap than those in SQuAD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our data is publicly available at https://github. com/KazutoshiShinoda/Synonym-Replacement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "When computing lexical overlap, we do not exclude stop words because even overlapping stop words are important cues to determine the correct answer.3 We used the ELMo+QPP&QAP (Zhang and Bansal, 2019) model for QG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/KazutoshiShinoda/ VQAG",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://rajpurkar.github.io/ SQuAD-explorer/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://aqleaderboard.tomhosking.co. uk/squad",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their detailed and valuable comments. This work was supported by NEDO SIP-2 \"Big-data and AI-enabled Cyberspace Technologies,\" and JSPS KAKENHI Grant Numbers 21H03502, 20K23335.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Simple and effective semi-supervised question answering",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Danish",
"middle": [],
"last": "Danish",
"suffix": ""
},
{
"first": "Dheeraj",
"middle": [],
"last": "Rajagopal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "582--587",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2092"
]
},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Danish Danish, and Dheeraj Ra- jagopal. 2018. Simple and effective semi-supervised question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 582-587, New Orleans, Louisiana. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Harvesting paragraph-level question-answer pairs from Wikipedia",
"authors": [
{
"first": "Xinya",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1907--1917",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1177"
]
},
"num": null,
"urls": [],
"raw_text": "Xinya Du and Claire Cardie. 2018. Harvest- ing paragraph-level question-answer pairs from Wikipedia. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1907-1917, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning to ask: Neural question generation for reading comprehension",
"authors": [
{
"first": "Xinya",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Junru",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1342--1352",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1123"
]
},
"num": null,
"urls": [],
"raw_text": "Xinya Du, Junru Shao, and Claire Cardie. 2017. Learn- ing to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1342-1352, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the evaluation of machine translation systems trained with back-translation",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2836--2846",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.253"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Marc'Aurelio Ranzato, and Michael Auli. 2020. On the evaluation of machine translation systems trained with back-translation. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2836- 2846, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improving the robustness of question answering systems to question paraphrasing",
"authors": [
{
"first": "Wee Chung",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6065--6075",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1610"
]
},
"num": null,
"urls": [],
"raw_text": "Wee Chung Gan and Hwee Tou Ng. 2019. Improv- ing the robustness of question answering systems to question paraphrasing. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 6065-6075, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Shortcut learning in deep neural networks",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Geirhos",
"suffix": ""
},
{
"first": "J\u00f6rn-Henrik",
"middle": [],
"last": "Jacobsen",
"suffix": ""
},
{
"first": "Claudio",
"middle": [],
"last": "Michaelis",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Wieland",
"middle": [],
"last": "Brendel",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Bethge",
"suffix": ""
},
{
"first": "Felix",
"middle": [
"A"
],
"last": "Wichmann",
"suffix": ""
}
],
"year": 2020,
"venue": "Nature Machine Intelligence",
"volume": "2",
"issue": "11",
"pages": "665--673",
"other_ids": {
"DOI": [
"10.1038/s42256-020-00257-z"
]
},
"num": null,
"urls": [],
"raw_text": "Robert Geirhos, J\u00f6rn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. 2020. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665-673.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Incorporating copying mechanism in sequence-to-sequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Victor O.K.",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1631--1640",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1154"
]
},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1631-1640, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Pointing the unknown words",
"authors": [
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "140--149",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1014"
]
},
"num": null,
"urls": [],
"raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 140-149, Berlin, Germany. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2021--2031",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1215"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik P.",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Look at the first sentence: Position bias in question answering",
"authors": [
{
"first": "Miyoung",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hyunjae",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Gangwoo",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1109--1121",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.84"
]
},
"num": null,
"urls": [],
"raw_text": "Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo Kim, and Jaewoo Kang. 2020. Look at the first sentence: Position bias in question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1109-1121, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Generating diverse and consistent QA pairs from contexts with information-maximizing hierarchical conditional VAEs",
"authors": [
{
"first": "Dong Bok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Seanie",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Woo Tae",
"middle": [],
"last": "Jeong",
"suffix": ""
},
{
"first": "Donghwan",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sung Ju",
"middle": [],
"last": "Hwang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "208--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Bok Lee, Seanie Lee, Woo Tae Jeong, Dongh- wan Kim, and Sung Ju Hwang. 2020. Gener- ating diverse and consistent QA pairs from con- texts with information-maximizing hierarchical con- ditional VAEs. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 208-224, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "On our best behaviour",
"authors": [
{
"first": "Hector",
"middle": [
"J"
],
"last": "Levesque",
"suffix": ""
}
],
"year": 2014,
"venue": "Artif. Intell",
"volume": "212",
"issue": "1",
"pages": "27--35",
"other_ids": {
"DOI": [
"10.1016/j.artint.2014.03.007"
]
},
"num": null,
"urls": [],
"raw_text": "Hector J. Levesque. 2014. On our best behaviour. Artif. Intell., 212(1):27-35.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Tagged back-translation revisited: Why does it really work?",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Marie",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Fujita",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5990--5997",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.532"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Marie, Raphael Rubino, and Atsushi Fujita. 2020. Tagged back-translation revisited: Why does it really work? In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 5990-5997, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Data-augmentation for reducing dataset bias in person re-identification",
"authors": [
{
"first": "Niall",
"middle": [],
"last": "Mclaughlin",
"suffix": ""
},
{
"first": "Jesus",
"middle": [
"M"
],
"last": "Del Rincon",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2015,
"venue": "12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niall McLaughlin, Jesus M. Del Rincon, and Paul Miller. 2015. Data-augmentation for reducing dataset bias in person re-identification. In 2015 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pages 1-6, Karlsruhe, Germany. IEEE.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Wordnet: A lexical database for english",
"authors": [
{
"first": "George A.",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Commun. ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {
"DOI": [
"10.1145/219717.219748"
]
},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Are red roses red? evaluating consistency of question-answering models",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6174--6184",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1621"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Carlos Guestrin, and Sameer Singh. 2019. Are red roses red? evaluating con- sistency of question-answering models. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6174-6184, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "What do models learn from question answering datasets?",
"authors": [
{
"first": "Priyanka",
"middle": [],
"last": "Sen",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Saffari",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2429--2438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Priyanka Sen and Amir Saffari. 2020. What do models learn from question answering datasets? In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2429-2438, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Improving the robustness of QA models to challenge sets with variational questionanswer pair generation",
"authors": [
{
"first": "Kazutoshi",
"middle": [],
"last": "Shinoda",
"suffix": ""
},
{
"first": "Saku",
"middle": [],
"last": "Sugawara",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Aizawa",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the ACL-IJCNLP 2021 Student Research Workshop",
"volume": "",
"issue": "",
"pages": "197--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazutoshi Shinoda, Saku Sugawara, and Akiko Aizawa. 2021. Improving the robustness of QA models to challenge sets with variational question- answer pair generation. In Proceedings of the ACL- IJCNLP 2021 Student Research Workshop, pages 197-214, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "What makes reading comprehension questions easier?",
"authors": [
{
"first": "Saku",
"middle": [],
"last": "Sugawara",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Aizawa",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4208--4219",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1453"
]
},
"num": null,
"urls": [],
"raw_text": "Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. 2018. What makes reading com- prehension questions easier? In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 4208-4219, Brus- sels, Belgium. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "MultiQA: An empirical investigation of generalization and transfer in reading comprehension",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4911--4921",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1485"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Talmor and Jonathan Berant. 2019. MultiQA: An empirical investigation of generalization and trans- fer in reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4911-4921, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6382--6388",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1670"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Wei and Kai Zou. 2019. EDA: Easy data aug- mentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun- towicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Understanding data augmentation for classification: When to warp",
"authors": [
{
"first": "S",
"middle": [
"C"
],
"last": "Wong",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Stamatescu",
"suffix": ""
},
{
"first": "M",
"middle": [
"D"
],
"last": "Mc-Donnell",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. C. Wong, A. Gatt, V. Stamatescu, and M. D. Mc- Donnell. 2016. Understanding data augmentation for classification: When to warp? In 2016 Inter- national Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1-6, Gold Coast, QLD, Australia. IEEE.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Semi-supervised QA with generative domain-adaptive nets",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1040--1050",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1096"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William Cohen. 2017. Semi-supervised QA with generative domain-adaptive nets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1040-1050, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Learning and evaluating general linguistic intelligence",
"authors": [
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Cyprien",
"middle": [],
"last": "De Masson D'autume",
"suffix": ""
},
{
"first": "Jerome",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Chrzanowski",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.11373"
]
},
"num": null,
"urls": [],
"raw_text": "Dani Yogatama, Cyprien de Masson d'Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Ling- peng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. 2019. Learning and evaluat- ing general linguistic intelligence. arXiv preprint arXiv:1901.11373.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Addressing semantic drift in question generation for semisupervised question answering",
"authors": [
{
"first": "Shiyue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2495--2509",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1253"
]
},
"num": null,
"urls": [],
"raw_text": "Shiyue Zhang and Mohit Bansal. 2019. Address- ing semantic drift in question generation for semi- supervised question answering. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2495-2509, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "15--20",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2003"
]
},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Neural question generation from text: A preliminary study",
"authors": [
{
"first": "Qingyu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Chuanqi",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Hangbo",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Natural Language Processing and Chinese Computing",
"volume": "",
"issue": "",
"pages": "662--671",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2018. Neural ques- tion generation from text: A preliminary study. In Natural Language Processing and Chinese Comput- ing, pages 662-671, Cham. Springer International Publishing.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Towards robustifying NLI models against lexical dataset biases",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8759--8771",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.773"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Zhou and Mohit Bansal. 2020. Towards robusti- fying NLI models against lexical dataset biases. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8759- 8771, Online. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 2: The percentages of questions in the datasets, SQuAD-Du (Du et al., 2017), HarvestingQG (Du and Cardie, 2018), SemanticQG (Zhang and Bansal, 2019), InfoHCVAE (Lee et al., 2020), VQAG (Shinoda et al., 2021), and ours ( \u00a73), for each range of QCLO. While neural question generation models are biased towards generating questions with high QCLO, ours can generate questions with low QCLO."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "-Aside from recreational use, in what other arena have iPods found use? (QCLO: 0.29 )"
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "",
"num": null,
"content": "<table><tr><td>indicates a question</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "/81.11 80.74/88.39 80.35/88.05 70.88/81.99 73.22/84.75 73.06/84.57 + HarvestingQG 70.25/78.27 80.06/87.62 79.60/87.19 69.28/79.92 73.15/84.20 72.90/83.93 + SemanticQG 70.45/80.25 81.70/89.08 81.17/88.67 71.68/82.49 74.39/85.59 74.21/85.39",
"num": null,
"content": "<table><tr><td/><td/><td/><td>SQuAD Du dev (EM/F1)</td><td/><td/><td>SQuAD Du test (EM/F1)</td><td/></tr><tr><td>Model</td><td>Train Source</td><td>Hard</td><td>Easy</td><td>ALL</td><td>Hard</td><td>Easy</td><td>ALL</td></tr><tr><td>base</td><td colspan=\"7\">SQuAD Du train 72.31+ InfoHCVAE 72.05/80.66 81.79/89.35 81.34/88.95 73.47/83.91 73.50/85.08 73.48/84.99</td></tr><tr><td/><td>+ VQAG</td><td colspan=\"6\">73.29/82.04 81.88/88.93 81.48/88.62 71.60/83.07 73.79/85.23 73.63/85.08</td></tr><tr><td/><td>+ Ours</td><td colspan=\"6\">73.50/82.81 80.34/87.81 80.02/87.58 73.60/83.49 73.08/84.41 73.11/84.34</td></tr><tr><td/><td>SQuAD Du train</td><td colspan=\"6\">78.72/87.71 87.06/93.23 86.67/92.98 77.93/87.84 79.33/89.88 79.24/89.74</td></tr><tr><td/><td colspan=\"7\">+ HarvestingQG 79.13/86.92 85.55/92.12 85.26/91.88 76.99/86.61 77.58/88.28 77.54/88.17</td></tr><tr><td>large</td><td colspan=\"7\">+ SemanticQG 79.96/87.73 85.90/92.57 85.62/92.35 76.99/87.29 77.82/88.68 77.77/88.59 + InfoHCVAE 77.85/86.44 85.25/92.15 84.91/91.89 76.00/87.55 78.02/88.90 77.87/88.80</td></tr><tr><td/><td>+ VQAG</td><td colspan=\"6\">79.50/87.55 86.68/93.01 86.35/92.76 77.33/87.70 78.98/89.36 78.86/89.25</td></tr><tr><td/><td>+ Ours</td><td colspan=\"6\">81.37/88.33 86.49/92.78 86.25/92.57 78.40/88.52 77.94/89.00 77.96/88.97</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"html": null,
"text": "QA performance with data augmentation. EM/F1 scores on the Hard (where QCLO \u2264 0.3) and Easy (where QCLO > 0.3) subsets, and the whole set of SQuADDu dev and SQuAD Du test are reported.",
"num": null,
"content": "<table/>"
},
"TABREF6": {
"type_str": "table",
"html": null,
"text": "Illustrative predictions on SQuAD Du dev and SQuAD Du test by a BERT-base model trained on SQuAD Du train (Original), +HarvestingQG, +SemanticQG, +InfoHCVAE, +VQAG, and +Ours. The ground truth answers are in bold. The incorrectly predicted answers are written in red. The QA models that predict them are written in italics. The overlapping words in the questions are underlined. Question-context lexical overlap (QCLO) is given in parentheses.",
"num": null,
"content": "<table/>"
}
}
}
}