|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:56:25.867032Z" |
|
}, |
|
"title": "Data Weighted Training Strategies for Grammatical Error Correction", |
|
"authors": [ |
|
{ |
|
"first": "Jared", |
|
"middle": [], |
|
"last": "Lichtarge", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Alberti", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Shankar", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Recent progress in the task of Grammatical Error Correction (GEC) has been driven by addressing data sparsity, both through new methods for generating large and noisy pretraining data and through the publication of small and higher-quality finetuning data in the BEA-2019 shared task. Building upon recent work in Neural Machine Translation (NMT), we make use of both kinds of data by deriving example-level scores on our large pretraining data based on a smaller, higher-quality dataset. In this work, we perform an empirical study to discover how to best incorporate delta-logperplexity, a type of example scoring, into a training schedule for GEC. In doing so, we perform experiments that shed light on the function and applicability of delta-log-perplexity. Models trained on scored data achieve stateof-the-art results on common GEC test sets.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Recent progress in the task of Grammatical Error Correction (GEC) has been driven by addressing data sparsity, both through new methods for generating large and noisy pretraining data and through the publication of small and higher-quality finetuning data in the BEA-2019 shared task. Building upon recent work in Neural Machine Translation (NMT), we make use of both kinds of data by deriving example-level scores on our large pretraining data based on a smaller, higher-quality dataset. In this work, we perform an empirical study to discover how to best incorporate delta-logperplexity, a type of example scoring, into a training schedule for GEC. In doing so, we perform experiments that shed light on the function and applicability of delta-log-perplexity. Models trained on scored data achieve stateof-the-art results on common GEC test sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Grammatical Error Correction (GEC), the task of automatically correcting errors in written text, can be framed as a translation task from 'bad grammar' to 'good grammar.' This framing has enabled GEC to borrow models and techniques from the vast literature in machine translation (MT). Neural approaches have dominated recent stateof-the-art advances in GEC, and have been shown to be more effective in direct comparison with non-neural methods (Chollampatt and Ng, 2018; . Nevertheless, GEC continues to pose a challenge for data-reliant neural models, given that the publicly available training data is relatively limited, with the largest corpus numbering only 2M examples (Mizumoto et al., 2012) . Therefore, much recent work in GEC has focused on diverse methods to address data sparsity by supplementing available annotated corpora with much larger pretraining data (Ge et al., 2018a; Kasewa et al., 2018; Lichtarge et al., 2019; Grundkiewicz et al., 2019; Zhao et al., 2019) . A contrasting approach to addressing data sparsity in GEC has been explored in the Building Educational Application (BEA) 2019 Shared Task on Grammatical Error Correction (Bryant et al., 2019) . The task introduced the Write and Improve training set, a new high-quality annotated corpus numbering only \u223c34k examples (referred to in this work as BEA-19 train), and encouraged exploration of low-resource methods by organizing two tracks specifically for data-restricted competition. Despite the relatively small size, many approaches using the BEA-19 train data achieved better results on common GEC test sets than previous approaches that did not have access to this small but high-quality data (Bryant et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 445, |
|
"end": 471, |
|
"text": "(Chollampatt and Ng, 2018;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 676, |
|
"end": 699, |
|
"text": "(Mizumoto et al., 2012)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 872, |
|
"end": 890, |
|
"text": "(Ge et al., 2018a;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 891, |
|
"end": 911, |
|
"text": "Kasewa et al., 2018;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 935, |
|
"text": "Lichtarge et al., 2019;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 936, |
|
"end": 962, |
|
"text": "Grundkiewicz et al., 2019;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 963, |
|
"end": 981, |
|
"text": "Zhao et al., 2019)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 1155, |
|
"end": 1176, |
|
"text": "(Bryant et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1679, |
|
"end": 1700, |
|
"text": "(Bryant et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the context of neural MT (NMT), models have been shown to be sensitive to noise in the training data (Khayrallah and Koehn, 2018) . Although much effort has been dedicated to methods which either filter or downweight noisy pretraining data in NMT (Junczys- Dowmunt, 2018) , less attention has thus far been paid in GEC. To the best of our knowledge, previously explored techniques for filtering pretraining data in GEC are limited to hand-engineered heuristic cutoffs (Grundkiewicz and Junczys-Dowmunt, 2014) and n-gram language model filtering (Ge et al., 2018a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 132, |
|
"text": "(Khayrallah and Koehn, 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 260, |
|
"end": 274, |
|
"text": "Dowmunt, 2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 511, |
|
"text": "(Grundkiewicz and Junczys-Dowmunt, 2014)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 566, |
|
"text": "(Ge et al., 2018a)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recent work in NMT (Wang et al., 2018) presents a training technique for scoring the 'noise' of training data by employing a much smaller, higher-quality 'trusted' dataset. The authors describe a curriculum-style training over data scored by this metric, and demonstrate significant improvements over a baseline. We refer to this score as delta-log-perplexity (\u2206ppl).", |
|
"cite_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 38, |
|
"text": "(Wang et al., 2018)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This work presents an empirical study of training strategies for GEC in multiple dimensions. Using a standard training setup (without scoring), we explore arrangements of GEC corpora into pretraining and finetuning data, establishing a strong baseline. We then apply data scoring via \u2206ppl to the GEC task, demonstrating the value of \u2206ppl as a heuristic for example quality. By comparing multiple plausible methods for applying \u2206ppl, we gain some insight into the interpretation and practical applicability of the metric. We train on the scored data via four simple methods that instantiate different intuitions about how to treat a heuristic score for data quality. We demonstrate performance gains for various strategies incorporating scoring into the training, and present state-of-the-art results on the CoNLL-14 (Ng et al., 2014) and JFLEG (Napoles et al., 2017) test sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 807, |
|
"end": 833, |
|
"text": "CoNLL-14 (Ng et al., 2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 844, |
|
"end": 866, |
|
"text": "(Napoles et al., 2017)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contributions of this Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In recent GEC work, most approaches pretrain on some synthetic data and then finetune on the union of multiple annotated data sources, with some variation in which datasets are included for fine-tuning (Grundkiewicz et al., 2019; Lichtarge et al., 2019) . In a thorough study of incorporating generated pseudo-data into GEC training, Kiyono et al. (2019) report that this typical pretrainfinetune setup scales with size of pretraining data better than a setup in which all data is trained on simultaneously. Choe et al. (2019) describe a 'sequential transfer learning' approach in which the pretrained model, finetuned on all available annotated data, is finetuned again separately for each test set. A thorough review of the GEC field is made by Wang et al. (2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 229, |
|
"text": "(Grundkiewicz et al., 2019;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 230, |
|
"end": 253, |
|
"text": "Lichtarge et al., 2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 334, |
|
"end": 354, |
|
"text": "Kiyono et al. (2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 765, |
|
"text": "Wang et al. (2020)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Data selection in MT has been performed along two dimensions: domain-relevance and denoising. Multiple researchers (Moore and Lewis, 2010; Axelrod et al., 2011; van der Wees et al., 2017) have used the difference in cross-entropy between two language models as a criteria for the selection of in-domain sentences. In contrast, Wang et al. (2018) and Junczys-Dowmunt (2018) have used data selection for denoising. Recently, demonstrate that a co-curriculum training for dynamic selection of data that is both clean and in-domain, can outperform independent selection along each of the two dimensions. training example before and after improving a pretrained model by finetuning on a small trusted dataset. Wang et al. use this metric to order the pretrain data, and train a new model via a curriculum-style strategy using this ordering. In their setup, this metric is interpreted as measuring 'noise', describing the change in log probability of an example between a noisy pretrained model and its 'denoised' finetuned counterpart. Because log perplexity for an example is the negative of the log-probability, we refer to this score as 'delta-log-perplexity'(\u2206ppl). 1", |
|
"cite_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 160, |
|
"text": "Axelrod et al., 2011;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 187, |
|
"text": "van der Wees et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 345, |
|
"text": "Wang et al. (2018)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the most general case, \u2206ppl describes the change in a model's log perplexity for an individual example between two checkpoints in model training. If the first checkpoint (with parameterization \u03b8 \u2212 ) is sampled after model convergence on a base dataset D \u2212 , and the second checkpoint (\u03b8 + ), after further finetuning on a second target dataset D + , then the \u2206ppl between those models for a given example (composed of input, output pair (i, o)) should suggest which of the datasets the example is more similar to, from the perspective of the successive models \u03b8 \u2212 and \u03b8 + .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Calculation", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u2206ppl(i, o; \u03b8 \u2212 , \u03b8 + ) = log p(o|i; \u03b8 \u2212 )\u2212log p(o|i; \u03b8 + )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Calculation", |
|
"sec_num": "4.1.2" |
|
}, |
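As a concrete illustration of Equation (1), here is a minimal sketch (not part of the paper; the log-probability accessor is a hypothetical stand-in for forced decoding with the actual Transformer checkpoints) of computing delta-log-perplexity for a single example:

```python
from typing import Callable

# log_prob(checkpoint, source, target) is assumed to return log p(target | source; checkpoint),
# e.g. the summed token log-probabilities from forced decoding of `target` with a seq2seq model.
LogProb = Callable[[object, str, str], float]

def delta_log_perplexity(log_prob: LogProb, theta_minus, theta_plus,
                         source: str, target: str) -> float:
    """Equation (1): log p(o|i; theta-) - log p(o|i; theta+).

    A negative value means the example became more likely after finetuning on the
    trusted target data (a 'good' example under this heuristic); a positive value
    means the opposite.
    """
    return log_prob(theta_minus, source, target) - log_prob(theta_plus, source, target)
```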
|
{ |
|
"text": "In the course of this work, we make use of the relative ordering of examples from the scored dataset D \u2212 \u2206 when sorted by their \u2206ppl scores, rather than the actual \u2206ppl score values. 2 We refer to this quantity as 'delta-perplexity-rank':", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 184, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Calculation", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "D \u2212 \u2206 = {(i, o, \u2206ppl(i, o))|(i, o) \u2208 D \u2212 } \u03b4ppl(i, o; D \u2212 \u2206 ) = 1 \u2212 %ile rank(\u2206ppl(i, o); D \u2212 \u2206 ) 100", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Calculation", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "'%ile rank' refers to percentile rank. \u03b4ppl has range [0,1], and is computed such that the example with the most negative \u2206ppl will have the highest \u03b4ppl score of 1. The median example will have a \u03b4ppl of 0.5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Calculation", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "Any example drawn from D + should trivially be expected to have a negative \u2206ppl because Input:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanation", |
|
"sec_num": "4.1.3" |
|
}, |
|
{ |
|
"text": "base dataset D \u2212 , target dataset D + Result: \u2206ppl-scored base dataset D \u2212 \u2206 \u03b4ppl-scored base dataset D \u03b4 \u03b8 \u2212 \u2190 train new model on D \u2212 \u03b8 + \u2190 finetune \u03b8 \u2212 on D + D \u2212 \u2206 \u2190 {} for example x \u2208 D \u2212 do ppl \u2212 x \u2190 \u2212 log p(x.o|x.i, \u03b8 \u2212 ) ppl + x \u2190 \u2212 log p(x.o|x.i, \u03b8 \u2212 ) x.\u2206ppl \u2190 (ppl + x \u2212 ppl \u2212 x ) D \u2212 \u2206 \u2190 D \u2212 \u2206 \u222a x end D \u2212 \u03b4 \u2190 {} for scored example x \u2208 D \u2212 \u2206 do x.\u03b4ppl \u2190 1 \u2212 %ile rank(x.\u2206ppl,D \u2212 \u2206 ) 100 D \u2212 \u03b4 \u2190 D \u2212 \u03b4 \u222a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanation", |
|
"sec_num": "4.1.3" |
|
}, |
|
{ |
|
"text": "x end Algorithm 1: Score base data with \u2206ppl, and calculate \u03b4ppl for each sentence pair. The symbols x.i and x.o refer to the input and output sequences of the example.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanation", |
|
"sec_num": "4.1.3" |
|
}, |
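A minimal Python sketch of Algorithm 1 (a paraphrase, not the paper's implementation; train_model, finetune, and log_prob are hypothetical helpers standing in for the actual Transformer training and forced-decoding routines): it attaches a delta-log-perplexity score to every base-data example and then converts the scores to delta-perplexity-rank via percentile rank, so the most negative score maps to 1 and the median to 0.5.

```python
def score_base_data(base_examples, target_examples, train_model, finetune, log_prob):
    """Score D- with delta-log-perplexity and delta-perplexity-rank (Algorithm 1).

    Each example is a dict with 'src' (input i) and 'tgt' (output o) fields.
    """
    theta_minus = train_model(base_examples)              # converge on the base data D-
    theta_plus = finetune(theta_minus, target_examples)   # further finetune on the target data D+

    # First loop: delta-log-perplexity (Equation 1); note ppl_plus is computed with theta_plus.
    for x in base_examples:
        ppl_minus = -log_prob(theta_minus, x["src"], x["tgt"])
        ppl_plus = -log_prob(theta_plus, x["src"], x["tgt"])
        x["delta_ppl"] = ppl_plus - ppl_minus

    # Second loop: delta-perplexity-rank (Equation 2). Sorting ascending by delta_ppl,
    # the most negative (best) example gets percentile rank 0 and hence rank score 1.
    order = sorted(range(len(base_examples)), key=lambda i: base_examples[i]["delta_ppl"])
    n = len(base_examples)
    for rank, idx in enumerate(order):
        percentile = 100.0 * rank / max(n - 1, 1)
        base_examples[idx]["delta_ppl_rank"] = 1.0 - percentile / 100.0
    return base_examples
```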
|
{ |
|
"text": "\u03b8 + has just been trained directly upon the exact example, whereas \u03b8 \u2212 has never seen the example before. The negative \u2206ppl can be explained by suggesting \u03b8 + has begun to memorize the specific examples in D + .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanation", |
|
"sec_num": "4.1.3" |
|
}, |
|
{ |
|
"text": "Scoring examples drawn from D \u2212 reveals the value of the technique; both checkpoints have been trained on D \u2212 and no example in D \u2212 was present during further training on D + , so the \u2206ppl reflects the general changes learned during the transition from \u03b8 \u2212 to \u03b8 + . Examples from D \u2212 that are similar to examples from D + can be expected to have relatively lower log perplexity for \u03b8 + , and thus lower \u2206ppl. Examples from D \u2212 that are markedly different from those of D + should be expected to have higher \u2206ppl scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanation", |
|
"sec_num": "4.1.3" |
|
}, |
|
{ |
|
"text": "Although D \u2212 (base data) and D + (target data) refer to the pretraining and fine-tuning datasets, respectively, in our setup, we note that these two datasets could be selected according to alternative criteria. The only requirement is that these sets differ in terms of some observable qualitative aspect, for which \u2206ppl becomes a heuristic. While in this work we use a target dataset to focus on example quality, it may also be feasible to employ a target dataset that differs from the base data chiefly in domain, and use \u2206ppl to negotiate domain transfer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanation", |
|
"sec_num": "4.1.3" |
|
}, |
|
{ |
|
"text": "When D + is selected to be 'higher quality' than D \u2212 , then the \u2206ppl scores of examples drawn from D \u2212 provide a heuristic for example quality. Given a heuristic score for example quality, there are many plausible strategies to incorporate the score into a training schedule. We explore the following schemes: [a] Filter the pretraining data by discarding examples for which \u03b4 ppl < k, where k is a fixed cutoff parameter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annealing Strategies", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "[b] Instead of discarding data, down-weight the loss on lowscoring examples during training proportionally to their rank: weight x = \u03b4ppl x . A more sophisticated variation of filtering the data is implemented by Wang et al. (2018) : [c] Define a curriculum by an exponentially decaying function over training, so that by the end of training, only the best-scoring examples remain in the training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 231, |
|
"text": "Wang et al. (2018)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annealing Strategies", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "include x (\u03b4ppl x , k(t)) = 1 if \u03b4ppl x \u2265 k(t) 0 if \u03b4ppl x < k(t)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annealing Strategies", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annealing Strategies", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "k(t) = 0.5 t H", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annealing Strategies", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "for training step t and constant H. To combine the benefits of downweighting and the curriculum-style annealing, we also implement a mixed strategy [d]:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annealing Strategies", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "weight x (k(t)) = 1 \u03b4ppl x \u2265 k(t) \u03b4ppl x \u03b4ppl x < k(t)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annealing Strategies", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "where k(t) = 0.5 t H for training step t and constant H.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annealing Strategies", |
|
"sec_num": "4.2" |
|
}, |
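The four schemes reduce to a per-example loss weight as a function of the example's \u03b4ppl score and the training step. The sketch below is an illustrative paraphrase (the helper and its parameter names are not from the paper); the curriculum cutoff k(t) is supplied as a schedule function, parameterized as 0.5^(t/H) as in the text, with the constant H controlling how quickly it anneals.

```python
def example_weight(delta_ppl_rank: float, step: int, strategy: str,
                   fixed_cutoff: float = 0.5, cutoff_schedule=None) -> float:
    """Per-example loss weight for the four strategies [a]-[d].

    delta_ppl_rank is the example's delta-perplexity-rank score in [0, 1] (1 = best).
    """
    if strategy == "hard":        # [a] fixed filter: drop examples below the cutoff
        return 1.0 if delta_ppl_rank >= fixed_cutoff else 0.0
    if strategy == "soft":        # [b] down-weight the loss proportionally to rank
        return delta_ppl_rank
    k_t = cutoff_schedule(step)   # curriculum variants use a time-varying cutoff k(t)
    if strategy == "hard-cclm":   # [c] annealed filter
        return 1.0 if delta_ppl_rank >= k_t else 0.0
    if strategy == "soft-cclm":   # [d] mixed: full weight above the cutoff, rank weight below
        return 1.0 if delta_ppl_rank >= k_t else delta_ppl_rank
    raise ValueError(f"unknown strategy: {strategy}")

# Cutoff schedule from the text: k(t) = 0.5 ** (t / H) for a constant H.
H = 20000  # illustrative value only; the constant is not fixed in the text shown here
def cutoff(step: int) -> float:
    return 0.5 ** (step / H)
```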
|
{ |
|
"text": "We use the Transformer sequence-to-sequence model (Vaswani et al., 2017) , using the Tensor2 Tensor open-source implementation with the ''transformer clean big tpu'' setting. 3 We use a 32k word piece dictionary (Schuster and Nakajima, 2012) . For all training stages we use the Adafactor optimizer (Shazeer and Stern, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 72, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 176, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 241, |
|
"text": "(Schuster and Nakajima, 2012)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 324, |
|
"text": "(Shazeer and Stern, 2018)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We train on the public version of the Lang-8 corpus (Mizumoto et al., 2012) , the FCE corpus (Yannakoudakis et al., 2011) , and the Cambridge English Write & Improve training split described in the BEA-2019 shared task (BEA-19 train) (Bryant et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 75, |
|
"text": "(Mizumoto et al., 2012)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 93, |
|
"end": 121, |
|
"text": "(Yannakoudakis et al., 2011)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 255, |
|
"text": "(Bryant et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The Lang-8 corpus is scraped from the social language learning Web site, 4 and is composed of potentially erroneous sentences from Englishlanguage-learners with crowd-sourced corrections. The corpus includes many sentence pairs that are noisy or irrelevant to GEC for a variety of reasons. In contrast, FCE 5 and BEA-19 train 6 are much smaller corpora that have been carefully annotated by a small number of professional annotators. Due to their highly curated origin, these datasets have a much higher proportion of highquality GEC-relevant sentence pairs than Lang-8.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For pretraining data, we follow Lichtarge et al. (2019) in using a large and noisy corpus of edits crawled from Wikipedia's publicly available revision histories (REV). We also use a similarsized corpus of sentence pairs, where the target sentences are drawn from Wikipedia, and the source sentences are generated via round-triptranslation through a bridge language (RT) (Lichtarge et al., 2019) . We generate four parallel datasets of equal size by round-trip-translating the same 'clean' sequences through four bridge languages. 7 Both pretraining corpora are further probabilistically corrupted via character-level insertions, deletions, transpositions, and replacements. We corrupt each character of REV, which already contains some 'natural' spelling errors, at a rate of 0.003 per character. For the RT data, which does not already have spelling errors, we use a rate of 0.005 per character.", |
|
"cite_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 395, |
|
"text": "(Lichtarge et al., 2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.2" |
|
}, |
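A minimal sketch of the character-level corruption described above. The corruption rates (0.003 for REV, 0.005 for RT) and the set of operations are as stated; the choice of replacement characters and the uniform sampling over operations are assumptions for illustration, not the paper's exact procedure.

```python
import random
import string

def corrupt(text: str, rate: float, rng: random.Random) -> str:
    """Probabilistically corrupt a string with character-level edits.

    Each character independently triggers an edit with probability `rate`; the
    edit is a random insertion, deletion, transposition, or replacement.
    """
    out = []
    chars = list(text)
    i = 0
    while i < len(chars):
        c = chars[i]
        if rng.random() < rate:
            op = rng.choice(["insert", "delete", "transpose", "replace"])
            if op == "insert":
                out.append(rng.choice(string.ascii_lowercase))
                out.append(c)
            elif op == "delete":
                pass  # drop the character entirely
            elif op == "transpose" and i + 1 < len(chars):
                out.append(chars[i + 1])  # swap with the following character
                out.append(c)
                i += 1
            else:  # replace (also the fallback for a transposition at the final character)
                out.append(rng.choice(string.ascii_lowercase))
        else:
            out.append(c)
        i += 1
    return "".join(out)

# Example: corrupt a round-trip-translated source sentence at the RT rate of 0.005.
noisy = corrupt("This is a clean source sentence .", 0.005, random.Random(0))
```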
|
{ |
|
"text": "Prior research on GEC has used the NUCLE corpus (Dahlmeier et al., 2013) for model training. Our pilot experiments showed that a model pretrained on REV/RT yielded similar performance when fine tuned on either Lang-8 or a combination of Lang-8 and NUCLE. Because both corpora contain corrections of sentences written by nonnative speakers, and NUCLE, which has only a fourth as many sentences as Lang-8, did not give additional improvements on top of Lang-8, we decided to exclude the corpus in our experiments to simplify the presentation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 72, |
|
"text": "(Dahlmeier et al., 2013)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.2" |
|
}, |
|
|
{ |
|
"text": "6 https://www.cl.cam.ac.uk/research/nl/ bea2019st/data/wi+locness v2.1.bea19.tar.gz. 7 Japanese, Russian, French, and German, following Lichtarge et al. (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 159, |
|
"text": "Lichtarge et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "When pretraining, we train the Transformer model for 1M steps. We set the learning rate to 0.01 for the first 10,000 steps, after which we decrease it proportionally to the inverse square root of the number of steps. When finetuning, we set the learning rate to a constant 3 \u00d7 10 \u22125 . Regardless of the dataset being used, we run finetuning for \u223d30 epochs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Scored Training and Finetuning", |
|
"sec_num": "5.3" |
|
}, |
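A small sketch of the learning-rate settings described above (a simplified approximation; the exact Tensor2Tensor schedule may differ in details such as the constant used in the decay):

```python
def pretrain_learning_rate(step: int, base_lr: float = 0.01, constant_steps: int = 10000) -> float:
    """0.01 for the first 10,000 steps, then decaying as the inverse square root of the step."""
    if step <= constant_steps:
        return base_lr
    # Scaled so the schedule is continuous at step == constant_steps.
    return base_lr * (constant_steps ** 0.5) / (step ** 0.5)

FINETUNE_LR = 3e-5  # constant learning rate used for all finetuning runs
```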
|
{ |
|
"text": "When applying the scored training strategies to Lang-8, we discard the base model that was used in calculating the \u2206ppl scores (which was trained on: Pretrain \u2192 Lang-8), and start a new finetuning run on the scored Lang-8, from a model initialized on the same pretraining data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scored Training and Finetuning", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "When applying our scored training strategies to the much larger pretraining data, rather than start the model from random initialization and repeat 1M steps of training, we continue training from the 1M checkpoint of the base model and train on the scored data for an additional 100,000 steps (using the same pretraining settings).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scored Training and Finetuning", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "In the course of our experiments, we evaluate on the development set of the BEA-2019 shared task (BEA-19 dev), which includes examples from both W&I and the LOCNESS corpus (Granger, 1998) , using the ERRANT scorer (Bryant et al., 2017) . In our analysis (Section 7), we report on the BEA-19 test, with scores provided by the official Codalab of the BEA-2019 task. 8 We also report on the popular GEC evaluation corpora: CoNLL-2014 (Ng et al., 2014) and JFLEG (Napoles et al., 2017; Heilman et al., 2014) , for which we report F 0.5 with the M 2 scorer (Dahlmeier and Ng, 2012) and the GLEU + metric (Napoles et al., 2016) respectively. For BEA-19 dev and BEA-19 test, following the conventions of the shared task, we post-processed the model output with the spaCy tokenizer. 9 For decoding, we use iterative decoding (Lichtarge et al., 2019 ) with a beam size of 4. For each reported test result, we select the model checkpoint, set the number of decoding iterations, and tune a scalar identity threshold based Lang-8 occupies a middle ground, as the data, which is largely relevant to GEC but scraped from a crowd-sourced medium, does not rise to the standard of professional annotation. In light of this, we combine the single REV dataset with each of the four RT datasets to produce four large pretraining datasets, each containing half Wiki revisions and half round-trip translated data (PRE). All experiments are run for each of these merged datasets, and all reported figures are the average of those four models. We also merge the FCE and BEA-19 train into a single finetuning set, which we refer to as 'BEA-FCE' (BF). We explore three training schemes: including Lang-8 with the higher-quality annotated data, including Lang-8 with the pretraining data, and a two-stage finetuning scheme, with Lang-8 as the intermediate step. Table 2 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 187, |
|
"text": "(Granger, 1998)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 235, |
|
"text": "(Bryant et al., 2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 365, |
|
"text": "8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 481, |
|
"text": "(Napoles et al., 2017;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 503, |
|
"text": "Heilman et al., 2014)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 576, |
|
"text": "(Dahlmeier and Ng, 2012)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 599, |
|
"end": 621, |
|
"text": "(Napoles et al., 2016)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 775, |
|
"end": 776, |
|
"text": "9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 817, |
|
"end": 840, |
|
"text": "(Lichtarge et al., 2019", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1835, |
|
"end": 1842, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.5" |
|
}, |
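A schematic sketch of iterative decoding with an identity threshold (the decode interface and the exact acceptance rule here are assumptions standing in for the procedure of Lichtarge et al. (2019)): the model's output is repeatedly fed back as its input, and decoding stops early once the model prefers to keep the sentence unchanged.

```python
def iterative_decode(model, sentence: str, max_iterations: int,
                     identity_threshold: float, beam_size: int = 4) -> str:
    """Iteratively refine a sentence by re-decoding the previous output.

    `model.decode` is assumed to return the best non-identity hypothesis together
    with its cost and the cost of the identity (unchanged) hypothesis.
    """
    current = sentence
    for _ in range(max_iterations):
        hypothesis, hyp_cost, identity_cost = model.decode(current, beam_size=beam_size)
        # Only accept a rewrite when it beats the identity hypothesis by the threshold margin.
        if hypothesis == current or hyp_cost >= identity_threshold * identity_cost:
            break
        current = hypothesis
    return current
```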
|
{ |
|
"text": "higher-quality than D \u2212 . For these experiments, we use the soft-weighting training strategy ([b] in Section 4.2), as it is has no tunable hyperparameters and does not discard any data. Results are shown in Table 3 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 97, |
|
"text": "([b]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 214, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Applying Delta-log-perplexity", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Given a set of training data for which each example has an associated heuristic 'quality' score, there are many plausible options for incorporating that score into a training schedule. For the bestperforming scoring arrangement, [D] in Table 3 , we repeat the scored training stage in order to compare the following strategies for incorporating scores in training. Table 3 . The asterisk indicates the training stage that is being varied in each experiment. In (ii) all models are finetuned on Lang-8 BF using the soft strategy. The hard strategies filter out all examples with positive \u2206ppl, which leaves 37% of the dataset remaining for both PRE BF , and Lang-8 BF . The curriculum strategies anneal down to the best 5% of the dataset, following (Wang et al., 2018) . Table 4 . We note that Wang et al. (2018) used the hard-cclm strategy for noise-filtering in NMT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 232, |
|
"text": "[D]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 748, |
|
"end": 767, |
|
"text": "(Wang et al., 2018)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 243, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 372, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 770, |
|
"end": 777, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training With Scored Examples", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Training a model on PRE \u222a Lang-8 BF ([A] in Table 3 ) achieves a +12.5 F 0.5 gain over a model trained on the same unscored dataset, and outperforms a model trained on PRE \u2192 Lang-8 by +2.4 F 0.5 on BEA-19 dev ([3] in Table 2 ). Figure 1 explores the characteristics of the \u2206ppl scores for the merged dataset, with examples labeled by their original source dataset (REV, RT, or Lang-8).", |
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 213, |
|
"text": "([3]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 51, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 224, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 236, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Understanding \u2206ppl Scores", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "The scatter plot (a) offers some insight into how \u2206ppl works. Strikingly, all data clusters tightly around the diagonal on which \u2206ppl=0. Almost all examples with negative \u2206ppl also have low ppl + as well. Variance in \u2206ppl between examples is much less than variance in ppl + . The scatter plot yields distinct shapes for each of the datasets, and the percentile-rank plot (c) (which depicts the relative proportions of each dataset per percentile bin) shows that the datasets have drastically different scoring profiles. Lang-8, RT and REV have 52%, 30%, and 66% examples with negative (good) \u2206ppl respectively, and Lang-8 carries a disproportionate share of the most Table 5 we draw individual examples from PRE \u222a Lang-8 BF alongside their ppl + and \u2206ppl scores. The examples exhibit some characteristics particular to the methodology of their origination.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 668, |
|
"end": 675, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Understanding \u2206ppl Scores", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Some of the REV examples [d,f,g] demonstrate the shortcomings of the dataset; significant additions or deletions of information with no grammatical content. Although most such examples have positive (bad) \u2206ppl, it is noteworthy that example [d] , which seems catastrophically out-of-domain, has a better \u2206ppl than [e], which simply changes the tense of the sentence. ppl + is much higher for examples that have significant information change. This explains why the REV data in the scatter plot extends thinly along the \u2206ppl=0 diagonal; REV contains many examples with information change, for which both source and target are grammatically correct. For these examples, absolute value of both ppl + and ppl \u2212 is large, but the change in \u2206ppl is relatively small. This demonstrates a shortcoming of using only \u2206ppl as a heuristic for example quality:", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 32, |
|
"text": "[d,f,g]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 244, |
|
"text": "[d]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Wikipedia Revisions", |
|
"sec_num": "7.1.1" |
|
}, |
|
{ |
|
"text": "REV has a higher percentage of 'good' examples than Lang-8 according to \u2206ppl, but many of those examples actually have large ppl + , and do not capture grammatical changes. Example [a] illustrates a related failure case; it has high ppl \u2212 , but according to \u2206ppl alone, is the 'best' example in the table.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 184, |
|
"text": "[a]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Wikipedia Revisions", |
|
"sec_num": "7.1.1" |
|
}, |
|
{ |
|
"text": "The roundtrip-translated data does not suffer from large information changes, except when the meaning is so garbled as to produce a semantically irreconcilable sequence, as in [n] . As a result, the distribution of RT examples has lower ppl + than that of REV. However, many examples include re-arrangements or re-phrasings that are out of scope for the task of GEC [k, m] ; of the 10k sampled examples, only 30% have 'good' (negative) \u2206ppl. Interestingly, in example [l], passing a sequence through two translation models introduced a reasonably placed comma in what should have been the 'corrupted' source sequence; removing this comma yields a bad \u2206ppl score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 179, |
|
"text": "[n]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 372, |
|
"text": "[k, m]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Roundtrip-Translations", |
|
"sec_num": "7.1.2" |
|
}, |
|
{ |
|
"text": "Most Lang-8 examples, for better or worse, do involve grammatically relevant changes to the source sequence. Lang-8 contains many sentence pairs that contain some bad or awkward changes, and these examples perform poorly according to \u2206ppl [s, u] . Interestingly, partial corrections, even apparently good ones, also perform poorly [t] . This may be a result of the relatively complete nature of the corrections made in BF, in which few if any target sequences appear to need further correction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 245, |
|
"text": "[s, u]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 334, |
|
"text": "[t]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lang-8", |
|
"sec_num": "7.1.3" |
|
}, |
|
{ |
|
"text": "The scored training strategies (Table 4 ) explore approaches to making use of an examplelevel quality heuristic that accommodate distinct intuitions about how to treat the data. Filtering out examples beforehand (hard) follows the intuition that bad examples only hurt performance and should be excluded. Down-weighting the loss (soft) modifies the relative importance of examples, but avoids throwing any out, maintaining the value of having a large dataset. The 'curriculum'-style counterparts of each apply the same logic, while incorporating (albeit in a hardcoded manner) the intuition that the value of some examples changes over the course of training.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 39, |
|
"text": "(Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training Strategies", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "It is worthwhile to note that the optimal strategy, even among these simple hard-coded strategies, is a function of the characteristics of the dataset in question. The hard-cclm strategy is worst for Lang-8 BF , where it gradually isolates a small portion of an already small dataset, but is best for PRE BF , which is so large that 5% of the dataset is still considerable. Also, much of what is lost in the 'bad' portion of PRE BF is lower-quality data than that which exists in Lang-8 BF , which may explain both why hard-cclm does so well for PRE BF and why soft-cclm, which does not throw out the large portion of bad examples, does relatively poorly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Strategies", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "The hard strategy outperforms both soft and soft-cclm for the first stage of both experiments, but the advantage disappears following finetuning on BF. This suggests that cutting out the 'worst' examples entirely, while beneficial in the scored training stage, may prevent a sort of regularization that is beneficial to the ultimate finetuned model. That all strategies similarly outperform the baseline suggests that \u2206ppl is a robust heuristic Dataset proportion examples learning rate full 60011 3 \u00d7 10 \u22125 \u223c1/2 29998 3 \u00d7 10 \u22125 \u223c1/4 15121 25 \u00d7 10 \u22126 \u223c1/8 7608 1 \u00d7 10 \u22127 \u223c1/16 3749 1 \u00d7 10 \u22127 \u223c1/32 1841 1 \u00d7 10 \u22127 \u223c1/64 905 1 \u00d7 10 \u22127 for quality; that all are simple and un-tuned to the data suggests that there remains headroom for more sophisticated training strategies to do even better.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Strategies", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "We observe that scoring any combination of lower-quality datasets using BF as the target data leads to large improvements over unscored pretraining models, and modest performance gains over those unscored models after finetuning (Table 3) . We now explore how each of these effects varies as a function of the target data size. For the scoring setup with the largest relative gains over unscored pretraining ([A] in Table 3 ), we repeat the same experiment multiple times, but using nested subsets of BF for both scoring and finetuning, each half the size of the previous one. While halving the datasets, we maintain the ratio of BEA-19 train and FCE data within each subset. Because using the same finetuning learning rate would quickly overfit for the smaller datasets, learning rates were tuned for each subset using the test set of the CoNLL-2013 shared task (Ng et al., 2013) ( Table 6 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 408, |
|
"end": 412, |
|
"text": "([A]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 238, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 416, |
|
"end": 423, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 883, |
|
"end": 890, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Scoring With Less Target Data", |
|
"sec_num": "7.3" |
|
}, |
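A small sketch of building the nested, successively halved scoring/finetuning subsets while preserving the BEA-19 train to FCE ratio (the shuffling, rounding, and seed handling are assumptions; only the halving and the fixed ratio come from the text):

```python
import random

def nested_halved_subsets(bea_examples, fce_examples, num_halvings: int, seed: int = 0):
    """Return nested subsets of BF, each roughly half the size of the previous one.

    Taking prefixes of the same shuffled lists keeps every smaller subset inside
    the larger ones, and slicing both corpora by the same fraction preserves the
    BEA-19 train : FCE ratio of the full set.
    """
    rng = random.Random(seed)
    bea, fce = list(bea_examples), list(fce_examples)
    rng.shuffle(bea)
    rng.shuffle(fce)

    subsets, fraction = [], 1.0
    for _ in range(num_halvings + 1):
        subsets.append(bea[: int(len(bea) * fraction)] + fce[: int(len(fce) * fraction)])
        fraction /= 2.0
    return subsets
```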
|
{ |
|
"text": "All models are trained via the hard-cclm strategy, which, prior to finetuning, significantly outperforms other strategies for training on scored pretraining data (section 'ii' in Table 4 ). Results are shown in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 186, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 219, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Scoring With Less Target Data", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "The marginal benefit of scoring the pretraining data yields a drastic performance gain over unscored pretraining, even for very small amounts of target data (see Figure 2 and Table 3 ). This pretrain gain reflects the value of obliquely incorporating the information of the target dataset into the pretraining data via \u2206ppl scoring. Because finetuning on the target dataset directly incorporates that same information again, this gain is diminished once the scored models are finetuned (see \"\u2206 vs unscored\" column in Table 3 ). However, the benefits of finetuning are limited by over-fitting to the finetuning dataset, which is likely to occur given that it is substantially smaller (\u2248 1M words) than pretraining data (\u2248 8B words). Thus the scored pretrained model, which has already incorporated some of the information of the target dataset without yet having seen any of the specific examples therein, is able to make better use of the finetuning set before the harm of over-fitting outweighs the benefit of further training. This difference explains why even after finetuning, the models with scored training stages outperform the unscored models, though by less than if directly comparing the scored and unscored stages themselves.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 170, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 182, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 524, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Understanding the Benefits of Scoring", |
|
"sec_num": "7.4" |
|
}, |
|
{ |
|
"text": "In Figure 2 , the marginal benefit of scoring for the 30k dataset size is +0.5 F 0.5 , compared with +0.9 F 0.5 for doubling the size of the finetuning data (without scoring). For tasks constrained by the availability of high-quality data, and for which labeling costs are high, scoring noisy pretraining data may be a thrifty path to performance gains.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Understanding the Benefits of Scoring", |
|
"sec_num": "7.4" |
|
}, |
|
{ |
|
"text": "We evaluate our best unscored and scored systems at all stages of training on BEA-19 test, CoNLL-14, and JFLEG. Results are shown in Table 7 . Results for BEA-19 test are provided by the official Codalab competition of the BEA-2019 shared task, where this work qualifies as Unrestricted because of its reliance on additional parallel data like the Wikipedia revisions pretraining dataset. Because the most competitive results in the BEA-2019 task were submitted to the Restricted track, the results of this work are not perfectly comparable to most recent and competitive GEC publications. Additionally, many of the cited works make use of the NUCLE dataset (Dahlmeier et al., 2013) , which was not used in this work. Nonetheless, it is useful to contextualize the results within the scope of recent progress in GEC. A comparison to recent prior work is made in Table 8 . This work achieves state-of-the-art results for the JFLEG and CoNLL-14 test sets, and obtains competitive results on BEA-19 test.", |
|
"cite_spans": [ |
|
{ |
|
"start": 658, |
|
"end": 682, |
|
"text": "(Dahlmeier et al., 2013)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 140, |
|
"text": "Table 7", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 862, |
|
"end": 869, |
|
"text": "Table 8", |
|
"ref_id": "TABREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Test Set Results", |
|
"sec_num": "7.5" |
|
}, |
|
{ |
|
"text": "The huge jump in performance between unscored and scored pretraining data demonstrates the possibility of making much more effective use of large and noisy datasets through the incorporation of example-level quality scores. While \u2206ppl is one such score, there is significant room for (Grundkiewicz et al., 2019) 69.5 64.2 61.2 (Kiyono et al., 2019) 70.2 65.0 61.4 (Lichtarge et al., 2019) -60.4 63.3 (Xu et al., 2019) 66.6 63.2 62.6 (Omelianchuk et al., 2020) 73.7 66.5 this work -unscored 71.9 65.3 64.7 this work -scored 73.0 66.8 64.9 In our scored training, we have presented hard-coded training strategies selected for their simplicity. These un-tuned strategies are easy to implement, but do not represent optimal uses of an example-level heuristic score. The fact that there is such variability between them in the two experiments of Table 4 suggests that training methods that are sensitive to the particularities of the scored dataset and the model may be able to make much better use of the same scored data. For example, a training scheme that, during training, dynamically decided which data to include or exclude (or how to weight the included data) could be expected to outperform our hardcoded strategies and hyperparameters. A training strategy along these lines has been implemented successfully by for NMT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 311, |
|
"text": "(Grundkiewicz et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 348, |
|
"text": "(Kiyono et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 388, |
|
"text": "(Lichtarge et al., 2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 417, |
|
"text": "(Xu et al., 2019)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 459, |
|
"text": "(Omelianchuk et al., 2020)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 841, |
|
"end": 848, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "These two complementary directions of future work, the development of new example-level quality heuristics and the techniques to apply them in scored training, present an intriguing path for future exploration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Note that \u2206ppl is a difference between log perplexities, not between the example perplexities themselves.2 This allows us to implement curriculum-style data selection and directly weight examples using the same score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/tensorflow/tensor2tensor.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.Lang-8.com. 5 https://www.cl.cam.ac.uk/research/nl/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://competitions.codalab.org/ competitions/20229. 9 https://spacy.io/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each of the four bridge languages. The 'clean' target sentences are the shared between the four.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors would like to thank Felix Stahlberg, and the three anonymous reviewers, for their helpful comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Domain adaptation via pseudo indomain data selection", |
|
"authors": [ |
|
{ |
|
"first": "Amittai", |
|
"middle": [], |
|
"last": "Axelrod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "355--362", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in- domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Nat- ural Language Processing, pages 355-362, Edinburgh, Scotland, UK. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The BEA-2019 shared task on grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Bryant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariano", |
|
"middle": [], |
|
"last": "Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00d8istein", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Andersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Bryant, Mariano Felice, \u00d8istein E. Andersen, and Ted Briscoe. 2019. The BEA- 2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Automatic annotation and evaluation of error types for grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Bryant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariano", |
|
"middle": [], |
|
"last": "Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "793--805", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and eval- uation of error types for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 793-805, Vancouver, Canada. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A neural grammatical error correction system built on better pre-training and sequential transfer learning", |
|
"authors": [ |
|
{ |
|
"first": "Yo Joong", |
|
"middle": [], |
|
"last": "Choe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiyeon", |
|
"middle": [], |
|
"last": "Ham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyubyong", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yeoil", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "213--227", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yo Joong Choe, Jiyeon Ham, Kyubyong Park, and Yeoil Yoon. 2019. A neural grammatical error correction system built on better pre-training and sequential transfer learning. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 213-227, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A multilayer convolutional encoder-decoder neural network for grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Shamil", |
|
"middle": [], |
|
"last": "Chollampatt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "The Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shamil Chollampatt and Hwee Tou Ng. 2018. A multilayer convolutional encoder-decoder neural network for grammatical error correc- tion. In The Thirty-Second AAAI Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Kyoto university participation to WAT 2016", |
|
"authors": [ |
|
{ |
|
"first": "Fabien", |
|
"middle": [], |
|
"last": "Cromieres", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenhui", |
|
"middle": [], |
|
"last": "Chu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Toshiaki", |
|
"middle": [], |
|
"last": "Nakazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 3rd Workshop on Asian Translation (WAT2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "166--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabien Cromieres, Chenhui Chu, Toshiaki Nakazawa, and Sadao Kurohashi. 2016. Kyoto university participation to WAT 2016. In Proceedings of the 3rd Workshop on Asian Translation (WAT2016), pages 166-174, Osaka, Japan. The COLING 2016 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Better evaluation for grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Dahlmeier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "568--572", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correc- tion. In Proceedings of the 2012 Conference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Language Technologies, pages 568-572, Montr\u00e9al, Canada. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Building a large annotated corpus of learner English: The NUS corpus of learner English", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Dahlmeier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siew Mei", |
|
"middle": [], |
|
"last": "Hwee Tou Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "22--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the Eighth Work- shop on Innovative Use of NLP for Build- ing Educational Applications, pages 22-31, Atlanta, Georgia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Fluency boost learning and inference for neural grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Ge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1055--1065", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Ge, Furu Wei, and Ming Zhou. 2018a. Fluency boost learning and inference for neural grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1055-1065, Melbourne, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Reaching human-level performance in automatic grammatical error correction: An empirical study", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Ge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Ge, Furu Wei, and Ming Zhou. 2018b. Reaching human-level performance in auto- matic grammatical error correction: An empiri- cal study. CoRR, abs/1807.01270.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The computer learner corpus: A versatile new source of data for SLA research", |
|
"authors": [ |
|
{ |
|
"first": "Sylviane", |
|
"middle": [], |
|
"last": "Granger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Learner English on Computer", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sylviane Granger. 1998. The computer learner corpus: A versatile new source of data for SLA research. In Sylviane Granger, editor, Learner English on Computer, pages 3-18, Addison Wesley Longman, London and New York.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The wiked error corpus: A corpus of corrective wikipedia edits and its application to grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Grundkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roman Grundkiewicz and Marcin Junczys- Dowmunt. 2014. The wiked error corpus: A corpus of corrective wikipedia edits and its application to grammatical error correction.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Advances in Natural Language Processing -Lecture Notes in Computer Science", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Przepi\u00f3rkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maciej", |
|
"middle": [], |
|
"last": "Ogrodniczuk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "8686", |
|
"issue": "", |
|
"pages": "478--490", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Przepi\u00f3rkowski and Maciej Ogrodniczuk, editors, In Advances in Natural Language Processing -Lecture Notes in Computer Science, volume 8686, pages 478-490. Springer. Roman Grundkiewicz, Marcin Junczys-", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Neural grammatical error correction systems with unsupervised pre-training on synthetic data", |
|
"authors": [ |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Grundkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Heafield", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "252--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252-263, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Predicting grammaticality on an ordinal scale", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Heilman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aoife", |
|
"middle": [], |
|
"last": "Cahill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Madnani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melissa", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Mulholland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "174--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Heilman, Aoife Cahill, Nitin Madnani, Melissa Lopez, Matthew Mulholland, and Joel Tetreault. 2014. Predicting grammaticality on an ordinal scale. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 174-180, Baltimore, Maryland. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Dual conditional crossentropy filtering of noisy parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers, Brussels, Belgium. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Junczys-Dowmunt. 2018. Dual condi- tional crossentropy filtering of noisy parallel corpora. In Proceedings of the Third Confer- ence on Machine Translation: Research Papers, Brussels, Belgium. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Approaching neural grammatical error correction as a low-resource machine translation task", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Grundkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shubha", |
|
"middle": [], |
|
"last": "Guha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Heafield", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "595--606", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Approaching neural grammatical error correc- tion as a low-resource machine translation task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 595-606, New Orleans, Louisiana. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Wronging a right: Generating better errors to improve grammatical error detection", |
|
"authors": [ |
|
{ |
|
"first": "Sudhanshu", |
|
"middle": [], |
|
"last": "Kasewa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pontus", |
|
"middle": [], |
|
"last": "Stenetorp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4977--4983", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sudhanshu Kasewa, Pontus Stenetorp, and Sebastian Riedel. 2018. Wronging a right: Gen- erating better errors to improve grammatical error detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4977-4983, Brus- sels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "On the impact of various types of noise on neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Huda", |
|
"middle": [], |
|
"last": "Khayrallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Second Workshop on Neural Machine Translation and Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the Second Workshop on Neural Machine Transla- tion and Generation, Melbourne. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "An empirical study of incorporating pseudo data into grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Shun", |
|
"middle": [], |
|
"last": "Kiyono", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masato", |
|
"middle": [], |
|
"last": "Mita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoya", |
|
"middle": [], |
|
"last": "Mizumoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kentaro", |
|
"middle": [], |
|
"last": "Inui", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1236--1242", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui. 2019. An empiri- cal study of incorporating pseudo data into grammatical error correction. In Proceed- ings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 1236-1242, Hong Kong, China. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Reinforcement learning based curriculum optimization for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Gaurav", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2054--2061", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gaurav Kumar, George Foster, Colin Cherry, and Maxim Krikun. 2019. Reinforcement learning based curriculum optimization for neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2054-2061, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Corpora generation for grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Jared", |
|
"middle": [], |
|
"last": "Lichtarge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Alberti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shankar", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Tong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "3291--3301", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jared Lichtarge, Chris Alberti, Shankar Kumar, Noam Shazeer, Niki Parmar, and Simon Tong. 2019. Corpora generation for grammatical error correction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3291-3301, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The effect of learner corpus size in grammatical error correction of ESL writings", |
|
"authors": [ |
|
{ |
|
"first": "Tomoya", |
|
"middle": [], |
|
"last": "Mizumoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuta", |
|
"middle": [], |
|
"last": "Hayashibe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mamoru", |
|
"middle": [], |
|
"last": "Komachi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Nagata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuji", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "The COLING 2012 Organizing Committee", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "863--872", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomoya Mizumoto, Yuta Hayashibe, Mamoru Komachi, Masaaki Nagata, and Yuji Matsumoto. 2012. The effect of learner corpus size in grammatical error correction of ESL writings. In Proceedings of COLING 2012: Posters, pages 863-872, Mumbai, India. The COLING 2012 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Intelligent selection of language model training data", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Robert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the ACL 2010", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert C. Moore and William Lewis. 2010. Intelligent selection of language model train- ing data. In Proceedings of the ACL 2010", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Conference Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "220--224", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Conference Short Papers, pages 220-224, Uppsala, Sweden. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "GLEU without tuning", |
|
"authors": [ |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keisuke", |
|
"middle": [], |
|
"last": "Sakaguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1605.02592" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2016. GLEU without tuning. arXiv:1605.02592.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "JFLEG: A fluency corpus and benchmark for grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keisuke", |
|
"middle": [], |
|
"last": "Sakaguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "229--234", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. JFLEG: A fluency corpus and benchmark for grammatical error correction. In Proceedings of the 15th Conference of the European Chapter of the Association for Com- putational Linguistics: Volume 2, Short Papers, pages 229-234, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The conll-2013 shared task on grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siew Mei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Hadiwinoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"Hendy" |
|
], |
|
"last": "Susanto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Bryant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "CoNLL Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2013. The conll-2013 shared task on grammatical error correction. In CoNLL Shared Task.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The CoNLL-2014 shared task on grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siew Mei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Hadiwinoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"Hendy" |
|
], |
|
"last": "Susanto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Bryant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Lan- guage Learning: Shared Task, pages 1-14, Baltimore, Maryland. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Gector -grammatical error correction: Tag, not rewrite", |
|
"authors": [ |
|
{ |
|
"first": "Kostiantyn", |
|
"middle": [], |
|
"last": "Omelianchuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vitaliy", |
|
"middle": [], |
|
"last": "Atrasevych", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Artem", |
|
"middle": [], |
|
"last": "Chernodub", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oleksandr", |
|
"middle": [], |
|
"last": "Skurzhanskyi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. Gector -grammatical error correction: Tag, not rewrite. In Proceed- ings of the Fifteenth Workshop on Innova- tive Use of NLP for Building Educational Applications, Seattle, WA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Japanese and korean voice search", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaisuke", |
|
"middle": [], |
|
"last": "Nakajima", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the IEEE Conference on Acoustics, Speech and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In Proceed- ings of the IEEE Conference on Acoustics, Speech and Signal Processing.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Adafactor: Adaptive learning rates with sublinear memory cost", |
|
"authors": [ |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Stern", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.04235" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noam Shazeer and Mitchell Stern. 2018. Ada- factor: Adaptive learning rates with sublinear memory cost. arXiv:1804.04235.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6000--6010", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Dynamic data selection for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Arianna", |
|
"middle": [], |
|
"last": "Marlies Van Der Wees", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Bisazza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1400--1410", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic data selection for neural machine translation. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1400-1410, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Dynamically composing domain-data selection with clean-data selection by ''cocurricular learning'' for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isaac", |
|
"middle": [], |
|
"last": "Caswell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ciprian", |
|
"middle": [], |
|
"last": "Chelba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1282--1292", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Wang, Isaac Caswell, and Ciprian Chelba. 2019. Dynamically composing domain-data selection with clean-data selection by ''co- curricular learning'' for neural machine trans- lation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1282-1292, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Denoising neural machine translation training with trusted data and online data selection", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taro", |
|
"middle": [], |
|
"last": "Watanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Macduff", |
|
"middle": [], |
|
"last": "Hughes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tetsuji", |
|
"middle": [], |
|
"last": "Nakagawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ciprian", |
|
"middle": [], |
|
"last": "Chelba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--143", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Wang, Taro Watanabe, Macduff Hughes, Tetsuji Nakagawa, and Ciprian Chelba. 2018. Denoising neural machine translation training with trusted data and online data selection. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 133-143, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "A comprehensive survey of grammar error correction", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuelin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhuo", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Wang, Yuelin Wang, Jie Liu, and Zhuo Liu. 2020. A comprehensive survey of grammar error correction. 2005.06600.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Erroneous data generation for grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Shuyao", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiehao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Long", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "149--158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuyao Xu, Jiehao Zhang, Jin Chen, and Long Qin. 2019. Erroneous data generation for grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 149-158, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "A new dataset and method for automatically grading ESOL texts", |
|
"authors": [ |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Yannakoudakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Medlock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "180--189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceed- ings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Language Technologies, pages 180-189, Port- land, Oregon, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kewei", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruoyu", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingming", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "156--165", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156-165, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Wang et al. (2018) present a metric defined as the difference in log-probability of an individual", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "A comparison of the log-perplexity of base and target models (a), the corresponding histogram across the \u2206ppl axis (b), and the relative proportions of the three datasets in each \u03b4ppl percentile (c), for 30k examples sampled from PRE \u222a Lang-8 BF such that 10k examples were selected from REV, RT, and Lang-8 respectively. The histogram (b) x-axis has been reversed to align the 'best' examples (with the lowest \u2206ppl) towards the right, copying the alignment of the \u03b4ppl plot (c); for the scatter plot (a), the best examples are towards the bottom-right. The \u03b4ppl scores shown (c) are the values actually used by the various training strategies.(a) hard Filter by preset rank-score cutoff (b) soft Down-weight loss by rank-score (c) hard-cclm Curriculum-style filtering (d) soft-cclm Curriculum-style down-weighting Results are shown in", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "Performance of scored and unscored pretraining and finetuning as a function of proportion of the target dataset used. The pretraining dataset is (PRE \u222a Lang-8). Full BF dataset is shown at the far right (n=60011). Each smaller dataset is a randomly halved subset of the last, with proportion of BEA-19 train / FCE examples held constant. The smallest subset, (BF randomly halved six times) has 905 examples. Logarithmic lines of best fit are shown.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>on performance on the corresponding develop-</td></tr><tr><td>ment sets. Ensemble decoding is computed using</td></tr><tr><td>the average (Cromieres et al., 2016) of the logits</td></tr><tr><td>of multiple identical Transformers, trained</td></tr><tr><td>separately.</td></tr><tr><td>6 Experiments</td></tr><tr><td>6.1 Standard Training</td></tr></table>", |
|
"html": null, |
|
"text": "Training datasets. Wiki refers to Wikipedia." |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Training Data</td><td>BEA-19 dev F 0.5</td></tr><tr><td>1</td><td>PRE \u2192 (Lang-8 \u222a BF)</td><td>24.0 46.3</td></tr><tr><td>2</td><td>(PRE \u222a Lang-8) \u2192 BF</td><td>32.4 51.4</td></tr><tr><td>3</td><td>PRE \u2192 Lang-8 \u2192 BF</td><td>42.5 51.5</td></tr></table>", |
|
"html": null, |
|
"text": "For experiments[2] and [3] of the standard training setup(Table 2), we apply delta-log-perplexity scoring. For the multistage finetuning setup, we explore arrangements of base (D \u2212 ) and target (D + ) datasets that ensure that D + is smaller and" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Training Data</td><td>F 0.5</td><td>BEA-19 dev \u2206 vs unscored</td></tr><tr><td>A</td><td>(PRE \u222a Lang-8) BF \u2192 BF</td><td>44.9 51.8</td><td>+12.5 +0.4</td></tr><tr><td/><td>PRE BF</td><td>37.0</td><td>+6.8</td></tr><tr><td>B</td><td>\u2192 Lang-8</td><td>43.3</td><td>+0.8</td></tr><tr><td/><td>\u2192 BF</td><td>51.7</td><td>+0.2</td></tr><tr><td/><td>PRE</td><td>24.0</td><td>-</td></tr><tr><td>C</td><td>\u2192 Lang-8 BF</td><td>47.2</td><td>+4.7</td></tr><tr><td/><td>\u2192 BF</td><td>51.9</td><td>+0.4</td></tr><tr><td>D</td><td>PRE BF \u2192 Lang-8 BF \u2192 BF</td><td>48.0 52.3</td><td>+5.5 +0.8</td></tr></table>", |
|
"html": null, |
|
"text": "Comparing pretrain-finetune arrangements. The arrow indicates finetuning." |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Comparing training strategies for PRE BF , and Lang-8 BF , following setup (D) in" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Dataset</td><td/><td>Example</td><td>ppl +</td><td>\u2206ppl</td></tr><tr><td/><td>a</td><td>It comprises gives birth to 3 genera.</td><td>2.44</td><td>\u22120.25</td></tr><tr><td/><td>b</td><td>They also can also live in desert and forest areas.</td><td>0.1</td><td>\u22120.14</td></tr><tr><td/><td>c</td><td>It included 10 tracks, half of them with Joe on vocals.</td><td>0.07</td><td>\u22120.05</td></tr><tr><td>REV</td><td>d</td><td/><td/><td/></tr><tr><td/><td/><td>to a particular subject or type of resource.</td><td>0.42</td><td>0.12</td></tr><tr><td/><td>f</td><td>She drove a blue Ford SUV.</td><td>1.36</td><td>0.14</td></tr><tr><td/><td>g</td><td>The circle is complete. Fr.</td><td>2.65</td><td>2.06</td></tr><tr><td/><td>h</td><td>In winter, the sport was hockey.</td><td>0.1</td><td>\u22120.2</td></tr><tr><td/><td>i</td><td>Nearly a thousand people was were injured.</td><td>0.01</td><td>\u22120.16</td></tr><tr><td/><td>j</td><td>This section provides only provides a brief overview of some translated versions.</td><td>0.16</td><td>\u22120.08</td></tr><tr><td>RT</td><td>k</td><td>The sets are now depleted out of print.</td><td>0.12</td><td>0.06</td></tr><tr><td/><td>l</td><td>In 1902 , they held a garden party on the grounds of the Rose Bay Cottage.</td><td>0.23</td><td>0.1</td></tr><tr><td/><td colspan=\"3\">m This meant a reduction of the runtime by resulted in a 25 minutes run time reduction. 1.19</td><td>0.15</td></tr><tr><td/><td>n</td><td>The bad case was Adverse weather is the third largest cause of accidents.</td><td>2.0</td><td>0.5</td></tr><tr><td/><td>o</td><td>Please check it whether the way of speaking is right.</td><td>0.09</td><td>\u22120.18</td></tr><tr><td/><td>p</td><td>So, can't government make up for holiday gaps</td><td>0.43</td><td>\u22120.12</td></tr><tr><td/><td>q</td><td>I really enjoyed watching the movie , although I never read the manga.</td><td>0.14</td><td>\u22120.08</td></tr><tr><td>Lang-8</td><td>r</td><td>I am worry worried about their damages of mind mental well-being.</td><td>1.03</td><td>\u22120.003</td></tr><tr><td/><td>s</td><td>I always wake up 6 AM every days a.m.everyday and then I go to college.</td><td>1.05</td><td>0.11</td></tr><tr><td/><td>t</td><td>First The first time, He applogized apologized to me,</td><td>0.5</td><td>0.12</td></tr><tr><td/><td>u</td><td>I often use the google translation translator.</td><td>1.33</td><td>0.27</td></tr></table>", |
|
"html": null, |
|
"text": "The threee churches in the latter parish , at Rathgaroguie, Cushintown 3.89 0.06 and Terrerath, cater for has a population of approximately 2500.eBrowsing by subject, for example, is was possible as is was restricting searches" |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Examples from PRE \u222a Lang-8 BF . Italicized text represents differences between source and target. Strikethroughs represent deletions and bold text represents insertions." |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Successive halves of the BF dataset</td></tr><tr><td>used in Figure 2. Proportion of FCE and BEA-</td></tr><tr><td>19 train is held constant during down-sampling.</td></tr><tr><td>Learning rates are tuned based on the test set of</td></tr><tr><td>the CoNLL-2013 shared task.</td></tr></table>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF10": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Training Strategy</td><td colspan=\"2\">BEA-19 test</td><td colspan=\"2\">CoNLL-14 test</td><td>JFLEG test</td></tr><tr><td colspan=\"3\">PRE \u2192 Lang-8 Prec. unscored 35.7 41.7 62.7 52.4 \u2192 BF 67.4 61.7</td><td>36.8 60.3 66.1</td><td>44.6 36.2 64.0 42.8 67.6 44.3</td><td>42.6 58.3 61.1</td><td>54.1 62.5 63.6</td></tr><tr><td/><td>ensemble</td><td>74.1 64.3</td><td>71.9</td><td>72.6 46.7</td><td>65.3</td><td>64.7</td></tr><tr><td/><td>PRE BF (soft)</td><td>56.6 47.1</td><td>54.4</td><td>61.6 38.2</td><td>54.8</td><td>59.4</td></tr><tr><td/><td>\u2192 Lang-8 BF (soft)</td><td>68.0 57.8</td><td>65.7</td><td>68.6 44.7</td><td>62.0</td><td>63.7</td></tr><tr><td/><td>\u2192 BF</td><td>67.6 62.5</td><td>66.5</td><td>69.4 43.9</td><td>62.1</td><td>63.8</td></tr><tr><td>scored</td><td>ensemble</td><td>75.4 64.7</td><td>73.0</td><td>74.7 46.9</td><td>66.8</td><td>64.5</td></tr><tr><td/><td colspan=\"2\">PRE BF (soft) \u2192 Lang-8 64.1 52.2</td><td>61.3</td><td>66.0 41.8</td><td>59.2</td><td>62.5</td></tr><tr><td/><td>\u2192 BF</td><td>66.8 61.5</td><td>65.7</td><td>68.3 45.4</td><td>62.0</td><td>63.6</td></tr><tr><td/><td>ensemble</td><td>71.7 67.4</td><td>70.8</td><td>71.2 49.9</td><td>65.6</td><td>64.9</td></tr></table>", |
|
"html": null, |
|
"text": "Rec. F 0.5 (ERRANT) Prec. Rec. F 0.5 (M 2 ) GLEU +" |
|
}, |
|
"TABREF11": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>BEA-19 test</td><td>CoNLL-14 test</td><td>JFLEG test</td></tr><tr><td>F 0.5 (ERRANT)</td><td>F 0.5 (M 2 )</td><td>GLEU +</td></tr></table>", |
|
"html": null, |
|
"text": "Test set evaluation results. For each test set, the finetuning checkpoint selected, the identitycorrection threshold, and the number of rounds of iterative decoding are tuned to the respective dev sets. BEA-19 test results are provided via the Codalab competition website of the BEA-2019 shared task. Each non-ensemble row represents the average of four models, whose construction is described in Section 6. The ensembles combine the four models from the preceding row." |
|
}, |
|
"TABREF12": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Comparison of test set evaluation results to prior work, showing the best reported result for each test set in each cited work. Cited values for different test sets do not necessarily represent the same model. improvement, as seen in the example-level analysis in Section 7. Other methods for scoring individual examples should be explored." |
|
} |
|
} |
|
} |
|
} |