|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:15:34.388274Z" |
|
}, |
|
"title": "FELIX: Flexible Text Editing Through Tagging and Insertion", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Mallinson", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Aliaksei", |
|
"middle": [], |
|
"last": "Severyn", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Malmi", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Guillermo", |
|
"middle": [], |
|
"last": "Garrido", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present FELIX-a flexible text-editing approach for generation, designed to derive maximum benefit from the ideas of decoding with bi-directional contexts and self-supervised pretraining. In contrast to conventional sequenceto-sequence (seq2seq) models, FELIX is efficient in low-resource settings and fast at inference time, while being capable of modeling flexible input-output transformations. We achieve this by decomposing the text-editing task into two sub-tasks: tagging to decide on the subset of input tokens and their order in the output text and insertion to in-fill the missing tokens in the output not present in the input. The tagging model employs a novel Pointer mechanism, while the insertion model is based on a Masked Language Model (MLM). Both of these models are chosen to be non-autoregressive to guarantee faster inference. FELIX performs favourably when compared to recent text-editing methods and strong seq2seq baselines when evaluated on four NLG tasks:", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present FELIX-a flexible text-editing approach for generation, designed to derive maximum benefit from the ideas of decoding with bi-directional contexts and self-supervised pretraining. In contrast to conventional sequenceto-sequence (seq2seq) models, FELIX is efficient in low-resource settings and fast at inference time, while being capable of modeling flexible input-output transformations. We achieve this by decomposing the text-editing task into two sub-tasks: tagging to decide on the subset of input tokens and their order in the output text and insertion to in-fill the missing tokens in the output not present in the input. The tagging model employs a novel Pointer mechanism, while the insertion model is based on a Masked Language Model (MLM). Both of these models are chosen to be non-autoregressive to guarantee faster inference. FELIX performs favourably when compared to recent text-editing methods and strong seq2seq baselines when evaluated on four NLG tasks:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The ideas of text in-filling coupled with selfsupervised pre-training of deep Transformer networks on large text corpora have dramatically changed the landscape in Natural Language Understanding. BERT (Devlin et al., 2019) and its successive refinements RoBERTa , ALBERT (Lan et al., 2019) implement this recipe and have significantly pushed the state-of-the-art on multiple NLU benchmarks such as GLUE (Wang et al., 2018) and SQuAD (Rajpurkar et al., 2016) . More recently, masked or in-filling style objectives for model pretraining have been applied to seq2seq tasks, significantly pushing the state-of-the-art * Equal contribution. Figure 1 : FELIX transforms the source \"The big very loud cat\" into the target text \"The very big old cat\". on a number of text generation tasks, e.g, KER-MIT , MASS (Song et al., 2019) , Bert2Bert (Rothe et al., 2020) , BART (Lewis et al., 2020) and T5 (Raffel et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 222, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 271, |
|
"end": 289, |
|
"text": "(Lan et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 422, |
|
"text": "(Wang et al., 2018)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 457, |
|
"text": "(Rajpurkar et al., 2016)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 802, |
|
"end": 821, |
|
"text": "(Song et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 834, |
|
"end": 854, |
|
"text": "(Rothe et al., 2020)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 862, |
|
"end": 882, |
|
"text": "(Lewis et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 890, |
|
"end": 911, |
|
"text": "(Raffel et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 636, |
|
"end": 644, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While seq2seq frameworks offer a generic tool for modeling almost any kind of text-to-text transduction, there are still many real-world tasks where generating target texts completely from scratchas is done with seq2seq approaches-can be unnecessary. This is especially true for monolingual settings where input and output texts have relatively high degrees of overlap. In such cases a natural approach is to cast conditional text generation as a text-editing task, where the model learns to reconstruct target texts by applying a set of edit operations to the inputs. Typically, the set of edit operations is fixed and pre-defined ahead of time, which on one hand limits the flexibility of the model to reconstruct arbitrary output texts from their inputs, but on the other leads to higher sample-efficiency as the limited set of allowed operations significantly reduces the search space. Based on this observation, text-editing approaches have recently re-gained sig-nificant interest (Gu et al., 2019; Dong et al., 2019; Awasthi et al., 2019; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 987, |
|
"end": 1004, |
|
"text": "(Gu et al., 2019;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1005, |
|
"end": 1023, |
|
"text": "Dong et al., 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1024, |
|
"end": 1045, |
|
"text": "Awasthi et al., 2019;", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we present a novel text-editing framework, FELIX, which is heavily inspired by the ideas of bi-directional decoding (slot in-filling) and self-supervised pre-training. In particular, we have designed FELIX with the following requirements:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Sample efficiency. Training a high-precision text generation model typically requires large amounts of high-quality supervised data. Self-supervised techniques based on text in-filling have been shown to provide a crucial advantage in low-resource settings. Hence, we focus on approaches able to benefit from already existing pre-trained language models such as BERT, where the final model is directly fine-tuned on the downstream task. We show that this allows us to train on as few as 450 datapoints.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Fast inference time. Achieving low latencies when serving text-generation models typically requires specialized hardware and finding a trade-off between model size and accuracy. One major reason for slow inference times is that text-generation models typically employ an autoregressive decoder, i.e., output texts are generated in a sequential non-parallel fashion. To ensure faster inference times we opt for keeping FELIX fully non-autoregressive, resulting in two orders of magnitude speedups.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Flexible text editing. While simplifying the learning task, text-editing models are not as powerful as general purpose sequence-to-sequence approaches when it comes to modeling arbitrary inputoutput text transductions. Hence, we strive to strike a balance between the complexity of learned edit operations and the percentage of input-output transformations the model can capture.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose to tackle text editing by decomposing it into two sub-problems: tagging and insertion (see Fig. 1 ). Our tagger is a Transformer-based network that implements a novel Pointing mechanism (Vinyals et al., 2015) . It decides which source tokens to preserve and in which order they appear in the output, thus allowing for arbitrary word reordering.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 219, |
|
"text": "(Vinyals et al., 2015)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 108, |
|
"text": "Fig. 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Target words not present in the source are represented by the generic slot predictions to be infilled by the insertion model. To benefit from self-supervised pre-training, we chose our insertion model to be fully compatible with the BERT archi-tecture, such that we can easily re-use a publiclyavailable pre-trained checkpoint.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "By decomposing text-editing tasks in this way we redistribute the complexity load of generating an output text between the two models: the source text already provides most of the building blocks required to reconstruct the target, which is handled by the tagging model. The missing pieces are then in-filled by the insertion model, whose job becomes much easier as most of the output text is already in place. Moreover, such a two-step approach is the key for being able to use completely non-autoregressive decoding for both models and still achieve competitive results compared to fully autoregressive approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We evaluate FELIX on four distinct text generation tasks: Sentence Fusion, Text Simplification, Summarization, and Automatic Post-Editing for Machine Translation and compare it to recent text-editing and seq2seq approaches. Each task is unique in the editing operations required and the amount of training data available, which helps to better quantify the value of solutions we have integrated into FELIX 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "FELIX decomposes the conditional probability of generating an output sequence y from an input x as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(y|x) \u2248 p ins (y|y m )p tag (y t , \u03c0|x)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Model description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where the two terms correspond to the tagging and the insertion model. Term y t corresponds to the output of the tagging model and consists of a sequence of tags assigned to each input token x and a permutation \u03c0, which reorders the input tokens. Term y m denotes an intermediate sequence with masked spans and is fed into the insertion model. Given this factorization, both models can be trained independently.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model description", |
|
"sec_num": "2" |
|
}, |
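
{

"text": "To make the factorization concrete, the following minimal, self-contained Python sketch (ours, not the authors' released code; all helper names are hypothetical) chains a toy tagging model and a toy insertion model along the decomposition p(y|x) \u2248 p_{ins}(y|y^m) p_{tag}(y^t, \u03c0|x):\n\n# Minimal sketch of two-step FELIX-style decoding (illustrative only): the tagger\n# decides which source tokens to keep, in which order, and where to leave MASK\n# slots; the insertion model then in-fills all MASK slots in one parallel step.\ndef felix_decode(source, tag_fn, insert_fn):\n    tags, order = tag_fn(source)      # tagging model: per-token insertion counts + permutation\n    y_m = []\n    for i in order:                   # realize pointers: kept tokens in target order\n        y_m.append(source[i])\n        y_m += ['[MASK]'] * tags[i]   # tags[i] = number of tokens to insert after token i\n    return insert_fn(y_m)             # insertion model (a masked LM) fills the slots\n\n# Toy stand-ins for the two models, reproducing the Figure 1 example.\nsource = ['The', 'big', 'very', 'loud', 'cat']\ntagger = lambda x: ([0, 1, 0, 0, 0], [0, 2, 1, 4])    # keep The, very, big, cat; one slot after 'big'\nmasked_lm = lambda y_m: [t if t != '[MASK]' else 'old' for t in y_m]\nprint(felix_decode(source, tagger, masked_lm))        # ['The', 'very', 'big', 'old', 'cat']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model description",

"sec_num": "2"

},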
|
{ |
|
"text": "The tagging model is composed of three steps:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(1) Encoding, the source sentence is first encoded using a 12-layer BERT-base model. (2) Tagging, a tagger is applied on top of the encoder and tags each source token. (3) Pointing, a pointer network, using attention applied to the encoders hidden states, re-orders the source tokens. FELIX is Figure 2 : An example of two ways to model inputs to the insertion model: via token masking (Mask) or infilling (Infill). In the former case the tagging model predicts the number of masked tokens (INS 2), while in the latter it is delegated to the insertion model, which replaces the generic INS tag with a fixed length span (length 4), the insertion model then predicts a special PAD symbol to mark the end of the predicted span. Replacements are modeled by keeping the deleted spans between the [REPL] tags. For simplicity we do not show reordering.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 302, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "trained to optimize both the tagging and pointing loss:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "L = L pointing + \u03bbL tagging (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where \u03bb is a hyperparameter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Tagging. The tag sequence y t is constructed as follows: source tokens that must be copied are assigned the KEEP tag, tokens not present in the output are marked by the DELETE tag, token spans present in the output but missing from the input are modeled by the INSERT (INS) tag. This tag is then converted into masked token spans in-filled by the insertion model. Tags are predicted by applying a single feedforward layer f to the output of the encoder h L . We define: p(y t |x) = i p(y t i |x), where i is the index of the source token. The model then is trained to minimize the cross-entropy loss. During decoding we use argmax to determine the tags,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "y t i = argmax(f (h L i )).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
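
{

"text": "As an illustration of the tag prediction step (ours, not the released implementation; the hidden size and the three-way tag set are assumptions), the head is a single linear layer f over the final encoder states h^L, trained with cross-entropy and decoded with a per-token argmax:\n\nimport torch\nimport torch.nn as nn\n\nNUM_TAGS = 3   # e.g. KEEP, DELETE, INSERT (the real tag set may be larger)\n\nclass TaggingHead(nn.Module):\n    def __init__(self, hidden_size=768, num_tags=NUM_TAGS):\n        super().__init__()\n        self.f = nn.Linear(hidden_size, num_tags)    # the feed-forward layer f\n\n    def forward(self, h_L):                          # h_L: [batch, seq_len, hidden]\n        return self.f(h_L)                           # tag logits per source token\n\nhead = TaggingHead()\nh_L = torch.randn(2, 6, 768)                         # stand-in for the BERT encoder output\nlogits = head(h_L)\ngold_tags = torch.randint(0, NUM_TAGS, (2, 6))\nloss_tagging = nn.functional.cross_entropy(logits.transpose(1, 2), gold_tags)\npredicted_tags = logits.argmax(dim=-1)               # y^t_i = argmax(f(h^L_i))\n# The full training loss also adds the pointing loss: L = L_pointing + lambda * L_tagging.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tagging Model",

"sec_num": "2.1"

},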
|
{ |
|
"text": "Pointing. FELIX explicitly models word reordering to allow for larger global edits, as well as smaller local changes, such as swapping nearby words, John and Mary \u2192 Mary and John. Without this word reordering step a vanilla editing model based on just tagging such as Dong et al., 2019) , would first need to delete a span (and Mary) and then insert Mary and before John.", |
|
"cite_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 286, |
|
"text": "Dong et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "FELIX is able to model this without the need for deletions or insertions. Given a sequence x and the predicted tags y t , the re-ordering model generates a permutation \u03c0 so that from \u03c0 and y t we can reconstruct the insertion model input y m . Thus we have:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "P (y m |x) \u2248 i p(\u03c0(i)|x, y t , i)p(y t i |x). (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "[CLS] The big very loud cat root Figure 3 : Pointing mechanism to transform \"the big very loud cat\" into \"the very big cat\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 41, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We highlight that each \u03c0(i) is predicted independently, non auto-autoregressivly. The output of this model is a series of predicted pointers (source token \u2192 next target token). y m can easily be constructed by daisy-chaining the pointers together, as seen in Fig. 3 . As highlighted by this figure, FELIX's reordering process is similar to non-projective dependency parsing Dozat and Manning (2017) , where head relationships are non-autoregressively predicted to form a tree. Similarly FELIX predicts next word relationship and instead forms a sequence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 374, |
|
"end": 398, |
|
"text": "Dozat and Manning (2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 265, |
|
"text": "Fig. 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Our implementation is based on a pointer network (Vinyals et al., 2015) , where an attention mechanism points to the next token. Unlike previous approaches where a decoder state attends over an encoder sequence, our setup applies intraattention, where source tokens attend to all other source tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 71, |
|
"text": "(Vinyals et al., 2015)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The input to the Pointer layer at position i is a combination of the encoder hidden state h L i , the embedding of the predicted tag e(y t i ) and the positional embedding e(p i ) 2 as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "h L+1 i = f ([h L", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "i ; e(y t i ); e(p i )]). The pointer network attends over all hidden states, as such:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "p(\u03c0(i)|h L+1 i ) = attention(h L+1 i , h L+1 \u03c0(i) ) (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Attention between hidden states is calculated using a query-key network with a scaled dot-product:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Attention(Q, K) = softmax( QK T \u221a d k ),", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where K and Q are linear projections of h L+1 and d k is the hidden dimension. We found the optional inclusion of an additional Transformer layer prior to the query projection increased the performance on movement-heavy datasets. The model is trained to minimize cross-entropy loss of the pointer network.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
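
{

"text": "The pointing layer can be sketched as follows (our illustration under the equations above, not the authors' code; layer sizes are assumptions): each position combines the encoder state, the tag embedding and a positional embedding, and a query-key scaled dot-product yields a distribution over which source token comes next.\n\nimport torch\nimport torch.nn as nn\n\nclass PointerLayer(nn.Module):\n    def __init__(self, hidden=768, num_tags=3, max_len=128):\n        super().__init__()\n        self.tag_emb = nn.Embedding(num_tags, hidden)\n        self.pos_emb = nn.Embedding(max_len, hidden)\n        self.f = nn.Linear(3 * hidden, hidden)       # h^{L+1}_i = f([h^L_i; e(y^t_i); e(p_i)])\n        self.query = nn.Linear(hidden, hidden)\n        self.key = nn.Linear(hidden, hidden)\n\n    def forward(self, h_L, tags):                    # h_L: [B, T, H], tags: [B, T]\n        pos = torch.arange(h_L.size(1), device=h_L.device).unsqueeze(0)\n        h = self.f(torch.cat([h_L, self.tag_emb(tags), self.pos_emb(pos).expand_as(h_L)], dim=-1))\n        q, k = self.query(h), self.key(h)\n        scores = q @ k.transpose(-2, -1) / (h.size(-1) ** 0.5)   # scaled dot-product Attention(Q, K)\n        return scores.softmax(dim=-1)                # row i: distribution over the next position \u03c0(i)\n\npointer = PointerLayer()\nprobs = pointer(torch.randn(2, 6, 768), torch.randint(0, 3, (2, 6)))   # shape [2, 6, 6]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tagging Model",

"sec_num": "2.1"

},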
|
{ |
|
"text": "To realize the pointers, we use constrained beam search (Post and Vilar, 2018) . Like Figure 3 , we create the output by daisy chaining pointers, starting with [CLS] , and finding the most probable pointer path, a token at a time. We ensure no loops are formed by preventing source token from being pointed to twice, and ensure that all source tokens not tagged with delete are pointed to 3 . We note that when using argmax, loops are only form in < 3% of the cases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 78, |
|
"text": "(Post and Vilar, 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 165, |
|
"text": "[CLS]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 94, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
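
{

"text": "The realization step can be illustrated with the simpler greedy (argmax) variant mentioned above (our sketch; the constrained beam search of Post and Vilar (2018) is not shown). Position 0 plays the role of the [CLS] start symbol, and the constraints are that a kept token may be pointed to only once and that every kept token is eventually pointed to:\n\ndef realize_pointers_greedy(probs, keep_mask):\n    # probs[i][j] = p(the token after position i is position j); position 0 is [CLS].\n    order, current, visited = [], 0, {0}\n    for _ in range(sum(keep_mask)):             # one step per kept source token\n        ranked = sorted(range(len(probs)), key=lambda j: -probs[current][j])\n        nxt = next(j for j in ranked if j not in visited and keep_mask[j])\n        order.append(nxt)\n        visited.add(nxt)                        # no position is pointed to twice, so no loops form\n        current = nxt\n    return order\n\n# Toy example: positions 1-4 are source tokens, position 3 is tagged DELETE.\nprobs = [[0, .7, .1, .1, .1],\n         [0, 0, .2, .1, .7],\n         [0, .6, 0, .2, .2],\n         [0, .3, .3, 0, .4],\n         [0, .1, .8, .05, .05]]\nkeep = [False, True, True, False, True]\nprint(realize_pointers_greedy(probs, keep))     # [1, 4, 2]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tagging Model",

"sec_num": "2.1"

},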
|
{ |
|
"text": "Dataset construction. When constructing the training dataset, there are many possible combinations of \u03c0 and y t which could produce y. For instance, all source tokens could be replaced by MASK tokens. However, we wish to minimize the number of edits, particularly minimizing the amount of inserted tokens. To do so we greedily apply the following rules, iterating through the target tokens:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "1. If the target token appears within the source sentence, point to it and tag it with keep. In the case, the target token appears multiple times in the source sentence, point to the nearest source token, as determined by the previously pointed to source token.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "2. If a source token is already pointed to, then it cannot be pointed to again. 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "3. If a target token does not appear within the source sentence, then it must be inserted. The previously pointed to source token is tagged with insert.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "4. If a source token is not pointed to, then it is tagged with delete.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagging Model", |
|
"sec_num": "2.1" |
|
}, |
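
{

"text": "A sketch of this greedy construction (ours; variable names are hypothetical and sub-word tokenization is ignored for clarity):\n\ndef align(source, target):\n    # Returns kept source indices in target order, the insertions (position, token),\n    # and the source indices to delete, following rules 1-4 above.\n    used, pointers, insertions = set(), [], []\n    prev = -1                                     # previously pointed-to source index\n    for tok in target:\n        candidates = [i for i, s in enumerate(source) if s == tok and i not in used]\n        if candidates:                            # rules 1 and 2: nearest unused match\n            i = min(candidates, key=lambda c: abs(c - prev))\n            pointers.append(i)\n            used.add(i)\n            prev = i\n        else:                                     # rule 3: token must be inserted after prev\n            insertions.append((prev, tok))\n    deletions = [i for i in range(len(source)) if i not in used]   # rule 4\n    return pointers, insertions, deletions\n\nsrc = ['The', 'big', 'very', 'loud', 'cat']\ntgt = ['The', 'very', 'big', 'old', 'cat']\nprint(align(src, tgt))   # ([0, 2, 1, 4], [(1, 'old')], [3])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tagging Model",

"sec_num": "2.1"

},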
|
{ |
|
"text": "An input to the insertion model y m contains a subset of the input tokens in the order determined by the tagging model, as well as masked token spans that it needs to in-fill. To represent masked token spans we consider two options: masking and infilling (see Fig. 2 ). In the former case the tagging model predicts how many tokens need to be inserted by specializing the INSERT tag into INS k, where k translates the span into k MASK tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 266, |
|
"text": "Fig. 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Insertion Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "For the infilling case the tagging model predicts a generic INS tag, which signals the insertion model to infill it with a span of tokens of an arbitrary length. If we were to use an autoregressive insertion model, the natural way to model it would be to run the decoder until it decides to stop by producing a special stop symbol. Since by design we opted for using a non-autoregressive model, to represent variable-length insertions we use a PAD symbol to pad all insertions to a fixed-length 5 sequence of MASK tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Insertion Model", |
|
"sec_num": "2.2" |
|
}, |
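
{

"text": "The two variants can be illustrated as follows (our sketch; the helper is hypothetical, though the fixed span length of 8 reflects the value reported in the footnotes as covering over 99% of insertion spans):\n\nMAX_SPAN = 8   # fixed mask-span length used in the infilling variant\n\ndef build_y_m(kept_tokens, insert_lengths, mode='mask'):\n    # kept_tokens: source tokens in output order; insert_lengths[i] = number of tokens\n    # to insert after kept_tokens[i]. With mode='infill' only whether an insertion\n    # happens is used, since the insertion model ends the span with a PAD prediction.\n    y_m = []\n    for tok, k in zip(kept_tokens, insert_lengths):\n        y_m.append(tok)\n        if mode == 'mask':\n            y_m += ['[MASK]'] * k                 # the tagger decided the span length (INS k)\n        elif k > 0:\n            y_m += ['[MASK]'] * MAX_SPAN          # generic INS tag, fixed-length span\n    return y_m\n\nkept = ['The', 'very', 'big', 'cat']\nprint(build_y_m(kept, [0, 0, 1, 0], mode='mask'))     # ['The', 'very', 'big', '[MASK]', 'cat']\nprint(build_y_m(kept, [0, 0, 1, 0], mode='infill'))   # 'big' is followed by 8 [MASK] tokens",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Insertion Model",

"sec_num": "2.2"

},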
|
{ |
|
"text": "Note that we preserve the deleted span in the input to the insertion model by enclosing it between [REPL] and [/REPL] tags. Even though this introduces an undesired discrepancy between the pretraining and fine-tuning data that the insertion model observes, we found that making the model aware of the text it needs to replace significantly boosts the accuracy of the insertion model. FELIX as Insertion Transformer. Another intuitive way to picture how FELIX works is to draw a connection with Insertion Transformer . In the latter, the decoder starts with a blank output text (canvas) and iteratively infills it by deciding which token and in which position should appear in the output. Multiple tokens can be inserted at a time thus achieving sub-linear decoding times. In contrast, FELIX trains a separate tagger model to pre-fill 6 the output canvas with the input tokens in a single step. As the second and final step FELIX does the insertion into the slots predicted by the tagger. This is equivalent to a single decoding step of the Insertion Transformer. Hence, FELIX requires significantly fewer (namely, two) decoding steps than Insertion Transformer, and through the tagging/insertion decomposition of the task it is straightforward to directly take advantage of existing pre-trained MLMs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Insertion Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Similar to the tagger, our insertion model is also based on a 12-layer BERT-base and is initialized from a public pretrained checkpoint.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Insertion Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "When using the masking approach, the insertion model is solving a masked language modeling task and, hence, we can directly take advantage of the BERT-style pretrained checkpoints. This is a considerable advantage, especially in the low-resource settings, as we do not waste training data on learning a language model component of the text-editing model 7 . With the task decomposition where tagging and insertion can be trained disjointly it essentially comes for free.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Insertion Model", |
|
"sec_num": "2.2" |
|
}, |
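
{

"text": "For concreteness, the following shows the inference pattern with an off-the-shelf masked LM (our sketch, using the HuggingFace transformers library as a stand-in; the paper fine-tunes its own BERT-based insertion model): all MASK slots of y^m are scored in a single, non-autoregressive forward pass.\n\nimport torch\nfrom transformers import BertForMaskedLM, BertTokenizer\n\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nmodel = BertForMaskedLM.from_pretrained('bert-base-uncased').eval()\n\ny_m = 'The very big [MASK] cat'                    # intermediate sequence produced by the tagger\ninputs = tokenizer(y_m, return_tensors='pt')\nwith torch.no_grad():\n    logits = model(**inputs).logits                # one forward pass scores every slot\nmask_positions = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero()\nfor _, pos in mask_positions:\n    token_id = int(logits[0, pos].argmax(-1))\n    print(tokenizer.convert_ids_to_tokens(token_id))   # predicted filler for this slot",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Insertion Model",

"sec_num": "2.2"

},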
|
{ |
|
"text": "Switching from masking to infilling shifts the complexity of modeling the length of inserted token spans from the tagging model to the insertion model. Depending on the amount of training data available it provides interesting trade-offs between the accuracy of the tagging and insertion models. We compare these approaches in Sec. 3.4; for all other tasks we use the masking approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Insertion Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We evaluate FELIX on four distinct text editing tasks: Sentence Fusion, Text Simplification, Summarization, and Automatic Post-Editing for Machine Translation. In addition to reporting previously published results for each task 8 , we also compare to a recent text-editing approach LASERTAG-GER , which combines editing operations with a fixed vocabulary of additional phrases which can be inserted. We follow their setup and set the phrase vocabulary size to 500 and run all experiments using their most accurate autoregressive model. To decode a batch of 32 on a Nvidia Tesla P100, LASERTAGGER takes 1,300ms, FELIX takes 300ms and a a similarly sized seq2seq model takes 27,000ms .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For all tasks we run an ablation study, examining the effect of an open vocabulary with no reordering (FELIXINSERT), and a fixed vocabulary 9 with reordering model (FELIXPOINT).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Task analysis. The chosen tasks cover a diverse set of edit operations and a wide range of dataset sizes. Table 1 provides dataset statistics including: the size, sentence length, and the translation error rate (TER) (Snover et al., 2006) between the source and target sentences. We use TER to highlight unique properties of each task. The summarization dataset is a deletion-heavy dataset, with the highest number of deletion edits and the largest reduction in sentence length. It contains moderate amounts of substitutions and a large number of shift edits, caused by sentence re-ordering. Both the simplification and post-editing datasets contain a large number of insertions and substitutions, while simplification contains a greater number of deletion edits. Post-editing, however, is a much larger dataset covering multiple languages. Sentence fusion has the lowest TER, indicating that obtaining the fused targets requires only a limited number of local edits. However, these edits require modeling the discourse relation between the two input sentences, since a common edit type is predicting the correct discourse connective (Geva et al., 2019) . Additionally, within Table 2 we provide coverage statistics (the percentage of training instances for which an editing model can fully reconstruct the output) and MASK percentages (the percentage of output tokens which the insertion model must predict). As both FELIX and FELIX-INSERT use an open vocabulary, they cover 100% of the data, whereas FELIXPOINT and LASERTAG-GER often cover less than half. For every dataset FELIXPOINT covers a significantly higher percentage than LASERTAGGER, with the noticeable case being summarization, where there is a 3x increase in coverage. This can be explained by the high number of shift edits within summarization (Table 1) , something FELIXPOINT is explicitly designed to model. We found that the difference in coverage between FELIXPOINT and LASERTAGGER correlates strongly (correlation 0.99, p<0.001) with the number of shift edits. Comparing MASK percentages, we see that FELIX always inserts (\u223c50%) fewer MASKs than FELIXINSERT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 238, |
|
"text": "(Snover et al., 2006)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 1134, |
|
"end": 1153, |
|
"text": "(Geva et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 113, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1177, |
|
"end": 1184, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1811, |
|
"end": 1820, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Summarization is the task that requires systems to shorten texts in a meaning-preserving way.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summarization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Data. We use the dataset from (Toutanova et al., 2016) , which contains 6,168 short input texts (one or two sentences) and one or more human-written Dataset summaries, resulting in 26,000 total training pairs. The human experts were not restricted to just deleting words when generating a summary, but were allowed to also insert new words and reorder parts of the sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 54, |
|
"text": "(Toutanova et al., 2016)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summarization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Metrics. We report SARI (Xu et al., 2016) , which computes the average F1 scores of the added, kept, and deleted n-grams, as well as breaking it down into each component KEEP, DELETE, and ADD, as we found the scores were uneven across these metrics. We also include ROUGE-L and BLEU-4, as these metrics are commonly used in the summarization literature.", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 41, |
|
"text": "(Xu et al., 2016)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summarization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Results. In Table 3 we compare against LASERTAGGER and SEQ2SEQ BERT from ), a seq2seq model initialized using BERT. The results show that FELIX achieves the highest SARI, ROUGE and BLEU scores. All ablated models achieve higher SARI scores than all other models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Summarization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Sentence simplification is the problem of simplifying sentences such that they are easier to understand. Simplification can be both lexical, replacing or deleting complex words; or syntactic, replacing complex syntactic constructions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simplification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Data. Training is performed on WikiLarge, (Zhang and Lapata, 2017a) a large simplification corpus which consists of a mixture of three Wikipedia simplification datasets collected by (Kauchak, 2013; Woodsend and Lapata, 2011; Zhu et al., 2010) . The test set was created by Xu et al. (2016) and consists of 359 source sentences taken from Wikipedia, and then simplified using Amazon Mechanical Turkers to create eight references per source sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 197, |
|
"text": "(Kauchak, 2013;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 224, |
|
"text": "Woodsend and Lapata, 2011;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 242, |
|
"text": "Zhu et al., 2010)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 289, |
|
"text": "Xu et al. (2016)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simplification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Metrics. We report SARI, a readability metric FleschKincaid grade level (FKGL), and the percentage of unchanged source sentences (copy).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simplification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Results. In Table 4 we compare against: Three state-of-the-art SMT-based simplification systems:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Simplification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(1) PBMT-R (Wubben et al., 2012), a phrase-based machine translation model; (2) Hybrid (Narayan and Gardent, 2014) , a model which performs sentence splitting and deletions and then simplifies with PBMT-R;", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 114, |
|
"text": "(Narayan and Gardent, 2014)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simplification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(3) SBMT-SARI (Xu et al., 2016) , a syntax-based translation model trained on PPDB and then tuned using SARI. Four neural seq2seq approaches: (1) DRESS (Zhang and Lapata, 2017b) , an LSTM-based seq2seq trained with reinforcement learning;", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 31, |
|
"text": "(Xu et al., 2016)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 177, |
|
"text": "(Zhang and Lapata, 2017b)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simplification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(2) DRESS-Ls, a variant of DRESS which has an additional lexical simplification component;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simplification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(3) NTS (Nisioi et al., 2017 ), a seq2seq model; and (4) DMASS (Zhao et al., 2018) , a transformer-based model enhanced with simplification rules from PPDB. And two neural editing models: (1) LASERTAGGER and (2) EditNTS (Dong et al., 2019) , an autoregressive LSTM-based approach for text simplification, using KEEP/DELETE tags and open vocabulary predictions. FELIX achieves the highest overall SARI score and the highest SARI-KEEP score. In addition, all ablated models achieve higher SARI scores than LASERTAGGER. While FELIXINSERT achieves a higher SARI score than EditNTS, FELIXPOINT does not; this can in part be explained by the large number of substitutions and insertions within this dataset, with FELIXPOINT achieving a low SARI-ADD score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 28, |
|
"text": "(Nisioi et al., 2017", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 63, |
|
"end": 82, |
|
"text": "(Zhao et al., 2018)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 239, |
|
"text": "(Dong et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simplification", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Automatic Post-Editing (APE) is the task of automatically correcting common and repetitive errors found in machine translation (MT) outputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Editing", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Data. APE approaches are trained on triples: the source sentence, the machine translation output, and the target translation. We experiment on the WMT17 EN-DE IT post-editing task 10 , where the goal is to improve the output of an MT system that translates from English to German and is applied to documents from the IT domain. We follow the procedures introduced in (Junczys-Dowmunt and Grundkiewicz, 2016) and train our models using two synthetic corpora of 4M and 500K examples merged with a corpus of 11K real examples oversampled 10 times. The models that we study expect a single input string. To obtain this and to give the models a possibility to attend to the English source text, we append the source text to the German translation. Since the model input consists of two different languages, we use the multilingual Cased BERT checkpoint for FELIX and LASERTAGGER.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Editing", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Metrics. We follow the evaluation procedure of WMT17 APE task and use TER as the primary metric and BLEU as a secondary metric.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Editing", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "10 http://statmt.org/wmt17/ape-task.html Results. We consider the following baselines: COPY, which is a competitive baseline given that the required edits are typically very limited; LASERTAGGER ; LEVEN-SHTEIN TRANSFORMER (LEVT) (Gu et al., 2019) , a partially autoregressive model that also employs deletion and insertion mechanisms; a standard TRANSFORMER evaluated by (Gu et al., 2019) ; and a state-of-the-art method by . Unlike the other methods, the last baseline is tailored specifically for the APE task by encoding the source separately and conditioning the MT output encoding on the source encoding . Results are shown in Table 5 . First, we can see that using a custom method brings significant improvements over generic text transduction methods. Second, FELIX performs very competitively, yielding comparative results to LEVT which is a partially autoregressive model, and outperforming the other generic models in terms of TER. Third, FELIXINSERT performs considerably worse than FELIX and FELIXPOINT, suggesting that the pointing mechanism is important for the APE task. This observation is further supported by Table 2 which shows that without the pointing mechanism the average proportion of masked tokens in a target is 42.39% whereas with pointing it is only 17.30%. This suggests that, removing the pointing mechanism shifts the responsibility too heavily from the tagging model to the insertion model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 246, |
|
"text": "(Gu et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 371, |
|
"end": 388, |
|
"text": "(Gu et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 632, |
|
"end": 639, |
|
"text": "Table 5", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 1127, |
|
"end": 1134, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Post-Editing", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Sentence Fusion is the problem of fusing independent sentences into a coherent output sentence(s).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Fusion", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Data. We use the balanced Wikipedia portion of the DiscoFuse dataset (Geva et al., 2019) and also study the effect of the training data size by creating four smaller subsets of DiscoFuse: 450,000 (10%), 45,000 (1%), 4,500 (0.1%) and 450 (0.01%) data Table 6 : Sentence Fusion results on DiscoFuse using the full and subsets 10%, 1%, 0.1% and 0.01% of the training set. We report three model variants: FELIXPOINT, FELIXINSERT and FELIX using either Mask or Infill insertion modes. Rows in gray background report scores assuming oracle tagging (TAG) or insertion (INS) predictions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 88, |
|
"text": "(Geva et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 257, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Fusion", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Metrics. Following Geva et al. 2019, we report two metrics: Exact score, which is the percentage of exactly correctly predicted fusions, and SARI.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "points.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Results. Table 6 includes additional BERT-based seq2seq baselines: SEQ2SEQ BERT and BERT2BERT from (Rothe et al., 2020) . For all FELIX variants we further break down the scores based on how the INSERTION is modelled: via token-masking (Mask) or Infilling (Infill). Additionally, to better understand the contribution of tagging and insertion models to the final accuracy, we report scores assuming oracle insertion and tagging predictions respectively (highlighted rows).", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 119, |
|
"text": "(Rothe et al., 2020)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 16, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "points.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The results show that FELIX and its variants significantly outperform the baselines LASERTAGGER and SEQ2SEQ BERT , across all data conditions. Under the 100% condition BERT2BERT achieves the highest SARI and Exact score, however for all other data conditions FELIX outperforms BERT2BERT. Both seq2seq models perform poorly with less than 4500 (0.1%) datapoints, whereas all editing models achieve relatively good performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "points.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When comparing FELIX variants we see on the full dataset FELIXINSERT outperforms FELIX, however we note that for FELIXINSERT we followed and used an additional sentence re-ordering tag, a hand crafted feature tailored to DiscoFuse which swaps the sentence order. It was included in and resulted in a significant (6% Exact score) increase. However, in the low resource setting, FELIX outperforms FELIXINSERT, suggesting that FELIX is more data efficient than FELIXINSERT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "points.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Ablation. We first contrast the impact of the insertion model and the tagging model, noticing that for all models Infill achieves better tagging scores and worse insertion scores than Mask. Secondly, FELIX achieves worse tagging scores but better insertion scores than FELIXINSERT. This highlights the amount of pressure each model is doing, by making the tagging task harder, such as the inclusion of reordering, the insertion task becomes easier. Finally, the insertion models, even under very low data conditions, achieve impressive performance. This suggests that under low data conditions most pressure should be applied to the insertion model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "points.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Seq2seq models (Sutskever et al., 2014) have been applied to many text generation tasks that can be cast as monolingual translation, but they suffer from well-known drawbacks (Wiseman et al., 2018) : they require large amounts of training data, and their outputs are difficult to control. Whenever input and output sequences have a large overlap, it is reasonable to cast the problem as a text editing task, rather than full-fledged sequence-to-sequence generation. Ribeiro et al. (2018) argued that the general problem of string transduction can be re-duced to sequence labeling. Their approach applied only to character deletion and insertion and was based on simple patterns. LaserTagger is a general approach that has been shown to perform well on a number of text editing tasks, but it has two limitations: it does not allow for arbitrary reordering of the input tokens; and insertions are restricted to a fixed phrase vocabulary that is derived from the training data. Similarly, Ed-itNTS (Dong et al., 2019) and PIE (Awasthi et al., 2019) are two other text-editing models developed specifically for simplification and grammatical error correction, respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 39, |
|
"text": "(Sutskever et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 197, |
|
"text": "(Wiseman et al., 2018)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 466, |
|
"end": 487, |
|
"text": "Ribeiro et al. (2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 995, |
|
"end": 1014, |
|
"text": "(Dong et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1023, |
|
"end": 1045, |
|
"text": "(Awasthi et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Pointer networks have been previously proposed as a way to copy parts of the input in hybrid seq2seq models. Gulcehre et al. (2016) and trained a pointer network to specifically deal with out-of-vocabulary words or named entities. Chen and Bansal (2018) proposed a summarization model that first selects salient sentences and then rewrites them abstractively, using a pointer mechanism to directly copy some out-of-vocabulary words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 131, |
|
"text": "Gulcehre et al. (2016)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 253, |
|
"text": "Chen and Bansal (2018)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Previous approaches have proposed alternatives to autoregressive decoding (Gu et al., 2018; Lee et al., 2018; Wang and Cho, 2019) . Instead of the left-to-right autoregressive decoding, Insertion Transformer and BLM (Shen et al., 2020) generate the output sequence through insertion operations, whereas LEVT (Gu et al., 2019) additionally incorporates a deletion operation. These methods produce the output iteratively, while FELIX requires only two steps: tagging and insertion.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 91, |
|
"text": "(Gu et al., 2018;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 109, |
|
"text": "Lee et al., 2018;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 129, |
|
"text": "Wang and Cho, 2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 235, |
|
"text": "(Shen et al., 2020)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 325, |
|
"text": "(Gu et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The differences between the proposed model, FELIX, its ablated variants, and a selection of related works is summarized in Table 7 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 130, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We have introduced FELIX, a novel approach to text editing, by decomposing the task into tagging and insertion which are trained independently. Such separation allows us to take maximal benefit from the already existing pretrained masked-LM models. FELIX works extremely well in low-resource settings and it is fully non-autoregressive which favors faster inference. Our empirical results demonstrate that it delivers highly competitive performance when compared to strong seq2seq baselines and other recent text editing approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the future we plan to investigate the following Table 7 : Model comparison along five dimensions: model type, whether the model: is non-autoregressive (LEVT is partially autoregressive), uses a pretrained checkpoint, uses a word reordering mechanism (T5 uses a reordering pretraining task but does not have a copying mechanism), able to generate any possible output (Open vocab).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 58, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "ideas: (i) how to effectively share representations between the tagging and insertion models using a single shared encoder, (ii) how to perform joint training of insertion and tagging models instead of training them separately, (iii) strategies for unsupervised pre-training of the tagging model. which appears to be the bottleneck in highly low-resource settings, and (iv) distillations recipes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The code is publicly available at: https:// felixmodel.page.link/code", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Voita et al. (2019) have shown that models trained with masked language modeling objectives lose positional information, a property we consider important for reordering.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We fix the beam size to 5. For a batch size of 32 and maximum sequence length of 128, beam search incurs an additional penalty of about 12ms when run on a Xeon [email protected] As each word has at most one out-going edge, having two incoming edges would form a loop.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A length of 8 was sufficient to represent over 99% of insertion spans.6 This corresponds to more than 80% of the output tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We still fine-tune the insertion model to accommodate for the additional token spans between the [REPL] and [/REPL] 8 To ensure fairness, unless otherwise stated, we recalculate all scores using our evaluation scripts.9 For simplicity we use the LASERTAGGER phrase vocabulary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Aleksandr Chuklin, Daniil Mirylenka, Ryan McDonald, and Sebastian Krause for useful discussions, running early experiments and paper suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Parallel iterative edit models for local sequence transduction", |
|
"authors": [ |
|
{ |
|
"first": "Abhijeet", |
|
"middle": [], |
|
"last": "Awasthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunita", |
|
"middle": [], |
|
"last": "Sarawagi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rasna", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabyasachi", |
|
"middle": [], |
|
"last": "Ghosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vihari", |
|
"middle": [], |
|
"last": "Piratla", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4260--4270", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1435" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Par- allel iterative edit models for local sequence trans- duction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4260-4270. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Kermit: Generative insertion-based modeling for sequences", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikita", |
|
"middle": [], |
|
"last": "Kitaev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kelvin", |
|
"middle": [], |
|
"last": "Guu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Stern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, and Jakob Uszkoreit. 2019. Kermit: Gener- ative insertion-based modeling for sequences.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Fast abstractive summarization with reinforce-selected sentence rewriting", |
|
"authors": [ |
|
{ |
|
"first": "Yen-Chun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "675--686", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1063" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yen-Chun Chen and Mohit Bansal. 2018. Fast abstrac- tive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675-686. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing", |
|
"authors": [ |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zichao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehdi", |
|
"middle": [], |
|
"last": "Rezagholizadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jackie Chi Kit", |
|
"middle": [], |
|
"last": "Cheung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3393--3402", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1331" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neu- ral programmer-interpreter model for sentence sim- plification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393-3402. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Deep biaffine attention for neural dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Dozat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "5th International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency pars- ing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Open- Review.net.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "DiscoFuse: A large-scale dataset for discourse-based sentence fusion", |
|
"authors": [ |
|
{ |
|
"first": "Mor", |
|
"middle": [], |
|
"last": "Geva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Malmi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Idan", |
|
"middle": [], |
|
"last": "Szpektor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "3443--3455", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1348" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mor Geva, Eric Malmi, Idan Szpektor, and Jonathan Berant. 2019. DiscoFuse: A large-scale dataset for discourse-based sentence fusion. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3443-3455. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Nonautoregressive neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bradbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [ |
|
"O", |
|
"K" |
|
], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiatao Gu, James Bradbury, Caiming Xiong, Vic- tor O.K. Li, and Richard Socher. 2018. Non- autoregressive neural machine translation. In Inter- national Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Levenshtein transformer", |
|
"authors": [ |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Changhan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junbo", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "11179--11189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 11179- 11189. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Pointing the unknown words", |
|
"authors": [ |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungjin", |
|
"middle": [], |
|
"last": "Ahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramesh", |
|
"middle": [], |
|
"last": "Nallapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "140--149", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1014" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 140-149. Association for Computational Linguistics (ACL), Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Log-linear combinations of monolingual and bilingual neural machine translation models for automatic post-editing", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Grundkiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "751--758", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W16-2378" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2016. Log-linear combinations of monolingual and bilingual neural machine translation models for auto- matic post-editing. In Proceedings of the First Con- ference on Machine Translation: Volume 2, Shared Task Papers, pages 751-758, Berlin, Germany. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Improving text simplification language modeling using unsimplified text data", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Kauchak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1537--1546", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Kauchak. 2013. Improving text simplification language modeling using unsimplified text data. In Proceedings of the 51st Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1537-1546. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Albert: A lite bert for self-supervised learning of language representations", |
|
"authors": [ |
|
{ |
|
"first": "Zhenzhong", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingda", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piyush", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Soricut", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elman", |
|
"middle": [], |
|
"last": "Mansimov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1173--1182", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1149" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural se- quence modeling by iterative refinement. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 1173- 1182. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Transformer-based automatic post-editing with a context-aware encoding approach for multi-source inputs", |
|
"authors": [ |
|
{ |
|
"first": "Wonkee", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junsu", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Byung-Hyun", |
|
"middle": [], |
|
"last": "Go", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jong-Hyeok", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.05679" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "WonKee Lee, Junsu Park, Byung-Hyun Go, and Jong-Hyeok Lee. 2019. Transformer-based auto- matic post-editing with a context-aware encoding approach for multi-source inputs. arXiv preprint arXiv:1908.05679.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marjan", |
|
"middle": [], |
|
"last": "Ghazvininejad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdelrahman", |
|
"middle": [], |
|
"last": "Mohamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7871--7880", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.703" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Encode, tag, realize: High-precision text editing", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Malmi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Krause", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sascha", |
|
"middle": [], |
|
"last": "Rothe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniil", |
|
"middle": [], |
|
"last": "Mirylenka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aliaksei", |
|
"middle": [], |
|
"last": "Severyn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5054--5065", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1510" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5054-5065. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond", |
|
"authors": [ |
|
{ |
|
"first": "Ramesh", |
|
"middle": [], |
|
"last": "Nallapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cicero", |
|
"middle": [], |
|
"last": "dos Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00c7aglar", |
|
"middle": [], |
|
"last": "Gul\u00e7ehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "280--290", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K16-1028" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, \u00c7 aglar Gul\u00e7ehre, and Bing Xiang. 2016. Abstrac- tive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Lan- guage Learning, pages 280-290. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Hybrid simplification using deep semantics and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Shashi", |
|
"middle": [], |
|
"last": "Narayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Gardent", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "435--445", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-1041" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shashi Narayan and Claire Gardent. 2014. Hybrid sim- plification using deep semantics and machine trans- lation. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 435-445. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Exploring neural text simplification models", |
|
"authors": [ |
|
{ |
|
"first": "Sergiu", |
|
"middle": [], |
|
"last": "Nisioi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanja", |
|
"middle": [], |
|
"last": "\u0160tajner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [ |
|
"Paolo" |
|
], |
|
"last": "Ponzetto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liviu", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Dinu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "85--91", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-2014" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergiu Nisioi, Sanja\u0160tajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text sim- plification models. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 85-91. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Fast lexically constrained decoding with dynamic beam allocation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Vilar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1314--1324", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1119" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Post and David Vilar. 2018. Fast lexically con- strained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 1314-1324, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Exploring the limits of transfer learning with a unified text-to", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Raffel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharan", |
|
"middle": [], |
|
"last": "Narang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Matena", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanqi", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "SQuAD: 100,000+ questions for machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Lopyrev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2383--2392", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1264" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 2383-2392. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Local string transduction as sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Joana", |
|
"middle": [], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shashi", |
|
"middle": [], |
|
"last": "Narayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shay", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Carreras", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1360--1371", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joana Ribeiro, Shashi Narayan, Shay B. Cohen, and Xavier Carreras. 2018. Local string transduction as sequence labeling. In Proceedings of the 27th In- ternational Conference on Computational Linguis- tics, pages 1360-1371. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Leveraging pre-trained checkpoints for sequence generation tasks. Transactions of the Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Sascha", |
|
"middle": [], |
|
"last": "Rothe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shashi", |
|
"middle": [], |
|
"last": "Narayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aliaksei", |
|
"middle": [], |
|
"last": "Severyn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "264--280", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00313" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for se- quence generation tasks. Transactions of the Asso- ciation for Computational Linguistics, pages 264- 280.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Blank language models", |
|
"authors": [ |
|
{ |
|
"first": "Tianxiao", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Quach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [], |
|
"last": "Jaakkola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianxiao Shen, Victor Quach, Regina Barzilay, and Tommi Jaakkola. 2020. Blank language models.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A study of translation edit rate with targeted human annotation", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Snover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linnea", |
|
"middle": [], |
|
"last": "Micciulla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Makhoul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of association for machine translation in the Americas", |
|
"volume": "200", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine transla- tion in the Americas, volume 200.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Mass: Masked sequence to sequence pre-training for language generation", |
|
"authors": [ |
|
{ |
|
"first": "Kaitao", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tie-Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Insertion transformer: Flexible sequence generation via insertion operations", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Stern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ume 97 of Proceedings of Machine Learning Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5976--5985", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. vol- ume 97 of Proceedings of Machine Learning Re- search, pages 5976-5985, Long Beach, California, USA. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I Sutskever, O Vinyals, and QV Le. 2014. Sequence to sequence learning with neural networks. Advances in NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A dataset and evaluation metrics for abstractive compression of sentences and short paragraphs", |
|
"authors": [ |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ke", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saleema", |
|
"middle": [], |
|
"last": "Amershi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "340--350", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1033" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristina Toutanova, Chris Brockett, Ke M. Tran, and Saleema Amershi. 2016. A dataset and evaluation metrics for abstractive compression of sentences and short paragraphs. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 340-350. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Pointer networks", |
|
"authors": [ |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Meire", |
|
"middle": [], |
|
"last": "Fortunato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navdeep", |
|
"middle": [], |
|
"last": "Jaitly", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2692--2700", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in neural in- formation processing systems, pages 2692-2700.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Voita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4396--4406", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1448" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Voita, Rico Sennrich, and Ivan Titov. 2019. The bottom-up evolution of representations in the trans- former: A study with machine translation and lan- guage modeling objectives. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4396-4406. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "BERT has a mouth, and it must speak: BERT as a Markov random field language model", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "30--36", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-2304" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov ran- dom field language model. In Proceedings of the Workshop on Methods for Optimizing and Evaluat- ing Neural Language Generation, pages 30-36. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "353--355", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-5446" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Learning neural templates for text generation", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3174--3187", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1356" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 3174-3187. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Learning to simplify sentences with quasi-synchronous grammar and integer programming", |
|
"authors": [ |
|
{ |
|
"first": "Kristian", |
|
"middle": [], |
|
"last": "Woodsend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "409--420", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristian Woodsend and Mirella Lapata. 2011. Learn- ing to simplify sentences with quasi-synchronous grammar and integer programming. In Proceed- ings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 409-420. Association for Computational Linguistics, Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Sentence simplification by monolingual machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Sander", |
|
"middle": [], |
|
"last": "Wubben", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antal", |
|
"middle": [], |
|
"last": "van den Bosch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emiel", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1015--1024", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sander Wubben, Antal van den Bosch, and Emiel Krah- mer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1015- 1024, Jeju Island, Korea. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Optimizing statistical machine translation for text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quanze", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "401--415", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00107" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Sentence simplification with deep reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Xingxing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "584--594", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1062" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xingxing Zhang and Mirella Lapata. 2017a. Sen- tence simplification with deep reinforcement learn- ing. In Proceedings of the 2017 Conference on Em- pirical Methods in Natural Language Processing, pages 584-594. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Sentence simplification with deep reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Xingxing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "584--594", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1062" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xingxing Zhang and Mirella Lapata. 2017b. Sen- tence simplification with deep reinforcement learn- ing. In Proceedings of the 2017 Conference on Em- pirical Methods in Natural Language Processing, pages 584-594. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Integrating transformer and paraphrase rules for sentence simplification", |
|
"authors": [ |
|
{ |
|
"first": "Sanqiang", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daqing", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andi", |
|
"middle": [], |
|
"last": "Saptono", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bambang", |
|
"middle": [], |
|
"last": "Parmanto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3164--3173", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1355" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sanqiang Zhao, Rui Meng, Daqing He, Andi Saptono, and Bambang Parmanto. 2018. Integrating trans- former and paraphrase rules for sentence simplifi- cation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3164-3173. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "A monolingual tree-based translation model for sentence simplification", |
|
"authors": [ |
|
{ |
|
"first": "Zhemin", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Delphine", |
|
"middle": [], |
|
"last": "Bernhard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1353--1361", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1353-1361. Asso- ciation for Computational Linguistics, Coling 2010 Organizing Committee.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"3\">The very big [REPL] loud [/REPL] old cat</td></tr><tr><td>insertion</td><td colspan=\"2\">MASKED LANGUAGE MODEL</td></tr><tr><td/><td>POINTER</td></tr><tr><td/><td>The big very loud</td><td>cat</td></tr></table>", |
|
"text": "tagging KEEP KEEP INS KEEP DEL KEEP The very big [REPL] loud [/REPL] MASK cat TAGGER" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "-editing 5M 18.10 17.74 24.97 04.24 06.25 11.30 02.69 Simplification 296K 22.61 21.65 26.02 04.75 08.97 09.90 02.41 Summarization 26K 32.48 22.16 43.23 00.29 32.06 09.34 10.71 Sentence fusion 4.5M 30.51 30.04 10.92 02.49 04.91 03.75 00.62" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">Coverage % \u2191</td><td colspan=\"2\">MASK % \u2193</td></tr><tr><td/><td colspan=\"4\">LASERTAGGER FELIXPOINT FELIXINSERT FELIX</td></tr><tr><td>Postediting</td><td>35.10</td><td>40.40</td><td>42.39</td><td>17.30</td></tr><tr><td>Simplification</td><td>36.87</td><td>42.27</td><td>18.23</td><td>13.85</td></tr><tr><td>Summarization</td><td>16.71</td><td>48.33</td><td>15.92</td><td>11.91</td></tr><tr><td>Sentence fusion</td><td>85.39</td><td>95.25</td><td>14.69</td><td>09.20</td></tr></table>", |
|
"text": "Statistics across tasks: size of the dataset (Size), source length in tokens (L src ), target length in tokens (L tgt ), and TER scorse, including number of insertions (Ins), deletions (Del), substitutions (Sub), and shifts (Shft)." |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">SARI ADD DEL KEEP Rouge BLEU</td></tr><tr><td>SEQ2SEQBERT</td><td>32.10</td><td>52.70 08.30</td></tr><tr><td colspan=\"3\">LASERTAGGER 40.23 06.10 54.48 60.12 81.36 35.05</td></tr><tr><td colspan=\"3\">FELIXPOINT 41.61 * 06.80 58.67 * 59.36 80.58 32.90</td></tr><tr><td colspan=\"3\">FELIXINSERT 41.99 * 06.80 61.65 * 57.53 77.78 29.68</td></tr><tr><td>FELIX</td><td>42.78</td><td/></tr></table>", |
|
"text": "Coverage and MASK statistics. FELIXINSERT and FELIX have 100% coverage." |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Summarization results. All models copied the source less than 2% of the time. Models significantly different from LASERTAGGER are marked with" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>WikiLarge</td><td>SARI ADD DEL KEEP FKGL Copy</td></tr><tr><td colspan=\"2\">SBMT-SARI 37.94 DRESS-LS 32.98 02.57 30.77 65.60 8.94 0.27</td></tr><tr><td>EDITNTS</td><td>34.94 03.23 32.37 69.22 9.42 0.12</td></tr><tr><td colspan=\"2\">LASERTAGGER 32.31 03.02 33.63 60.27 9.82 0.21</td></tr><tr><td>FELIXPOINT</td><td>34.37 02.35 34.80 65.97 9.47 0.18</td></tr><tr><td>FELIXINSERT</td><td>35.79 04.03 39.70 63.64 8.14 0.09</td></tr><tr><td>FELIX</td><td>38.13 03.55 40.45 70.39 8.98 0.08</td></tr></table>", |
|
"text": "05.60 37.96 70.27 8.89 0.10 DMASS+DCSS 37.01 05.16 40.90 64.96 9.24 0.06 PBMT-R 35.92 05.44 32.07 70.26 10.16 0.11 HYBRID 28.75 01.38 41.45 43.42 7.85 0.04 NTS 33.97 03.57 30.02 68.31 9.63 0.11 DRESS 33.30 02.74 32.93 64.23 8.79 0.22" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Sentence Simplification results on WikiLarge." |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "WMT17 En\u2192De post-editing results." |
|
}, |
|
"TABREF10": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Model</td><td colspan=\"2\">Insertion</td><td>Oracle</td><td>SARI Exact 10%</td><td>1%</td><td>0.1%</td><td>0.01%</td></tr><tr><td/><td colspan=\"3\">Mask Infill TAG INS</td><td/><td/><td/></tr><tr><td colspan=\"8\">BERT2BERT 89.52 FELIXINSERT \u2022 \u2022 \u2022 \u2022 \u2022 88.44 60.80 52.82 46.09 34.11 15.34 82.91 77.25 71.49 57.94 36.61 75.00 71.97 66.87 57.08 38.89 \u2022 \u2022 72.91 64.00 55.45 39.71 18.89</td></tr><tr><td/><td>\u2022</td><td/><td>\u2022</td><td colspan=\"4\">88.86 84.11 81.76 75.88 61.68</td></tr><tr><td/><td>\u2022</td><td/><td/><td colspan=\"4\">88.72 63.37 56.67 48.85 33.32 13.99</td></tr><tr><td/><td/><td>\u2022</td><td>\u2022</td><td colspan=\"4\">70.32 71.78 64.28 51.20 28.42</td></tr><tr><td/><td/><td>\u2022</td><td>\u2022</td><td colspan=\"4\">78.37 75.56 72.24 65.95 55.97</td></tr><tr><td>FELIX</td><td>\u2022</td><td>\u2022</td><td>\u2022</td><td colspan=\"4\">87.69 58.32 55.11 48.84 38.01 20.49 67.78 59.62 52.74 41.48 17.30</td></tr><tr><td/><td>\u2022</td><td/><td>\u2022</td><td colspan=\"4\">87.52 86.45 83.13 79.79 67.60</td></tr><tr><td/><td>\u2022</td><td/><td/><td colspan=\"4\">88.78 61.31 52.85 45.45 36.87 16.96</td></tr></table>", |
|
"text": "63.90 54.45 42.07 03.35 00.00 SEQ2SEQBERT 85.30 53.60 52.80 43.70 00.00 00.00 LASERTAGGER 85.45 53.80 47.31 38.46 25.74 12.32 FELIXPOINT 88.20 60.76 53.75 44.90 31.87 13.82" |
|
} |
|
} |
|
} |
|
} |