|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:05:28.990079Z" |
|
}, |
|
"title": "SimpleNER Sentence Simplification System for GEM 2021", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Srivatsa", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Monil", |
|
"middle": [], |
|
"last": "Gokani", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Manish", |
|
"middle": [], |
|
"last": "Shrivastava", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes SimpleNER, a model developed for the sentence simplification task at GEM-2021. Our system is a monolingual Seq2Seq Transformer architecture that uses control tokens pre-pended to the data, allowing the model to shape the generated simplifications according to user desired attributes. Additionally, we show that NER-tagging the training data before use helps stabilize the effect of the control tokens and significantly improves the overall performance of the system. We also employ pretrained embeddings to reduce data sparsity and allow the model to produce more generalizable outputs.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes SimpleNER, a model developed for the sentence simplification task at GEM-2021. Our system is a monolingual Seq2Seq Transformer architecture that uses control tokens pre-pended to the data, allowing the model to shape the generated simplifications according to user desired attributes. Additionally, we show that NER-tagging the training data before use helps stabilize the effect of the control tokens and significantly improves the overall performance of the system. We also employ pretrained embeddings to reduce data sparsity and allow the model to produce more generalizable outputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sentence simplification aims at reducing the linguistic complexity of a given text, while preserving all the relevant details of the initial text. This is particularly useful for people with cognitive disabilities (Evans et al., 2014) , as well as for second language learners and people with low-literacy levels (Watanabe et al., 2009) . Text and Sentence simplification also play an important role within NLP. Simplification has been utilized as a preprocessing step in larger NLP pipelines, which can greatly aid learning by reducing vocabulary and regularizing of syntax.", |
|
"cite_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 234, |
|
"text": "(Evans et al., 2014)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 313, |
|
"end": 336, |
|
"text": "(Watanabe et al., 2009)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In our model, we use control tokens to tune a Seq2Seq Transformer model (Vaswani et al., 2017) for sentence simplification. We take character length compression, extent of paraphrase, and lexical & syntactic complexity as attributes to gauge the transformations between complex and simple sentence pairs. We then represent each of these attributes as numerical measures, which are then added to our data. We show that this provides a considerable improvement over as-is Transformer approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 94, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The use of control tokens in Seq2Seq models for sentence simplification has been explored before . But this approach has shown to add data sparsity to the system. This is because the model is required to learn the distribution of the various control tokens and the expected outputs across the ranges of each control token. To mitigate this sparsity, we process our data to replace named entities with respective tags using an NER tagger. We show that this reduces the model vocabulary and allows for greater generalization. To further curb the data sparsity, we make use of pre-trained embeddings as initial input embeddings for model training. Our code is publicly available here. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Past approaches towards sentence simplification have dealt with it as a monolingual machine translation(MT) task (specifically Seq2Seq MT (Sutskever et al., 2014) ). This meant training MT architectures over complex-simple sentence pairs, either aligned manually (Alva-Manchego et al., 2020; Xu et al., 2016) or automatically (Zhu et al., 2010; Wubben et al., 2012) using large complex-simple repository pairs such as the English Wikipedia and the Simple English Wikipedia.", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 162, |
|
"text": "(Sutskever et al., 2014)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 291, |
|
"text": "(Alva-Manchego et al., 2020;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 308, |
|
"text": "Xu et al., 2016)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 344, |
|
"text": "(Zhu et al., 2010;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 365, |
|
"text": "Wubben et al., 2012)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Some implementations also utilize reinforcement learning (Zhang and Lapata, 2017) over the MT task, with automated metrics such as SARI (Xu et al., 2016) , information preservation, and grammatical fluency constituting the training reward.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 81, |
|
"text": "(Zhang and Lapata, 2017)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 153, |
|
"text": "(Xu et al., 2016)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A recent approach towards sentence simplification involves using control tokens during machine translation Our model makes use of control tokens similar to to tailor the generated simplifications according to the extent of changes in the following attributes: character length, extent of paraphrasing, and lexical & syntactic complexity. These attributes are represented by their respective numerical measures (see 3.1), and then pre-pended to the complex sentences using in specific formats (Table 1) . Alongside this, we use NER tagging and pre-trained input embeddings as a method to curb data sparsity and unwanted named entity (NE) replacements.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 492, |
|
"end": 501, |
|
"text": "(Table 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Controllable Text Generation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "3 System Overview", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Controllable Text Generation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Following , we encode the following attributes during training and attempt to control them during inference time. Eg: Complex: \"<NbChars 0.80> <LevSim 0.76> <WordRank 0.79> it is particularly famous for the cultivation of kiwifruit .\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Control Attributes", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Simple: \"It is mostly famous for the growing of kiwifruit .\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Control Attributes", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Compression in sequence length has been shown to be correlated with the simplicity and readability of text (Martin et al., 2019) . Since compression as an operation directly involves deletion, controlling its extent plays a crucial role in the extent of information preservation. We make use of the compression ratio (control token: 'NbChars') between the character lengths of the simple and complex sentences to encode for this attribute.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 128, |
|
"text": "(Martin et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Amount of compression", |
|
"sec_num": "3.1.1" |
|
}, |
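The NbChars computation described above can be sketched as follows. This is an illustrative snippet: the function names and the two-decimal rounding are our assumptions, not the authors' released code.

```python
def nbchars_ratio(complex_sent: str, simple_sent: str) -> float:
    """Character-length compression ratio encoded by the 'NbChars' control token."""
    return round(len(simple_sent) / len(complex_sent), 2)

def prepend_control(complex_sent: str, token: str, value: float) -> str:
    """Prepend a control token such as '<NbChars 0.80>' to the complex sentence."""
    return f"<{token} {value:.2f}> {complex_sent}"
```

At training time the ratio is computed from each complex-simple pair; at inference time the user supplies the desired value instead.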
|
{ |
|
"text": "The extent of paraphrasing between the complex and simple sentences ranges from a near replica of the source sentence to a very dissimilar and possibly simplified one. The measure used for this attribute is Levenshtein similarity (Levenshtein, 1966) (control token: 'LevSim') between the complex and simple sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrasing", |
|
"sec_num": "3.1.2" |
|
}, |
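The LevSim measure can be illustrated with a plain edit-distance implementation; normalizing by the longer string's length (so the similarity lies in [0, 1]) is an assumption consistent with the description above.

```python
def levenshtein_distance(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def lev_sim(a: str, b: str) -> float:
    """Levenshtein similarity in [0, 1], used for the 'LevSim' control token."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein_distance(a, b) / max(len(a), len(b))
```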
|
{ |
|
"text": "For a young reader or a second language learner, complex words can decrease the overall readability of the text substantially. The average word rank (control token: 'WordRank') of a sequence has been shown to correlate with the lexical complexity of the sentence (Paetzold and Specia, 2016) . Therefore, similar to Martin et al. 2020, we use the average of the third-quartile of log-ranks of the words in a sentence (except for stop-words and special tokens), to encode for its lexical complexity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 290, |
|
"text": "(Paetzold and Specia, 2016)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical Complexity", |
|
"sec_num": "3.1.3" |
|
}, |
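A sketch of the WordRank measure, assuming a word-to-frequency-rank table is available. Averaging over the top quartile of log-ranks is our reading of "the average of the third-quartile of log-ranks" and may differ from the authors' exact aggregation.

```python
import math

def word_rank_value(tokens, freq_rank, stop_words=frozenset()):
    """Average of the highest quartile of log-ranks in the sentence.
    `freq_rank` maps a word to its frequency rank (1 = most frequent);
    stop-words and tokens missing from the table are skipped."""
    log_ranks = sorted(
        math.log(freq_rank[t])
        for t in tokens
        if t in freq_rank and t not in stop_words
    )
    if not log_ranks:
        return 0.0
    top_quartile = log_ranks[3 * len(log_ranks) // 4:]
    return sum(top_quartile) / len(top_quartile)
```

Rarer words get larger log-ranks, so a higher value indicates higher lexical complexity.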
|
{ |
|
"text": "Complex syntactic structures and multiple nested clauses can decrease the readability of text, especially for people with reading disabilities. To partially account for this, we make use of the maximum syntactic tree depth (control token: 'DepTreeDepth') of the sentence as a measure of its syntactic complexity. We use SpaCy's English dependency parser (Honnibal et al., 2020) to extract the depth. The deeper the syntax tree of a sentence, the more likely it is that it involves highly nested clausal structures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Complexity", |
|
"sec_num": "3.1.4" |
|
}, |
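The DepTreeDepth measure reduces to a recursive walk over dependency children. The sketch below uses a minimal stand-in node; a spaCy token works the same way since it exposes `.children` (e.g. `tree_depth(next(doc.sents).root)` on a parsed doc), though this is our illustration rather than the authors' code.

```python
class Node:
    """Minimal stand-in for a dependency-tree token: only `.children` is needed."""
    def __init__(self, children=()):
        self.children = list(children)

def tree_depth(node) -> int:
    """Depth of the (sub)tree rooted at `node`; a leaf has depth 1."""
    children = list(node.children)
    if not children:
        return 1
    return 1 + max(tree_depth(c) for c in children)
```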
|
{ |
|
"text": "Using control tokens contribute to the overall performance of the model, but it also gives rise to an added data sparsity. It divides the sentences of the train set into different ranges of the control tokens. This results in some control values having little to no examples, which adds the task of learning and generalizing over the control token values for the model. Additionally, the model can learn to ad-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NER Replacement", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\"Sergio P\u00c3 rez Mendoza ( born January 26 , 1990 in Guadalajara , Jalisco ) , also known as \"Checo\" P\u00c3 rez , is a Mexican racing driver .\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw (Complex)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "NER Replaced \"person@1 ( born date@1 in gpe@1 ) , also known as \" person@2 \" , is a norp@1 racing driver .\" here to the control requirement, while still failing to correctly simplify the sentence. Eg: Source: <NbChars 0.95> <LevSim 0.75> <WordRank 0.75> oxygen is a chemical element with symbol o and atomic number 8 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw (Complex)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Prediction: It has the chemical symbol o . It has the atomic number 8 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw (Complex)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here, the proper noun \"Oxygen\" is replaced by the pronoun \"it\". Although the model follows the requirement of bringing down the word rank of the sentence and remains grammatically sound, it doesn't help with the simplification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw (Complex)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To address the issue of data sparsity as well that of unwanted NE-replacement, we propose NER mapping the data before training, and replacing the NE-tokens back after generation. We make use of the Ontonotes NER tagger (Yu et al., 2020) in the Flair toolkit (Akbik et al., 2019) . We identify named entities in the complex halves of all three of the data splits and replace them with one of 18 tags (from the NER tagger) with a unique index (Table 2) . NER replacement for simplification was previously explored by Zhang and Lapata (2017) , but consisted of fewer classes. The large number of tags allow for a fine division between different named-entity types, which helps the model to encode the contexts of each of the types better while still reducing the NE-vocabulary size substantially.", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 236, |
|
"text": "(Yu et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 278, |
|
"text": "(Akbik et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 515, |
|
"end": 538, |
|
"text": "Zhang and Lapata (2017)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 441, |
|
"end": 450, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Raw (Complex)", |
|
"sec_num": null |
|
}, |
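The indexed NER masking and the post-generation reversal described above can be sketched as follows. This is a simplified illustration: entity spans are assumed to be given (e.g. by the Flair tagger), and replacement is done by plain string substitution rather than the authors' exact pipeline.

```python
def ner_mask(sentence, entities):
    """Replace each entity span with an indexed tag like 'person@1'.
    `entities` is a list of (surface_text, label) pairs from an NER tagger;
    repeated mentions of the same surface form reuse the same tag."""
    mapping, counters = {}, {}
    for surface, label in entities:
        key = (surface, label)
        if key not in mapping:
            counters[label] = counters.get(label, 0) + 1
            mapping[key] = f"{label.lower()}@{counters[label]}"
        sentence = sentence.replace(surface, mapping[key])
    tag_map = {tag: surface for (surface, _), tag in mapping.items()}
    return sentence, tag_map

def ner_unmask(sentence, tag_map):
    """Revert the tags in a generated simplification to the original tokens."""
    for tag, surface in tag_map.items():
        sentence = sentence.replace(tag, surface)
    return sentence
```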
|
{ |
|
"text": "The tagged data is then used for training and subsequent generation on the test set. Then any tags in the simplified output are located in the saved NER-mapping and reverted back to the original token or phrase. This step not only prevents proper nouns from getting replaced, but also greatly reduces the model vocabulary (allowing for greater generalizability).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Raw (Complex)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The vocabulary of a model trained on a corpus like WikiLarge is quite small, which prevents the model from predicting better fitting tokens. To address this, we use FastText's pre-trained embeddings (Bojanowski et al., 2016) (dimensionality: 300) as input embeddings for our model. The embeddings significantly boost the vocabulary size of usable content words for the model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-Trained Embeddings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Our architecture is a Transformer Model (Vaswani et al., 2017) , and we make use of the Transformer Seq2Seq implementation from FairSeq (Ott et al., 2019) . To understand the impact of each of the proposed methods, we train a total of four models:", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 62, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 154, |
|
"text": "(Ott et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 T: Vanilla Transformer (Vaswani et al., 2017) , with control tokens, used as a baseline model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 47, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 T+Pre: Transformer trained with FastText's pretrained embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 T+NER: Transformer trained on NER mapped data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 SimpleNER (T+Pre+NER): Transformer trained on NER mapped data with FastText's pretrained embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For ease of comparison, all four models were trained with an input embedding dimensionality of 300, fully connected layers with a dimensionality of 2048, 6 layers and 6 attention heads on both, the encoder and the decoder. During training , we are using Adam optimizer (Kingma and Ba, 2015) (\u03b2 1 = 0.9, \u03b2 2 = 0.999, = 10 \u22128 ), with a learning rate of 0.00011 and 4000 warm-up updates, while dropout is set at 0.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For training, we make use of the WikiLarge dataset (Zhang and Lapata, 2017) , with 296,402 automatically aligned complex-simple sentence pairs obtained from the English Wikipedia and Simple English Wikipedia. For validation and testing, we use the evaluation sets of the two tracks we participated in, namely: ASSET (Alva-Manchego et al., 2020) and TurkCorpus (Xu et al., 2016 1. Source \"orton and his wife were happy to have alanna marie orton on july 12 , 2008.\" Baseline (T) \"orton and his wife , dorothy marie orton on july 12 , 2007 .\" SimpleNER \"orton and his wife supported alanna marie orton on july 12 , 2008.\" 2. Source \"aracaju is the capital of the state.\" Baseline (T) \"it is the capital city of the country .\" SimpleNER \"aracaju is the capital city of the country .\" 3. Source \"yoghurt or yogurt is a milk-based food made by bacterial fermentation of milk.\" SimpleNER \"yogurt is a type of food that is made by bacterial fermentation of product@1.\" 4. Source \"entrance to tsinghua is very very difficult.\" SimpleNER \"the entrance to tsinghua is very very simple .\" Table 4 : Sample outputs of the baseline(T) and SimpleNER models on the TurkCorpus-testset 10 human-annotated simplifications for each of the 2359 source sentences, whereas TurCorpus provides 8.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 75, |
|
"text": "(Zhang and Lapata, 2017)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 376, |
|
"text": "(Xu et al., 2016", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1078, |
|
"end": 1085, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Apart from lower-casing all three splits of the data, the data pairs of the trainset with token length lower than 3 were removed, and sentence pairs with compression ratio (len(target)/len(source)) beyond the bounds [0.2, 1.5] were omitted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.2" |
|
}, |
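The preprocessing filter above can be sketched as below; we assume "token length" refers to whitespace-token counts and that the minimum-length check applies to both sides of the pair, since the text does not say so explicitly.

```python
def keep_pair(source: str, target: str, min_tokens: int = 3,
              bounds=(0.2, 1.5)) -> bool:
    """Training-pair filter: drop very short sentences and pairs whose
    length-compression ratio len(target)/len(source) is out of bounds."""
    src, tgt = source.lower().split(), target.lower().split()
    if len(src) < min_tokens or len(tgt) < min_tokens:
        return False
    ratio = len(tgt) / len(src)
    return bounds[0] <= ratio <= bounds[1]
```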
|
{ |
|
"text": "Our model is evaluated on both BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016) . But as points out, BLEU favours directly replicating the source sentence because of a high N-Gram similarity between the source and target sentences in most sentence simplification datasets. Therefore we only use SARI to rate and compare the models. We also make use of SARI to choose the best performing checkpoints on the validation sets of each of the tracks for evaluation on their respective test sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 59, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 69, |
|
"end": 86, |
|
"text": "(Xu et al., 2016)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "All models were trained on 4 Nvidia GeForce GTX 1080 Ti GPUs with 64 GB of vRAM. Training was carried out for 20 epochs, and took roughly 11 hours for each model. For all four models, we set the control tokens to NbChars: 0.95, LevSim: 0.75, and WordRank: 0.75. We have omitted DepTreeDepth as shows that using all four tokens brings down the overall performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We report the BLEU and SARI scores on the test and validation splits of the ASSET & TurkCorpus datasets for each of the four models (Table 3) . All three variants outperform the baseline model (T) across evaluation sets. Using pretrained embeddings (T+Pre) and NER tagged data (T+NER) individually boosts the baseline SARI scores substantially, with the latter approach providing a larger increment in the performance. Using both methods together, further improves the overall SARI score (SimpleNER). Also note how the general BLEU score of the models reduce as the SARI score improves, indicating an increasingly dissimilar and simplified generation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 141, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "SimpleNER shows a better retention of named entities from the source sentence than the baseline model (Example 1, Table 4 ). The contrast is clearer between T+Pre and SimpleNER, as the standalone use of pretrained embeddings in T+Pre allows for unwanted switching between two named entities with similar vector representations (eg. \"2007\" & \"2008\") . Also, NER tagging prevents the unwanted shift from proper nouns to pronouns as observed in the baseline model (Example 2, Table 4 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 348, |
|
"text": "(eg. \"2007\" & \"2008\")", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 121, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 480, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We also noted that using NER tagging can hamper certain outputs: While decoding, if the model generates an NER-tag that either has a type or index mismatch with the original NE token, then the tag remains in the output even after NER-untagging (Example 3, Table 4 ). Also, using pretrainedembeddings can result in instances where a source gets replaced with another token having a similar vector representation. This was particularly observed when some tokens were replaced by their exact antonyms (Example 4, Table 4 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 263, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 510, |
|
"end": 517, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The following is a summary of the response submitted with our output and model card submission to the GEM 2021 modelling shared task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Social Impact", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our model can be utilized to produce point-to-point simplifications for people with cognitive disabilities, to read and understand text. Additionally, it proves helpful for second language learners, especially in public service centres such as airports or health clinics. Although the use of NER-mapping improves our model performance, it can lead to certain pitfalls. Masking NERs before training assumes that named entities don't need to undergo simplification or elaboration. This may be true for most evaluation datasets like ASSET and TurkCorpus, however this isn't the case for many real world cases. High-ranked named entities are often part of domain specific texts, which may require further explanation to be clearly understood by the general public.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Real World Use", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Elaboration and replacement of NEs are both crucial for simplification and also the pitfalls of our model. This shows that there is more linguistic information and knowledge of the named entities required to build the model to perfection or evaluate its results. Thus, the best suited method would be a manual evaluation and it could be as simple as a filling a likert scale on how well the simplification and elaboration were.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Measuring Impact", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Since this method is inefficient with respect to time and resources, there is a need for automated evaluation methods to approximate human judgment. A rudimentary measure to work on could take into account the NE's word rank (WR) and its average similarity (AS) to the other words in its sentence. Here, a high WR and a low AS would imply that the sentence does not contextualize the NE even when it might require elaboration. The other case would be when the NE has a relatively low WR and a high AS implying that the sentence contextualizes the NE aptly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Measuring Impact", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "In this paper, we report the performance of four Seq2Seq Transformer models on the sentence simplification task of GEM 2021 under two tracks: AS-SET and TurkCorpus. We show that individually using pre-trained embeddings and NER-replaced data substantially boosts the performance of a Transformer model assisted by control tokens. The NER tagging prevents the model from replacing important NEs with low rank tokens Also, using pretrained embeddings lets the model access a larger and fine-grained content-word vocabulary for simplification, despite training the model on relatively small data. When put together, the two approaches give rise to a much higher overall performance on the task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Some pitfalls to be addressed are: The mismatch between the NER tags generated at the simplified end and the original NE tokens could be due to the exact string matching for NEs, the use of static embeddings (FastText) may have caused the unwanted swaps between highly similar tokens. Using finedtuned contextual embeddings may help. Additionally, since simplification datasets like TurkCorpus and ASSET might utilize different summarization styles, adding a control token to encode and control the output style could be explored.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "https://github.com/kvadityasrivatsa/ gem_2021_simplification_task", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "FLAIR: An easy-to-use framework for state-of-theart NLP", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanja", |
|
"middle": [], |
|
"last": "Bergmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kashif", |
|
"middle": [], |
|
"last": "Rasul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schweter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--59", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-4010" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the- art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations", |
|
"authors": [ |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Alva-Manchego", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Louis", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolina", |
|
"middle": [], |
|
"last": "Scarton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4668--4679", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.424" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fernando Alva-Manchego, Louis Martin, Antoine Bor- des, Carolina Scarton, Beno\u00eet Sagot, and Lucia Spe- cia. 2020. ASSET: A dataset for tuning and evalu- ation of sentence simplification models with multi- ple rewriting transformations. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 4668-4679, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1607.04606" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vec- tors with subword information. arXiv preprint arXiv:1607.04606.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An evaluation of syntactic simplification rules for people with autism", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Constantin", |
|
"middle": [], |
|
"last": "Orasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iustin", |
|
"middle": [], |
|
"last": "Dornescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/W14-1215" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Evans, Constantin Orasan, and Iustin Dor- nescu. 2014. An evaluation of syntactic simplifica- tion rules for people with autism.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Controllable abstractive summarization", |
|
"authors": [ |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--54", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-2706" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceed- ings of the 2nd Workshop on Neural Machine Trans- lation and Generation, pages 45-54, Melbourne, Australia. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Controlled hallucinations: Learning to generate faithfully from noisy data", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "864--870", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.findings-emnlp.76" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katja Filippova. 2020. Controlled hallucinations: Learning to generate faithfully from noisy data. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 864-870, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "spaCy: Industrial-strength Natural Language Processing in Python", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.5281/zenodo.1212303" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": ["P."], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "3rd International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Binary codes capable of correcting deletions, insertions, and reversals", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vladimir I Levenshtein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1966, |
|
"venue": "Soviet physics doklady", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "707--710", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet physics doklady, volume 10, pages 707-710. Soviet Union.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Controllable sentence simplification", |
|
"authors": [ |
|
{ |
|
"first": "Louis", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00c9ric", |
|
"middle": [], |
|
"last": "De La Clergerie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4689--4698", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Louis Martin,\u00c9ric de la Clergerie, Beno\u00eet Sagot, and Antoine Bordes. 2020. Controllable sentence sim- plification. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4689- 4698, Marseille, France. European Language Re- sources Association.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "\u00c9ric Villemonte de la Clergerie, and Beno\u00eet Sagot. 2019. Reference-less quality estimation of text simplification systems", |
|
"authors": [ |
|
{ |
|
"first": "Louis", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Humeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre-Emmanuel", |
|
"middle": [], |
|
"last": "Mazar\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00c9ric", |
|
"middle": [], |
|
"last": "Villemonte de la Clergerie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Louis Martin, Samuel Humeau, Pierre-Emmanuel Mazar\u00e9, Antoine Bordes,\u00c9ric Villemonte de la Clergerie, and Beno\u00eet Sagot. 2019. Reference-less quality estimation of text simplification systems. CoRR, abs/1901.10746.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "fairseq: A fast, extensible toolkit for sequence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexei", |
|
"middle": [], |
|
"last": "Baevski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Semeval 2016 task 11: Complex word identification", |
|
"authors": [ |
|
{ |
|
"first": "Gustavo", |
|
"middle": [], |
|
"last": "Paetzold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "560--569", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gustavo Paetzold and Lucia Specia. 2016. Semeval 2016 task 11: Complex word identification. In Pro- ceedings of the 10th International Workshop on Se- mantic Evaluation (SemEval-2016), pages 560-569.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1073083.1073135" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": ["V."], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Renata Pontin de Mattos Fortes, Thiago Alexandre Salgueiro Pardo, and Sandra Maria Alu\u00edsio", |
|
"authors": [ |
|
{ |
|
"first": "Willian", |
|
"middle": ["Massami"], |
|
"last": "Watanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arnaldo", |
|
"middle": [], |
|
"last": "Candido Junior", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vin\u00edcius", |
|
"middle": [], |
|
"last": "Rodriguez Uz\u00eada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Renata", |
|
"middle": [], |
|
"last": "Pontin de Mattos Fortes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thiago", |
|
"middle": ["Alexandre", "Salgueiro"], |
|
"last": "Pardo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": ["Maria"], |
|
"last": "Alu\u00edsio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 27th ACM International Conference on Design of Communication, SIGDOC '09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "29--36", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1621995.1622002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Willian Massami Watanabe, Arnaldo Candido Junior, Vin\u00edcius Rodriguez Uz\u00eada, Renata Pontin de Mat- tos Fortes, Thiago Alexandre Salgueiro Pardo, and Sandra Maria Alu\u00edsio. 2009. Facilita: Reading as- sistance for low-literacy readers. In Proceedings of the 27th ACM International Conference on Design of Communication, SIGDOC '09, page 29-36, New York, NY, USA. Association for Computing Machin- ery.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Sentence simplification by monolingual machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Sander", |
|
"middle": [], |
|
"last": "Wubben", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antal", |
|
"middle": [], |
|
"last": "van den Bosch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emiel", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1015--1024", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sander Wubben, Antal van den Bosch, and Emiel Krah- mer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1015- 1024, Jeju Island, Korea. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Optimizing statistical machine translation for text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quanze", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "401--415", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00107" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Named entity recognition as dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Juntao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Bohnet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6470--6476", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.577" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6470- 6476, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Sentence simplification with deep reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Xingxing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "584--594", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1062" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584-594, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A monolingual tree-based translation model for sentence simplification", |
|
"authors": [ |
|
{ |
|
"first": "Zhemin", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Delphine", |
|
"middle": [], |
|
"last": "Bernhard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1353--1361", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1353-1361, Bei- jing, China. Coling 2010 Organizing Committee.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"text": "NER Tagging input sentence", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Baseline) 68.815 36.707 72.561 35.992 71.167 37.801 74.339 37.604 T + Pre 62.488 38.845 71.536 37.700 63.861 38.139 73.627 38.196 T + NER 59.215 39.380 70.433 37.985 58.985 38.996 72.181 38.375 SimpleNER 59.324 39.551 70.202 38.897 59.586 39.777 68.622 38.231", |
|
"num": null, |
|
"content": "<table><tr><td>Model</td><td>test asset BLEU SARI BLEU SARI BLEU SARI BLEU SARI val asset test turk val turk</td></tr><tr><td>T (</td><td/></tr><tr><td/><td>). Both have the same source</td></tr><tr><td/><td>sentences in their test (359 sentence pairs) and vali-</td></tr><tr><td/><td>dation sets (2000 sentence pairs). ASSET provides</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "Scores obtained by the trained models on different test and validation sets (best scores are bolded)", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |