id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: string (4 classes)
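The columns above describe a sentence-pair classification task: each record pairs two sentences, here excerpts from NLP papers, with a label naming the relation between them, and every example listed below carries the "contrasting" label. As a minimal sketch of how such records might be inspected, the Python snippet below assumes the split has been exported to a local JSON Lines file with one record per line; the file name "pairs_train.jsonl" and the helper "load_records" are placeholders, not part of the dataset.

```python
import json
from collections import Counter

# Minimal inspection sketch for the schema above (id, sentence1, sentence2, label).
# It assumes the split was exported to a local JSON Lines file, one record per line;
# the file name "pairs_train.jsonl" is a placeholder, not the dataset's real location.
def load_records(path="pairs_train.jsonl"):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

records = load_records()

# Label distribution (the schema reports 4 label classes).
print(Counter(r["label"] for r in records))

# String-length ranges, for comparison with the column statistics listed above.
for col in ("id", "sentence1", "sentence2"):
    lengths = [len(r[col]) for r in records]
    print(col, min(lengths), max(lengths))

# Keep only the pairs labelled "contrasting", as in the sample records that follow.
contrasting = [(r["sentence1"], r["sentence2"]) for r in records if r["label"] == "contrasting"]
print(len(contrasting), "contrasting pairs")
```

Any loader that exposes the same four fields (for example, a pandas DataFrame or a datasets.Dataset) would work equally well; nothing in the snippet depends on how the data is hosted.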
train_5200
We observe a −0.1% degradation in performance on the partial adaptation and a −4.3% degradation on the full adaptation of the Quaternion Transformer.
we note that the drop in performance with respect to parameter savings is still quite decent, e.g., saving 32% parameters for a drop of only 0.1 BLEU points.
contrasting
train_5201
Appendix A.2): For using the sparsemax mapping in an attention mechanism, Martins and Astudillo (2016) show that it is differentiable almost everywhere. At first glance, sparsemax appears very different from softmax, and a strategy for producing other sparse probability mappings is not obvious.
the connection becomes clear when considering the variational form of softmax (Wainwright and Jordan, 2008), $\mathrm{softmax}(z) = \operatorname{argmax}_{p \in \triangle} \, p^\top z + H^{S}(p)$, where $H^{S}(p) := -\sum_j p_j \log p_j$ is the well-known Gibbs-Boltzmann-Shannon entropy with base e. Likewise, letting $H^{G}(p) := \frac{1}{2} \sum_j p_j (1 - p_j)$ be the Gini entropy, we can rearrange Eq.
contrasting
train_5202
(2018) found non-monotonic models (with either soft attention or hard alignment) outperform popular monotonic models (Aharoni and Goldberg, 2017) in the three above mentioned tasks.
the inductive bias of monotonicity, if correct, should help learn a better model or, at least, learn the same model.
contrasting
train_5203
This query chain can be decomposed into two parts: a key represented by $i_k$ and a query. The former is modulated through the weight matrix $W_k$, and is tightly associated with the corresponding input token.
information carried by the key remains intact during the evolution of time step $t$. The latter, induced by the weight matrix $W_q$, highly depends on the position and length of this chain, which dynamically changes between different token pairs.
contrasting
train_5204
However, for LSTM, GRU and ATR, LN results in significant computational overhead (about 27%∼71%).
quasi recurrent models like SRU and LRN only suffer a marginal speed decrease.
contrasting
train_5205
In recent years, deep neural networks have achieved outstanding success in natural language processing (NLP), computer vision and speech recognition.
these deep models are data-hungry and generalize poorly from small datasets, very much unlike humans (Lake et al., 2015).
contrasting
train_5206
This finding also agrees with the observation we found in multi-label classification: our approach has superior generalization capability in low-resource setting with few training examples.
to the strongest baseline HD-LSTM with 34.51M parameters and 0.03 seconds for one batch, our approach has 17.84M parameters and takes 0.06 seconds in an acceleration setting, and 0.12 seconds without acceleration.
contrasting
train_5207
For example, ASP with three-source tasks achieves 82.23% and 66.92% accuracy, respectively, in SNLI and MNLI, which are lower than 82.28% and 67.39% accuracy with its best performance with two-source tasks.
TARS overcomes such challenges, for example, 83.12% > 82.67% and 68.24% > 67.79% in SNLI and MNLI, except for QQP, which can be further improved by asymmetric MTL techniques (Lee et al., 2016).
contrasting
train_5208
Early works mainly focused on the shared representation methods (Liu et al., 2017;Tong et al., 2018;Lin et al., 2018), using a single shared encoder between all tasks while keeping several task-dependent output layers.
the sparseness of the shared space, when shared by K tasks, was observed (Sachan and Neubig, 2018).
contrasting
train_5209
Our method is based on the observation that high-dimensional multimodal time series data often exhibit correlations across time and modalities which leads to low-rank tensor representations.
the presence of noise or incomplete values breaks these correlations and results in tensor representations of higher rank.
contrasting
train_5210
facial behaviors, and body postures (Mihalcea, 2012;Rossiter, 2011).
as much as more modalities are required for improved performance, we now face a challenge of imperfect data where data might be 1) incomplete due to mismatched modalities or sensor failure, or 2) corrupted with random or structured noise.
contrasting
train_5211
This leads to redundancy in these overparametrized tensors which explains their low rank (Figure 1).
the presence of noise or incomplete values breaks these natural correlations and leads to higher rank tensor representations.
contrasting
train_5212
Recently, neural models such as cascaded residual autoencoders (Tran et al., 2017), deep adversarial learning (Cai et al., 2018), or translation-based learning (Pham et al., 2019) have also been proposed.
these methods often require knowing which of the entries or modalities are imperfect beforehand.
contrasting
train_5213
Given our intuitions above, it would then seem natural to augment the discriminative objective function with a term to minimize the rank of M. In practice, the rank of an order-$M$ tensor is computed using the nuclear norm $\|X\|_*$ (Friedland and Lim, 2014). When $M = 2$, this reduces to the matrix nuclear norm (the sum of singular values).
computing the rank of a tensor or its nuclear norm is NP-hard for tensors of order ≥ 3 (Friedland and Lim, 2014).
contrasting
train_5214
Thus one pinyin may correspond to ten or more Chinese characters on the average.
pinyin IME may benefit from decoding longer pinyin sequence for more efficient inputting.
contrasting
train_5215
This sensitivity helps language users segment a continuous stream of speech (Vitevitch et al., 1997), incorporate new words into the lexicon (Storkel et al., 2006), and reconstruct parts of an utterance that may have been obscured by noise.
the details of how language learners infer these phonotactic generalizations from incoming acoustic data are still unclear.
contrasting
train_5216
(2011) that were trained on a comparable, but smaller, amount of unsyllabified data (20,000 vs 30,000 words).
a Wilcoxon rank sum test on ρ yielded no significant difference between feature-naive and feature-aware models in this regard (W = 282; p = 0.56).
contrasting
train_5217
These attributes suggest that CLMs can model regularities that exist within words, such as morphological inflection.
even large language modeling (LM) corpora have sparse coverage of inflected forms for morphologically-rich languages, which has been shown to make word and character language modeling more difficult (Gerz et al., 2018b;Cotterell et al., 2018).
contrasting
train_5218
In machine translation, subword tokenization with byte pair encoding (BPE) addresses the problem of unknown words and improves performance (Sennrich et al., 2016).
segmentation is potentially ambiguous, and it is unclear whether preset tokenization offers the best performance for target tasks.
contrasting
train_5219
As the word embedding model is a fundamental component in many NLP systems, mitigating bias in embeddings plays a key role in the reduction of bias that is propagated to downstream tasks (e.g., (Zhao et al., 2018a)).
it is debatable if debiasing word embeddings is a philosophically right step towards mitigating bias in NLP.
contrasting
train_5220
Future work can look to apply existing methods or devise new techniques towards mitigating gender bias in other languages as well.
such a task is not trivial.
contrasting
train_5221
For example, profession-related nouns such as professor, doctor, and programmer have been shown to be stereotypically male-biased, whereas nurse and homemaker are stereotypically female-biased, and a debiasing method must remove such biases.
one would expect beard to be associated with male nouns and bikini to be associated with female nouns, and preserving such gender biases would be useful, for example, for a recommendation system (Garimella et al., 2017).
contrasting
train_5222
For both datasets, we uncover strong associations between inferred AAE dialect and various hate speech categories, specifically the "offensive" label from DWMW17 (r = 0.42) and the "abusive" label from FDCL18 (r = 0.35), providing evidence that dialect-based bias is present in these corpora.
our findings also hold for the widely used data from Waseem and Hovy (2016); however, that dataset has severe limitations (see Schmidt and Wiegand, 2017).
contrasting
train_5223
Past work on bias in hate speech datasets has exclusively focused on finding and removing bias against explicit identity mentions (e.g., woman, atheist, queer; Park and Fung, 2017; Dixon et al., 2018).
our work shows how insensitivity to dialect can lead to discrimination against minorities, even without explicit identity mentions.
contrasting
train_5224
In the English source sentence, the nurse's gender is unknown, while the coreference link with "her" identifies the "doctor" as a female.
the Spanish target sentence uses morphological features for gender: "el doctor" (male), versus "la enfermera" (female).
contrasting
train_5225
One of the weaknesses of these approaches, however, is that they do not take word ordering into account during the learning process.
word-based approaches based on RNNs that consider sequence information have been presented, but they are not competitive in terms of speed or quality of the embeddings (Mikolov et al., 2010;Mikolov and Zweig, 2012;Mesnil et al., 2013).
contrasting
train_5226
Finally, Press and Wolf (2017) introduced a model, based on word2vec, where the embeddings are extracted from the output topmost weight matrix, instead of the input one, showing that those representations are also valid word embeddings.
to the above approaches, each of which aims to learn representations of lexical items, sense embeddings represent individual word senses as separate vectors.
contrasting
train_5227
SGNS does not, on average, make the vast majority of words any more gendered in the vector space than they are in the training corpus; individual words may be slightly more or less gendered due to reconstruction error.
for words that are gender-stereotyped (e.g., 'nurse') or gender-specific by definition (e.g., 'queen'), SGNS amplifies the gender association in the training corpus.
contrasting
train_5228
To allow a fair comparison with prior work, our experiments in this paper focus on gender association.
our claims extend to other types of word associations as well, which we leave as future work.
contrasting
train_5229
In practice, this issue often goes unnoticed because each word in the attribute set, at least for gender association, has a counterpart that appears with roughly equal frequency in most training corpora (e.g., 'man' vs. 'woman', 'boy' vs. 'girl').
this is not guaranteed to hold, especially for more nebulous attribute sets (e.g., 'pleasant' vs. 'unpleasant' words).
contrasting
train_5230
Given that much of the vocabulary falls into this category, this means that the embedding model does not systematically change the genderedness of most words.
because of reconstruction error, individual words may be more or less gendered in the embedding space, simply due to chance.
contrasting
train_5231
In our debiased embedding space, 94.9% of gender-appropriate analogies with a strength of at least 0.5 are preserved in the embedding space while only 36.7% of gender-biased analogies are.
the Bolukbasi et al.
contrasting
train_5232
Using RIPA, we found that SGNS does not, on average, make most words any more gendered in the embedding space than they are in the training corpus.
for words that are gender-biased or gender-specific by definition, SGNS amplifies the genderedness in the corpus.
contrasting
train_5233
Arguably, the gendered use of pregnant is benign: it is not due to cultural bias that women are more often described as pregnant, but rather because women bear children.
differences in the use of other adjectives (or verbs) may be more pernicious.
contrasting
train_5234
$D_S$ may span many diverse topics, while $D_T$ focuses on one or a few, so there may be a large overall drift from $D_S$ to $D_T$ too.
a judicious subset of $D_S$ may exist that would be excellent for augmenting $D_T$.
contrasting
train_5235
Howard and Ruder (2018) propose to fine-tune a whole language model using careful differential learning rates.
epoch-based termination may be inadequate.
contrasting
train_5236
It is clear that pre-training the contextual embeddings on relevant target corpus helps in the downstream classification task.
the gains of SrcSel:R over Tgt are not clear.
contrasting
train_5237
For example, the angle between embeddings of (car, hypernymy, vehicle) and (car, synonymy, auto) is large.
that of (car, synonymy, automobile) and (car, synonymy, auto) is small.
contrasting
train_5238
(2017) present the model Attract-Repel to improve qualities of word embeddings for synonymy recognition.
they focus on one particular lexical relation, not capable of distinguishing multiple types of lexical relations.
contrasting
train_5239
Based on $J_m$, if $(x_i, y_i)$ has the lexical relation type $r_m$, the norm of $M_m x_i - y_i$ is likely to be small.
the norms of $M_n x_i - y_i$ ($1 \le n \le |R|$, $n \ne m$) are likely to be large.
contrasting
train_5240
This makes it a suitable objective for aligning the vector spaces of x, y in the latent space.
to the discriminative directed methods in (Mikolov et al., 2013a;Smith et al., 2017;Xing et al., 2015), IBFA has the capacity to model noise.
contrasting
train_5241
Parallel corpora are rare, and even when they do exist, they often only exist between specific pairs of languages.
the documentation of a language often begins with the creation of several important documents, including a dictionary of key terms, and translations of religious texts.
contrasting
train_5242
Also beginning with aligned Bible data, they recover verbal lemmas by leveraging multi-lingual alignments.
where they are only interested in recovering the lemma, we simultaneously induce detailed morphological features of the words in the target language, over a wider range of verbal and nominal morphology, and deploy a new set of machine learning techniques to do so.
contrasting
train_5243
For example, German nouns ending in "-ung" are very likely to pluralize with an "-en" suffix, but the projection baseline discovers no correct "-ung" pairs.
"-en" is a common plural suffix in German, and the systems systematically strip the "-en" from "-ungen" nouns, although often lower in the hypothesis list.
contrasting
train_5244
Both baselines struggle to produce the correct lemma; nouns are about 4 times as likely to observe null-inflection as verbs, and even plural nouns tend to drift significantly from their lemmas, to the point that another citation form has a smaller edit-distance.
we note little difference between nouns and verbs for any of our systems; in fact, our verbal system prior to reranking is slightly better than the nominal system.
contrasting
train_5245
We claim that the Bible is a suitable resource for learning the morphology of low-resource languages, but due to the necessity of gold morphological dictionaries, many of our evaluation languages cannot be considered low-resource.
the only available resources we assume to exist are a translated Bible and a bilingual dictionary.
contrasting
train_5246
The Transformer is adept at parallelization, performing (multi-head) and stacking (multi-layer) SANs to learn the sentence representation to predict translation, and has delivered state-of-the-art performance on various translation tasks (Bojar et al., 2018; Marie et al., 2018).
these positional embeddings focus on sequentially encoding order relations between words, and do not explicitly consider reordering information in a sentence, which may degrade the performance of Transformer translation systems.
contrasting
train_5247
Word ordering is an important issue in translation.
it has not been extensively studied in NMT.
contrasting
train_5248
It can be seen that pre-norm Transformer obtains the same BLEU score as TA without the requirement of complicated attention design.
DLCL in both post-norm and pre-norm cases outperforms TA.
contrasting
train_5249
Nevertheless, our system with a 30-layer encoder is still faster than Transformer-Big, because the encoding process is independent of beam size, and runs only once.
the decoder suffers from severe autoregressive problems.
contrasting
train_5250
We attribute it to post-norm Transformer being more sensitive to the large learning rate.
in the case of either a 6-layer encoder or a 20-layer encoder, the pre-norm Transformer benefits from the larger batch and learning rate.
contrasting
train_5251
The model using FastText embeddings is shown to be more robust across the datasets, although it also fails to outperform the diverse decoding baseline in WMT14 dataset.
syntax-based models achieve much higher diversity in both datasets.
contrasting
train_5252
We can see that the candidate translations produced by beam search have only minor grammatical differences.
the translation results sampled with the syntactic coding model have drastically different grammars.
contrasting
train_5253
The cosine similarity of a sentence pair is calculated as the dot product of their normalized representations, which is bounded in the [-1, 1] range.
the threshold to decide when to accept a pair is not straightforward and might depend on the language pair and the corpus (Artetxe and Schwenk, 2018).
contrasting
train_5254
Previous works have reduced sequence lengths to make training more tractable through fixed-length downsampling.
phonemes are variable lengths.
contrasting
train_5255
Over the last two decades, there has been extensive study targeting unsupervised constituency parsing (Klein and Manning, 2002, 2004, 2005; Bod, 2006a,b; Ponvert et al., 2011) and dependency parsing (Klein and Manning, 2004; Smith and Eisner, 2006; Spitkovsky et al., 2010; Han et al., 2017).
all of these approaches are based on linguistic annotations.
contrasting
train_5256
It is worth noting that the PRPN baseline reaches this performance without any information from images.
the performance of PRPN is less stable than that of VG-NSL across random initializations.
contrasting
train_5257
Figure 7: A failure example by Benepar, where it fails to parse the noun phrase "three white sinks in a bathroom under mirrors"; according to human commonsense, it is much more common for sinks, rather than a bathroom, to be under mirrors.
most of the constituents (e.g., "three white sinks" and "under mirrors") are still successfully extracted by Benepar.
contrasting
train_5258
(2018) improved performance for several models.
while many of these augmented instructions have clear starting or ending descriptions, the middle portions are often disconnected from the path they are paired with (see for in depth analysis of augmented path instructions).
contrasting
train_5259
To avoid the intensive labor involved in dense annotations, (Huang et al., 2018) and (Zhou et al., 2018) considered the problem of weaklysupervised video grounding where only aligned video-sentence pairs are provided without any fine-grained regional annotations.
they both ground only a noun or pronoun in a static frame of the video.
contrasting
train_5260
Also, to avoid affecting previously learned Step-2 knowledge, we constrain $\theta_{1,k}$ and $\psi_{2,k}$ to be orthogonal (Condition 2.2).
strictly imposing this condition in the objective function is not feasible (Bousmalis et al., 2016), hence we add a penalizing term to the objective function as an approximation to the orthogonality condition. Both Conditions 2.1 and 2.2 are mutually dependent, because for two matrices' product to be zero, they must divide the basis vectors between them, i.e., for an $n$-dimensional space, there are $n$ basis vectors, and if $p$ of those vectors are assigned to one matrix, then the rest of the $n - p$ vectors (or a subset of them) should be assigned to the other matrix.
contrasting
train_5261
Modern entity linking systems rely on large collections of documents specifically annotated for the task (e.g., AIDA CoNLL).
we propose an approach which exploits only naturally occurring information: unlabeled documents and Wikipedia.
contrasting
train_5262
For instance, in Figure 3, for document "Brexit", we link entity Brexit to all other entities.
we do not link United Kingdom to Greek withdrawal from the eurozone as they are more than l entities apart.
contrasting
train_5263
Our model is accurate for PER, achieving accuracy of about 97%, only 0.53% lower than the supervised model.
annotated data appears beneficial for other named-entity types.
contrasting
train_5264
This result may be interpreted as suggesting that humanannotated data is not beneficial for entity linking, given that we have Wikipedia and web links.
we believe that the two sources of information are likely to be complementary.
contrasting
train_5265
Misra et al., 2018;Chaplot et al., 2018;Mei et al., 2016).
neural networks' powerful abilities to induce complex representations have come at the cost of data efficiency.
contrasting
train_5266
To solve the problem, training data selection (TDS) has been proven to be a prospective solution for domain adaptation in leveraging appropriate data.
conventional TDS methods normally require a predefined threshold, which is neither easy to set nor applicable across tasks, and models are trained separately from the TDS process.
contrasting
train_5267
More recently, Generative Adversarial Net (GAN) based methods (Zang and Wan, 2017;Yu et al., 2017;Xu et al., 2018a) have been proposed to enhance the generation of long, diverse and novel text.
they still focus on word-level generation, and neglect the importance of topical and syntactic characteristics from natural languages.
contrasting
train_5268
to incorporate topic information for essay generation.
the model performance is not satisfactory.
contrasting
train_5269
Previous work (Feng et al., 2018) only adopts BLEU (Papineni et al., 2002) score based on ngram overlap to perform evaluation.
it is unreasonable to only use BLEU for evaluation because TEG is an extremely flexible task.
contrasting
train_5270
This limitation leads to poor coherence and topic-consistency.
the proposed model succeeds in generating novel high-quality text that closely surrounds the semantics of all input topics.
contrasting
train_5271
(2018) present a plan-and-write framework with two planning strategies to fully leverage the storyline.
story generation and the TEG task focus on different goals.
contrasting
train_5272
For example, the word "okay" has a positive intensity around 0.6, "good" is around 0.7, and "great" is around 0.8.
when taking the previously generated words into account, the sentiment intensity of the currently generated word may be totally different.
contrasting
train_5273
(2) Our model can more precisely control the sentiment intensity according to human scores on sentiment, and it also obtains the best results in both sentiment mean absolute error (MAE) and relative sentiment rank (MRRR).
automatic SC-Seq2Seq gets the second best MAE score while Revised-VAE + $L_{extra}$ gets the second best MRRR score.
contrasting
train_5274
This task aims to reverse the sentiment polarity of a sentence but keep its content unchanged without parallel data (Fu et al., 2018;Tsvetkov et al., 2018;Xu et al., 2018;Lample et al., 2019).
there is little research focusing on the fine-grained control of sentiment.
contrasting
train_5275
Recently, with the rise of neural networks, many methods generate text in an end-to-end manner (Wiseman et al., 2017; Bhowmik and de Melo, 2018).
they pay little attention to the grammatical structure of the output which may be ignored in generating long sentences, but it is crucial in generating short noun compounds like type descriptions.
contrasting
train_5276
A possible solution is to apply beam search to enlarge the searching space at the first stage.
in our preliminary experiments, when the beam size is small, the diversity of predicted key facts is low, and also does not help to improve the accuracy.
contrasting
train_5277
In the unsupervised paradigm, Paetzold and Specia (2016) proposed an unsupervised lexical simplification technique that replaces complex words in the input with simpler synonyms, which are extracted and disambiguated using word embeddings.
this work, unlike ours, only addresses lexical simplification and cannot be trivially extended to other forms of simplification such as splitting and rephrasing.
contrasting
train_5278
Pre-trained word embeddings are often seen to have a positive impact on sequence-to-sequence frameworks (Cho et al., 2014a).
traditional embeddings are not good at capturing relations like synonymy (Tissier et al., 2017), which are essential for simplification.
contrasting
train_5279
The latent representations of an input sentence $z_x$ and a syntactic tree template $z_y$ are fed into SIVAE-i, and the syntax of the generated sentence conforms with the explicitly selected target template.
linearized syntactic sequences are relatively long (as shown in Table 1) and long templates are more likely to mismatch particular input sentences, which may result in nonsensical paraphrase outputs.
contrasting
train_5280
(2018) generate paraphrases in a deep generative architecture.
all these methods assume the existence of some parallel paraphrase corpora while unsupervised paraphrase generation has been little explored.
contrasting
train_5281
Previous work used multi-level LSTM encoders (Yang et al., 2016) or hierarchical autoencoders (Li et al., 2015a) to learn hierarchical representations for long text or defined a stochastic latent variable for each sentence at decoding time (Serban et al., 2017).
our model encodes the entire paragraph into one single latent variable.
contrasting
train_5282
As shown in Table 9, the original sentences have been successfully manipulated to positive sentiment with the simple attribute vector operation.
the specific contents of the reviews are not fully retained.
contrasting
train_5283
Here, we present the method for optimizing $I_{p_e}(x,y)$, which can also be applied to $I_{p_d}(x,y)$.
to the parameter sharing techniques in most multi-task learning work (Collobert et al., 2011; Ando and Zhang, 2005), the parameter $\theta$ for the parser and the parameter $\phi$ for the generator are independent in our framework.
contrasting
train_5284
If the selected entity $e_t$ is new ($e_t \notin E_{t-1}$), the hidden state of the tracking model is updated with the embedding $\bar{e}$ of entity $e_t$ as input.
if entity $e_t$ has already appeared in the past ($e_t \in E_{t-1}$) but is not identical to the previous one ($e_t \ne e_{t-1}$), we use $h^{\mathrm{ENT}}_{s}$ (i.e., the memory state when this entity last appeared) to fully exploit the local history of this entity.
contrasting
train_5285
(2019)'s model has no tracking module unlike our model, which mitigates redundant references and therefore rarely contains erroneous relations.
when complicated expressions such as parallel structures are used, our model also generates erroneous relations, as illustrated by the underlined sentences describing the two players who scored the same points.
contrasting
train_5286
In recent years, automatic question generation (QG), which aims to generate natural questions based on a certain type of data sources including structured knowledge bases (Serban et al., 2016b;Guo et al., 2018) and unstructured texts (Rus et al., 2010;Heilman and Smith, 2010;Du et al., 2017;Du and Cardie, 2018), has been widely studied.
previous works mainly focus on generating standalone and independent questions based on a given passage.
contrasting
train_5287
Figure 3: Example questions generated by human (i.e., original questions denoted as OQ), NQG and our ReDR on CoQA.
NQG generates many more questions consisting of implicit coreference markers like "Where?"
contrasting
train_5288
Automatic Evaluation: Table 2 summarizes the results; both the GCN CITED TEXT SPANS and TALKSUMM-ONLY models are not able to obtain better performance than ABSTRACT.
for the Hybrid approach, where the abstract is augmented with sentences from the summaries emitted by the models, our TALKSUMM-HYBRID outperforms both GCN HYBRID 2 and ABSTRACT.
contrasting
train_5289
Pasunuru and Bansal (2018) develop a loss-function based on whether salient segments are included in a summary.
the optimization of RL-based models can be difficult to tune and slow to train.
contrasting
train_5290
The abstractive summary generated by the baseline Bottom-Up Summarization is much better, which indicates the effectiveness of the modifications.
the generated summary only contains partial salient information of the document.
contrasting
train_5291
Under these circumstances, supervised neural network models have achieved wide success, using a large number of reference summaries (Wang and Ling, 2016;Ma et al., 2018).
a model trained on these summaries cannot be adopted in other domains, as salient phrases are not common across domains.
contrasting
train_5292
Because the abstractive approach generates a concise summary by omitting trivial phrases, it can lead to a better performance than those of the extractive ones.
for Movies & TV, our model is competitive with other unsupervised extractive approaches; TextRank and Opinosis.
contrasting
train_5293
Because our full model generates summaries via learning the latent discourse tree, it sometimes fails to construct a tree, and thus experiences a decline in performance for relatively short reviews.
for datasets with the number of sentences exceeding 30, our model achieves competitive or better performance than the supervised model.
contrasting
train_5294
(2018a) proposed to retrieve a related summary from the training set as soft template to assist with the summarization.
their approach tends to oversimplify the role of the template, by directly concatenating a template after the source article encoding.
contrasting
train_5295
Other extractive methods apply sequence tagging models (Luan et al., 2017;Gollapalli et al., 2017; to identify keyphrases.
extractive methods cannot produce absent keyphrases.
contrasting
train_5296
Joint models for sentence selection and fusion implicitly perform content planning (Martins and Smith, 2009;Berg-Kirkpatrick et al., 2011;Bing et al., 2015;Durrett et al., 2016) and there is limited control over which sentences are merged and how.
this work attempts to teach the system to determine if a sentence singleton or a pair should be selected to produce a summary sentence.
contrasting
train_5297
Moreover, these methods tend to have limited content coverage by selecting salient words.
recent years have witnessed the success of Natural Language Generation (NLG) models to generate abstractive summaries.
contrasting
train_5298
Vocabulary expansion may be used to address the different vocabularies in source and target domains, and adversarial domain adaptation (ADA) may be used to merge the embedded feature representations across domains.
ADA does not adapt the decoder in an encoder-decoder generation model.
contrasting
train_5299
We adapt the domain-adversarial method for feature alignment in an encoder proposed by (Ganin et al., 2016).
for text generation, a domain-independent representation from the encoder, as used in domain adaptation for classification, is not adequate.
contrasting