id
stringlengths
7
12
sentence1
stringlengths
6
1.27k
sentence2
stringlengths
6
926
label
stringclasses
4 values
train_5400
Machine learning research has been focusing more and more on interpretability (Gilpin et al., 2018).
there are many nuances to interpretability (Lipton, 2016), and amongst them we focus on model transparency.
contrasting
train_5401
Empirically, previous work has found that LSTM language models use 200 context words on average (Khandelwal et al., 2018), indicating room for further improvement.
the direct connections between long-distance word pairs baked in attention mechanisms might ease optimization and enable the learning of long-term dependency (Bahdanau et al., 2014;Vaswani et al., 2017).
contrasting
train_5402
Previous work in the context of phrase-based statistical machine translation (Daumé III and Jagarlamudi, 2011) has noted that unseen (OOV) words account for a large portion of translation errors when switching to new domains.
this problem of OOV words in cross-domain transfer is under-examined in the context of NMT, where both training methods and experimental results will differ greatly.
contrasting
train_5403
Specifically, in this paper we tackle the task of data-based, unsupervised adaptation, where representative methods include creation of a pseudoparallel corpus by back-translation of in-domain monolingual target sentences (Sennrich et al., 2016a), or construction of a pseudo-parallel indomain corpus by copying monolingual target sentences to the source side (Currey et al., 2017).
while these methods have potential to strengthen the target-language decoder through addition of in-domain target data, they do not explicitly provide direct supervision of domainspecific words, which we argue is one of the major difficulties caused by domain shift.
contrasting
train_5404
The basic ideas are to store a handful of previous source or target sentences with context vectors (Jean et al., 2017;Wang et al., 2017a) or memory components (Maruf and Haffari, 2018;.
these methods have several limitations.
contrasting
train_5405
And to achieve localization, the coefficients corresponding to anchor points out of the neighbors of h M are set to zero.
it is hard to train in deep neural network using stochastic gradient methods.
contrasting
train_5406
The possible reason is that all the related work only leverage a small range of the documentlevel information, limited by model complexity and time consuming.
our models are capable to express all information with more abstract representations.
contrasting
train_5407
Translation given by NMT is not readable.
m-RefNet generates the core verb "strengthened" and B-RefNet provides a more accurate collocation "stepped up patrols".
contrasting
train_5408
propose to modify the NMT with light-weight key-value memory to store the translation history.
due to the limitation of the memory size, the very short view on the previous (25 timesteps) is not sufficient to model the document-level contextual information.
contrasting
train_5409
The Reinforce-NAT is proposed on the basis that the top-k words can occupy the central part of the probability distribution.
it remains unknown which k is appropriate for us.
contrasting
train_5410
We can see that while on BLEU-vs-AL plots, their models perform similarly to our test-time wait-k for de→en and zh→en, and slightly better than our test-time wait-k for en→zh, which is reasonable as both use a full-sentence model at the very core.
on BLEU-vs-CW plots, their models have much worse CWs, which is also consistent with results in their paper (Gu, p.c.).
contrasting
train_5411
In the ideal case where the input and output sentences have equal length, the translation will finish k steps after the source sentence finishes, i.e., the tail length is also k. This is consistent with human interpreters who start and stop a few seconds after the speaker starts and stops.
input and output sentences generally have different lengths.
contrasting
train_5412
1 and gradient descent algorithm.
estimating θ h is difficult given their discrete nature.
contrasting
train_5413
(2016) to explain the decision of text classifier.
here we focus on selecting a few relevant tokens from a source sequence in a translation task.
contrasting
train_5414
Not surprisingly, data augmentation significantly improves the robustness of NMT models to homophone noises.
the noises in training data seem to hurt the performance of the baseline model (from 45.97 to 43.94), and its effect on our model seems to be much smaller, probably because our model mainly uses the phonetic information.
contrasting
train_5415
Meanwhile, there have been many successes of transfer learning for NLP: models such as CoVe (McCann et al., 2017), ELMo (Peters et al., 2018), OpenAI GPT (Radford et al., 2018), ULM-FiT (Howard and Ruder, 2018), and BERT (Devlin et al., 2019) obtain powerful representations by training large-scale language models and use them to improve performance in many sentence-level and word-level tasks.
a language generation task such as APE presents additional challenges.
contrasting
train_5416
They also used segment embeddings (along with word and position embeddings) to differentiate between a pair of sentences in different languages.
this is only used in one of the pre-training phases of the language model (translation language modelling) and not in the downstream task.
contrasting
train_5417
Once we have the translation options for tokens in the source vocabulary we can perform a word by word translation of the source into Translationese.
a naive translation of each source token to its top translation option without considering the context is not the best way to go.
contrasting
train_5418
We performed experiments in a zero-shot setting, showing that the copy behaviour is triggered at test time with terms that were never seen in training.
to constrained decoding, we have also observed that the method exhibits flexible use of terminology as in some cases the terms are used in their provided form while other times inflection is performed.
contrasting
train_5419
By default, the hidden states of each word are hierarchically calculated by attending to all words in the sentence, which assembles global information.
several studies pointed out that taking all signals into account may lead to overlooking neighboring information (e.g.
contrasting
train_5420
As seen, TRANSFORMER which seeks more global information outperforms other models in both the "WC" and "CoIn" tasks.
modeling locality is beneficial to "SeLn", "ToCo", "BShif" and "Tense" tasks.
contrasting
train_5421
It is based on the estimation of a probability distribution over all input words for each target word.
source and target words are in different representation space, and they still have to go through a long information processing procedure that may lead to the source words are incorrectly translated into the target words.
contrasting
train_5422
(2017) incorporated a reconstructor module into NMT, which reconstructs the input source sentence from the hidden layer of the output target sentence to enhance source representation.
in previous studies, the training objective function was usually based on word-level and lacked explicit sentencelevel relationships .
contrasting
train_5423
Compared with Row 1 and 4, loss mse + enhanced (Row 3 and 6) increases little parameters about 0.6M and 2.1M, train and decode speed drop very little.
it has greatly improved the translation performance.
contrasting
train_5424
In absence of such huge amount of parallel data, NMT systems tend to perform poorly (Koehn and Knowles, 2017).
nMT without using any parallel data such as bilingual translations, bilingual dictionary or comparable translations, has recently become reality and opened up exciting opportunities for future research Artetxe et al., 2018;Yang et al., 2018).
contrasting
train_5425
To translate between many languages using bilingual version of unsupervised NMT, we require an encoder and one or two (Artetxe et al., 2018) decoders for each pair of languages.
we may not need separate decoders depending on the source language.
contrasting
train_5426
In supervised multilingual NMT, specifically for one-to-many translation directions, this consistency is absent in some existing works (Dong et al., 2015;Firat et al., 2016;Johnson et al., 2017).
in this work, we find that using shared encoder with fixed cross-lingual embedding improves performance in all the translation directions.
contrasting
train_5427
In English it means "The vice president of Trade Development Council" NMT can be factorized in character (Costa-Jussa and Fonollosa, 2016), word (Sutskever et al., 2014), or subword (Sennrich et al., 2015) level.
only using 1-best segmentation as inputs limits NMT encoders to express source sequences sufficiently and reliably.
contrasting
train_5428
(2016) propose languageadversarial training that does not directly depend on parallel corpora, but instead only requires a set of bilingual word embeddings (BWEs).
the multilingual transfer setting, although less explored, has also been studied (McDonald et al., 2011;Naseem et al., 2012;Täckström et al., 2013;Hajmohammadi et al., 2014;Zhang and Barzilay, 2015;Guo et al., 2016), showing improved performance compared to using labeled data from one source language as in bilingual transfer.
contrasting
train_5429
The idea is to have a set of language expert networks, one per source language, each responsible for learning language-specific features for that source language during training.
instead of hard-switching between the experts, each sample uses a convex combination of all experts, dictated by an expert gate.
contrasting
train_5430
In particular, the trainon-trans(lation) method translates the entire English training set into each target language which are in turn used to train a supervised system on the target language.
the test-ontrans(lation) method trains an English sequence tagger, and utilizes MT to translate the test set of each target language into English in order to make predictions.
contrasting
train_5431
On the Amazon dataset, it can be seen that when transferring to German or French (from the remaining three), the Japanese expert is less utilized compared to the European languages.
it is interesting that when transferring to Japanese, the French and English experts are used more than the German one, and the exact reason remains to be investigated.
contrasting
train_5432
The expert gate is a linear transformation (matrix) of size 128 × N , where N is the number of source languages.
the architecture of the task specific predictor C depends on the task.
contrasting
train_5433
The LSTMs − → f and ← − f are shared among multiple languages and capture a common language structure.
the word embeddings E and linear projection W are specific to each language .
contrasting
train_5434
In particular, many of the German words are mapped near the centre of the figure and make a large cluster.
the word embeddings trained by our model are not clustered by language, indicating that our model successfully maps word embeddings into a common space.
contrasting
train_5435
For POS tagging, the two most important features are dataset size and the TTR distance.
the lack of rich dataset-dependent features for the EL task leads to the geographic and syntactic distance being most influential.
contrasting
train_5436
On the one hand, cognate identification has been studied within linguistic typology and historical linguistics.
computational linguists have been researching methods for cognate production.
contrasting
train_5437
Then, we take these similarities as the weights to calculate an attentive vector for the entire graph G 2 by weighted summing all the entity embeddings of G 2 .ē We calculate matching vectors for all entities in both G 1 and G 2 by using a multi-perspective cosine matching function f m at each matching step (See Appendix A for more details): Graph-Level (Global) Matching Layer Intuitively, the above matching vectors (m att s) capture how each entity in G 1 (G 2 ) can be matched by the topic graph in the other language.
they are local matching states and are not sufficient to measure the global graph similarity.
contrasting
train_5438
It is motivated by the triangular NMT systems with pseudo target in the teacher-student networks.
we use pseudo source and apply different teacher-student networks.
contrasting
train_5439
In both cases, we first solve the translation problem, and the task is transformed to the monolingual setting.
while conceptually simple, the performance of this modular approach is fundamentally limited by the quality of machine translation.
contrasting
train_5440
(2018) showed that these spaces are, in general, far from being isomorphic, and thus they result in suboptimal or degenerated unsupervised mappings.
supervised methods that jointly train BWE from scratch (Upadhyay et al., 2016), on parallel or comparable corpora, do not have such limits since no pre-existing embedding spaces and no mapping function are involved.
contrasting
train_5441
These methods jointly train BWE by exploiting bilingual and monolingual contexts of words, materialized by sentence or document pairs, to learn a single BWE space.
they require large bilingual resources for training.
contrasting
train_5442
Bilingual lexicon induction (BLI) is by far the most popular evaluation task for BWE used by previous work in spite of its limits (Glavas et al., 2019).
to previous work, we used much larger test sets 10 for each language pair.
contrasting
train_5443
Again, BIVEC and SENTID performed similarly.
note that here USMT is merely an evaluation task: the improvement observed at step 0 are practically useless for USMT, since we can often gain much larger improvements through refinement as described in Section 2.2.
contrasting
train_5444
We also observed a lower accuracy when using original English, presumably due to the use of much smaller data than for training VECMAP.
when training monolingual word embeddings using fastText on the same English data used for training BIVEC, we observed that fastText underperforms BIVEC.
contrasting
train_5445
For instance, in our Hearst Graph the relation (male horse, is-a, equine) is missing.
since we correctly model that (male horse, is-a, horse) and (horse, is-a, equine), by transitivity, we also infer (male horse, is-a, equine), which SVD fails to do.
contrasting
train_5446
It is because the OOV issue is not severe for the char-based model and thus does not affect the performance much.
as we remove more and more training examples, the shrinking training dataset creates a bigger problem.
contrasting
train_5447
Assuming that the words have a uniform distribution, the paraphrase of C can then be written as an unweighted sum of its context vectors.
this uniformity assumption is unrealistic -word frequencies obey a Zipf distribution, which is Pareto (Piantadosi, 2014).
contrasting
train_5448
Therefore, we must also require that x 1 , y 1 , x 2 , y 2 be coplanar.
we do not need the word embeddings themselves to verify coplanarity; when there is no reconstruction error, we can express it as a constraint over M, the matrix that is implicitly factorized by the embedding model (see Definition 5).
contrasting
train_5449
This claim has since been repeated in other work (Arora et al., 2016).
for example, according to this conjecture, the analogy (king,queen)::(man,woman) holds iff for every word w in the vocabulary as noted earlier, this idea was neither derived from empirical results nor rigorous theory, and there has been no work to suggest that it would hold for models other than GloVe, which was designed around it.
contrasting
train_5450
Compositionality is one of the strongest assumptions in semantics, stating that the meaning of larger units can be derived from their smaller parts and their contextual relation.
for idiomatic phrases, this assumption does not hold true as the meaning of the whole phrase may not be related to their parts in a straightforward fashion.
contrasting
train_5451
In addition, this supervised approach requires an additional resource of ∼ 70k known noun phrases from Wikipedia for training.
compare their best models with all these baseline models and show that their models outperform across all the respective datasets.
contrasting
train_5452
These embeddings, therefore, are not favorable in applications that demand strong synonym identification.
supervised or semi-supervised representation leaning requires annotated corpus, such as paraphrastic sentences or natural language inference data (Conneau et al., 2017;Wieting and Gimpel, 2017;Subramanian et al., 2018;Cer et al., 2018).
contrasting
train_5453
The difference is that we focus on learning representation for multi-word concept names, hence the contextual and conceptual constraints are essential, in addition to the synonymous similarity.
most retrofitting approaches mainly aim to improve word representations.
contrasting
train_5454
The knowledge can be used to infer quality representations for new synonyms.
similar to skip-gram baselines, BNE faces serious challenges if the names are unpopular and contain words that do not reflect their conceptual meanings.
contrasting
train_5455
As shown in Table 2, SG W +WMD outperforms Jaccard baseline (in MAP score), mainly because of its ability to capture semantic matching.
both baselines are non-parametric.
contrasting
train_5456
However, both baselines are non-parametric.
bNE+SG W learns additional knowledge about the synonym matching by using synonyms sets in UMLS as training data.
contrasting
train_5457
Hand crafted lexical databases, such as Word-Net (Miller, 1995), have been built and maintained to be used in NLP and other fields containing antonyms, synonyms and other lexical semantic relations.
its construction and maintenance takes a considerable human effort and it is difficult to achieve a broad coverage.
contrasting
train_5458
At first glance, synonymy and antonymy can be seen as binary relations between words.
based on empirical results 2 Edmundson defined synonymy and antonymy as ternary relations in order to consider the multiple senses of the words, as follows: xS i y ≡ x synonym of y according to sense i xA i y ≡ x antonym of y according to sense i Note that the senses of the words are represented in the relationship rather than in the words themselves.
contrasting
train_5459
For instance in Figure 1, a relevant context word discover for Mars is missed if the chosen window size is less than 3.
a large window size might allow irrelevant words to influence word embeddings negatively.
contrasting
train_5460
(2016) are restricted to handling symmetric relations like synonymy and antonymy.
although recently proposed (Alsuhaibani et al., 2018) is capable of handling asymmetric information, it still requires manually defined relation strength function which can be labor intensive and suboptimal.
contrasting
train_5461
This allowed us to compare our model with existing approaches that use von Mises-Fisher distributions for document modelling.
to our method, these models are based on topic models (e.g.
contrasting
train_5462
Unsupervised word embeddings have become a popular approach of word representation in NLP tasks.
there are limitations to the semantics represented by unsupervised embeddings, and inadequate fine-tuning of embeddings can lead to suboptimal performance.
contrasting
train_5463
Models such as skip-gram (Mikolov et al., 2013a) and Glove (Pennington et al., 2014) capture the statistics of a large corpus and have good properties that corresponds to the semantics of words (Mikolov et al., 2013b).
there are certain problems with unsupervised word embeddings, such as the difficulty in modeling some fine-grained word semantics.
contrasting
train_5464
There have been many follow-ups on this concept of identifying and utilizing patterns to identify hypernym pairs (Caraballo, 1999;Mann, 2002;Snow et al., 2005Snow et al., , 2006.
by restricting the sentences of interest to only those which match patterns, even very large datasets with very loose pattern matching will often return small co-occurrence numbers, especially for more indirectly connected hypernym pairs.
contrasting
train_5465
The most consistently used evaluation approach is comparison of the summaries produces against reference summaries via automatic measures such as ROUGE (Lin, 2004) and its variants.
automatic measures are unlikely to be sufficient to measure performance in summarization (Schluter, 2017), also known for other tasks in which the goal is to generate natural language (Novikova et al., 2017).
contrasting
train_5466
The relative assessment is often done using the paired comparison (Thurstone, 1994) or the best-worst scaling (Woodworth and G, 1991;Louviere et al., 2015), to improve inter-annotator agreement.
absolute assessment of summarization (Li et al., 2018b;Song et al., 2018;Kryściński et al., 2018;Hsu et al., 2018;Hardy and Vlachos, 2018) is often done using the Likert rating scale (Likert, 1932) where a summary is assessed on a numerical scale.
contrasting
train_5467
(2018b,c) also falls in this category, as the questions were written using the reference summary.
summarization datasets are limited to a single reference summary per document (Sandhaus, 2008;Hermann et al., 2015;Grusky et al., 2018;Narayan et al., 2018b) thus evaluations using them is prone to reference bias (Louis and Nenkova, 2013), also a known issue in machine translation evaluation (Fomicheva and Specia, 2016).
contrasting
train_5468
The second component, which evaluates the notions of "Precision" and "Recall" requires the highlights from the first one to be conducted.
the highlight annotation needs to happen only once per document, and it can be reused to evaluate many system summaries, unlike the Pyramid approach (Nenkova and Passonneau, 2004) that requires additional expert annotation for every system summary being evaluated.
contrasting
train_5469
Perhaps unsurprisingly human-authored summaries were considered best, whereas, TCONVS2S was ranked 2nd, followed by PT-GEN.
the performance difference in TCONVS2S and PTGEN is greatly amplified when they are evaluated against document with highlights (6.48 and 5.54 Precision and Recall points) compared to when evaluated against the original documents (3.98 and 1.83 Precision and Recall points).
contrasting
train_5470
To solve the out-of-vocabulary (OOV) problem, conventional Seq2Seq models utilize a copy mechanism (Gu et al., 2016) that selects a word from source (complex) sentence directly with a trainable pointer.
editNTS has the ability to copy OOV words into the simplified sentences by directly learning to predict KeeP on them in complex sentences.
contrasting
train_5471
These external rules can provide reliable guidance about which words to modify, resulting in higher add/keep F1 scores (Table 5-a).
our model is inclined to generate shorter sentences, which leads to high F1 scores on delete operations 8 .
contrasting
train_5472
The proposed model can be trained end-to-end by maximizing the conditional probability (3).
learning from scratch may not be informative for the separator and aggregator to disentangle the paraphrase patterns in an optimal way.
contrasting
train_5473
Based on the observation that the sentence templates generated by DNPG tend to be more general and domaininsensitive, we consider directly performing the sentential paraphrase in the target domain as a solution.
the language model of the source and target domains may differ, we therefore finetune the separator of DNPG so that it can identify the granularity of the sentence in the target domain more accurately.
contrasting
train_5474
Meanwhile, we notice that there is a considerable amount of work on domain adaptation for neural machine translation, another classic sequence-to-sequence learning task.
most of them require parallel data in the target domain (Wang et al., 2017a,b).
contrasting
train_5475
Its goal is to improve the readability of a text, making information easier to comprehend for people with reduced literacy, such as non-native speakers (Paetzold and Specia, 2016), aphasics (Carroll et al., 1998), dyslexics (Rello et al., 2013) or deaf persons (Inui et al., 2003).
not only human readers may benefit from TS.
contrasting
train_5476
As noted iň Stajner and Glavaš (2017), data-driven approaches outperform rule-based systems in the area of lexical simplification (Glavaš andŠtajner, 2015;Paetzold and Specia, 2016;Nisioi et al., 2017;Zhang and Lapata, 2017).
the state-of-the-art syntactic simplification approaches are rule-based (Siddharthan and Mandya, 2014;Ferrés et al., 2016;Saggion et al., 2015), providing more grammatical output and covering a wider range of syn-tactic transformation operations, however, at the cost of being very conservative, often to the extent of not making any changes at all.
contrasting
train_5477
A complex input sentence is transformed into a semantic hierarchy of simplified sentences in the form of minimal, self-contained propositions that are linked via rhetorical relations.
to above-mentioned end-to-end neural approaches, we followed a more systematic approach.
contrasting
train_5478
When training the best-performing model of Aharoni and Goldberg (2018) on this new split-and-rephrase dataset, they achieve a strong improvement over prior best results from Aharoni and Goldberg (2018).
due to the uniform use of a single split per source sentence in the training set, each input sentence is broken down into two output sentences only.
contrasting
train_5479
Since the latter commonly express minor information, we denote them context sentences.
the former are of equal status and typically depict the key information contained in the input.
contrasting
train_5480
With a score of 1.30 on the Wikilarge sample sentences, it is far ahead of the baseline approaches, with HYBRID (0.86) coming closest.
this system receives the lowest scores for G and M. RegenT obtains the highest score for G (4.64), while YATS is the best-performing approach in terms of M (4.60).
contrasting
train_5481
There are some examples in MNLI that contradict the heuristics in ways that are not easily explained away by other heuristics; see Appendix A for examples.
such cases are likely too rare to discourage a model from learning these heuristics.
contrasting
train_5482
Such successes suggest that BERT is able to learn from some specific subcases that it should rule out the broader heuristics; in this case, the nonwithheld cases plausibly informed BERT not to indiscriminately follow the constituent heuristic, encouraging it to instead base its judgments on the specific adverbs in question (e.g., certainly vs. probably).
the models did not always transfer successfully; e.g., BERT had 0% accuracy on entailed passive examples when such examples were withheld, likely because the training set still included many non-entailed passive examples, meaning that BERT may have learned to assume that all sentences with passive premises are cases of non-entailment.
contrasting
train_5483
Candidate generation (Recall@64) is poor in the Low Overlap category.
the ranking model performs in par with other hard categories for these mentions.
contrasting
train_5484
The reliance on this trick illustrates the point we make in the Introduction: syntactic distance has the advantage of being a continuous value, which can be computed as an attention score in a differentiable model.
this comes at a price: the PRPN does not model trees or tree-building operations directly.
contrasting
train_5485
In the setting without punctuation, the PRPN sets an initial policy that agrees fairly well with right-branching, and this rightbranching bias is reinforced by imitation learning and policy refinement.
in the setting with punctuation, the agreement with rightbranching changes in the opposite way.
contrasting
train_5486
It has been argued that NLI as currently formulated is not a difficult task (Poliak et al., 2018); this is presumably why models can perform well across a range of different tree structures, only some of which are syntactically plausible.
this does not imply that the Tree-LSTM will learn nothing when trained with NLI.
contrasting
train_5487
In order to address gender bias in part-of-speech (POS) tagging and dependency parsing, we first require an adequate size data set labeled for a) syntax along with b) gender information of the authors.
existing data sets fail to meet both criteria: data sets with gender information are either too small to train on, lack syntactic information, or are restricted to social media; sufficiently large syntactic data sets are not labeled with gender information and rely (at least in part) on news genre corpora such as the Wall Street Journal (WSJ).
contrasting
train_5488
Inducing Chinese sentiment lexicons (Wang and Ku, 2016) needs properly tokenized corpora, which is not a hard requirement in Swedish.
we aim to design a method applicable to typologically diverse languages and we apply it to 1500+ languages.
contrasting
train_5489
Columns (i) and (ii) of Table 6 show that REG ( §3.4) delivers results comparable to Densifier (ORTH) when using the same set of generic training words (GEN) in lexicon induction.
our method is more efficient -no need to compute the expensive SVD after every batch update.
contrasting
train_5490
There have been methods that classify each phrase independently (Li et al., 2015;McCann et al., 2017).
sentiments over hierarchical phrases can have dependencies.
contrasting
train_5491
In the first sentence, the phrase "seemed static" itself bares the neutral sentiment.
it has a negative sentiment in the context.
contrasting
train_5492
Some of these models can be used to generate sentiment-tagged synthetic text.
most of them are not directly suitable for generating bilingual code-mixed text, due to the unavailability of sufficient volume of gold-tagged codemixed text.
contrasting
train_5493
We can also use Euclidean distance as d i,j .
this method requires multilingual word embeddings for every word to calculate the distance.
contrasting
train_5494
As a fine-grained sentiment analysis task, Aspect Sentiment Classification (ASC) aims to predict sentiment polarities (e.g., positive, negative, neutral) towards given particular aspects from a text and has been drawing more and more interests in natural language processing and computational linguistics over the past few years (Jiang et al., 2011;Tang et al., 2016b;.
most of the existing studies on ASC focus on individual non-interactive reviews, such as customer reviews (Pontiki et al., 2014) and tweets (Mitchell et al., 2013;Vo and Zhang, 2015;Dong et al., 2014).
contrasting
train_5495
A well-behaved approach to ASC-QA should match each question and answer bidirectionally so as to correctly determine the sentiment polarity towards a specific aspect.
different from common QA matching tasks such as question-answering (Shen et al., 2018a), ASC-QA focuses on extracting sentiment information towards a specific aspect and may suffer from much aspect-irrelevant noisy information.
contrasting
train_5496
Automatic and human evaluations show that an abstractive model trained with a multi-task objective outperforms conventional Seq2Seq, language modeling, as well as a strong extractive baseline.
our best model is still far from human performance since raters prefer gold responses in over 86% of cases, leaving ample opportunity for future improvement.
contrasting
train_5497
We leverage evidence queried from the web for each question.
to previous datasets where the human written answer could be found with lexical overlap methods (Weissenborn et al., 2017), ELI5 poses a significant challenge in siphoning out important information, as no single sentence or phrase contains the full answer.
contrasting
train_5498
We show this approach outperforms conventional Seq2Seq and language modeling, as well as a strong extractive baseline based on BidAF (Seo et al., 2017) but generalized to multi-sentence output.
our best-performing model is still far from the quality of human written answers, with raters preferring the gold answers 86% of the time.
contrasting
train_5499
Language models are trained to predict all tokens in the question, web source, and answer.
the standard Seq2Seq model only receives training signal from predicting the answer which is much less than the language model gets.
contrasting