Column      Type      Value lengths / classes
id          string    lengths 7 to 12
sentence1   string    lengths 6 to 1.27k
sentence2   string    lengths 6 to 926
label       string    4 classes
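The header above describes a simple four-field record: id, sentence1, sentence2, and label. As a minimal, illustrative sketch only, the snippet below shows how records with this schema could be loaded and inspected with the Hugging Face datasets library; the identifier "user/sentence-pair-dataset" is a placeholder rather than the real dataset path, and the choice of library is an assumption.

```python
# Minimal sketch (assumption: the data is available as a Hugging Face dataset;
# the identifier below is a placeholder, not the real path).
from collections import Counter

from datasets import load_dataset

# Load the train split; each record follows the schema above:
# id (string), sentence1 (string), sentence2 (string), label (one of 4 classes).
dataset = load_dataset("user/sentence-pair-dataset", split="train")

# Distribution over the 4 label classes.
label_counts = Counter(example["label"] for example in dataset)
print(label_counts)

# Print one record in the same id / sentence1 / sentence2 / label order used below.
record = dataset[0]
print(record["id"])
print(record["sentence1"])
print(record["sentence2"])
print(record["label"])
```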
train_16800
Following (Serban et al., 2016; Elsahar et al., 2018), we adopt some word-overlap based metrics (WBMs) for natural language generation including BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004) and METEOR (Denkowski and Lavie, 2014).
such metrics still suffer from some limitations (Novikova et al., 2017).
contrasting
train_16801
One might think that if a lot of words in a question overlap with those in a passage, the answer may be easily detected from the matched sentence in the passage.
this is not the case at all in BiPaR.
contrasting
train_16802
Monolingual vs. Multilingual: For the DrQA model, we observe that the multilingual training significantly improves the performance on English compared with the monolingual training with only the English dataset.
we do not observe this trend on BERT.
contrasting
train_16803
Existing work focuses on understanding linguistic and semantic properties of word representations or how well pretrained sentence representations and language models transfer linguistic knowledge to downstream tasks.
our investigation seeks to answer to what extent pretrained language models store factual and commonsense knowledge by comparing them with symbolic knowledge bases populated by traditional relation extraction approaches.
contrasting
train_16804
(2018) leveraged deep Q-network to solve the AWP problems, achieving a good balance between effectiveness and efficiency.
all the existing AWP systems are only trained and validated on small benchmark datasets.
contrasting
train_16805
Theoretically, $\overrightarrow{E}$ and $\overleftarrow{E}$ are complementary to each other.
as a number may occur several times and represent different facts in a document, we add a distinct node for each occurrence in the graph to prevent potential ambiguity.
contrasting
train_16806
By doing this, it learns the mappings among different languages and performs well on the XNLI dataset.
xLM only uses a single cross-lingual task during pre-training.
contrasting
train_16807
Similar to translation language model in XLM, this task is based on the bilingual sentence pairs as well.
as it doesn't use the original words as input, we can train this task by recovering all words at the same time.
contrasting
train_16808
In contrast to the rapid progress shown in Question Answering (QA) tasks (Rajpurkar et al., 2016; Yang et al., 2018), the task of Question Generation (QG) remains understudied and challenging.
as an important dual Context: ...during the age of enlightenment, philosophers such as john locke advocated the principle in their writings, whereas others, such as thomas hobbes, strongly opposed it.
contrasting
train_16809
Sachan and Xing (2018) proposed a self-training cycle between QA and QG.
these works either reduced the ground-truth data size or simplified the span-prediction QA task to answer sentence selection.
contrasting
train_16810
Generate from Existing Articles In SQuAD (Rajpurkar et al., 2016), each context-answer pair only has one ground-truth question.
usually, multiple questions can be asked.
contrasting
train_16811
Previous works (Fried et al., 2018;Dhingra et al., 2018) proposed to first pre-train the model with synthetic data and then fine-tune it with ground-truth data.
we find when the synthetic data size is small (e.g., similar size as the ground-truth data), catastrophic forgetting will happen during fine-tuning, leading to similar results as using ground-truth data only.
contrasting
train_16812
On the one hand, the copy mechanism provides detailed background information for generating a question.
if not copying correctly, the question could be syntactically incorrect.
contrasting
train_16813
For instance, in the first example, the passage suggests that people fled from Sudan to Chad, while the generated question describes the wrong direction.
overall we think that the current question generator provides reasonable synthesized questions, yet there is still large room to improve.
contrasting
train_16814
(2018) took a generative approach where they added a decoder on top of their extractive model to leverage the extracted evidence for answer synthesis.
this model still relies heavily on the extraction to perform the generation and thus needs to have start and end labels (a span) for every QA pair.
contrasting
train_16815
A relevant example of such a question is the following: An important characteristic of all solutions developed for this task is that they are not given explicitly any external information in the form of documents supporting the correct answer or semistructured information.
external information is highly desirable, especially domain and common-sense knowledge.
contrasting
train_16816
When a valid document is provided -guaranteed to contain the correct answer -the exact match (EM) score obtained by DrQA is 69.5.
when the supporting document has to be retrieved from Wikipedia, by an information retrieval engine, the EM score drops to 27.1.
contrasting
train_16817
ing has focused on integrating external knowledge (linguistic and/or knowledge-based) into recurrent neural network models using Graph Neural Networks (Song et al., 2018), Graph Convolutional Networks (Sun et al., 2018; De Cao et al., 2019), attention (Das et al., 2017; Mihaylov and Frank, 2018; Bauer et al., 2018) or pointers to coreferent mentions (Dhingra et al., 2017).
in this work we examine the impact of discourse-semantic annotations (Figure 1) in a self-attention architecture.
contrasting
train_16818
Therefore our method is most related to LISA (Strubell et al., 2018), which uses joint multi-task learning of POS and Dependency Parsing to inject syntactic information for Semantic Role Labeling.
we do not use multi-task learning, but directly encode semantic information extracted by pre-processing with existing tools.
contrasting
train_16819
We denote this as the reasoning-free setting.
the annotator 2 cannot use the long answer, so reasoning over the context is required.
contrasting
train_16820
Strictly speaking, most yes/no/maybe research questions can be answered by "maybe" since there will always be some conditions where one statement is true and vice versa.
the task will be trivial in this case.
contrasting
train_16821
This resembles the transfer learning discussed by Howard and Ruder (2018), where the source domain would be the BM25 sentences, and the target domain the ROCC justifications.
one important distinction is that, in our case, all this knowledge comes solely from the resources provided within each dataset, and is retrieved using an unsupervised method (BM25).
contrasting
train_16822
We conjecture this happens because the BERT language model was trained on a large text corpus that comes from these two domains.
importantly, AutoROCC is more robust across domains that are different from these two, since it is an unsupervised approach that is not tuned for any specific domain.
contrasting
train_16823
Designed to be more challenging than SQuAD-like datasets, they feature questions that require context of more than one document to answer, testing QA systems' abilities to infer the answer in the presence of multiple pieces of evidence and to efficiently find the evidence in a large pool of candidate documents.
since these datasets are still relatively new, most of the existing research focuses on the few-document setting where a relatively small set of context documents is given, which is guaranteed to contain the "gold" context documents, all those from which the answer comes (De Cao et al., 2019; Zhong et al., 2019).
contrasting
train_16824
Recent work on open-domain question answering largely follows this retrieve-and-read approach, and focuses on improving the information retrieval component with question answering performance in consideration (Nishida et al., 2018; Kratzwald and Feuerriegel, 2018; Nogueira et al., 2019).
these one-step retrieve-and-read approaches are fundamentally ill-equipped to address questions that require multi-hop reasoning, especially when necessary evidence is not readily retrievable with the question.
contrasting
train_16825
Both datasets feature a few-document setting where the gold supporting facts are provided along with a small set of distractors to ease the computational burden.
researchers have shown that this sometimes results in gameable contexts, and thus does not always test the model's capability of multi-hop reasoning (Chen and Durrett, 2019; Min et al., 2019a).
contrasting
train_16826
Further inspection reveals that despite Elasticsearch improving overall recall of gold documents, it is only able to retrieve both gold documents for 36.91% of the dev set questions, in comparison to 28.21% from the IR engine in (Yang et al., 2018).
gOLDEN Retriever improves this percentage to 61.01%, almost doubling the recall over the single-hop baseline, providing the QA component a much better set of context documents to predict answers from.
contrasting
train_16827
Since the prediction result for each query substructure is independent, the score for query structure S_i is measured by the joint probability ∏_{S*_j ⊑ S_i} Pr[S*_j | y]. Assume that ∀S*_j ⊑ S_i, Pr[S*_j | y] should be 1 in the ideal condition.
∀S*_j ⋢ S_i, Pr[S*_j | y] should be 0.
contrasting
train_16828
Like FP, our model also attains better results in LP, and the SOTA results on the FB15K-237 dataset.
we also notice that AttnPath doesn't attain the best result under a small part of query relations, even lower than TransE/R. By analyzing triples related to these relations, we found that: 1) they have more outgoing edges of the other relations pointing to the entities which are not the
contrasting
train_16829
IE-gold + R-GCN combines the advantages of IE-gold and R-GCN, and performs best among the baselines.
in the 1hop-subgraphs, the Added Acc and Deleted Acc are only 0.4681 and 0.2448, which are still quite low.
contrasting
train_16830
However, the targets may be very far and hard to reach; it is the shortcuts that make successful information delivery possible.
if there are too many shortcuts, the messages can easily end up at the wrong targets.
contrasting
train_16831
On the one hand, for existing RL-based methods, their results on FB15K-237 are generally lower than those on NELL-995 since FB15K-237 is more complex and arguably more difficult to design proper reward functions manually.
our framework relieves this problem to some extent by dynamically learning superior reward functions, and thus we make greater improvements on the challenging FB15K-237.
contrasting
train_16832
As a result, semantics to be paid attention are uncertain and unstable for matching because semantics are changed at different layers.
the intermediate representations tend to be affected by error propagation in multi-layered attentions, in which if the first attention aligns the wrong position, the second attention will now have the incorrect information as input for alignment.
contrasting
train_16833
These methods try to incorporate information of relation paths to get better performance.
they pay less attention to the order of relations in a path when learning representations of the path.
contrasting
train_16834
In recent years, there has been a surge of interests in interpretable graph reasoning methods.
these models often suffer from limited performance when working on sparse and incomplete graphs, due to the lack of evidential paths that can reach target entities.
contrasting
train_16835
Given a daily-life event, humans can easily understand it and reason about its causes, effects, and so on.
it still remains a challenging task for NLP systems.
contrasting
train_16836
This is useful for reasons such as marketing, overall enjoyment of interaction, and mental health therapy.
due to limited data with emotional content in specific semantic contexts, the generated text may contain incorrect semantic content.
contrasting
train_16837
Adjectives typically carry less semantic meaning and SMERTI likely has more trouble figuring out how best to infill the text.
nouns typically carry more, and phrases the most (since they consist of multiple words).
contrasting
train_16838
Constraining the number of training examples is effective for part-of-speech, suggesting that learning each linguistic task requires fewer samples than our control task.
for dependency edge prediction, this leads to significantly reduced linguistic task accuracy.
contrasting
train_16839
We could estimate these by sampling t from the token encoder p_θ(t | x) and then evaluating all q, p_θ, r, and s_ξ probabilities.
in fact we use the sampled t only to estimate the first expectation (by computing the decoder probability q(y | t) of the gold tree y); we can compute the KL terms exactly by exploiting the structure of our distributions.
contrasting
train_16840
In all experiments so far, we did not model this constraint explicitly to investigate whether the model is able by default to predict rank correctly.
in exploring model configurations we also report on whether adding this constraint leads to better performance.
contrasting
train_16841
The authors of BERT recommend not to mask words randomly with [MASK] when fine-tuning the network.
we discovered that masking often reduces the tendency of the classifiers to overfit to BERT by forcing the network to rely on the context of surrounding words.
contrasting
train_16842
We surmise that this is due to using mixed batches on an unbalanced training set, which skews the model towards predicting larger treebanks more accurately.
we find that fine-tuning on the treebank individually with BERT weights saved from UDify eliminates most of these gaps in performance.
contrasting
train_16843
To speed up training, we employ bucketed batching, sorting all sentences by their length and grouping similar length sentences into each batch.
to ensure that most sentences do not get grouped within the same batch, we fuzz the lengths of each sentence by a maximum of 10% of its true length when grouping sentences together.
contrasting
train_16844
We primarily focus on the OpenBookQA dataset since it is the only dataset currently available that provides partial context.
we believe such an approach is also applicable to the broader setting of multi-hop RC datasets, where the system could start reasoning with one sentence and fill remaining gap(s) using sentences from other passages.
contrasting
train_16845
This allows using an attention-based approach of indirectly combining information (Dhingra et al., 2018; Cao et al., 2019; Song et al., 2018).
open domain question answering datasets come with no context, and require first retrieving relevant knowledge before reasoning with it.
contrasting
train_16846
Some RC systems (Mihaylov and Frank, 2018; Kadlec et al., 2016) and Textual Entailment (TE) models (Weissenborn et al., 2017; Inkpen et al., 2018) incorporate external KBs to provide additional context to the model for better language understanding.
we take a different approach of using this background knowledge in an explicit inference step (i.e.
contrasting
train_16847
The OpenBookQA dataset provides the core science fact used to create the question.
in 20% of the cases, while the core science fact inspired the question, it is not needed to answer the question.
contrasting
train_16848
It has been shown that simply fine-tuning large, pre-trained language models such as GPT (Radford et al., 2018) and BERT (Devlin et al., 2019) can be a very strong baseline method.
there still exists a large gap between performance of said baselines and human performance.
contrasting
train_16849
(2018) follow this line of works to incorporate the embeddings of related knowledge triples at the word-level and improve the performance of natural language understanding tasks.
to our work, they do not explicitly impose graph-structured knowledge into models, but limit its potential within transforming word embeddings to concept embeddings.
contrasting
train_16850
For RCQA, a well-known resource is SQuAD (Rajpurkar et al., 2016) with 100K QA data created by humans, followed by NarrativeQA (Kočiský et al., 2018), SQuAD 2.0 (Rajpurkar et al., 2018), and CoQA (Reddy et al., 2018).
as these datasets support only English, supporting other languages requires either annotation efforts in a comparable scale (Lim et al., 2018), or modeling efforts to overcome the limitation of training resources in terms of quantity or quality.
contrasting
train_16851
The objective of the original QA model minimizes the sum of the negative log probabilities, −∑_{i=1}^{N} (log p(s_i) + log p(e_i)), where N is the number of examples in the training set, and s_i and e_i are the start and end indices of the i-th example, respectively.
for down-weighting noise, we combine this QA loss function with a weighted confidence score based on the function f(CS(p, q)). In the initial steps of training, the confidence score is not reliable, and may cause unstable training.
contrasting
train_16852
Not only for question answering, such methods have been successful in sentiment classification (Zhou et al., 2016), relation extraction (Faruqui and Kumar, 2015), and causal commonsense (Yeo et al., 2018).
these approaches are dependent on quality of translation.
contrasting
train_16853
[Table: model / avg seen ± sd / avg unseen ± sd — CNN+LSTM: 0.608 ± 0.01 / 0.4036 ± 0.003; CNN+LSTM+SA: 0.7813 ± 0.009 / 0.235 ± 0.006; FiLM: 0.8489 ± 0.014 / 0.153 ± 0.02; chance: 0.5 / 0.5] We tackle the modeling of size GAs as a relational problem in the domain of visual reasoning, and show (in contrast with Kuhnle et al., 2018) that FiLM is able to learn the function underlying the meaning of these expressions.
none of the models develop an abstract representation of GAs that can be applied compositionally, an ability that even 4-year-olds master (Barner and Snedeker, 2008).
contrasting
train_16854
Consistent with results from previous experiments, detecting the presence of a licensor is slightly more challenging for models fine-tuned with CoLA or NPI data.
the overall lower performances in scope detection compared with detecting the presence of the licensor is not found in the minimal-pair experiments.
contrasting
train_16855
Based on this method alone, we might conclude that BERT's knowledge of this domain is near perfect.
the other methods show a more nuanced picture.
contrasting
train_16856
By considering results from several evaluation methods, we demonstrate that BERT has systematic knowledge of NPI licensing.
this knowledge is unequal across the different features relevant to this phenomenon, and does not reflect the Boolean effect that these features have on acceptability.
contrasting
train_16857
Neural language models have achieved state-of-the-art performances on many NLP tasks, and recently have been shown to learn a number of hierarchically-sensitive syntactic dependencies between individual words.
equally important for language processing is the ability to combine words into phrasal constituents, and use constituent-level features to drive downstream expectations.
contrasting
train_16858
The LSTM (1B), LSTM (PTB), and RNNG models show zero or negative singular expectation for the pl or sg conditions, as expected.
the LSTM (enWiki) and ActionLSTM models show positive plural expectation in this condition, indicating that they have not learned the humanlike generalizations.
contrasting
train_16859
By comparing the BARE and OEST models' columns in Table 1, the non-conditional baseline BARE is superior for 71 / 77 languages (the exceptions being Chamorro, Croatian, Italian, Swazi, Swedish, and Tuareg).
the same columns in Table 3 and Table 2 reveal an opposite pattern: OEST outperforms the BARE baseline in 70 / 77 languages.
contrasting
train_16860
In particular, we find that identifying generic frames benefits from removing topic features, which are actually the hardest case.
removing topic features cannot help in identifying topic-specific frames.
contrasting
train_16861
agreement) given that a relationship exists.
we predict both argument components as well as the existence of a relation between them.
contrasting
train_16862
(2014) introduce a scheme to annotate inter-turn relations between two posts as "target-callout", and intra-turn relations as "stance-rationale".
their empirical study is reduced to predicting the type of inter-turn relations as agree/disagree/other.
contrasting
train_16863
Predicting an argumentative relation is made more difficult by the fact that we need to consider all possible relation pairs.
some argumentative components may contain linguistic properties that allow us to predict when they are targets even without the full relation pair.
contrasting
train_16864
Various statistical methods have also been proposed to filter out the mistakes or (spamming) random responses of the crowdsource workers (Liu et al., 2012; Hovy et al., 2013; Nguyen et al., 2017).
the way to filter out the mistakes or the random responses through statistical means is difficult to utilize for a fundamentally subjective annotation task.
contrasting
train_16865
They also showed that baseline classifiers could achieve moderate performance on the task.
we focus on local acceptability of arguments with restricted options for reason selection.
contrasting
train_16866
It focused on just two pronouns, it and they, and was applied to a single language pair.
we have a fully automated evaluation measure, we handle many English pronouns, and we cover multiple source languages.
contrasting
train_16867
This limits the evaluation measure both in terms of the language and also of the pronouns it is applicable to.
our framework requires only two candidate translations of the same text as input for comparison: this could be a reference vs. a system translation, or a comparison between two candidate translations (see Section 5.5).
contrasting
train_16868
First, we can easily verify such causalities, because Wikipedia articles tend to credibly attest them; for example, the Tobacco article states that inhaling its smoke can cause Lung cancer.
knowledge from other sources such as the web text tends to be difficult to verify, owing to a deluge of false information.
contrasting
train_16869
For example, in Wikidata, World War I (Q361) and the Paris Peace Conference (Q199820) show causality, as the former has the has effect relation with the latter.
world war I and the German invasion of Belgium (Q5551414) do not show causality, because they have a significant event relation between them.
contrasting
train_16870
Classical review encoder utilizes h_i to represent review x and ends here.
we find that users with different backgrounds pay attention to different content of a review.
contrasting
train_16871
The classical summary decoder combines the context vector c_t and the decoder state s_t, and then feeds the merged vector into a linear layer to produce the vocabulary distribution.
when generating a summary, users with different attributes may have their own vocabulary.
contrasting
train_16872
Since the summary generated by S2SATT contains more overlapping words with the gold summary than the one generated by ASN contains, S2SATT obtains higher ROUGE scores than ASN.
from the view of aspects, ASN may be better.
contrasting
train_16873
Most recent works on neural extractive summarization have been rather successful in generating summaries of short news documents (around 650 words/document) (Nallapati et al., 2016) by applying neural Seq2Seq models (Cheng and Lapata, 2016).
when it comes to long documents, these models tend to struggle with longer sequences because at each decoding step, the decoder needs to learn to construct a context vector capturing relevant information from all the tokens in the source sequence (Shao et al., 2017).
contrasting
train_16874
However, the only information about sections fed into their sentence classifier is a categorical feature with values like Highlight, Abstract, Introduction, etc., depending on which section the sentence appears in.
in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary. Our main contributions are as follows: (i) In order to capture the local context, we are the first to apply LSTM-minus to text summarization.
contrasting
train_16875
They often assume that writing knowledge can be acquired from the training data alone.
when people are writing, they not only rely on the data but also consider related knowledge.
contrasting
train_16876
They often assume that all writing knowledge can be learned from the training data.
when people are writing, they will not only rely on the data contents themselves but also consider related knowledge, which is neglected by previous methods.
contrasting
train_16877
One may argue that neural models can learn such knowledge when enough parallel co-occurrence pairs such as (Guelma, Algerian) and (MC El Eulma, Algerian Ligue Professionnelle 1) are available.
even in such case, neural models still tend to make mistakes for sparse co-occurrence pairs as we will show in the experiments section.
contrasting
train_16878
Intuitively, incorporating an external KB should improve data-to-text generation performance.
to pinpoint the effect of the additional knowledge is not trivial since we know that (a) not all external knowledge is relevant and (b) neural models may memorize certain inference patterns when parallel data is big enough.
contrasting
train_16879
The reason why the absolute improvement is relatively small is that the full WikiBio dataset consists of 728,321 parallel data-to-text pairs which are enough for neural models to memorize certain inference patterns for high frequency pairs.
as will be shown in the Section 4.5, the baseline fails on low co-occurrence frequency pairs, but the KBAtt avoids this problem with the contributions from the external knowledge.
contrasting
train_16880
In this case, KBAtt makes a plausible inference on nationality from birth place, so it generates "american".
this inference pattern doesn't hold for this case, because he is British.
contrasting
train_16881
Recently, Perez-Beltrachini and Lapata (2018) generalize this task to multi-sentence text generation, where they focus on bootstrapping generators from loosely aligned data.
most of the work mentioned above assume all the writing knowledge can be learned from massive parallel pairs of training data.
contrasting
train_16882
Note that the rewards usually reflect the quality of the extracted summary and are measured by a standard evaluation protocol.
they still sequentially process text and tend to extract earlier sentences over later ones due to the sequential nature of selection (Dong et al., 2018).
contrasting
train_16883
It shows that all the three models can focus on different parts of the context to form summary at first and BANDITSUM performs the best after training 10k steps.
with training steps growing, BANDITSUM and HER w/o Local begin to prefer earlier sentences.
contrasting
train_16884
Traditional approaches to CLS are based on the pipeline paradigm, which either first translates the original document into target language and then summarizes the translated document (Leuski et al., 2003) or first summarizes the original document and then translates the summary into target language (Lim et al., 2004; Orasan and Chiorean, 2008; Wan et al., 2010).
the current machine translation (MT) is not perfect, which results in the error propagation problem.
contrasting
train_16885
In conclusion, TNCLS can generate more informative summaries, but it is difficult to improve the conciseness and fluency.
with the help of MT and MS tasks, conciseness and fluency scores can be significantly improved.
contrasting
train_16886
Generating headlines with many clicks is especially important in this digital age, because many of the revenues of journalism come from online advertisements and getting more user clicks means being more competitive in the market.
most existing websites naively generate sensational headlines using only keywords or templates.
contrasting
train_16887
Thus, we train another baseline classifier on a crawled balanced sensationalism corpus of 84k headlines where the positive headlines have at least 28 comments and the negative headlines have less than 5 comments.
the results on the test set show that the baseline classifier gets 60% accuracy, which is worse than the proposed classifier (which achieves 65%).
contrasting
train_16888
In addition, many papers (Nallapati et al., 2017; Zhou et al., 2018b) use extractive methods to directly select sentences from articles.
none of these works considered the sensationalism of generated outputs.
contrasting
train_16889
Implicit methods (Shen et al., 2017b; Fu et al., 2018; Prabhumoye et al., 2018) transfer the styles by separating sentence representations into content and style, for example using backtranslation (Prabhumoye et al., 2018).
these methods cannot guarantee the content consistency between the original sentence and transferred output (Xu et al., 2018a).
contrasting
train_16890
Explicit methods (Zhang et al., 2018b; Xu et al., 2018a) transfer the style by directly identifying style related keywords and modifying them.
sensationalism is not always restricted to keywords, but the full sentence.
contrasting
train_16891
Pointer generator networks have also been extensively accepted by the ABS community due to their efficacy with long document summaries (Chen and Bansal, 2018; Hsu et al., 2018), title summarization, etc.
the current power of abstractive summarization falls short of their potential.
contrasting
train_16892
Further, pointer generator models can effectively adapt to both extractor and abstractor networks (Chen and Bansal, 2018), and summaries can be generated by incorporating a pointer-generator and multiple relevant tasks (Guo et al., 2018), such as question or entailment generation, or multiple source texts.
work that particularly targets the problem of abstraction is rare.
contrasting
train_16893
Overall, the scores favour the delexicalised approach (negative delta in the all deprels column for all languages) supporting the results given by the automatic metric.
for some dependency relations, the lexicalised baseline shows usefulness of word information, for example
contrasting
train_16894
Existing approaches try to explicitly disentangle content and attribute information, but this is difficult and often results in poor content-preservation and ungrammaticality.
we propose a simpler approach, Iterative Matching and Translation (IMaT), which: (1) constructs a pseudoparallel corpus by aligning a subset of semantically similar sentences from the source and the target corpora; (2) applies a standard sequence-to-sequence model to learn the attribute transfer; (3) iteratively improves the learned transfer function by refining imperfections in the alignment.
contrasting
train_16895
by aggregating system summaries' ROUGE scores across multiple input documents, we can reliably rank summarisation systems by their quality.
rOUGE performs poorly at summary level: given multiple summaries for the same input document, ROUGE can hardly distinguish the "good" summaries from the "mediocre" and "bad" ones (Novikova et al., 2017).
contrasting
train_16896
We believe this is the reason why ExtAbsRL has higher ROUGE scores.
extAbsRL extracts more redundant sentences: four out of 30 summaries by ExtAbsRL include redundant sentences, while Refresh and NeuralTD do not generate summaries with two identical sentences therein.
contrasting
train_16897
For instance, paraphrasing or machine translation exhibit a one-to-one relationship because the source and the target should carry the same meaning.
summarization or question generation exhibit one-to-many relationships because
contrasting
train_16898
We model this kind of information in the time dimension via self-attention.
unlike the unordered nature of rows and columns, the history information is sequential.
contrasting
train_16899
(2010) use the theme structure to define the representation for each sentence.
their solutions only consider statistical knowledge.
contrasting