Column schema:
  id          string (length 7–12)
  sentence1   string (length 6–1.27k)
  sentence2   string (length 6–926)
  label       string (4 classes)
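A minimal sketch of how records with this schema could be loaded and inspected. The file name "train.jsonl", the JSON Lines layout, and the use of the Hugging Face datasets library are illustrative assumptions, not part of this dump.

# Minimal sketch (assumptions: records stored as JSON Lines in "train.jsonl"
# with fields id, sentence1, sentence2, label).
from collections import Counter
from datasets import load_dataset

ds = load_dataset("json", data_files={"train": "train.jsonl"})["train"]

# Distribution over the 4 label classes.
print(Counter(example["label"] for example in ds))

# Inspect one record, e.g. the first entry shown below (train_98000).
first = ds[0]
print(first["id"], first["label"])
print("sentence1:", first["sentence1"])
print("sentence2:", first["sentence2"])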
train_98000
Yelp reviews: As a primary example of a large dataset with intrinsic labels, we make use of the Yelp10 dataset, treating the source location of the review as the label of interest.
estimating label proportions in a target corpus is a type of measurement that is useful for answering certain types of social-scientific questions.
neutral
train_98001
Although they are not typically thought of in the context of estimating proportions, several methods have been proposed to deal directly with the problem of covariate shift, including kernel mean matching and its extensions (Huang et al., 2006; Sugiyama et al., 2011).
we select regularization strength via grid search, choosing the value that leads to the lowest average calibration error across training / held-out splits.
neutral
train_98002
For our DCA models, we initialize the number of agents before training and partition the document among the agents (i.e., three agents → three paragraphs).
similar to the single-agent model, it fails to capture the player who scored the only goal in the game.
neutral
train_98003
For both the Twitter and Weibo datasets, we randomly sample 80% for training, 10% for development, and the remaining 10% for testing.
avgEmb is the worst encoder on the Twitter dataset, as word order is crucial in English.
neutral
train_98004
Also, they only use internal information in the input text and ignore external knowledge in conversation context.
features learned for keyphrases and non-keyphrases are different, which can thus help the keyphrase tagger correctly distinguish keyphrases from non-keyphrases.
neutral
train_98005
Understandably, the proposed approach is comparable in preference to both baselines for the major topic (0.5667), since most standard summaries will cover the primary topic of the input.
early works on abstractive summarization focused on sentence-compression-based approaches (Filippova, 2010; Berg-Kirkpatrick et al., 2011; Banerjee et al., 2015), which connected fragments from multiple sentences to generate novel sentences for the summary, or on template-based approaches that generated summaries by fitting content into a template (Wang and Cardie, 2013; Genest and Lapalme, 2011).
neutral
train_98006
Formally, we write the step-wise reward s(y_t | y_{1:t-1}, Y*) as follows, where N(Y, Ỹ) represents the number of occurrences of the sub-sequence Ỹ in the whole sequence Y.
the uniform bridge adopts a uniform distribution U(Y) as a constraint.
neutral
train_98007
Together with the generator network, three GBN systems are constructed: 1).
we follow Norouzi et al.
neutral
train_98008
The average compression ratio of the gold compressions for the input sentences was 39.8.
since seq2seq is more expressive than Tagger, we built HisAN on the baseline seq2seq model.
neutral
train_98009
For the last storyline "Egypt election", it starts in Day 20 and continues beyond the end of May.
to explore the efficiency of the proposed approach, we conducted an experiment by comparing the proposed approach NSEM with DSEM.
neutral
train_98010
If S is increased further, precision and recall change only slightly and the F-measure becomes relatively stable.
it is reasonable to assume that the title h and its main body d of a document share a similar storyline distribution.
neutral
train_98011
This problem can be formulated as an STKP on a star graph, whose vertex and edge sets are {r, r_1, .
the results on approximation ratios and ROUGE-1 scores imply that our method compares favorably with the ILP-based method in terms of empirical performance.
neutral
train_98012
Different from the naive greedy algorithm explained in Section 3, the above greedy algorithm is performed on the set of all rooted paths, P. Thus, even if beneficial vertices are far from r, rooted paths that include such beneficial vertices are considered as candidates to be chosen in each iteration.
the constraint that appears in DR-submodular maximization is somewhat easier to deal with than that of our problem; exploiting this, Soma and Yoshida (2017) developed a polynomial-time algorithm that achieves roughly a 1/2-approximation.
neutral
train_98013
This approach is known to have three advantages: its applicability to many useful submodular objective functions, the efficiency of the greedy algorithm, and the provable performance guarantee.
, R_K ⊆ V and a function C_e(S), which counts the number of times that n-gram e occurs in a summary S ⊆ V, the ROUGE-n score function is defined as follows. We applied our method to compressive summarization tasks with three kinds of objective functions: the coverage function, the one with rewards, and ROUGE-1.
neutral
train_98014
The more questions a system can answer, the better it is at summarizing the document as a whole.
sentence Encoder: A core component of our model is a convolutional sentence encoder which encodes sentences into continuous representations.
neutral
train_98015
We found that there are many candidate summaries with high ROUGE scores which could be considered during training.
we are not aware of any attempts to use reinforcement learning for training a sentence ranker in the context of extractive summarization.
neutral
train_98016
For instance, after losing power in a second, 2004 coup, Haiti's Jean-Bertrand Aristide was forced into exile in South Africa.
filtering the original dataset in this manner yields 17,529 positive and 30,266 negative sentences.
neutral
train_98017
We refer to these 3-tuples as "matching".
the details of how textual summaries might actually be presented to users are often ignored.
neutral
train_98018
Graph-based Features Our graph-based features are similar to those described in Gorinski and Lapata (2015).
as can be seen, MLE outperforms Lib across attributes, and is superior to ZeroR by a large margin.
neutral
train_98019
Biasing the sentence selection slightly to longer sentences is therefore promising.
the results are a first confirmation of the previously described intuition that predicting precision scores can be better than predicting recall scores.
neutral
train_98020
The word drop probability is normally set to 0.25, since using a higher probability may degrade the model performance (Bowman et al., 2016).
the VHRED is trained to predict a next target utterance x_t conditioned on the context h_t^cxt, which encodes information about previous utterances {x_1, .
neutral
train_98021
The variational posterior is a factorized Gaussian distribution whose mean and diagonal variance are predicted from the target utterance and the context. A known problem of a VAE that incorporates an autoregressive RNN decoder is the degeneracy in which the decoder ignores the latent variable z.
the VHRED will not encode any information in the latent variable, i.e.
neutral
train_98022
We followed the methodology presented in (Jovita et al., 2015) claiming that the best answer given by the system has the highest similarity value between the customer turn and the agent answer.
if, though, we can automatically detect the worst conversations (in our experience, typically under 10% of the total), the focus can be on fixing the worst problems.
neutral
train_98023
We did not encounter any unsupported intent errors leading to customer rephrasing, which affected the ability of the rule-based model to classify those conversations as egregious.
in (Sarikaya, 2017) they reported on how the different reasons affect the users' satisfaction.
neutral
train_98024
The effectiveness of CNNs for representing text has already been addressed in previous studies.
sHCNN outperforms most of the baselines even if only low-level (L) or high-level (H) representations are exploited.
neutral
train_98025
This graphic model views the paths as discrete latent variables and relation as the observed variables with a given entity pair as the condition, thus the path-finding module can be viewed as a prior distribution to infer the underlying links in the KG.
during testing, we use a beam of 5 to approximate the whole search space for path finder.
neutral
train_98026
DeepPath is trained to find paths more efficiently between two given entities while being agnostic to whether the entity pairs are positive or negative, whereas MIN-ERVA learns to reach target nodes given an entityquery pair while being agnostic to the quality of the searched path 1 .
in practice, we found that the KL-reward term log(p_β / q_ϕ) causes severe instability during training, so we finally leave this term out by setting w_KL to 0.
neutral
train_98027
The samples of FB15k-237 (Toutanova et al., 2015) are sampled from FB15k (Bordes et al., 2013), here we follow DeepPath (Xiong et al., 2017) to select 20 relations including Sports, Locations, Film, etc.
chains of reasoning and compositional reasoning only learn to predict relation given paths while being agnostic to the path-finding procedure.
neutral
train_98028
For solving this, many dense annotation schemata have been proposed to force annotators to annotate more, or even complete, graphs of pairs.
it can possibly provide useful information for the future processes of classification or time inference.
neutral
train_98029
If, in the new data, we can observe some of the vague TLINKs being induced as non-VAGUE TORDERs, this will serve as evidence.
tORDERs show extremely rare VAGUE labels in Event-DCT pairs.
neutral
train_98030
ELDEN then adds edges connecting entities in E + to entities in E. This is done by processing a web text corpus looking for mentions of entities in E + , and linking the mentions to entities in KG G = (E + , F ).
densification is used for relation inference in the former methods, whereas it is used for entity coherence measurement in ELDEN.
neutral
train_98031
TEMPLATEBASED combines the content with the target attribute markers in the retrieved sentence by slot filling.
figure 1 shows an example in which the sentiment of a sentence can be altered by changing a few sentiment-specific phrases but keeping other words fixed.
neutral
train_98032
RE-TRIEVEONLY scores well on grammaticality and having the target attribute, since it retrieves sentences with the desired attribute directly from the corpus.
retrieve: 3 of the 4 systems look through the corpus and retrieve a sentence x tgt that has the target attribute v tgt and whose content is similar to that of x.
neutral
train_98033
They employ the embedding vector of the target word as features.
while research focused on polarity for quite a long time, this early focus has meanwhile shifted to more expressive emotion representation models (such as Basic Emotions or Valence-Arousal-Dominance).
neutral
train_98034
Self-training has been successfully used in many NLP applications such as information extraction (Ding and Riloff, 2015) and syntactic parsing (McClosky et al., 2006).
for example, "I broke my arm" and "I got fired" are usually negative experiences, while "I broke a record" and "I went to a concert" are typically positive experiences.
neutral
train_98035
A co-training model that simultaneously trains event expression and event context classifiers produced substantial performance gains over the individual models.
for the RNN, we used the example LSTM implementation from Keras (Chollet et al., 2015) github, which was developed to build a sentiment classifier.
neutral
train_98036
Our gold standard data set is relatively small, so supervised learning that relies entirely on manually labeled data may not have sufficient coverage to perform well across the human need categories.
as with the other classifiers, the word embedding feature representations performed best, achieving an F1 score of 54.4%, which is comparable to the F1 score of the LR Embed system.
neutral
train_98037
Whereas education had almost negligible influence on the performance, the more extensive formal training in reasoning the participants had, the higher their score was.
a possible approach would be to categorize warrants using, e.g., argumentation schemes (Walton et al., 2008) and break down errors accordingly.
neutral
train_98038
As arguments are highly contextualized, warrants are usually presupposed and left implicit.
if we treat a warrant as a simple premise, then the linked structure fits the intuition behind Toulmin's model, such that premise and warrant combined give support to the claim.
neutral
train_98039
Real-world deceptive situations are highstakes, where there is much to be gained or lost if deception succeeds or fails; it is hypothesized that these conditions are more likely to elicit strong cues to deception.
linguistic features such as n-grams and language complexity have been analyzed as cues to deception (Yancheva and Rudzicz, 2013).
neutral
train_98040
The intuition behind this was that those turns were responses to follow up questions related to q1, and while the question detection and identification system discussed above did not identify follow up questions, we found that most of the follow up questions after an interviewer question q1 would be related to q1 in our hand annotation.
working with such data requires extensive research to annotate each utterance for veracity, so such datasets are often quite small and not always reliable.
neutral
train_98041
Action types and arguments are listed in the first two columns of Table 5.
for all domains, we compare the rational listener and speaker against the base listener and speaker, as well as against past state-of-the-art results for each task and domain.
neutral
train_98042
29.3% of S_0's directions are correctly interpretable for Alchemy, vs. 83.3% of human directions).
in the SCONE domains, which have a larger space of possible outputs than SAIL, we extend the decoder by: (i) decomposing each action into an action type and arguments for it, (ii) using separate attention mechanisms for types and arguments, and (iii) using state-dependent action embeddings.
neutral
train_98043
CHES scores are used only for evaluation and not during training.
our target pos (calibrated RILE) is a continuous variable in [0, 1], where 1 indicates that a manifesto occupies an extreme right position, 0 denotes an extreme left position, and 0.5 indicates the center.
neutral
train_98044
In general, DAM and ESIM seem to be more robust than CE, with the latter's accuracy degrading to essentially random performance on the most challenging subsets.
given a sample E, we apply the transformation T in order to generate a transformed sample E T where the word pairs are swapped, similar to the procedure applied in Section 5.2 on SNLI control instances in order to generate their transformed instances counterpart.
neutral
train_98045
Our results show that the models are robust mainly where the semantics of the new instances do not change significantly with respect to the sampled instances and thus the class labels remain unaltered; i.e., the models are insensitive to our transformation to input data.
in this work we aim to study how certain factors affect the robustness of three pre-trained NLI models (a conditional encoder, the DAM model (Parikh et al., 2016), and the ESIM model (Chen et al., 2017)).
neutral
train_98046
On the other hand, much better performance is obtained for the DAM and ESIM models on I TH instances containing unseen word pairs, indicating these models have learned to infer hyper-nym/hyponym relations from information in the pre-trained word embeddings.
when the sample size is too small, we apply Yates' correction or a Fisher test.
neutral
train_98047
This alteration, at the instance level, yields new sets of instances which range from easy (the semantics and the label of the new instance are the same to those of the original instance) to challenging (both semantics and label of the new instance change with respect to those of the original instance), but all of them remain easy to annotate for a human.
the CE model performs quite poorly on both samples (0.508 and 0.48 accuracy points on E_A and E_TA); this drop in performance, with respect to the in situ condition, suggests that the repeated sentence context is too different from the structure of the training instances for the CE model to generalize effectively.
neutral
train_98048
It is currently estimated that over 1.5 billion people are learning English as a Second Language (ESL) worldwide.
it has been shown that key aspects of the reader's characteristics and cognitive state, such as mind wandering during reading (Reichle et al., 2010), dyslexia (Rello and Ballesteros, 2015) and native language (Berzak et al., 2017) can be inferred from their gaze record.
neutral
train_98049
In (1), assuming the function word that is always available when the speaker plans to produce a relative clause, the speaker will produce that when the upcoming relative clause or the first part of its contents is not yet available.
we did not find consistent evidence for the effect of noun frequency on classifier choice.
neutral
train_98050
The two frameworks, however, differ in their views on the theory of learning as we describe below: Curriculum learning is inspired by the learning principle that humans can learn more effectively when training starts with easier concepts and gradually proceeds with more difficult ones (Bengio et al., 2009).
to break the tie for these instances in the final ranking, we resort to their final loss values. The Leitner System is inspired by the broad evidence in psychology showing that the human ability to retain information improves with repeated exposure and decays exponentially with the delay since last exposure (Cepeda et al., 2006).
neutral
train_98051
The importance of error-free language resources cannot be overstated as errors can inversely affect interpretations of the data, models developed from the data, and decisions made based on the data.
the Leitner system is inspired by spaced repetition (Dempster, 1989; Cepeda et al., 2006), the learning principle that effective and efficient learning can be achieved by working more on difficult concepts and less on easier ones.
neutral
train_98052
In the base-level learning equation, it can refer to both because of the decay parameter, d, which causes more recent presentations to be more important, with older presentations (signified by their time of presentation, t) becoming exponentially less relevant.
importantly, note that in both models, when we refer to the expected retrieval time or activation, we are referring to the target word, not any of the preceding words.
neutral
train_98053
This framework was selected rather than newer or more task-specific frameworks as it is the same underlying memory model of ACT-R, which has been used to explain a wide variety of language phenomena (e.g., Vasishth and Lewis, 2004;Reitter et al., 2011), but also has been used to explain everything from decision-making (e.g., Marewski and Mehlhorn, 2011) to visual attention in graphical user interfaces (e.g., Byrne et al., 1999).
further, this type of buffering strategy could be part of the strategy that Ferreira and Swets (2002) refer to when they propose that the incrementality of language production is under "strategic control."
neutral
train_98054
We then experimented this model on our original length dataset.
is then queried against Elasticsearch, which has indexed the selected scenes, and the scene with the highest relevance is retrieved.
neutral
train_98055
Each sentence in the plot summaries [Figure: (a) A dialog from Friends: Season 8, Episode 12, Scene 2.]
it is the largest, if not the only, corpus for the evaluation of passage completion on multiparty dialog that still gives enough instances to develop meaningful models using deep learning.
neutral
train_98056
Experimental results verified that the increased contexts are able to help producing more acceptable and diverse responses.
this idea is included in the "Reasoning box" in Figure 2.
neutral
train_98057
The domain adaptation from document comprehension to dialog generation is feasible by taking the rich context of the speakers as a "document", current user's query as a "question" and the chatbot's response as an "answer".
in this attention-based model, the conditional probability in Equation 4 is defined in terms of an RNN hidden state in the decoder at time j and a context vector c_j = Σ_i α_{ij} h_i, a weighted combination of the annotation set memory (h_1, ..., h_{T_s}) produced by encoding a source sentence s of length T_s.
neutral
train_98058
The validation costs began converging in the third epoch for the three models.
the sentences for which the probabilities are being predicted as M_r (response memory, which specifically corresponds to the chatbot's side) and the question set as M_q (query memory, which specifically corresponds to the user's side).
neutral
train_98059
The different metrics in this figure will be introduced in the following sections.
the data in the Reddit movie category is originally of high quality.
neutral
train_98060
Recent advances in extreme multi-label classification have proven to work well for large label spaces.
the support set S is chosen based on nearest neighbors and its selection process is discussed in Section 3.4.
neutral
train_98061
EMR data is generally not available for public use especially if it involves textual notes.
fastXML (Prabhu and Varma, 2014) partitions the feature space using the nDCG measure as the splitting criterion.
neutral
train_98062
Baker and Korhonen (2017) improve neural network training by incorporating hierarchical label information to create better weight initializations.
in this section we compare our work with prior state-of-the-art medical coding methods.
neutral
train_98063
We then train a ridge regression model on these instances.
this measures how much variance in the dependent variable y is captured by the independent variables x.
neutral
train_98064
The value is set as 1 at the start and gradually decreases to 0.1 after 3000 annealing steps.
we adopt Q-learning (Sutton and Barto, 1998) to optimize the neural agent, which uses a function Q(z, a) to represent Q-values and the recursive Bellman equation to perform Q-value iteration when observing a new sample (z, a, z′, r).
neutral
train_98065
We leverage a small amount of manually labeled data, downloaded from financial analysis websites, for guiding evidence mining.
we have investigated a reinforcement learning method to automatically mine evidences from large-scale text data for measuring the correlation between a concept and a list of stocks.
neutral
train_98066
Besides, MAP is roughly the average area under the precision-recall curve for a set of queries (Christopher et al., 2008).
formally, given a concept c, a set of m stocks {o_i}_{i=1}^m, and n data sources, where each D_{ij} is a sequence of words w_1, w_2, ..., w_{|D_{ij}|}, we assume the relevant stocks of the concept are revealed in the data sources (e.g.
neutral
train_98067
The training set contains approximately 42K sentences with 887K words.
binarization can be considered as a special case of quantization, which quantizes the parameters to pairs of opposite numbers.
neutral
train_98068
Batch normalization is hard to apply to a recurrent neural network, due to the dependency over entire sequences.
it is important to find good binary embeddings.
neutral
train_98069
This shows CMN's capacity to model speaker-based emotions.
initially both A and B are emotionally driven by their own emotional inertia.
neutral
train_98070
To find the relevance of each memory m_t^λ's context with q_i, a match between the two is computed.
the performance maintains a positive correlation with K. This trend supports our intuition that the historical context acts as an essential resource to model emotional dynamics.
neutral
train_98071
Needless to say, the existence of some false-positive patterns at the label level is inevitable.
this passive exploration is a label-based analysis which is performed as a sanity check.
neutral
train_98072
Previous work proposed an end-to-end time-aware attention network to leverage both contextual and temporal information for spoken language understanding and achieved a significant improvement, showing that temporal attention can guide the attention effectively.
namely, in dialogue contexts, the recent utterances contain most information related to the current utterance, which is aligned with our intuition.
neutral
train_98073
In other words, our model automatically learns that the convex timedecay attention is more suitable for modeling contexts from the dialogue data than the other two types.
the further analysis is discussed in Section 4.3.
neutral
train_98074
We found them to be highly correlated: Pearson's r = .81.
since the intention is to measure the student's reading ability, any difference in performance that is not due to reading ability confounds the measurement.
neutral
train_98075
In short, previous research suggests that multiple factors may affect phone and pause duration in a reading of a story: from the phonetic properties of individual segments to where the passages falls within the narrative structure.
we also note that the direction of the correlation was opposite to our expectation: more complex passages were in fact read faster.
neutral
train_98076
To answer our first question, we looked at the distribution of the reading rate across the passages in the training set.
it has been long known that different segments have different intrinsic durations which account for a lot of variation in segmental durations (Peterson and Lehiste, 1960;Klatt, 1976;van Santen, 1992): for example, high vowels tend to be shorter than low vowels.
neutral
train_98077
The speaker and listener are placed in a game environment in which they both see the three colors of the context and a chatbox.
second, the Chinese monolingual model does not include a common word for orange, but the bilingual model identifies 橙色 chéngsè 'orange'.
neutral
train_98078
We identified the 10 most frequent color-related words in each language to translate.
as we mentioned earlier, our main goal with this work is to investigate the effects of bilingual training on pragmatic language use.
neutral
train_98079
Thus, perplexity ceases to be a measure of language modeling ability and assumes the role of a proxy for the out-of-vocabulary rate.
perplexity is a common intrinsic evaluation metric for generation models.
neutral
train_98080
However, this is an unfair comparison: the monolingual model's high perplexity is dominated by low probabilities assigned to rare tokens in the opposite-language data that it did not see.
both players can send messages through the chatbox at any time.
neutral
train_98081
may be found in Appendix A.
these kinds of reasoning problems have been well-studied in visual question answering settings (Johnson et al., 2017;Suhr et al., 2017).
neutral
train_98082
(w, h) a group of people standing around a bus stop .
table 4 shows the CIDEr scores using the same setup as Section 3, but using representations with spatial information about individual object instances.
neutral
train_98083
We note that the results with our simpler image representation are comparable to the ones reported in Yin and Ordonez (2017), which use more complex models to encode similar image information.
such approaches are hard to interpret and are dataset dependent (Vinyals et al., 2017).
neutral
train_98084
CCA-based methods are popular within the IR community for learning multimodal embeddings (Costa Pereira et al., 2014;Gong et al., 2014).
we introduce an adaptive algorithm for characterizing the visual concreteness of all the concepts indexed textually (e.g., "dog") in a given multimodal dataset.
neutral
train_98085
; if a segment gets assigned to any such terms, it is considered a misprediction.
if a character name was not mentioned in the dialogue, the annotators labeled it as "unknown."
neutral
train_98086
Other work removed the dependency on a true character list by determining all names through coreference resolution.
it also significantly outperforms the usage of the visual and acoustic features combined, which have been frequently used together in previous work, suggesting the importance of textual features in this setting.
neutral
train_98087
The metaclassifier then learns to predict whether a specific generated answer is correct or not.
we rank the pixels according to their spatial attention and then compute the correlation between the two ranked lists.
neutral
train_98088
More generally, we compute a task-specific weighting of all biLM layers, as in Equation (1), where s^task are softmax-normalized weights and the scalar parameter γ^task allows the task model to scale the entire ELMo vector.
we find that including ELMo at the output of the biRNN in task-specific architectures improves overall results for some tasks.
neutral
train_98089
The relatively high performance of FULL-0 shows that substituting segment copying with attention maintains and even improves the system effectiveness.
figure 3 illustrates our architecture.
neutral
train_98090
The full model selects between generating query tokens and copying complete segments from previous queries.
attending on fewer utterances improves inference speed: FULL-0 is 30% faster than FULL.
neutral
train_98091
be an RNN, s an input string of length n and 0 ≤ t ≤ n a time step.
• Minimization: Can we minimize the number of neurons for a given RNN?
neutral
train_98092
Methods can then be trained on the union of the original and auxiliary dataset.
several problematic instances have been demonstrated, for example, word embeddings can encode sexist stereotypes (Bolukbasi et al., 2016;Caliskan et al., 2017).
neutral
train_98093
Overall, the process of corpus creation went through several stages -claim extraction, evidence extraction and stance annotation -, which we describe below.
while there exist some datasets for Arabic (Darwish et al., 2017b), the most popular ones are for English, e.g., from SemEval-2016 Task 6 (Mohammad et al., 2016) and from the Fake News Challenge (FNC).
neutral
train_98094
The dataset is publicly available at https://goo.gl/Bym2Vz.
some examples from Rajendran et al.
neutral
train_98095
AM is a relatively new field in NLP.
a natural question is how to handle new AM datasets in a new domain and with sparse data.
neutral
train_98096
The size of the hidden layer in the encoder and the decoder layers of the models based on the proposed hierarchical decoder (rows (b)-(h)) are 200 and 100, respectively.
some prior work applied additional techniques such as a reranker to select a better result among multiple generated sequences (Wen et al., 2015; Dušek and Jurčíček, 2016).
neutral
train_98097
In this paper we presented the first neural poetry translation system and provided two novel methods to improve the quality of the translations.
human reference: For where a grave had opened wide, There was no grave at all: Only a stretch of mud and sand By the hideous prison-wall, And a little heap of burning lime, That the man should have his pall.
neutral
train_98098
rating scales without numerical labels, similar to the visual analogue scales used by Belz and Kow (2011) or direct assessment scales (Graham et al., 2013; Bojar et al., 2017), however, without given end-points.
nevertheless, human judges often fail to distinguish between these different aspects, which results in highly correlated scores, e.g.
neutral
train_98099
In addition, the results in Table 3 show that RR, in combination with Setup 2, results in more consistent ratings than DA.
its ranking of informativeness is less clear than that of plain ME, which might be due to the different results for missed and added information (see Sec.
neutral