id         string (length 7–12)
sentence1  string (length 6–1.27k)
sentence2  string (length 6–926)
label      string (4 classes)
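Each record in this split pairs two sentences drawn from a scientific paper with one of four relation labels; only "contrasting" appears in the rows below, and the discourse connective that originally linked sentence2 to sentence1 (e.g., "However," or "In contrast,") appears to have been stripped. Below is a minimal sketch of loading and inspecting the split with the Hugging Face datasets library, assuming it is published on the Hub; the ID "user/contrastive-pairs" is a placeholder, not the real identifier.

from datasets import load_dataset

# Placeholder Hub ID; substitute the dataset's actual identifier.
ds = load_dataset("user/contrastive-pairs", split="train")

# Print the first few records: an id, the two sentences, and the label.
for row in ds.select(range(3)):
    print(row["id"], row["label"])
    print("  sentence1:", row["sentence1"][:80])
    print("  sentence2:", row["sentence2"][:80])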
train_5000
BERT is effective across domains, providing between 25% and 55% error reduction over the base neural parsers.
as for word embeddings, the pre-trained BERT representations do not generally provide a larger error reduction in out-of-domain settings than in in-domain (although a possible confound is that the BERT model is fine-tuned on the relatively small amount of in-domain treebank data, along with the other parser parameters).
contrasting
train_5001
When using BERT encoder representations, the Chart parser (with its unstructured decoder) and In-Order parser (with its conditioning on a representation of previously-constructed structure) obtain roughly comparable F1 (shown in the first two columns of Table 5), with In-Order better on seven out of nine corpora but often by slight margins.
these aggregate F1 scores decompose along the structure of the tree, and are dominated by the short spans which make up the bulk of any treebank.
contrasting
train_5002
Repeating this mechanism in consecutive layers allows for information to flow over long distances.
for each input token, each attention head scales linearly in memory and time with the context size, or attention span.
contrasting
train_5003
This indicates that lower layers in a Transformer model do not really require a long attention span in this particular task.
a few attention heads in the higher layers have very long spans, exceeding several thousand.
contrasting
train_5004
Users usually have both long-term preferences and short-term interests.
existing news recommendation methods usually learn single representations of users, which may be insufficient.
contrasting
train_5005
They represented a session by using the topic distribution of browsed news in this session, and the representations of users were built from their session representations weighted by the time.
these methods heavily rely on manual feature engineering, which needs massive domain knowledge to craft.
contrasting
train_5006
We see that the explanatory power of L&M variables is lost after we additionally include H4N RE in a regression: all three L&M variables are not significant.
H4N RE continues to be significant in all experiments, with large standardized coefficients.
contrasting
train_5007
The best L&M dictionary is again neg lm with standardized coefficient 0.0472 and t = 3.30.
H4N RE has the highest explanatory value for volatility.
contrasting
train_5008
For example, Taylor (1953) studies the relation of cloze tests and readability.
to C-tests (Klein-Braley and Raatz, 1982), cloze tests remove whole words to produce a gap leading to more ambiguous solutions.
contrasting
train_5009
Another drawback of the naïve strategy is that it is difficult to control for the topic of the underlying text and, in the worst case, it requires searching through a whole corpus to select a fitting C-test.
to the naïve strategy, our proposed manipulation strategies are designed to be used in real time and manipulate any given C-test within 15 seconds at an acceptable quality.
contrasting
train_5010
In a more detailed analysis, we find two main sources of problems demanding further investigation: First, the difficulty prediction quality when deviating from DEF and second, the increasing ambiguity in harder C-tests.
it underestimates the difficulty, predicting d(T) = 0.11 for T_4^{SEL,dec} (the same text used in Figure 1), for which we found an actual error rate of 0.28.
contrasting
train_5011
In addition to the NTS system adapted to use our phrase table, we also tested a baseline which greedily applied the phrase table at all possible points in a sentence.
this system was ranked as least understandable more often than any other system.
contrasting
train_5012
Therefore, the result that audio-only MDRM performs better than text-only MDRM (1.412 vs. 1.431) may need careful interpretation, as we have to stop audio-only model training early to prevent overfitting.
using both audio features and text features, the model usually converges in 20 epochs without overfitting.
contrasting
train_5013
Our experimental results also consistently show that complex deep models such as bc-LSTM (Poria et al., 2017) or our proposed deep regression model outperform shallow models (such as SVR) by a large margin in short-term prediction (τ = 3 or 7).
the margin becomes smaller as we predict relatively long-term stock volatility (τ = 15 or 30).
contrasting
train_5014
For example, comparing with tf-idf bag-of-words model at τ = 3, our MDRM reduces prediction error by 19.1% (1.371 vs. 1.695).
at τ = 30, the prediction error reduction is 12.8% (0.217 vs. 0.249).
contrasting
train_5015
At the time of pouring, there was no secret mechanism in place to inform the particular participant who brought the wine being poured.
given the fact that (1) it was common knowledge (each one knows it, each one knows that the others know it, each one knows that each one knows that the others know it, and so on) that each grape-region combo was assigned to either no one or exactly one individual for each session, and (2) all participants were required to bring a wine of the assigned grape-region that they knew very well and to ensure it was of a classical style most representative of the grape and region, the task of detecting self-brought wines becomes trivial to our participants, and therefore the informing mechanism stands.
contrasting
train_5016
In practice, given a claim of interest, people may search for related articles from multiple sources and collect evidence for the claim; they can then determine the veracity of the claim by deciding whether the evidence found supports or refutes the claim.
most existing work that attempted to study trustworthiness of sources assumed that sources make assertions directly.
contrasting
train_5017
Here we allow P_s to differ from H_s, since providing true evidence for a true claim is more difficult than just providing a true claim.
considering that they all reflect the trustworthiness of s, we assume they share a similar distribution over sources in our problem.
contrasting
train_5018
It makes sense that evidence-based method (leveraging indirect assertions) can beat the claim-based method (leveraging direct assertions only) by using more information to reduce potential noise.
using sources and claims only is more noisy, especially with many bad information sources.
contrasting
train_5019
Based on the results of MJ-EVI, we can observe that simply calculating accuracy by estimated "correct" evidence cannot achieve a highly correlated estimation of sources: the entailment tool provides noisy evidence.
sim-Com, which directly counts estimated "correct" claims by MJ-EVI, can improve the estimation.
contrasting
train_5020
Among different factors, evidence contributes the most when estimating the veracity of claims, which can also help the estimation of the trustworthiness.
the usefulness of evidence highly depends on the quality of the NLP tool.
contrasting
train_5021
As noise increases, the accuracy, Pearson and Spearman score drop lower.
the JELTA method is consistently better than the alternatives.
contrasting
train_5022
(2016) use adversarial and information maximization objectives to produce interpretable latent representations that can be tweaked to adjust writing style for handwritten digits, as well as lighting and orientation for face models.
this problem is less explored in natural language processing.
contrasting
train_5023
(2017) propose to control the sentiment by using discriminators to reconstruct sentiment and content from generated sentences.
there is no evidence that the latent space would be disentangled by simply reconstructing a sentence.
contrasting
train_5024
The multi-task loss only ensures that the style space contains style information.
the content space might also contain style information, which is undesirable for disentanglement.
contrasting
train_5025
Our insight of combining the two auxiliary losses is a simple yet effective way of disentangling latent space.
J_mul(s) and J_adv(s) only regularize the style information, leading to a gradual drop in content-preservation scores.
contrasting
train_5026
Sophisticated sequence-to-sequence architectures (Gehring et al., 2017; Vaswani et al., 2017), rescoring (Chollampatt and Ng, 2018a), iterative decoding strategies (Ge et al., 2018; Lichtarge et al., 2018), synthetic (Xie et al., 2018) and semi-supervised corpora (Lichtarge et al., 2018), and other task-specific techniques have achieved impressive results on this task.
all prior work ignores document-wide context for GEC, and uses sentence-level models.
contrasting
train_5027
They also use ROUGE to make extraction labels and to provide rewards in their RL training phase.
our model extracts multiple sentences and rewrites them together into a single subject line.
contrasting
train_5028
Therefore, an ecologically valid model of semantic change should show that the words change more as the time interval for comparison increases, for the vocabulary as a whole.
if a model captures stochastic fluctuations in the words' vectors instead of true semantic change, then such a shift in the distribution will be less prominent.
contrasting
train_5029
For the semantic change words (upper plot), all four models show a noticeable peak when the new sense was first injected (step 2), followed by a steady decrease in acd until step 6.
the stable words only show the steady decrease starting from step 1, without any noticeable peaks.
contrasting
train_5030
Compared with the texts in the Essays domain, some texts in the Travel Guides domain state much about the histories and anecdotes of the tourist attractions.
besides the historical stories, for instance, the text "Good for the health is just one of the many magical qualities that are attributed to these beautiful emerald-green or turquoise stones."
contrasting
train_5031
Previous research on affective analysis focuses on text modality (Liu and Zhang, 2012;Cambria and Hussain, 2015), which is a hot research topic in the NLP community.
recent research suggests that information from text is not sufficient for mining human opinions (Poria et al., 2017a; D'Mello and Kory, 2015; Cambria, 2016), especially in situations where sarcasm or ambiguity occurs.
contrasting
train_5032
Previous work has shown that earnings calls disclose more information than company filings alone (Frankel et al., 1999) and influence investor sentiment in the short term (Bowen et al., 2002).
recently company executives and investors have questioned their value (Koller and Darr, 2017;Melloy, 2018).
contrasting
train_5033
This operation is performed iteratively, allowing the information to be modified and propagated across multiple links as the number of iterations increases.
to most multi-task learning schemes which share information through learning a common feature representation, IMN not only allows shared features, but also explicitly models the interactions between tasks through the message passing mechanism, allowing different tasks to better influence each other.
contrasting
train_5034
proposed to model the problem as a sequence labeling task with a unified tagging scheme.
their results were discouraging.
contrasting
train_5035
This again indicates that domain-specific knowledge has already been captured by domain embeddings, while knowledge obtained from DD and DS via parameter sharing could be redundant in this case.
#NAME?
contrasting
train_5036
As observed in example 2, both PIPELINE and INABSA extract "Pizza".
since no opinion is expressed in the given sentence, "Pizza" should not be considered as an aspect term.
contrasting
train_5037
There are also other multimodal emotion and sentiment analysis datasets, such as MOSEI , MOSI (Zadeh et al., 2016b), and MOUD (Pérez-Rosas et al., 2013), but they contain individual narratives instead of dialogues.
EmotionLines (Chen et al., 2018) is a dataset that contains dialogues from the popular TV series Friends with more than two speakers.
contrasting
train_5038
MOSI (Zadeh et al., 2016b), MOSEI, and MOUD (Pérez-Rosas et al., 2013) are such examples that have drawn significant interest from the research community.
IEMOCAP and SEMAINE are two popular dyadic conversational datasets where each utterance in a dialogue is labeled by emotion.
contrasting
train_5039
that are aligned to the components of MELD.
MELD is different in terms of both complexity and quantity.
contrasting
train_5040
Multimodal fusion helps in improving the emotion recognition performance by 3%.
the multimodal classifier performs worse than the textual classifier in classifying sadness.
contrasting
train_5041
The PRET+MULT framework proposed in (He et al., 2018) is a successful attempt by adopting pre-training and multi-task learning approaches.
their model only shares shallow embedding and LSTM layers between ASC and DSC (document-level sentiment classification) tasks.
contrasting
train_5042
This is the closest work to ours.
the method in (He et al., 2018) is based on an existing AT-LSTM model (Wang et al., 2016b), whereas our framework is a totally new one which employs capsule network with carefully designed strategies for ASC tasks.
contrasting
train_5043
Humans can easily identify the positive polarity towards aspect [screen].
the single-task variant TransCap{S} and most baselines give a false negative prediction.
contrasting
train_5044
In aspect-level sentiment classification (ASC), it is prevalent to equip dominant neural models with attention mechanisms, for the sake of acquiring the importance of each context word on the given aspect.
such a mechanism tends to excessively focus on a few frequent words with sentiment polarities, while ignoring infrequent ones.
contrasting
train_5045
By doing so, the weights of words extracted first will be reduced, and those of words extracted later will be increased, avoiding the over-fitting of high-frequency context words with sentiment polarities and the under-fitting of low-frequency ones.
for the words in s_m(x) with misleading effects on the sentiment prediction of x, we want to reduce their effects and thus directly set their expected weights to 0.
contrasting
train_5046
Trabelsi and Zaïane (2015) used an augmented LDA to automatically extract coherent words and phrases describing arguing expressions, and applied constrained clustering to group similar viewpoints of topics.
to previous work, we apply argument clustering on a dataset containing both relevant and non-relevant arguments for a large number of different topics, which is closer to a realistic setup.
contrasting
train_5047
Agglomerative clustering is a strict partitioning algorithm, i.e., each object belongs to exactly one cluster.
an argument can address more than one aspect of a topic; therefore, arguments could belong to more than one cluster.
contrasting
train_5048
Humans are better than the proposed BERT-model at estimating the pairwise similarity of arguments.
when combined with a clustering method, the performances are on-par.
contrasting
train_5049
Our contributions are four-fold: (1) We propose an MTL approach to coherence assessment and compare it against a number of baselines.
we experimentally demonstrate that such a framework allows us to exploit more effectively the inter-dependencies between the two prediction tasks and achieve state-of-the-art results in predicting document-level coherence; (2) we assess the extent to which the information encoded in the network generalizes to different domains and prediction tasks, and demonstrate the effectiveness of our approach not only on standard binary evaluation tasks on the Wall Street Journal (WSJ), but also on more realistic tasks involving the prediction of varying degrees of coherence in people's everyday writing; (3) in contrast to existing work that has only investigated the impact of a specific set of grammatical roles (i.e., subject and object) on coherence, we instead investigate a large set of GR types, and train the model to predict the type of role dependents participate in.
contrasting
train_5050
For dialogues with multiple interlocutors, extraction of their discourse structures could provide useful semantic information to the "downstream" models used, for example, in the production of intelligent meeting managers or the analysis of user interactions in online fora.
despite considerable efforts on computational discourse-analysis (Duverle and Prendinger, 2009;Joty et al., 2013;Ji and Eisenstein, 2014;Surdeanu et al., 2015;Yoshida et al., 2014;Li et al., 2016), we are still a long way from usable discourse models, especially for dialogue.
contrasting
train_5051
Taking into account this structure has shown to help many NLP end tasks, including summarization (Hirao et al., 2013;Durrett et al., 2016), machine translation (Joty et al., 2017), and sentiment analysis (Ji and Smith, 2017).
annotating discourse requires considerable effort by trained experts and may not always yield a structure appropriate for the end task.
contrasting
train_5052
Structured attention at the sentence level helps performance for all except WQ, where no form of attention helps.
structured attention at the document level yields mostly negative results, in contrast to the improvements reported in L&L.
contrasting
train_5053
In this paper, we frame zero-shot learning as a challenge for pragmatic modeling and explore zero-shot reference games, where a speaker needs to describe a novel-category object in an image to an addressee who may or may not know the category.
to standard reference games, this game explicitly targets a situation where relatively common words like object names are likely to be more inaccurate than other words like e.g.
contrasting
train_5054
it is not aware that it encounters a novel category and frequently generates names of known categories encountered during training.
even in this simple model, we find a certain portion of output expressions that do not contain any name (e.g.
contrasting
train_5055
Although recent end-to-end neural coreference models have advanced the state-of-the-art performance for coreference resolution, they are still trained with heuristic loss functions and make a sequence of local decisions for each pair of mentions.
as studied in Clark and Manning (2016a); Yin et al.
contrasting
train_5056
While it is true that coherence is a property of a passage as a whole, capturing long-term dependencies in sequences remains a fundamental challenge when training neural networks in practice (Trinh et al., 2018).
it is plausible that much of global coherence can be decomposed into a series of local decisions, as demonstrated by foundational theories such as Centering Theory.
contrasting
train_5057
Although Romanian and Moldavian are supposed to be hard to discriminate, since Romania and the Republic of Moldova share the same literary standard (Minahan, 2013), the empirical results seem to point in the other direction, to our surprise.
we should note that the high accuracy rates attained by the proposed classifiers can be explained through a combination of two factors.
contrasting
train_5058
We note that the goal of this work was not to achieve state-of-the-art results on English WSD compared to manually-annotated corpora.
performing competitively on standard benchmarks represents one step further towards getting rid of the limitation imposed by resources like SemCor.
contrasting
train_5059
Parallel corpora were also exploited in the more recent work of Taghipour and Ng (2015, OMSTI), who presented a semi-automatic approach that creates a novel semantically-annotated dataset by leveraging the manual effort made to align senses across different languages.
recent methods have been able to fully automatise the whole process while simultaneously producing high-quality resources.
contrasting
train_5060
We find that overfitting to BLI may severely hurt downstream performance, warranting the coupling of BLI experiments with downstream evaluations in order to paint a more informative picture of CLE models' properties.
to more recent unsupervised models, CLE models typically require bilingual signal: aligned words, sentences, or documents.
contrasting
train_5061
The results highlight VECMAP (Artetxe et al., 2018b) as the most robust choice among unsupervised models: besides being the only model to produce successful runs for all language pairs, it also significantly outperforms other unsupervised models-both when considering all language pairs and only the subset where other models produce successful runs.
VECMAP still performs worse (p ≤ 0.0002) than PROC-B (trained on only 1K pairs) and all supervised models trained on 3K or 5K word pairs.
contrasting
train_5062
Given the importance of SP, the automatic acquisition of SP has become a well-known research subject in the NLP community. [Table: columns are Evaluation Set, #R, #W, #P. (McRae et al., 1998): 2, 641, 821; (Keller and Lapata, 2003): 3, 571, 540; (Padó et al., 2006): 3, 180, 207; SP-10K: 5, 2.5K, 10K]
current SP acquisition models are limited based on existing evaluation methods.
contrasting
train_5063
As shown in Table 7, almost 50% of SP pairs in the perfect group are covered by OMCS.
only about 6% of SP pairs from the impossible group are covered.
contrasting
train_5064
For the perfect group, we find that human-defined commonsense triplets often have neatly corresponding SP pairs.
for the impossible group, SP pairs are covered by OMCS either because of incidental overlap with a non-keyword, e.g., 'child' in 'child wagon', or because of the low quality of some OMCS triplets.
contrasting
train_5065
But as we only label 4,000 multihop pairs, the overall coverage is limited.
automatic SP acquisition method PP can cover more questions, but the precision also drops due to the noise of the collected SP knowledge.
contrasting
train_5066
Currently, the most popular evaluation method for SP acquisition is pseudo-disambiguation (Ritter et al., 2010; de Cruys, 2014).
pseudo-disambiguation can be easily influenced by the aforementioned noisiness of evaluation corpora and cannot represent ground truth SP.
contrasting
train_5067
If we only consider is distracted, without also considering correct predictions, we might conclude that the distractor hypothesis is correct: the 192 instances in the group are all cases where BiDAF predicts a wrong span that has the same entity type as the ground truth, and the group accounts for 5.7% of all BiDAF errors.
looking at the groups in succession reveals a different, and more complete story: BiDAF predicts the exact correct span (exact match) 68% of the time overall, which rises to 80% when the ground truth is an entity.
contrasting
train_5068
The author of D2 observed examples like Figure 8(c) in his samples, but decided ultimately that what mattered was just the returned short text, not the span index.
D1's author carefully refined his initial query precisely to rule out cases like Figure 8(c).
contrasting
train_5069
(2018) made a similar attempt to balance the trade-off in Slice Finder, a framework that uses statistical techniques to identify large and interpretable slices that models perform poorly on.
their purely automated data slicing does not allow users to customize groups based on their own hypotheses.
contrasting
train_5070
Two widely used metrics, F1 and AUC, are used in our experiments.
some relational facts are present in both the training and dev/test sets; thus a model may memorize their relations during training and achieve better performance on the dev/test set in an undesirable way, introducing evaluation bias.
contrasting
train_5071
On the one hand, jointly predicting the evidence provides better explainability.
identifying supporting evidence and reasoning about relational facts from text are naturally dual tasks with potential mutual enhancement.
contrasting
train_5072
(2018b) further combine external recommendations with human annotation to build large-scale high-quality datasets.
these RE datasets limit relations to single sentences.
contrasting
train_5073
Recently, some document-level RE datasets have also been constructed.
these datasets are either constructed via distant supervision (Peng et al., 2017) with an inevitable wrong-labeling problem, or limited to a specific domain (Peng et al., 2017).
contrasting
train_5074
However, these datasets are either constructed via distant supervision (Peng et al., 2017) with an inevitable wrong-labeling problem, or limited to a specific domain (Peng et al., 2017).
DocRED is constructed by crowd-workers with rich information, and is not limited to any specific domain, which makes it suitable for training and evaluating general-purpose document-level RE systems.
contrasting
train_5075
For each idiom, we picked the 20 most similar idioms whose embedding similarity score to the input idiom is less than some threshold.
according to the example "There was a period when everyone #idiom-0# and hardly dared to show their faces; only she alone leaned on the balcony, watching the lines of soldiers march past."
contrasting
train_5076
Indeed, perplexity is still frequently used to evaluate models, and each of the models mentioned in the previous section, including CopulaLDA (designed to improve local topic quality), uses perplexity to evaluate the model.
while held-out perplexity can test the generalization of predictive models, it is negatively correlated with human evaluations of global topic quality (Chang et al., 2009).
contrasting
train_5077
Leveraging this intuition, where rank(w_i, z_i) is the rank of the i-th word w_i in its assigned topic z_i when sorted by probability, we define AVGRANK as the average of rank(w_i, z_i) over all tokens. With this evaluation the lower bound is 1, although this would require that every token be assigned to a topic for which its word is the mode.
this is only possible if the number of topics is equal to the vocabulary size.
contrasting
train_5078
Our players obtain an aggregated F of 90.5, which is very high.
collecting judgements from real players tends to be slower than using a crowdsourcing service.
contrasting
train_5079
We can see that operating out of their original domains, the automated pipelines, while still outperforming the Stanford pipeline by around 4 percentage points, do not outperform aggregated users.
they do appear to serve well as agents to train participants to perform annotations, as participants annotate to a high level of accuracy.
contrasting
train_5080
Confidence in annotators is not modelled.
it has been extended to incorporate the reliability of the annotator with a similar method that also combines Expectation Maximization with CRF in an NER and NP chunking task (Rodrigues et al., 2014). In this paper, we presented a hybrid mention detection method combining state-of-the-art automatic mention detectors with a gamified, two-player interface to collect markable judgements.
contrasting
train_5081
It is because this slot usually has a large number of possible values that are hard to recognize.
number-related slots such as arrive by, people, and stay usually have the lowest error rates.
contrasting
train_5082
use distributional representation learning to leverage semantic information from word embeddings and resolve lexical/morphological ambiguity.
parameters are not shared across slots.
contrasting
train_5083
Xu and Hu (2018) use the index-based pointer network for different slots, and show the ability to point to unknown values.
many of them require a predefined domain ontology, and the models were only evaluated on single-domain setting (DSTC2).
contrasting
train_5084
Learning features of the Smart Home category helps overcome such conflicts.
a word in different tasks in the same domain can still have different slot types.
contrasting
train_5085
(2012) developed a nonparametric Bayesian model to learn task subspaces and features jointly.
with the advent of deep learning, MTL with deep neural networks has been successfully applied to different applications (Zhang et al., 2018;Masumura et al., 2018;Fares et al., 2018;Guo et al., 2018).
contrasting
train_5086
A flat MR will struggle to represent 1) the correspondence of arguments to dialog acts; 2) what attributes to group and contrast; and 3) semantic equivalence of arguments like date_time1 and date_time2.
our MRs ease discourse-level learning and encourage reuse of arguments across multiple dialog acts.
contrasting
train_5087
Recently, a joint model (Zhang et al., 2019b) was proposed to connect the contextual information and human-designed features together for pronoun coreference resolution task (with gold mention support) and achieved the state-of-the-art performance.
their model still requires the complex features designed by experts, which is expensive and difficult to acquire, and requires the support of the gold mentions.
contrasting
train_5088
Considering that neural models are intensively data-driven and normally restricted by the nature of their data, they are not easily applied in a cross-domain setting.
if a model is required to perform in real applications, it has to show promising performance on cases outside the training data.
contrasting
train_5089
Their results proved that such knowledge is helpful when appropriately used for coreference resolution.
external knowledge is often omitted in their models.
contrasting
train_5090
The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI.
recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts (Gururangan et al., 2018;Poliak et al., 2018b;Tsuchiya, 2018).
contrasting
train_5091
In a sense this is counter-intuitive, since p_θ is being trained to unlearn bias, while p_{φ,θ} is being trained to learn it.
if the models are trained separately, they may learn to co-adapt with each other (Elazar & Goldberg, 2018).
contrasting
train_5092
The only exception seems to be SCITAIL, where we do not improve despite it having different biases than SNLI.
when we strengthen α and β (below), Method 1 outperforms the baseline.
contrasting
train_5093
However, removing such attributes from text representations may be difficult (Elazar & Goldberg, 2018).
to this line of work, our final goal is not the removal of such attributes per se; instead, we strive for more robust representations that better transfer to other datasets, similar to Li et al.
contrasting
train_5094
(2018) incorporate external linguistic resources and use a GAN-style framework to adversarially train robust NLI models.
we do not use external resources and we are interested in mitigating hypothesisonly biases.
contrasting
train_5095
ADD-ONE-RTE and the recast datasets also model NLI as a binary prediction task.
their label sets are ENTAILED and NOT-ENTAILED.
contrasting
train_5096
Existing FV methods formulate FV as a natural language inference (NLI) (Angeli and Manning, 2014) task.
they utilize simple evidence combination methods such as concatenating the evidence or just dealing with each evidence-claim pair.
contrasting
train_5097
Some workers still had frequent low agreement with the majority.
in most cases we obtained a clear majority annotation.
contrasting
train_5098
The "voted" version corrects the reference, when one of the three scribes misses the annotation.
when two scribes pick different valid labels and the third misses them, the "voted" reference is not better than the single reference.
contrasting
train_5099
A counselor might therefore seek to adapt training principles to their own personality, finding a voice that distinguishes them from other counselors (between-counselor diversification).
change is far from guaranteed, and several forces potentially counteract the two vectors of change outlined above.
contrasting