id: string (7–12 characters)
sentence1: string (6–1.27k characters)
sentence2: string (6–926 characters)
label: string (4 classes)
train_4400
In both our method and GI, the crowd workers are trained before enrolling in the main task.
GI trains annotators using gold standard data, which involves a higher level of supervision than our method.
contrasting
train_4401
3 There have been other works with different experimental setups (Gambäck et al., 2011;Webb and Ferguson, 2010) that obtained accuracies ranging from 77.85% to 80.72%.
these results are not directly comparable to ours.
contrasting
train_4402
We believe that the difference in performance is not due to the attention mechanism being ineffective, but because Shen and Lee (2016) treat the classification of each utterance independently.
Kalchbrenner and Blunsom (2013) take the sequential nature of dialog acts into account, and run an RNN across the conversation, which conditions the generation of a dialogue act on the dialogue acts and utterances in all the previous dialogue turns.
contrasting
train_4403
the top keyphrases SR selects, which are highly common words in scientific papers.
when α = 1 these keyphrases are not presented among the top.
contrasting
train_4404
Other methods, such as probabilistic latent semantic indexing (Hofmann, 1999), nonnegative matrix factorization (Sra and Inderjit, 2006), are viable alternatives.
it is hard to tell in general if the keyphrase quality improves with these alternatives.
contrasting
train_4405
EL also plays an important role in mining customer opinions from data generated on social platforms and ecommerce websites, thereby helping companies better understand the needs and expectations of their customers.
the target products are often not covered by general KBs.
contrasting
train_4406
Next, we proceed to score each candidate e i,j and determine which one m i should be linked to.
we have no knowledge of the target entities except for names and thus can't directly compare m i with them.
contrasting
train_4407
An ever increasing amount of data is becoming available for NMT training.
only the in-domain or related-domain corpora tend to have a positive impact on NMT performance.
contrasting
train_4408
Each word x_i is represented by concatenating the forward hidden state →h_i and the backward one ←h_i. In this way, the source sentence X = {x_1, ..., x_Tx} can be represented as annotations H = {h_1, ..., h_Tx}.
in the decoder, an RNN hidden state s_j for time j is computed from the previous state, the previously generated word, and the context vector c_j. The context vector c_j is computed as a weighted sum of these annotations H = {h_1, ..., h_Tx}, using the alignment weights α_ji. A source sentence can thus be represented as the annotations H; the length of H depends on the sentence length T_x.
contrasting
train_4409
Next we observe that both back-translation and our proposed TDA method significantly improve translation quality.
TDA obtains the best results overall and significantly outperforms back-translation in all test sets.
contrasting
train_4410
It is a task-independent technique.
when focusing on our specific task (MT), we can employ translation-related heuristics to prune the run-time vocabulary precisely and efficiently.
contrasting
train_4411
LSH achieves better BLEU than decoding with top frequent words of the same run-time vocabulary size C on attention models.
it introduces too large an overhead (50 times slower), especially when softmax is highly optimized on GPU.
contrasting
train_4412
The decoder state stores translation information at different granularities, determining which segment should be expressed (phrasal), and which word should be generated (lexical), respectively.
due to the extensive existence of multiword phrases and expressions, the varying speed of the lexical component is much faster than the phrasal one.
contrasting
train_4413
Much previous work propose to improve the NMT model by adopting fine-grained translation levels such as the character or sub-word levels, which can learn the intermediate information inside words (Ling et al., 2015;Costa-jussà and Fonollosa, 2016;Chung et al., 2016;Luong et al., 2016;Lee et al., 2016;García-Martínez et al., 2016).
high-level structures such as phrases, which are very useful for machine translation (Koehn et al., 2007), have not been explicitly explored in NMT.
contrasting
train_4414
We find that the chunk boundary could be predicted well, with an average accuracy of 89%, which shows that our model could capture the phrasal boundary information in the translation process.
our model could not predict chunk labels as well as chunk boundaries.
contrasting
train_4415
For example, the CRF-BiLSTM POS tagger obtained the state-of-theart performance on Penn Treebank WSJ corpus (Huang et al., 2015).
in low-resource languages, these models are seldom used because of limited labelled data.
contrasting
train_4416
In fact, the development of supervised disambiguation systems depends crucially on the availability of reliable sense-annotated corpora, which are indispensable in order to provide solid training and testing grounds (Pilehvar and Navigli, 2014).
hand-labeled sense annotations are notoriously difficult to obtain on a large scale, and manually curated corpora (Miller et al., 1993;Passonneau et al., 2012) have a limited size.
contrasting
train_4417
Nevertheless, the few approaches that have been proposed so far are either focused on treating each individual language in isolation (Otegi et al., 2016), or limited to short and concise definitional text (Camacho-Collados et al., 2016a).
the use of parallel text to perform WSD (Ng et al., 2003;Lefever et al., 2011;Yao et al., 2012;Bonansinga and Bond, 2016) or even Word Sense Induction (Apidianaki, 2013) has been widely explored in the literature, and has demonstrated its effectiveness in producing high-quality sense-annotated data (Chan and Ng, 2005).
contrasting
train_4418
Compared to fully characterlevel encoding, the encoder gets word-level embeddings as in the case of unsegmented words (see Figure 1).
the word embedding is intuitively richer than the embedding learned over unsegmented words because of the convolution over characters.
contrasting
train_4419
This is expected, since most nouns are out-of-vocabulary terms, and therefore get segmented by BPE into smaller, possibly known fragments, which then get confused with other tags.
since the accuracies are quite close, the overall errors are very few and similar between the various systems.
contrasting
train_4420
Neural models with minimal feature engineering have achieved competitive performance against traditional methods for the task of Chinese word segmentation.
both training and working procedures of the current neural models are computationally inefficient.
contrasting
train_4421
Most existing methods made Markov assumptions to keep the exact search tractable.
2 such assumptions cannot be made in our model as the LSTM component takes advantage of the full segmentation history.
contrasting
train_4422
We found 45 cases where INDEP makes an error (and DOMAINENCODING does not) by predicting a wrong comparative or superlative structure (e.g., > instead of ≥).
the opposite case occurs only 29 times.
contrasting
train_4423
For instance, Tweet POS tags do not differentiate modal verbs, past tense verbs, and other types of verbs, but categorize all of them as 'V'.
in many forms of counterfactuals, distinguishing modal verbs and past participles from other types of verbs is critical (e.g., in Should / Could / Would Have forms).
contrasting
train_4424
(2015), the authors predict user income based on different demographic and psychological features of users.
the process of extracting these features is computationally complex.
contrasting
train_4425
We separate this comparison from the rest because of a language-dependent set up.
for Turkish DNN models outperform BiLSTM on seen tokens and yield an almost equal 92.2% accuracy regardless of using the rightmost morphological context.
contrasting
train_4426
Replacing the MATE parser with the transition-based YARA does not change the outcome of our monolingual parsing experiment, save for the average 0.58-1.65 drop in UAS.
in cross-lingual parsing, YARA highlights the benefits of not tagging the training data, as the GOLD↝PROJ parsers are, in that setting, the best choice for parsing 17/26 languages.
contrasting
train_4427
In practice we may wish to modify the regression step in an attempt to learn a better transformation matrix A.
the standard first approach of using ℓ2-regularized (Ridge) regression instead of simple linear regression gives little benefit, even when we have more parameters than word embeddings (i.e.
contrasting
train_4428
These experiments consolidate our intuitions from Section 3 that removing common components and frequent words is important and that learning a data-dependent transformation is an effective way to do this.
if we train ... (Figure 2: Spearman correlation between cosine similarity and human scores for pairs of words in the CRW dataset, given an increasing number of contexts per rare word).
contrasting
train_4429
We have proposed an unsupervised method which uses co-occurrences statistics to represent the relationship between a given pair of words as a vector.
to neural network models for relation extraction, our model learns relation vectors in an unsupervised way, which means that it can be used for measuring relational similarities and related tasks.
contrasting
train_4430
The gap is especially visible for FASTTEXT and SGNS-W2 vectors.
since ATTRACT-REPEL specializes only words seen in linguistic constraints, 5 its performance crucially depends on the coverage of test set words in the constraints.
contrasting
train_4431
The two works mentioned above both use a single shared encoder to guarantee the shared latent space.
a concomitant defect is that the shared encoder is weak in keeping the uniqueness of each language.
contrasting
train_4432
In Table 4, we can draw the similar conclusion.
different from MultiUN, in the EN-FR-HE group of IWSLT, (X, Z) and (Y, Z) are severely overlapped in Z.
contrasting
train_4433
Neural Machine Translation (NMT) models (Bahdanau et al., 2014;Luong et al., 2015;Wu et al., 2016;Vaswani et al., 2017) often operate with fixed word vocabularies, as their training and inference depend heavily on the vocabulary size.
limiting vocabulary size increases the amount of unknown words, which makes the translation inaccurate especially in an open vocabulary setting.
contrasting
train_4434
Therefore, we expect to see the same benefit as BPE with the unigram language model.
the unigram language model is more flexible as it is based on a probabilistic language model and can output multiple segmentations with their probabilities, which is an essential requirement for subword regularization.
contrasting
train_4435
In order to increase the robustness, they inject noise to input sentences by randomly changing the internal representation of sentences.
these previous approaches often depend on heuristics to generate synthetic noise, which does not always reflect the real noise in training and inference.
contrasting
train_4436
On top of the gains with subword regularization, n-best decoding yields further improvements in many language pairs.
we should note that the subword regularization is mandatory for n-best decoding and the BLEU score is degraded in some language pairs without subword regularization (l = 1).
contrasting
train_4437
The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989), and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence.
the lack of a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model; thus it requires additional positional information (e.g.
contrasting
train_4438
RNMT+ is slower to train than the Transformer Big model on a per-GPU basis.
since the RNMT+ model is quite stable, we were able to offset the lower per-GPU throughput with higher concurrency by increasing the number of model replicas, and hence the overall time to convergence was not slowed down much.
contrasting
train_4439
We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.
no such pattern was observed for multiplicative models. Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3.
contrasting
train_4440
From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.
increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.
contrasting
train_4441
From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.
entity vectors from multiplicative models show a clear decreasing pattern with increasing dimension.
contrasting
train_4442
The reference summaries get the best score on conciseness since the recent abstractive models tend to copy sentences from the input articles.
our model learns well to select important information and form complete sentences so we even get slightly better scores on informativity and readability than the reference summaries.
contrasting
train_4443
Most previous seq2seq models purely depend on the source text to generate summaries.
as reported in many studies (Koehn and Knowles, 2017), the performance of a seq2seq model deteriorates quickly with the increase of the length of generation.
contrasting
train_4444
(2017) also proposed to encode human-written sentences to improve the performance of neural text generation.
they handled the task of Language Modeling and randomly picked an existing sentence in the training corpus.
contrasting
train_4445
In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.
simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).
contrasting
train_4446
In the task settings for CrowdFlower, we specified that we needed annotations from six people for each word.
11 because of the way the gold questions work in CrowdFlower, they were annotated by more than six people.
contrasting
train_4447
14 Men, women, and other genders are substantially more alike than they are different.
they have encountered different socio-cultural influences for thousands of years.
contrasting
train_4448
Similar approaches have been used to 'distantly supervise' annotation of full-text articles describing clinical trials .
to the corpora discussed above, these automatically derived datasets tend to be relatively large, but they include only shallow annotations.
contrasting
train_4449
As mentioned, TrueSkill has been used for NLP tasks to infer continuous values for instances.
it is important to note that the support of a Gaussian distribution is unbounded, namely R = (−∞, ∞).
contrasting
train_4450
TrueSkill can induce a continuous spectrum of instances (such as skill level of game players) by assuming that each instance is represented as a Gaussian distribution.
the Gaussian distribution has unbounded support, namely R = (−∞, ∞), which does not satisfy the property of absolute bounds for appropriate scalar annotation (i.e., ratio scale in the level of measurement).
contrasting
train_4451
Thus, given an annotated score s_i which is normalized between [0,1], we change the update functions as follows: This procedure may look similar to DA, where s_i is simply accumulated and averaged at the end.
there are two differences.
contrasting
train_4452
In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.
this may lead to suboptimal output for text generation (Wiseman and Rush, 2016), e.g., one beam often dominates and thus inhibits hypothesis diversity.
contrasting
train_4453
Table 5 shows that our model with separate decoder and attention over keyphrases produce significantly more informative and relevant arguments than seq2seq trained without evidence.
8 we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human edited text.
contrasting
train_4454
The annotations were published alongside the article.
3 this data only comprises URLs to the original Facebook posts.
contrasting
train_4455
As our model is based on populating a latent "event space" into boxes (products of intervals), it is especially reminiscent of the Mondrian process (Roy and Teh, 2009).
the Mondrian process partitions the space as a high dimensional tree (a non-parametric kd-tree), while our model allows the arbitrary box placement required for DAG structure, and is much more tractable in high dimensions compared to the Mondrian's Bayesian non-parametric inference.
contrasting
train_4456
They can remember sentence lengths, word identity, and word order (Adi et al., 2017), can capture some syntactic structures such as subject-verb agreement (Linzen et al., 2016), and can model certain kinds of semantic compositionality such as negation and intensification (Li et al., 2016).
all of the previous work studies LSTMs at the sentence level, even though they can potentially encode longer context.
contrasting
train_4457
In this paper, we present results only on the dev sets, in order to avoid revealing details about the test sets.
we have confirmed that all results are consistent with those on the test sets.
contrasting
train_4458
(2017) have also shown that while LSTMs are aware of which words appear in their context, this awareness degrades with increasing length of the sequence.
the success of copy mechanisms such as attention and caching (Bahdanau et al., 2015;Hill et al., 2016;Merity et al., 2017;Grave et al., 2017a,b) suggests that information in the distant context is very useful.
contrasting
train_4459
Thus, the cache is, in a sense, complementary to the standard model, since it especially helps regenerate words from the long-range context where the latter falls short.
the cache also hurts about 36% of the words in PTB and 20% in Wiki, which are words that cannot be copied from context (C_none), as illustrated by bars for "none" in Figure 7.
contrasting
train_4460
Some previous work (Srivastava et al., 2017) has explored using language explanations for feature space construction in concept learning tasks, where the problem of learning to interpret language, and learning classifiers is treated jointly.
this approach assumes availability of labeled data for learning classifiers.
contrasting
train_4461
While its performance (0.679) suggests that simple binary feedback is a substantial signal, the difference from the full model indicates value in using soft probabilities.
in a sensitivity study, we found the performance of the approach to be robust to small changes in the probability values of quantifiers.
contrasting
train_4462
Our approach is surprisingly effective in learning from free-form language.
it does not address linguistic issues such as modifiers (e.g., very likely), nested quantification, etc.
contrasting
train_4463
Empirically, S-LSTM can give effective sentence encoding after 3–6 recurrent steps.
the number of recurrent steps necessary for BiLSTM scales with the size of the sentence.
contrasting
train_4464
On the other hand, convolution features embody only fixed-size local n-gram information, whereas sentence-level feature aggregation via pooling can lead to loss of information (Sabour et al., 2017).
S-LSTM uses a global sentence-level node to assemble and back-distribute local information in the recurrent state transition process, suffering less information loss compared to pooling.
contrasting
train_4465
In light of the benefits of pretraining (Erhan et al., 2010), we should be able to do better than randomly initializing the remaining parameters of our models.
inductive transfer via finetuning has been unsuccessful for NLP (Mou et al., 2016).
contrasting
train_4466
Dai and Le (2015) also fine-tune a language model, but overfit with 10k labeled examples and require millions of in-domain documents for good performance.
ULMFiT leverages general-domain pretraining and novel fine-tuning techniques to prevent overfitting even with only 100 labeled examples and achieves state-of-the-art results also on small datasets.
contrasting
train_4467
The error then increases as the model starts to overfit and knowledge captured through pretraining is lost.
ULMFiT is more stable and suffers from no such catastrophic forgetting; performance remains similar or improves until late epochs, which shows the positive effect of the learning rate schedule.
contrasting
train_4468
Later work considered generating queries based on relations extracted by a syntactic parser (Giordani and Moschitti, 2012) and applying techniques from logical parsing research (Poon, 2013).
none of these earlier systems are publicly available, and some required extensive engineering effort for each domain, such as the lexicon used by PRECISE.
contrasting
train_4469
We call this a question-based data split.
many English questions may correspond to the same SQL query.
contrasting
train_4470
We favor the intuitive and efficient way in this work.
the column-cell relation could improve the prediction of the WHERE value.
contrasting
train_4471
Character-level models (CLM), being a cheaper and accessible alternative to morphology, have been reported as performing competitively on various NLP tasks (Ling et al., 2015; Plank et al., 2016).
the extent to which these tasks depend on morphology is small; and their relation to semantics is weak.
contrasting
train_4472
Our experiments and analysis reveal insights such as: • CLMs provide great improvements over whole-word-level models despite not being able to match the performance of morphology-level models (MLMs) for in-domain datasets.
their performance surpasses all MLMs on out-of-domain data, • Limitations and strengths differ by morphological typology.
contrasting
train_4473
For this reason, we used a model that can be considered small when compared to recent neural SRL models and avoided parameter search.
we wonder how the models behave when given a larger network.
contrasting
train_4474
Our results lead to the following conclusions: • For in-domain data, character-level models cannot yet match the performance of morphology-level models.
they still provide considerable advantages over whole-word models, • Their shortcomings depend on the morphology type.
contrasting
train_4475
Therefore, given more training data their performance will improve faster than morphology-level models, • They perform substantially well on out of domain data, surpassing all morphology-level models.
relatively less improvement is expected when model complexity is increased, • They generally perform better than models that only have access to predicted/silver morphological tags.
contrasting
train_4476
4 Formally, (1) and (2) together with the uniform prior over alignments P(a) form the generative model of AMR graphs.
the alignment model Q_ψ(a|c, R, w), as will be explained below, is approximating the intractable posterior P_θ,φ(a|c, R, w) within that probabilistic model.
contrasting
train_4477
We use this approach also to relax the one-hot encoding of the predicate position (p, see Section 2.4).
the concept prediction model log P_θ(c|S_t(Φ_ψ, Σ), w) relies on the pointing mechanism, i.e.
contrasting
train_4478
Some recent work has focused on building aligners specifically for training their parsers (Werling et al., 2015).
those aligners are trained independently of concept and relation identification and only used at pre-processing.
contrasting
train_4479
Such translation models have also been successfully applied to semantic parsing tasks (e.g., (Andreas et al., 2013)), where they rivaled specialized semantic parsers from that period.
they are considerably less accurate than current state-of-the-art parsers applied to the same datasets (e.g., (Dong and Lapata, 2016)).
contrasting
train_4480
In particular, van Noord and Bos (2017) used character level seq2seq model and achieved the previous state-of-the-art result.
their model is very data demanding as they needed to train it on additional 100K sentences parsed by other parsers.
contrasting
train_4481
variables ("the first/second number").
an equation system does not encode this information explicitly.
contrasting
train_4482
They belong to the same type of problems asking about the summation of consecutive integers.
their meaning representations are very different in the Dolphin language and in equations.
contrasting
train_4483
The baseline and shallow models do not perform well on short sentences, which, despite containing fewer words, can still represent complex meaning that is challenging to capture sequentially.
the performance of the deep model is relatively stable.
contrasting
train_4484
Many deep learning architectures have been proposed to model the compositionality in text sequences, requiring a substantial number of parameters and expensive computations.
there has not been a rigorous evaluation regarding the added value of sophisticated compositional functions.
contrasting
train_4485
Models with more expressive compositional functions, e.g., RNNs or CNNs, have demonstrated impressive results; however, they are typically computationally expensive, due to the need to estimate hundreds of thousands, if not millions, of parameters (Parikh et al., 2016).
models with simple compositional functions often compute a sentence or document embedding by simply adding, or averaging, over the word embedding of each sequence element obtained via, e.g., word2vec (Mikolov et al., 2013), or GloVe (Pennington et al., 2014).
contrasting
train_4486
SWEMs bear close resemblance to Deep Averaging Network (DAN) (Iyyer et al., 2015) or fastText (Joulin et al., 2016), where they show that average pooling achieves promising results on certain NLP tasks.
there exist several key differences that make our work unique.
contrasting
train_4487
For instance, all words in the fifth column are Chemistry-related.
we do not have a chemistry label in the dataset, and regardless they should belong to the Science topic.
contrasting
train_4488
friendly, nice, okay, great and likes.
the most vital features for predicting the sentiment of this sentence could be the phrase/sentence 'is just okay', 'not great' or 'makes me wonder why everyone likes', which cannot be captured without word-order information. As demonstrated in Section 4.2.1, word-order information plays a vital role for sentiment analysis tasks.
contrasting
train_4489
However, the most vital features for predicting the sentiment of this sentence could be the phrase/sentence 'is just okay', 'not great' or 'makes me wonder why everyone likes', which cannot be captured without word-order information. As demonstrated in Section 4.2.1, word-order information plays a vital role for sentiment analysis tasks.
according to the case study above, the most important features for sentiment prediction may be some key n-gram phrase/words from the input document.
contrasting
train_4490
• Sentiment analysis tasks are more sensitive to word-order features than topic categorization tasks.
a simple hierarchical pooling layer proposed here achieves comparable results to LSTM/CNN on sentiment analysis tasks.
contrasting
train_4491
PPDB has been used to improve word embeddings (Faruqui et al., 2015;Mrkšić et al., 2016).
PPDB is less useful for learning sentence embeddings.
contrasting
train_4492
Also, pragmatic inference is a necessary step toward automatic narrative understanding and generation (Tomai and Forbus, 2010;Ding and Riloff, 2016;Ding et al., 2017).
this type of social commonsense reasoning goes far beyond the widely studied entailment tasks (Bowman et al., 2015;Dagan et al., 2006) and thus falls outside the scope of existing benchmarks.
contrasting
train_4493
To improve the performance of Japanese PAS analysis, it is straightforward to increase the size of corpora annotated with PAS.
since it is prohibitively expensive, it is promising to take advantage of a large amount of raw corpora.
contrasting
train_4494
ing pre-trained external knowledge in the form of word embeddings has also been ubiquitous.
such external knowledge is overwritten in the task-specific training.
contrasting
train_4495
This is especially true for nominals and pronouns, two common types of entity mentions, where the nearest preceding mention that is also compatible in basic properties (e.g., gender, person and number) is likely to co-refer with the current mention.
coreferential event mentions are rarely from the same sentence ( 10%) and are often sentences apart.
contrasting
train_4496
Event mentions that occur in the first few paragraphs are more likely to initiate an event chain.
event mentions in later parts of a document may be coreferential with a previously seen event mention but are extremely unlikely to begin a new coreference chain.
contrasting
train_4497
It is also important to understand that position-based features used in entity coreference resolution (Haghighi and Klein, 2007) are usually defined for an entity pair.
we model the distributional patterns of an event chain in a document.
contrasting
train_4498
Differently, distant supervision (Mintz et al., 2009;Hoffmann et al., 2011;Surdeanu et al., 2012) is to efficiently generate relational data from plain text for unseen relations with distant supervision (DS).
it naturally brings with it some defects: the resulting distantly-supervised training samples are often very noisy (shown in Figure 1), which is the main problem impeding the performance (Roth et al., 2013).
contrasting
train_4499
Most of the current state-of-the-art methods (Zeng et al., 2015; Lin et al., 2016) make the denoising operation in the sentence bag of entity pair, and integrate this process into the distant supervision relation extraction.
indeed, these methods can filter a substantial number of noise samples; they overlook the case that all sentences of an entity pair are false positives, which is also a common phenomenon in distant supervision datasets.
contrasting