Columns: id (string, 7-12 chars) · sentence1 (string, 6-1.27k chars) · sentence2 (string, 6-926 chars) · label (string, 4 classes)
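Each record below spans four consecutive lines, in the order id, sentence1, sentence2, label; sentence2 usually begins lowercase, apparently because a contrastive discourse marker (e.g., "however") was stripped during extraction. A minimal parsing sketch under those assumptions; the file name dump.txt and the helper parse_records are hypothetical, not part of any dataset distribution:

```python
from typing import Dict, Iterator, List


def parse_records(lines: List[str]) -> Iterator[Dict[str, str]]:
    """Yield {id, sentence1, sentence2, label} dicts from a four-line-per-record dump."""
    fields = ("id", "sentence1", "sentence2", "label")
    cleaned = [ln.strip() for ln in lines if ln.strip()]
    for i, ln in enumerate(cleaned):
        # A record starts at an id line; the column-schema header above is skipped.
        # Assumes sentence text itself never begins with "train_".
        if ln.startswith("train_"):
            record = cleaned[i : i + len(fields)]
            if len(record) == len(fields):
                yield dict(zip(fields, record))


if __name__ == "__main__":
    with open("dump.txt", encoding="utf-8") as f:
        records = list(parse_records(f.readlines()))
    # Every record in this split carries the "contrasting" label.
    print(len(records), records[0]["label"])
```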
train_9300
For example, 85% of the questions in WEBQUESTIONS can be directly answered via a single Freebase predicate.
all questions in QALD-6 involve at least one DBpedia predicate and one textual relation, and thus cannot be accurately answered using DBpedia alone.
contrasting
train_9301
Only the words that have dependency relations with the center word are used as the context words, as illustrated in Figure 1.
syntactic parsing is a more difficult and time-consuming task than finding word embeddings.
contrasting
train_9302
Bamman and Smith (2015) report an accuracy of 75.4% on a balanced dataset, which is lower than our result.
they performed evaluation on a different set of data, thus the results are not directly comparable.
contrasting
train_9303
As shown in the figure, most "+"s are in the top-right area, and most "•"s are in the bottom-left area, which indicates that the accuracies of both models are reasonably high.
the samples are more scattered along the x-axis.
contrasting
train_9304
Our intuition was that, since our snippets usually include more than one sentence, resolving pronouns and coreferential expressions may make the content more explicit, thus enabling a better agreement detection.
results show that this information causes a slight performance drop.
contrasting
train_9305
Based on Freebase, two benchmark datasets, WebQuestions (Berant et al., 2013) and SimpleQuestions (Bordes et al., 2015), are constructed and used in most KBQA work (Berant and Liang, 2014; Bordes et al., 2014a; Fader et al., 2014; Yang et al., 2014; Bao et al., 2014; Reddy et al., 2014; Dong et al., 2015; Yih et al., 2015).
about 85% of the questions in WebQuestions (Yao, 2015) and all questions in SimpleQuestions are 'simple' questions, where a 'simple' question denotes one that can be answered based on a single KB relation.
contrasting
train_9306
We trained a sentence-level classifier on the training partition of the corpus, as defined in Section 4.1, considering the question-level annotations as the gold standard.
the latter choice introduces a lot of false positives as sentences in relevant questions may be unrelated.
contrasting
train_9307
Our experiments verify that the part-of-speech embeddings we use contain rich semantic information.
our proposed Attention-CNN model can still yield higher F1 without prior NLP knowledge.
contrasting
train_9308
Multi-task training is performed via switching across multiple tasks in a block of training steps.
we perform switches between the ER and RC subtasks based on the performance of each task on the common validation set, and update the learning rate only when the task is switched from RC to ER (Figure 8).
contrasting
train_9309
A relation for a word pair is marked correct if the NE boundaries and relation type are correct.
in the separate approach, a relation for a word pair is marked correct if the relation type is correct.
contrasting
train_9310
In contrast, the latter are relatively simple and scalable.
they often heavily rely on only one descriptor of the bag-of-words embeddings (e.g.
contrasting
train_9311
APE assumes the availability of source texts (S ip ), corresponding MT output (T mt ) and the human post-edited (T pe ) version of T mt , and APE systems can be modelled as an MT system between S ip T mt (i.e., a joint representation of S ip and T mt ) and T pe .
statistical APE (SAPE) systems can also be built without the availability of S ip using only sufficient amounts of parallel "target-side" T mt -T pe text within the statistical MT (SMT) framework.
contrasting
train_9312
This may be partly due to the fact that the human post-edited reference translations are biased towards GT output.
manual analysis revealed that some of these 39 translations are indeed worse than the GT output.
contrasting
train_9313
The productivity changes vary from 46.6% to -40%, which indicates that the utility of SAPE also varies from person to person.
even taking into account the decrease in productivity of T4, average productivity increases by 12.96% with SAPE.
contrasting
train_9314
Improving SMT for the dialogue acts under consideration resembles cross-lingual question answering (Tiedemann, 2009).
when considering finer dialogue act granularities, it may be profitable to exploit context information, which is not used in our current SMT setup.
contrasting
train_9315
We can see that our method can recognize support sentences well.
the thesis and main ideas are identified with moderate performance.
contrasting
train_9316
This proves that a vocabulary gap exists between queries and the factual story descriptions of anecdotes.
great improvements are obtained when measuring relatedness between anecdotes and queries with anecdote implications.
contrasting
train_9317
Studies in the past decade on discourse parsing, such as (Ji and Eisenstein, 2014), (Feng and Hirst, 2014), and (Joty et al., 2015), greatly improved the performance of discourse parsing in general.
it has been observed that the performance across the discourse relations varies significantly (Joty et al., 2015), and that poor performance may be linked to underfitting, i.e., a lack of training data (Feng and Hirst, 2014).
contrasting
train_9318
This method also reports a good overall performance with linear running time.
all these state-of-the-art discourse parsers still perform badly on infrequent relations due to insufficient training examples.
contrasting
train_9319
One solution is, of course, to create more labeled data, ideally for all the relations.
given the resources required for manually creating labeled data for discourse parsing, we explore in this paper a training data enrichment framework that relies on co-training of the CODRA parser and the SR-parser on unlabeled documents.
contrasting
train_9320
The construction of our discourse structure looks similar to the building of an RST tree (Duverle and Prendinger, 2009), and there are also prior efforts in combining the benefits of RST and PDTB (e.g., when building a Chinese discourse corpus (Li et al., 2014)).
the focus of our work is different.
contrasting
train_9321
The sentiment associated with any aspect can mostly be inferred by checking the terms associated with it (e.g., awesome - location).
this may not be very beneficial if the statement is ambiguous.
contrasting
train_9322
The grouping method discussed later in this paper is inspired by their work.
these works are still constrained to correlations between adjacent labels only.
contrasting
train_9323
L{x}: Undefined links, where a clear connection between two units may not exist.
sentiment flow (or transition) can still exist.
contrasting
train_9324
In this paper, we have focused on a method to incorporate non-local context information into the input representation of aspect units.
such a method makes an i.i.d. assumption for output labels.
contrasting
train_9325
For CRF (Level-2), features f_1 to f_10 are used.
prediction is made over the full sequence of output labels, and features F(U_1) to F(U_N) are fed together.
contrasting
train_9326
This may seem counter-intuitive since a CRF should be able to model inter-label dependencies well.
we wish to emphasize that the experiment with CRF is not aimed at comparison against SVM, but to check the performance of available CRF tool on review data.
contrasting
train_9327
For crfsuite, the full sequence of aspect-unit features is fed as input, and the full sequence of output labels is predicted at once.
in the SVM (Level-2) model we form new groups (or modify existing group information) as new labels are predicted.
contrasting
train_9328
Therefore, the source text and the response text are cast respectively as two views in a co-training algorithm to perform semi-supervised learning.
the success of co-training largely depends on two strong underlying assumptions, i.e., sufficiency and independence, of the two views (Blum and Mitchell, 1998), which are actually violated in reader emotion classification when the source text and response text are utilized as two views.
contrasting
train_9329
Even worse, as an extreme example, the source text (e.g., some newly posted news) sometimes has no response at all.
the response text normally depends on the source text, since both the response text and the source text talk about the same topics.
contrasting
train_9330
Generic sentiment classification is formulated as determining whether a piece of text is positive, negative, or neutral.
in SC, systems must detect favorability toward a given (pre-chosen) target of interest.
contrasting
train_9331
Our discriminative model works effectively for supervised stance classification tasks.
manual annotation requires painstaking work by researchers, which can be even more difficult for tasks such as sentiment annotation .
contrasting
train_9332
A baseline that completely ignores sentence structure, as well as words that have no intrinsic polarity, is shown in Figure 3(b): the only two words left are negative and, hence, the total polarity is negative.
the syntactic tree can be re-interpreted in the form of a 'circuit' where the 'signal' flows from one element (or subtree) to another, as shown in Figure 3(c).
contrasting
train_9333
Several recent methods are proposed to extend word representation to phrases (Yin and Schütze, 2014; Yu and Dredze, 2015; Passos et al., 2014).
they do not use structured knowledge to derive phrase representations.
contrasting
train_9334
The overcomplete word embeddings Z strongly differ from the word embeddings X; hence, the denoising is affected.
the vectors Z* still outperform the original vectors X, A, and B after the denoising process.
contrasting
train_9335
The accuracy of synonymy detection increases sharply from 63.2% to 88.6% as the number of stages T goes from 0 to 3.
the denoising performance of the vectors falls for T > 3.
contrasting
train_9336
On the SemEval corpus, it achieves an accuracy of 74.8, outperforming all participants in the original shared task (Section 5).
these results are limited by the small size of both training sets.
contrasting
train_9337
Some such documents are written by professionals and contain well-formed, explicit arguments, i.e., propositions supported by evidence and connected through reasoning.
informal arguments in online argumentative discourses can exhibit different styles.
contrasting
train_9338
This extended approach employs conditional random fields (CRF) using dictionary-based features along with all the features from the original technique.
it resulted in lower accuracy than the SVMs.
contrasting
train_9339
CoVAR_Sp: proposed technique similar to CoVAR_Fro.
the distance between two matrices is determined using the spectral norm.
contrasting
train_9340
Hence the context model does not see enough data for learning; consequently, if the learnt context model is fed directly to the hidden state of the target RNN, the improperly learnt context model can play a big role.
if a Concat kind of architecture is used, the linear plus softmax layer can decide on how much importance to give to the context model.
contrasting
train_9341
In fact, the winning architecture was Conditional-State with the RNN-RNN combo, which did better in terms of test accuracy than the feature-based models (Bowman et al., 2015) and one tree-based model (Mou et al., 2016).
it came close to the stateof-the-art attention based model (Parikh et al., 2016).
contrasting
train_9342
In other words, the above theorem implies that if we add the conditions defined in Equation (4) to Ng's theorem, it is possible to obtain a succinct representation (Equation (5)) whose predictive performance is nearly as good as that of Ng's theorem, albeit with a probabilistic penalty f_1(m) and an error penalty f_2(m).
the penalties are negligible because f_1(m) = o(1) and f_2(m) = o(1).
contrasting
train_9343
Distributional methods applied to large-sized, often temporally stratified corpora have markedly enhanced the methodological repertoire of both synchronic and diachronic computational linguistics and are getting more and more popular in the Digital Humanities (see Section 2.2).
using such quantitative data as a basis for qualitative, empirically-grounded theories requires that measurements should not only be accurate, but also reliable.
contrasting
train_9344
Here, we extend their work into the multimodal domain by comparing the performance between visual and linguistic representations at encoding different types of attributes.
with Rubinstein et al.
contrasting
train_9345
The attributes were learned from both, visual and textual input.
with Silberer and Lapata (2014), here we aim at spotting differences in the fine-grained semantic knowledge encoded by vision and language, instead of building multimodal representations and using them in a task.
contrasting
train_9346
The classification of temporal relations between events in text has been long studied and attacked from different perspectives in the NLP community.
existing approaches heavily rely on information overtly expressed in text, such as explicit temporal markers (e.g.
contrasting
train_9347
predicate-argument structure, as features for the classifiers (Llorens et al., 2010;Laokulrat et al., 2013;D'Souza and Ng, 2013).
the evaluation results of TempEval-3 (UzZaman et al., 2013) show that a system with basic morphosyntactic and lexical semantic features, such as ClearTK (Bethard, 2013), is hard to beat even if using more sophisticated semantic features.
contrasting
train_9348
These studies model a type of CSP: a subject or an object can be regarded as an additional context to restrict a set of possible fillers of a query predicate.
the context captured in our study is not a local context of a query predicate but that of a query argument, working as a validator of the narrative consistency between a query predicate and events in which a query argument participates (see Section 4).
contrasting
train_9349
In Section 6.2, we reported the results of a binary classification task in which negative instances were artificially generated.
this section describes a more realistic task setting: ranking coreference clusters in the OntoNotes corpus (Hovy et al., 2006).
contrasting
train_9350
Based on the growth rate of SP and SP-CW12, we conjectured that we needed 10^3 times more instances of TYPE A for SP to reach the same performance as that of CSP.
it is inefficient and unrealistic to increase the number of training instances of TYPE A in terms of the training time and availability of training data.
contrasting
train_9351
For example, for the test pronoun it in ⟨you, own, it⟩, the CSP model can rank the correct antecedent ⟨you, buy, something⟩_obj at the top by capturing the narrative consistency between buy X and own X.
the context is attached to a correct coreference cluster in only 31.5% of the degradations.
contrasting
train_9352
By iteratively clustering adjectives on the basis of co-occurring nouns and vice versa, the hidden attributes connecting both can be crystallized out.
bigger gold clusterings and more reliable evaluation measures are still missing.
contrasting
train_9353
The models of the previous sections provide a variety of options for representing the meaning of a verb from its arguments.
none of these constructions takes into account the distributional vector of the verb itself, which includes valuable information that could further help in entailment tasks.
contrasting
train_9354
Predicate argument relations are usually marked by case particles denoting grammatical cases in Japanese; therefore, identifying dependencies marked by the major obligatory cases, ga (nominative), wo (accusative), and ni (dative), is the main task.
since ellipses are ubiquitous in Japanese texts, arguments might be identified beyond the sentence including the target predicates (inter-sentence arguments) as well as within the sentence (intra-sentence arguments).
contrasting
train_9355
Concerning the directly dependent ga argument cases, FixRank shows a better recall value with a similar precision value.
in the cases of directly dependent 'other' case markers, BiReg shows a better precision value.
contrasting
train_9356
the directly dependent "better negative examples", to obtain 428 candidates out of 1,288 candidates (= 184 texts × 7 annotators), which make up 33% of the total training examples.
the number of correct arguments that directly depend on the target predicate is 35 out of 184 examples, making up 19% of the texts.
contrasting
train_9357
Among them, only the attention mechanism allows the two sentences to interact with each other.
the word-by-word attention does not represent a better understanding of the sentences.
contrasting
train_9358
For instance, on the clean-text Microsoft Paraphrase benchmark database, the existing systems attain an accuracy as high as 0.8596.
existing systems for detecting paraphrases and semantic similarity on user-generated short-text content on microblogs such as Twitter, comprising noisy and ad hoc short texts, need significant research attention.
contrasting
train_9359
• The best known system, designed by Ji and Eisenstein (2013), currently delivers an F1-score of 0.8596 on the Microsoft Paraphrase dataset, while ours delivers 0.825, without any adaptation.
our system, at an F1-score of 0.741, delivers a significantly better performance compared to their system with an F1-score of 0.641 on the Twitter data, also without any adaptation.
contrasting
train_9360
the parents "inherit" the alignment from their children.
this does not mean that the direction of entailment needs to be the same; in Figure 7, "Super PACs" entails "campaign funding" (t_1 ⊑_{s_1} t_2), while "X criticizes Y on Z" entails "X disagrees with Y over Z" (π(t_2, s_2) ⊑_{s_2} π(t_1, s_1)).
contrasting
train_9361
A residual connection is added after every n layers.
for stacked LSTM, n > 3 is very expensive in terms of computation.
contrasting
train_9362
Figure 4 and the results from Table 5 suggest that perplexity is a good loss function for training paraphrase generation models.
a more ideal metric to fully encode the fundamental objective of paraphrasing should also reward novelty and penalize redundancy during paraphrase generation, which is a notable limitation of the existing paraphrase evaluation metrics.
contrasting
train_9363
A straightforward way to enrich a Chinese KB is to directly translate an English KB (source) into Chinese (target) based on the surface texts of a triple with an existing machine translation system.
we find that they suffer from the problem of ambiguity.
contrasting
train_9364
(2013) compiled an initial analysis of the questions in these datasets, and identified 7 broad categories of knowledge and inference requirements.
this analysis forced a single knowledge type for each question, for example causality, and from our detailed analysis we find that many types of knowledge are necessary to arrive at the correct answer, e.g., causality, actions, and purposes.
contrasting
train_9365
One reason could be that SDA and SDA-DSS separate domain-dependent and domain-independent features and keep all features in the learned representation, while our model suppresses domain-dependent features.
in general, "domain-dependent" is a relative definition.
contrasting
train_9366
In order to determine the scope of the negative phrase, most computational approaches make use of a syntactic parser.
for languages suffering from a lack of such resources, such as Norwegian, this strategy would be too expensive.
contrasting
train_9367
This feature combination includes all three types of COTs.
it does not include negations at all.
contrasting
train_9368
Integrating such models with human interaction enables many new use cases.
adding human interaction to probabilistic models requires inference algorithms which are both fast and accurate.
contrasting
train_9369
We also note that variational inference is inappropriate for this model, as it only achieves good performance when used in conjunction with hyper-parameter optimization.
such optimization tends to undo the constraints, rendering the model useless (Hu et al., 2011).
contrasting
train_9370
These measures can be used by experts or individuals for diagnostic and management purposes, but also in aggregation, for large scale surveys.
the reliance on self-reporting required to obtain these measures is time-consuming and expensive, and can only produce sparse data on small populations.
contrasting
train_9371
Such approaches have been employed in various tasks including sentiment (Wang et al., 2014) and emotion analysis (Poria et al., 2015; Wimmer et al., 2008) and benefit from the ability of the learning model to capture the semantic relations between different modalities; however, the resulting features are treated in the same way by the learning model (Akbari et al., 2015).
in late fusion approaches, different models are trained per modality and their outputs are combined at a later stage, usually by employing a weighted sum (Dobrišek et al., 2013; Poria et al., 2016).
contrasting
train_9372
The lowest errors are observed with respect to the negative target (the comparison with the well-being case in terms of the error is not straightforward, due to the larger scale used in WEMWBS).
r^2 for this task is considerably lower, pointing to the low variance in each model's prediction.
contrasting
train_9373
Most of these methods depend on sparse lexical features including bag-of-word (BoW) models and exquisitely designed patterns.
feature engineering is labor-intensive and the sparse and discrete features cannot effectively encode semantic and syntactic information of words.
contrasting
train_9374
More recently, Gong and Zhang (2016) propose an attention-based convolutional neural network, which incorporates a local attention channel and global channel for hashtag recommendation.
to the best of our knowledge, there is no work yet on employing both topic models and deep neural networks for this task.
contrasting
train_9375
The feature vector of a composed component such as a pair, pair(x1, x2), is usually described by the local features of x1 and x2 and the relational features between them, such as their relative position, etc.
the main label set for SRL is l = {l_ispredicate, l_isargument, l_argmenttype}.
contrasting
train_9376
The availability of manually annotated corpora of high quality (or, at least, reliability) is therefore key to the development of the field in any given language.
the creation of such resources is notoriously costly, especially when complex annotations, e.g.
contrasting
train_9377
The proposed tasks all relate to semantic disambiguation (noun vs. verb, co-reference identification, named entity annotation, etc.), and while some are relatively easy, like Play Twins (noun vs. verb) or Play Names (named entity annotation), most require some more advanced (at least school-level) knowledge.
the interface does not provide a training phase and the only help available is a short guide to the task.
contrasting
train_9378
(In the Czech Republic, the presentation of syntax in school is very close to dependency syntax.)
the authors report in (Hana and Hladká, 2012) that the accuracy of the annotations they obtained is significantly lower than that of their parser.
contrasting
train_9379
The other way is to train a suitable deep learning model (Collobert et al., 2011;Mikolov et al., 2013a) on a raw corpus in that language and then use the obtained embeddings of these in-language words as input to the sentiment classification model.
learning context-rich word embeddings in any language requires large datasets, generally of the order of billions of words, thereby eating up a lot of time as well as system resources.
contrasting
train_9380
Generally, a large annotated corpus is able to overcome this anomaly, as the network gets enough data to learn the correct linguistic patterns.
when the labeled dataset is small in size, as is often the case in most languages, this problem adversely affects the classification performance.
contrasting
train_9381
The success of NMT lies in its strong ability to compose global context information.
as a new approach, the NMT model has some flaws and limitations that may jeopardize its translation performance (Luong et al., 2014; Sennrich et al., 2015).
contrasting
train_9382
This is partly because the named entity usually occurs rarely in the training corpus and is often mapped to an unknown word by the RNNsearch model.
in the character-aware NMT model, the named entity is split into a sequence of characters and each character can be found in the vocabulary.
contrasting
train_9383
(2015), where they treat word alignments as a constraint to the RAE model.
as discussed in Section 1, the composition criterion in RAE (i.e.
contrasting
train_9384
On the one hand, we simultaneously maximize the Score_con(*) and minimize the Score_inc(*) of SAC nodes.
we take an opposite approach to deal with non-SAC nodes.
contrasting
train_9385
In neural machine translation, the attention mechanism facilitates the translation process by producing a soft alignment between the source sentence and the target sentence.
without dedicated distortion and fertility models seen in traditional SMT systems, the learned alignment may not be accurate, which can lead to low translation quality.
contrasting
train_9386
In addition, the attention in NMT is learned in an unsupervised manner without explicit prior knowledge about alignment.
in conventional statistical machine translation (SMT), it is standard practice to learn reordering models in a supervised manner with guidance from conventional alignment models (Xiong et al., 2006; Koehn et al., 2007; Bisazza and Federico, 2016).
contrasting
train_9387
Evaluating the quality of output from language processing systems such as machine translation or speech recognition is an essential step in ensuring that they are sufficient for practical use.
depending on the practical requirements, evaluation approaches can differ strongly.
contrasting
train_9388
It can be used to predict arbitrary quality metrics, provided that suitably labeled training data is available.
in practice automatic QE has been found difficult and not reliable enough when applied to new domains or unknown systems, e.g.
contrasting
train_9389
Because automatic scores are available for all segments, the sample size is much bigger.
the regressor used for prediction is often biased towards predicting values p_i close to those indicated by the training data, which may differ considerably from the true values y_i, especially for documents that are less similar to the training data.
contrasting
train_9390
Based on these definitions, we observe that Q^(man) is subject to high variance, because the small sample size makes it sensitive to the randomness of the data.
its bias is zero, since averaging over all randomness would cancel out estimation errors exactly.
contrasting
train_9391
The reasonable size of the label set makes it possible to have only one model.
we have one model per word for fine-grained WSD.
contrasting
train_9392
The cross-lingual mate relation in Wikipedia is a strong indicator for parallelism.
Wikipedia entries in different languages are not necessarily translations of each other, but can be edited independently.
contrasting
train_9393
For most document pairs, only a single sentence pair was extracted.
there were a few document pairs that yielded several hundred sentence pairs.
contrasting
train_9394
Preordering English sentences into Japanese word order thus only involves two simple steps: (1) Finding the parse tree of the English sentence (the authors used HPSG derivations) and (2) moving the head of each constituent to the initial position.
this approach does not seem to scale up easily because manually encoding reordering rules for all the world's language pairs would be a rather difficult and very slow process.
contrasting
train_9395
The second technique is to train a self-normalized NNJM to avoid computation of the softmax normalization factor (i.e., the denominator in Equation 2) in decoding.
self-normalization does not address the computational cost of training the model.
contrasting
train_9396
With the surge of word embeddings trained by neural networks, recent approaches that learn bilingual word representations from non-parallel data for bilingual lexicon induction have also shown promise (Mikolov et al., 2013b;Vulić and Moens, 2015).
none of the existing methods explicitly considers multiple alternative translations, i.e., the phenomenon that one source-language word may have multiple possible translations in the target language.
contrasting
train_9397
Following its monolingual counterpart (Mikolov et al., 2013c, inter alia), bilingual word representation learning has attracted considerable attention.
most of the works require parallel data as the cross-lingual signal (Zou et al., 2013; Chandar A P et al., 2014; Hermann and Blunsom, 2014; Kočiský et al., 2014; Luong et al., 2015; Coulmance et al., 2015), making them unsuitable for bilingual lexicon induction.
contrasting
train_9398
When inspecting the output of both splitters, we noticed that the hybrid/linguistic splitter is more likely to produce a correct splitting.
the corpus-based splitter provides better results for NMT.
contrasting
train_9399
We found that sentences were often restructured, causing the performance to drop.
an example of this restructuring may look as follows: Source sentence: the new rules put in place will undoubtedly make it more difficult to exercise the right to vote in 2012.
contrasting