Columns: id (string, 7–12 chars) · sentence1 (string, 6–1.27k chars) · sentence2 (string, 6–926 chars) · label (4 classes)
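For programmatic use, the flattened listing below can be grouped back into records. The sketch here is a minimal parser under stated assumptions: each record spans exactly four consecutive lines (id, sentence1, sentence2, label), ids have the form `train_<digits>`, and the field names are taken from the column header above; `parse_records` is a hypothetical helper, not part of any released tooling for this dataset.

```python
import re

# Record ids in the listing look like "train_3500"; header lines before
# the first id (column names and length stats) are skipped.
ID_RE = re.compile(r"train_\d+")

def parse_records(lines):
    """Group a flattened 4-line-per-record listing into dicts.

    Assumes the order id, sentence1, sentence2, label; records that do not
    contain exactly four lines (e.g. a truncated final record) are dropped.
    """
    records = []
    current = None
    for line in (l.strip() for l in lines):
        if ID_RE.fullmatch(line):
            current = [line]          # start a new record at each id line
            records.append(current)
        elif current is not None:
            current.append(line)      # accumulate the record's fields
    return [
        {"id": r[0], "sentence1": r[1], "sentence2": r[2], "label": r[3]}
        for r in records
        if len(r) == 4
    ]
```

A caveat on the design: the parser keys on the id pattern rather than counting lines from the top, so it tolerates a header block of any length, but it would mis-split if a sentence happened to consist solely of a `train_<n>` token.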
train_3500
In these cases, learning tense transition patterns will mislead the model and accordingly affect the performance.
our global model is more robust because it is based on our "One tense per scene" hypothesis which can be seen as prior linguistic knowledge, thus achieves good performance even when the training data is not sufficient.
contrasting
train_3501
The results clearly show that the thread-level features are important, providing consistent improvement for all our learning models.
the linear-chain models fail to exploit the sequential dependencies between nearby answer labels to improve the results significantly: although the labels from the neighboring answers can affect the label of the current answer, this dependency is too loose to have impact on the selection accuracy.
contrasting
train_3502
In the first case (Q u 1 ), the third comment is classified as good by models that only use basic features.
thanks to the thread-level features, the classifier can consider that there is a dialogue between u 1 and u 2 , causing all the comments to be assigned to the correct class: bad.
contrasting
train_3503
In the second example (Q u 4 ), the first two comments are classified as bad when using the basic features.
the third comment -written by the same user who asked Q u 4 -includes an acknowledgment.
contrasting
train_3504
Synthetic text provides a cleanroom environment for evaluating QA systems, and has spurred development of powerful neural architectures for complex reasoning.
the formulaic semantics underlying these synthetic texts allows for the construction of perfect rule-based question answering systems, and may not reflect the patterns of natural linguistic expression.
contrasting
train_3505
7 These three results are obtained from files at http://research.microsoft.com/en-us/ um/redmond/projects/mctest/results.html.
8 we inspected these question annotations and Table 3: Ablation study of feature types on the dev set.
contrasting
train_3506
In order to identify the correct candidate answer sentences, it is crucial to match the cardinal numbers and proper nouns with those occurred in the question.
many cardinal numbers and proper nouns are out of the vocabulary (OOV) of our word embeddings.
contrasting
train_3507
One is the full training set containing 1229 questions that are automatically labeled by matching answer keys' regular expressions.
2 the generated labels are noisy and sometimes erroneously mark unrelated sentences as the correct answers solely because those sentences contain answer keys.
contrasting
train_3508
It is notable that the methods based on deep learning perform more powerful than SVM and CRF, especially for complicate answers (e.g., Potential answers).
sVM and CRF using a large amount of features perform better for the answers that have obvious tendency (e.g., Good and Bad answers).
contrasting
train_3509
The main reason is that the distributed representation learnt from deep learning architecture is able to capture the semantic relationships between question and answer.
the feature-engineers in both SVM and CRF suffer from noisy information of CQA and the feature sparse problem for short questions and answers.
contrasting
train_3510
Baroni and Lenci (2008) is the only work we are aware of that addressed different property types, while utilizing a DM for property extraction.
their approach is simple, and includes defining the properties of a concept to be the 10 neighboring words of that concept in the DM space.
contrasting
train_3511
For example, we can append the special beginning-of-sentence symbol ⟨s⟩ and end-of-sentence symbol ⟨/s⟩ to all sentences to increase their lengths, allowing the relaxed hybrid trees to be constructed for certain sentence-semantics pairs with short sentences.
such an approach does not resolve the theoretical limitation of the model.
contrasting
train_3512
Just from reading the text, it is difficult to tell whether the speaker is asking an informational question or whether they are implying that they did not say that writing a dissertation was easy.
according to our observation, which forms the basis of this work, there are two cases in which rhetorical questions can be identified solely based on the text.
contrasting
train_3513
We should note also that our results assume a high level of consistency of the hand annotations from the original tagging of the Switchboard Corpus.
based on our observation and the strict guidelines followed by annotators as mentioned in Jurafsky et al.
contrasting
train_3514
Due to the reduced size of the evenly split dataset, performing a McNemar's test with Edwards' correction (Edwards 1948) does not allow us to reject the null hypothesis that the two experiments do not derive from the same distribution with 95% confidence (χ² = 1.49 giving a 2-tailed p value of 0.22).
over the whole skewed dataset, we find χ² = 30.74 giving a 2-tailed p < 0.00001 so we have reason to believe that with a larger evenly-split dataset integrating context-based features provides a quantifiable advantage.
contrasting
train_3515
Further, transfer learning is usually inferior to traditional supervised learning when the target domain already has good training data.
our target (or future) domain/task has good training data and we aim to further improve the learning using both the target domain training data and the knowledge gained in past learning.
contrasting
train_3516
Multi-task learning optimizes the learning of multiple related tasks at the same time (Caruana, 1997; Chen et al., 2011; Saha et al., 2011; Zhang et al., 2008).
these methods are not for sentiment analysis.
contrasting
train_3517
There is no explicit incongruity here: the only polar word is 'love'.
the clause 'I made a doggy bag out of it' has an implied sentiment that is incongruous with the polar word 'love'.
contrasting
train_3518
Note that the 'native polarity' need not be correct.
a tweet that is strongly positive on the surface is more likely to be sarcastic than a tweet that seems to be negative.
contrasting
train_3519
Previous researches on emotion analysis have mainly focused on emotion expressions in monolingual texts (Chen et al., 2010;Lee et al., 2013a).
in informal settings such as micro-blogs, emotions are often expressed by a mixture of different natural languages.
contrasting
train_3520
A typical approach is to collect a portion of historical reviews from each user to construct a shared training corpus .
this setting is problematic: it already exploits information from every user and does not reflect the reality that some (new) users might not exist when training the global model.
contrasting
train_3521
We inspected the learned weights in the adapted models in each user from LinAdapt, and found the words like waste, poor, and good share the same sentiment polarity as in the global model but different magnitudes; while words like money, instead, and return are almost neutral in global model, but vary across the personalized models.
words such as care, sex, evil, pure, and correct constantly carry the same sentiment across users.
contrasting
train_3522
We split the data in training (train) and evaluation (test) sets as indicated in Table 1.
the SMT system was trained on freely available… Figure 3: Source text with reordering constraint mark-up as well as code to pass tags, and its translation. Source: <zone> <x translation="ou1-P">x</x> <wall/> a big advantage <wall/> <x translation="/ou1">x</x> </zone> of the hostel is its placement. Translation: por otra parte <ou1-P>una gran ventaja</ou1> del hostal es su colocación.
contrasting
train_3523
These features enable SLU models to robustly handle unseen entities at test time.
these lists are often massive and very noisy.
contrasting
train_3524
This corpus provides recorded eye-tracking data, collected with a remote faceLAB system.
the evaluation presented by Engonopoulos et al.
contrasting
train_3525
The system with added phrasal deletion achieved the BLEU score of 60.46, while the standard model without phrasal deletion achieved the BLEU score of 59.87.
the baseline (BLEU score when the system does not perform any simplification on the original sentence) was 59.37, indicating that the systems often leave the original sentences unchanged.
contrasting
train_3526
Recently, there have been several attempts at addressing the TS task as a monolingual translation problem, translating from 'original' to 'simple' sentences.
they did not try to seek reasons for the success or the failure of their systems.
contrasting
train_3527
Hence, Gillick and Favre (2009) were right in their assumption that syntactic and semantic concepts would not lead to performance improvements, when restricting ourselves to this dataset.
when we change domain to the legal judgments or Wikipedia articles, using syntactic and semantic concepts leads to significant gains across all the ROUGE metrics.
contrasting
train_3528
We note that FrameNet 1.5 covers the legal domain quite well, which may explain why these concepts are particularly useful for the ECHR dataset.
labeled (LDEP) and unlabeled (UDEP) dependencies also significantly outperform the baseline.
contrasting
train_3529
This project provides a starting point for developing a treebank for resource-poor languages.
a mature parser requires a large treebank for training, and this is still extremely costly to create.
contrasting
train_3530
All we need to do is modify the learning objective function so that it includes the regularization part.
we don't want to regularize the part related to E en word since it will be very different between source and target language.
contrasting
train_3531
The error rate reduction is from 15.8% down to 6.5% for training data sizes from 1k to 15k tokens.
when we use all the training data, the supervised model is slightly better.
contrasting
train_3532
3 Parser Extensions We previously create the learning target by representing an AMR graph as a Span Graph, where each AMR concept is annotated with the text span of the word or the (contiguous) word sequence it is aligned to.
abstract concepts that are not aligned to any word or word sequence are simply ignored and are unreachable during training.
contrasting
train_3533
Neural networks have also been shown to be powerful generative models for language modelling (Bengio et al., 2003;Mikolov et al., 2010) and machine translation (Kalchbrenner and Blunsom, 2013;Devlin et al., 2014;Sutskever et al., 2014).
currently these models lack awareness of syntax, which limits their ability to include longer-distance dependencies even when potentially unbounded contexts are used.
contrasting
train_3534
The probability p(w|t, h) can be estimated similarly.
to reduce the computational cost of normalising over the entire vocabulary, we factorize the probability as P(w|t, h) = P(c|t, h) P(w|c, t, h), where c = c(w) is the unique class of word w. For each c, let Γ(c) be the set of words in that class.
contrasting
train_3535
When the size of the beam exceeds a set threshold, the lowest-scoring derivations are removed.
in an incremental generative model we need to compare derivations with the same number of words shifted, rather than transitions performed.
contrasting
train_3536
Boonkwan and Steedman (2011) train a parser that uses a semi-automatically constructed Combinatory Categorial Grammar (CCG, Steedman (2000)) lexicon for POS tags, while Bisk and Hockenmaier (2012;2013) show that CCG lexicons can be induced automatically if POS tags are used to identify nouns and verbs.
assuming clean POS tags is highly unrealistic for most scenarios in which one would wish to use an otherwise unsupervised parser.
contrasting
train_3537
Because RNNs make very few domain-specific assumptions, they have the potential to succeed at a wide variety of tasks with minimal feature engineering.
this flexibility also puts RNNs at a disadvantage compared to standard semantic parsers, which can generalize naturally by leveraging their built-in awareness of logical compositionality.
contrasting
train_3538
Of course, it also does not introduce additional information about compositionality or independence properties present in semantic parsing.
it does generate harder examples for the attention-based RNN, since the model must learn to attend to the correct parts of the now-longer input sequence.
contrasting
train_3539
On GEO and ATIS, the copying mechanism helps significantly: it improves test accuracy by 10.4 percentage points on GEO and 6.4 points on ATIS.
on OVERNIGHT, adding the copying mechanism actually makes our model perform slightly worse.
contrasting
train_3540
2 After finding the set Z of all consistent logical forms, we want to filter out spurious logical forms.
to do so, we observe that semantically correct logical forms should also give the correct denotation in worlds other than w; spurious logical forms will fail to produce the correct denotation on some other world.
contrasting
train_3541
The present model shares the same encoder with the sequence-to-sequence model described in Section 3.1 (essentially it learns to encode input q as vectors).
its decoder is fundamentally different as it generates logical forms in a top-down manner.
contrasting
train_3542
In other words, a sequence decoder is used to hierarchically generate the tree structure.
to the sequence decoder described in Section 3.1, the current hidden state does not only depend on its previous time step.
contrasting
train_3543
In fact, some trigger gazetteers have already been constructed by previous work such as (Yu et al., 2015).
manual construction of these triggers heavily rely upon labeled training data and high-quality patterns, which would be unavailable for a new language or a new slot type.
contrasting
train_3544
As far as we know, there are no results available for comparison.
the performance of Chinese SF is heavily influenced by the relatively low performance of name tagging since our method returns an empty result if it fails to find any query mention.
contrasting
train_3545
A large amount of non-parallel, domain-rich, topically-related comparable corpora naturally exist across LLs and HLs for breaking incidents, such as coordinated news streams (Wang et al., 2007) and code-switching social media (Voss et al., 2014;Barman et al., 2014).
without effective Machine Translation techniques, even just identifying such data in HLs is not a trivial task.
contrasting
train_3546
For example, using the textual clues above is not sufficient to find the Hausa equivalent "Majalisar Dinkin Duniya" for "United Nations", because their pronunciations are quite different.
figure 5 shows the images retrieved by "Majalisar Dinkin Duniya" and "United Nations" are very similar.
contrasting
train_3547
(...as the violent typhoon, which has been given the name, Haiyan, has swept through the island of Leyte and Samar.)"
• Retrieved English comparable document: "As Haiyan heads west toward Vietnam, the Red Cross is at the forefront of an international effort to provide food, water, shelter and other relief..." using face detection results we successfully remove it based on processing the retrieved images as shown in Figure 7.
contrasting
train_3548
This neural system provides even more accurate predictions than our improved phrase-based system.
inference is two orders of magnitude slower, which is problematic for an interactive setting.
contrasting
train_3549
The user query may be satisfied if the machine predicts the correct completion in its top-n output.
it is well-known that n-best lists are poor approximations of MT structured output spaces (Macherey et al., 2008;Gimpel et al., 2013).
contrasting
train_3550
a direct correspondence in the English source, but makes the sentence feel more natural in German.
nMT sometimes drops content words, as in Ex.
contrasting
train_3551
(2015) in image-to-caption translation to fix Φ = 1 for all source words, which means that we directly use the sum of previous alignment probabilities without normalization as coverage for each word, as done in (Cohn et al., 2016).
in machine translation, different types of source words may contribute differently to the generation of target sentence.
contrasting
train_3552
(2015), which encourages the model to pay equal attention to every part of the image (i.e., Φ_j = 1).
our empirical study shows that the combined objective consistently worsens the translation quality while slightly improves the alignment quality.
contrasting
train_3553
In (attentional) encoder-decoder architectures for neural machine translation (Sutskever et al., 2014;Bahdanau et al., 2015), the decoder is essentially an RNN language model that is also conditioned on source context, so the first rationale, adding a language model to compensate for the independence assumptions of the translation model, does not apply.
the data argument is still valid in NMT, and we expect monolingual data to be especially helpful if parallel data is sparse, or a poor fit for the translation task, for instance because of a domain mismatch.
contrasting
train_3554
By using phrases, PB models can capture local phenomena, such as word order, word deletion, and word insertion.
one of the significant weaknesses in conventional PB models is that only continuous phrases are used, so generalizations such as French ne .
contrasting
train_3555
Table 1: Comparison between our work and previous work in terms of three aspects: keeping continuous phrases (C), allowing discontinuous phrases (D), and input structures (S). (Koehn et al., 2003): C, sequence; (Galley and Manning, 2010): C, D, sequence and tree; this work: C, D, graph.
the expressiveness of these models is confined by hierarchical constraints of the grammars used (Galley and Manning, 2010) since these patterns still cover continuous spans of an input sentence.
contrasting
train_3556
Galley and Manning (2010) directly extract discontinuous phrases from input sequences.
without imposing additional restrictions on discontinuity, the amount of extracted rules can be very large and unreliable.
contrasting
train_3557
Another argument for discontinuous phrases is that they allow the decoder to use larger translation units which tend to produce better translations (Galley and Manning, 2010).
this argument was only verified on ZH-EN.
contrasting
train_3558
We find that both DTU and GBMT indeed tend to use larger translation units on ZH-EN.
more smaller translation units are used on DE-EN.
contrasting
train_3559
Typically, trees used in SMT are either phrasal structures (Galley et al., 2004;Marcu et al., 2006) or dependency structures Xiong et al., 2007;Xie et al., 2011;Li et al., 2014).
conventional treebased models only use linguistically well-formed phrases.
contrasting
train_3560
as evaluated by a human partner), the hypothesis selector will be updated with the success of its prediction.
if the action has never been encountered (i.e., the system has no knowledge about this verb and thus the corresponding space is empty) or the predicted action sequence is incorrect, the human partner will provide an action sequence A i that can correctly perform command v i in the current environment.
contrasting
train_3561
determined based on the consistency defined previously), node t (i) and a link t → t (i) are added to the space H. The node t (i) is also added to a temporary hypothesis container waiting to be further generalized.
some children hypotheses can be inconsistent with their parents.
contrasting
train_3562
As shown in Figure 6, all the four curves become steady after 8 learning instances are used.
while some verb frames have final SJIs of more than 0.55 (i.e.
contrasting
train_3563
Words are the basic input/output units in most of the NLP systems, and thus the ability to cover a large number of words is a key to building a robust NLP system.
considering that (i) the number of all words in a language including named entities is very large and that (ii) language itself is an evolving system (people create new words), this can be a challenging problem.
contrasting
train_3564
In particular, applying to machine translation task, (Luong et al., 2015) learns to point some words in source sentence and copy it to the target sentence, similarly to our method.
it does not use attention mechanism, and by having fixed sized soft-max output over the relative pointing range (e.g., -7, .
contrasting
train_3565
In question answering setting, (Hermann et al., 2015) have used placeholders on named entities in the context.
the placeholder id is directly predicted in the softmax output rather than predicting its location in the context.
contrasting
train_3566
Our hope was that the improvement would be larger for the entities data since the incidence of pointers was much greater.
it turns out this is not the case, and we suspect the main reason is anonymization of entities which removed datasparsity by converting all entities to integer-ids that are shared across all documents.
contrasting
train_3567
Table 1 shows that easy-first is more accurate than arc-standard.
it is also more computationally expensive.
contrasting
train_3568
Otherwise, it would entail that there is a tree reachable from C but unreachable from t(C), for any t. Therefore, we reformulate equation 1: (3) In the transition system, the grammar is left implicit: any reduction is allowed (even if the corresponding grammar rule has never been seen in the training corpus).
due to the introduction of temporary symbols during binarization, there are constraints to ensure that any derivation corresponds to a well-formed unbinarized tree.
contrasting
train_3569
conjecture that CDSMs might largely avoid problems handling adjectives with multiple senses because the matrices for adjectives implicitly incorporate contextual information.
they do draw a distinction between two ways in which the meaning of a term can vary.
contrasting
train_3570
Frequently these global lexical models create a different idiom token classifier for each phrase.
a number of papers on idiom type and token classification have pointed to a range of other features that could be useful for idiom token classification; including local syntactic and lexical patterns (Fazly et al., 2009) and cue words (Li and Sporleder, 2010a).
contrasting
train_3571
The decoder is essentially a neural language model conditioned on the input sentence representation h_i^N.
two RNNs are used (one for the sentence s_{i−1} and the other for the sentence s_{i+1}) with different parameters except the embedding matrix (E), and a new set of matrices (C_r, C_z and C) are introduced to condition the GRU on h_i^N.
contrasting
train_3572
Context dependence Both our method and the two datasets, VJ'05 and MC'07, assume that the compositionality score can be computed for each phrase with no contextual information.
in general, the compositionality level of a phrase depends on its contextual information.
contrasting
train_3573
The correlation score is 0.674 and that is, the two different corpora lead to reasonably consistent results, which indicates the robustness of our method.
the correlation score is still much lower than perfect correlation; in other words, there are disagreements between the outputs learned with the corpora.
contrasting
train_3574
Therefore, determining semantic or topical cohesion is important for metaphor detection.
even if a text is literal and cohesive, not all words within the text are semantically related.
contrasting
train_3575
Our topical and emotion and cognition context features are general across target words.
the specific features that are informative for metaphor identification may depend on the target word.
contrasting
train_3576
The topical features added to the baseline led to a significant improvement in accuracy, while emotion and cognition features only slightly improved the accuracy without statistical significance.
the combination of these emotion and cognition features with topical features (in the last row of Table 2) leads to improvement.
contrasting
train_3577
• To learn the sparse codes, we first train the "true" embeddings by word2vec 2 for both common words and rare words.
these true embeddings are slacked during our language modeling.
contrasting
train_3578
The main motivation for this approach is the assumption that single-word distributional representations cannot represent all senses of a word well (Huang et al., 2012).
li and Jurafsky (2015) show that simply increasing the dimension… Figure 5: This grammar generates nouns (x_i^{n.}) and adjectives (x_i^{a.}) with masculine (x_i^{.m}) and feminine (x_i^{.f}) gender as well as paradigm features u_i.
contrasting
train_3579
There are test cases in analogy that hypothetically evaluate specific facets like gender of words, as in king-man+woman=queen.
it does not consider the impact of other facets and assumes the only difference of "king" and "queen" is gender.
contrasting
train_3580
Our results confirm our previous observation that a classification by looking at subspaces is needed to answer this question.
based on full-space similarity, one can infer little about the quality of embeddings.
contrasting
train_3581
Based on our results, SSKIP and CWIN embeddings contain more accurate and consistent information because MLP classifier gives better results for them.
if we considered 1NN for comparison, SKIP and CBOW would be superior.
contrasting
train_3582
The smoothness properties could be much more complicated.
even if this was the case, then much of the general framework of what we have presented in this paper would still apply; e.g., the criterion that a particular facet be fully and correctly represented is as important as before.
contrasting
train_3583
The improvements, however, are not statistically significant.
a too conservative pair selection criterion with higher threshold values significantly deteriorates the overall performance of HYBWE with HFQ+HYB+SYM.
contrasting
train_3584
", "gun" is identified as an Instrument for "battle" event based on the AMR relation :instrument.
dependency parsing identifies "gun" as a Table 7: Impact of semantic information and representations on typing for ERE data.
contrasting
train_3585
Then the probability of the sequence is computed as ∏_{t=1}^{k} ∏_{c∈c(w_t)} p(c|w_t, θ).
to skip-gram, CBOW (Mikolov et al., 2013a) uses context to predict each token, i.e.
contrasting
train_3586
(2010) proposed a spectral feature alignment (SFA) algorithm to align the domain-specific words from different domains in order to reduce the gap between source and target domains.
all of these methods transfer sentiment information from only one source domain.
contrasting
train_3587
The word "fast" is positive when used to describe CPU.
it is frequently used as a negative word in Battery domain.
contrasting
train_3588
opinion role induction (Wiegand and Ruppenhofer, 2015) and effect analysis (Choi and Wiebe, 2014).
our work is the first to organize various aspects of the connotative information into coherent frames.
contrasting
train_3589
Sentiment inference rules have been explored by the recent work of and .
we make a novel conceptual connection between inferred sentiments and frame semantics, organized as connotation frames, and present a unified model that integrates different aspects of the connotation frames.
contrasting
train_3590
The first kind of methods assume that some training data in the source domain are very useful for the target domain and these data can be used to train models for the target domain after re-weighting.
feature representation approaches attempt to develop an adaptive feature representation that is effective in reducing the difference between domains.
contrasting
train_3591
computational cost and lack of scalability to high-dimensional features.
these methods learn the unified domain-invariable feature representations by combining the source domain data and that of the target domain data together, which cannot well characterize the domain-specific features as well as the commonality of domains.
contrasting
train_3592
First, we apply social dynamics motivated by social science theories to entity-entity sentiment analysis in unstructured text.
most previous studies focused on social media or dialogue data with overt social network structure when integrating social dynamics (Tan et al., 2011;Hu et al., 2013;West et al., 2014).
contrasting
train_3593
Following previous works that use scores of local classifiers for uncertainty measurement (Sassano and Kurohashi, 2010; Flannery and Mori, 2015), we use Score(x, d*) to measure the uncertainty of x, assuming that the model is more uncertain about x if d* gets a smaller score.
we find that directly using Score(x, d*) always selects very short sentences due to the definition in Eq.
contrasting
train_3594
Our initial assumption is that the model is more uncertain about x if d* gets a smaller probability.
we find that directly using p(d * |x) would select very long sentences because the solution space grows exponentially with sentence length.
contrasting
train_3595
The reason may be that under PA, annotators can be more focused and therefore perform better on the few selected tasks.
some annotators may perform better under FA.
contrasting
train_3596
Second, we so far assume that the selected tasks are equally difficult and take the same amount of effort for human annotators.
it is more reasonable that human are good at resolving some ambiguities but bad at others.
contrasting
train_3597
If a similar question is found in the database of previous questions, then the corresponding best answer can be provided without any delay.
the major challenge associated with retrieval of similar questions is the lexicosyntactic gap between them.
contrasting
train_3598
Previous work represents the documents with averaged vectors of words (Tang et al., 2014;Tan et al., 2015).
this may lead to the loss of detailed information of the documents.
contrasting
train_3599
Most of the features are extracted under both of the settings.
feature 2 is much too computation-intensive and feature 7 needs POS-tagging as the preprocessing.
contrasting