Columns: id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes)
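The schema above can be represented programmatically when working with the rows below. The following is a minimal sketch, not part of the dataset itself: the class name `SentencePair` and the example construction are illustrative, and only the field names and length statistics come from the header.

```python
from dataclasses import dataclass

@dataclass
class SentencePair:
    """One row of the dataset: an id, two sentences, and a discourse label."""
    id: str          # e.g. "train_13700" (7-12 chars per the header stats)
    sentence1: str   # first sentence (6-1270 chars)
    sentence2: str   # second sentence (6-926 chars), marker removed, often lowercase
    label: str       # one of 4 classes; every row shown here is "contrasting"

# Illustrative row, copied from the first example below
row = SentencePair(
    id="train_13700",
    sentence1="In message boards, these types of Commissives are relatively rare.",
    sentence2="we found many statements where the main purpose was to confirm "
              "to the readers that the writer would perform some action in the future.",
    label="contrasting",
)
```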
train_13700
In message boards, these types of Commissives are relatively rare.
we found many statements where the main purpose was to confirm to the readers that the writer would perform some action in the future.
contrasting
train_13701
And in many cases, that is true.
we saw many posts where that inference would have been wrong.
contrasting
train_13702
The lexical and syntactic features dramatically improve performance on Commissives, increasing F score from 27% to 40%, and they produce a 2% recall gain for Representatives but with a corresponding loss of precision.
we observed that only a few of the lexical and syntactic features had much impact on performance.
contrasting
train_13703
From the first example, the CSD could even learn that the bag-of-word contains brother or that SUBJ=brother.
in the second example, the bag-of-word representation is not sufficient to learn that the local context of Newcastle is altered because it is the subset of the bag-of-word representation of Arsenal's non-altered local context.
contrasting
train_13704
Indicator selectors can even be derived from most classifiers which are based on feature weighting (like MaxEnt and AvgPerceptron) or feature ranking (like rule-based classifiers) as well.
indicator selection is not the focus of this work. For our experiments, a feature evaluation-based greedy algorithm was employed to select the set of indicators from the pool of token uni- and bigrams.
contrasting
train_13705
The improvement of the voting function is statistically significant at a 99.6% confidence level according to a Wilcoxon Matched-Pairs Signed-Ranks Test on the 10 folds of the testing set.
the improvement of the average function is not significant at the 0.05 level, which implies that average is inferior to voting.
contrasting
train_13706
A comprehensive survey is presented in (Rokach, 2009).
ensemble-based ranking has only recently attracted research interest (Hoi and Jin, 2008; Wei et al., 2010).
contrasting
train_13707
With respect to (iii), the above-mentioned studies use ad hoc thresholds to separate compositional and non-compositional phrases but do not offer a principled decision criterion.
we train a statistical classifier to learn a decision criterion.
contrasting
train_13708
In the latter case, the modifier(s) introduce(s) a non-compositional, unpredictable shift of meaning; hot shifts the meaning of dog from live animal to food.
the compositional meaning shift caused by small in small dog is transparent.
contrasting
train_13709
This shows the effectiveness of using the principled query expansion technique coupled with KL-divergence retrieval model to rank KB entries.
again we observe that the effects on the Nil and the non-Nil queries are different.
contrasting
train_13710
For entity disambiguation they used the contextual comparisons between the Wikipedia article and the KB article.
their work ignores the possibilities of acronyms in the entities.
contrasting
train_13711
Template construction is usually done manually by domain experts, and annotated documents are often created to facilitate supervised learning approaches to IE.
both manual template construction and data annotation are labor-intensive.
contrasting
train_13712
Such a complex task calls for a combination of multiple approaches, and much research indeed suggests "hybrid" approaches to MWE identification (Duan et al., 2009; Weller and Fritzinger, 2010; Ramisch et al., 2010; Hazelbeck and Saito, 2010).
we believe that Bayesian Networks provide an optimal architecture for expressing various pieces of knowledge aimed at MWE identification, for the following reasons (Heckerman, 1995): • compared to many other classification methods, BN can learn (and express) causal relationships between features.
contrasting
train_13713
The histogram for this candidate is thus (75, 15, 8, 2).
the non-MWE txwm mepv (domain-of law) "domain of the law", which is syntactically identical, occurs in nine different inflected forms, and its sorted histogram is (59, 14, 7, 7, 5, 2, 2, 2, 2).
contrasting
train_13714
Several techniques have been proposed to deal with these ambiguity cases (Tanaka and Umemura, 1994; Shirai and Yamamoto, 2001; Bond et al., 2001; Paik et al., 2004; Kaji et al., 2008; Shezaf and Rappoport, 2010).
each technique has different performance and properties, producing dictionaries with certain characteristics, such as different levels of coverage of entries and/or translations.
contrasting
train_13715
Sjöbergh (2005) compared full definitions in order to detect words corresponding to the same sense.
not all the dictionaries provide this kind of information.
contrasting
train_13716
IC depends on the structure of the source dictionaries.
DS depends on a good comparable corpus and translation process.
contrasting
train_13717
This improvement is more marked with the strictest thresholds (Top1, 0.1).
if global thresholds are used, performance starts to decline significantly when dealing with words whose frequency is above 1,000.
contrasting
train_13718
As for average precision, IC provides better results than DS if all entries are taken into account.
DS tips the scales in its favor if only entries with frequencies above 50 are considered and strict thresholds are used (Top1, 0.1).
contrasting
train_13719
Hierarchical rules whose lexical evidence helps resolve words locally will also be favored by our cohesion penalty feature.
ignorant of the syntactic structure, the glue rule penalty may penalize a reasonably cohesive derivation such as Derivation 5 and at the same time promote a less cohesive hierarchical translation, such as Derivation 6.
contrasting
train_13720
The baseline system translates dui (English to) as "of the" and misorders the sentence.
the feature-augmented model "bin-2" captures the boxed area as a whole and uses Rule 10 to perform the right global reordering.
contrasting
train_13721
In fact, it constrains reordering for the phrase-based model, as Cherry finds that the cohesion constraint is used "primarily to prevent distortion" and to provide "an intelligent estimate as to when source order must be respected" (Cherry, 2008).
since the hierarchical phrase-based model already conducts principled reordering search with rules through the more constrained chart-decoding, ill-formed derivations exhibit themselves more often as non-constituent translation than interrupted translation as defined in (Cherry, 2008; Bach et al., 2009a,b) (They do have a non-empty intersection, but neither subsumes the other).
contrasting
train_13722
(2005) proposed a discriminative language modeling approach that uses mixtures of POS and surface information and showed that it leads to a reduction in speech recognition word error rates.
their approach seems more suited for n-best list re-ranking and it is not clear whether those improvements carry over to machine translation.
contrasting
train_13723
Motivated by these works, we use a translation forest (Section 3) which contains both "reference" derivations that potentially yield the reference translation and neighboring "non-reference" derivations that fail to produce the reference translation.
the complexity of generating this translation forest is up to O(n^6), because we still need bi-parsing to create the reference derivations.
contrasting
train_13724
The motivation of using such a forest is efficiency.
since this space contains both "good" and "bad" translations, it still provides evidences for discriminative training.
contrasting
train_13725
We refer to these rules as added rules.
this may introduce rules with more than two variables and increase the complexity of bi-parsing.
contrasting
train_13726
The window size restrictions mean that some reference alignments are not reachable from the starting point.
this is unlikely to limit performance: an oracle aligner achieves 97.6% F-measure on the Arabic-English training set.
contrasting
train_13727
Again the resulting improvement over the ME-seq aligner is statistically significant.
here the improvement in recall is somewhat larger than the improvement in precision.
contrasting
train_13728
Given a test document, our approach imitates this procedure by first retrieving similar bilingual document pairs from the training parallel corpus, which has often been applied in IR-based adaptation of SMT systems (Zhao et al., 2004; Hildebrand et al., 2005; Lu et al., 2007), and then extracting bilingual phrase pairs from similar bilingual document pairs to store them in a static cache.
such a cache-based approach may introduce many noisy/unnecessary bilingual phrase pairs in both the static and dynamic caches.
contrasting
train_13729
(1992) and gave a detailed comparison and analysis of the "one translation per discourse" hypothesis.
she failed to propose an effective way to integrate document-level information into a SMT system.
contrasting
train_13730
Previous cache-based approaches mainly point to cache-based language modeling (Kuhn and Mori, 1990), which uses a large global language model to mix with a small local model estimated from recent history data.
applying such a language model in SMT is very difficult due to the risk of introducing extra noise (Raab, 2007).
contrasting
train_13731
We don't think the translation quality for example 4 in our system is worse than that of Moses.
the translation quality for example 3 in our system is very bad, especially with regard to reordering.
contrasting
train_13732
In this experiment, we only adopt the flat data in our cache.
the structured data may improve the correctness of matching and thus effectively avoid noise.
contrasting
train_13733
We make rather little use of the dual form of the problem.
the complementary slackness conditions that are necessary for optimality to hold play an important role in the next section in which we present a reformulation of the relaxed maximum entropy problem.
contrasting
train_13734
Traditional approaches for active learning query the human experts to obtain the labels for intelligently chosen data samples.
in text classification, where the input data is generally represented as document-word matrices, human supervision can be obtained on both documents and words.
contrasting
train_13735
This demonstrates that splitting a katakana noun compound is not at all a trivial task to resolve, even for the state-of-the-art word segmentation systems.
PROPOSED outperformed both JUMAN and MECAB in this task, meaning that our technique can successfully complement the weaknesses of the existing word segmentation systems.
contrasting
train_13736
This shows that the back-transliteration feature successfully reduced the number of out-of-vocabulary words.
we observed that the paraphrase and back-transliteration features were activated for 79.5% (1926/2423) and 15.5% (376/2423) of the word boundaries in our test data.
contrasting
train_13737
Similarly, we can discretize the punctuation variety features.
we only set one threshold, 30, for this value.
contrasting
train_13738
In spite of their simplicity, the document-based features can help the task.
when we combine statistics-based features with document-based features, we cannot get further improvement in terms of F-score.
contrasting
train_13739
In this section, we limit our discussion of logical metonymy to the verb-object case, its corresponding baseline for ranking interpretations, and our proposed enhancements.
similar baselines exist for other types of logical metonymy, such as adjective-noun and noun-noun.
contrasting
train_13740
Both metonymy data sets were limited to the verbs found in Lapata and Lascarides (2003), which are still quite common (attempt, begin, enjoy, expect, finish, prefer, start, survive, try, want).
the verbs used in our data set had a greater number of WordNet senses attested in a corpus than the SemEval data (an average of 4.4 senses for our data versus 3.0 senses for the SemEval data).
contrasting
train_13741
WordNet has been used in earlier studies (Hirst and St-Onge, 1998; Jiang and Conrath, 1997; Lin, 1998; Leacock and Chodorow, 1998; Resnik, 1995; Seco et al., 2004; Wu and Palmer, 1994) and is still a preferred knowledge source in recent works (Agirre et al., 2009).
its effectiveness may be hindered by its lack of coverage of specialized lexicons and domain specific concepts (Strube and Ponzetto, 2006;Zhang et al., 2010).
contrasting
train_13742
On one hand, WordNet is a lexical resource containing rich and strict semantic relations between words, but lacks coverage of specialized vocabularies.
Wikipedia is a semi-structured resource with good coverage of domains and named entities, but the semantic knowledge is organized in a looser way.
contrasting
train_13743
For example, the node piano has a density value of 15 under the node percussion instrument.
the density value of its hyponyms Grand piano, upright piano, and mechanical piano, is only 3.
contrasting
train_13744
Also note the resemblance to Mitchell and Lapata's best scoring vector composition model which, likewise, uses pointwise multiplication.
the model presented here has two advantages.
contrasting
train_13745
The number of clusters (K) and levels (L) were inferred automatically for HGFC as described in section 3.2.3.
to make the results comparable with previously published ones, we cut the resulting hierarchy at the level of closest match (12 clusters) to the K (13) in the gold-standard.
contrasting
train_13746
31.2.0 and 31.2.1 which both belong to 31.2 Admire verbs).
the remaining 8 clusters group together sub-classes (and their members) belonging to unrelated parent classes.
contrasting
train_13747
On one hand, previous work shows that there is a substantial lack of automatic methods for engineering lexical/syntactic features (or, more generally, syntactic/semantic similarity).
automatic feature engineering of syntactic or shallow semantic structures has been carried out by means of structural kernels, e.g.
contrasting
train_13748
Thus, it just gives an additional weight to the fragment and does not violate Mercer's conditions.
the multiplication by σ(n 1 , n 2 ) does depend on both comparing examples, i.e.
contrasting
train_13749
This is useful to measure similarity between lexicals belonging to the same grammatical category.
the conversion of dependency structures into computationally effective trees (for the above kernels) is not straightforward.
contrasting
train_13750
Section 2 has already described the kind of features generated by SK, STK and PTK.
it is interesting to analyze what happens when SPTK is applied.
contrasting
train_13751
all f occurrences will always have span less than x for x ≥ ).
for typical values of x (i.e.
contrasting
train_13752
In Nissim's (2006) feature set, there are a couple of features that capture NP-internal information, such as determiner, NP length, and NP type.
there is only one feature that captures the syntactic context of an NP, grammatical role, which is computed based on the parse tree in which the NP resides.
contrasting
train_13753
Comparing each of Baseline+Ana and Baseline+Lexical+Ana with the corresponding experiments in Table 3, we see that the addition of anaphoricity features yields a mild performance improvement, which is consistent over all three classes.
comparing the last column of the two tables, we can see that in the presence of the structured features, the anaphoricity features do not contribute positively to overall performance.
contrasting
train_13754
Synset replacement using a similarity metric shows an improvement over using words alone.
the improvement in classification accuracy is marginal compared to sense-based representation without synset replacement (Similarity Met-ric=NA).
contrasting
train_13755
Again, it is only 1% over the vanilla setting that uses combination of synset and words.
the similarity metric is not as sophisticated as LIN or LCH.
contrasting
train_13756
The assumption underlying our analysis is that a document contains description of only one topic.
reviews are generic in nature and tend to express contrasting sentiment about sub-topics.
contrasting
train_13757
Because Web 1T data is just n-gram statistics, rather than a collection of normal documents, it does not provide co-occurrence statistics of any random word pairs.
it provides a nice approximation to the particular co-occurrence statistics we are interested in, which are predicate-argument pairs.
contrasting
train_13758
At the beginning, sentiment lexicons were designed to include only those words that express sentiment, that is, subjective words.
in recent years, sentiment lexicons started expanding to include some of those words that simply associate with sentiment, even if those words are purely objective (e.g., Velikovich et al.
contrasting
train_13759
In turn this implies that the reranking model must not rerank 75% of the time and rerank the other 25% of the time, in some way contradicting the evidence provided by the baseline model score.
using our WRR strategy, we can tune the reranking model to maximize reranking effect and recover from reranking errors applying WRR.
contrasting
train_13760
The standard MELM with n-gram features suffers drastically as we sample more aggressively.
the binary n-gram MELM(Feat-I) does not appear to be hurt by aggressive subsampling, even when 99% of the negative examples are discarded.
contrasting
train_13761
In fact, the short-list approach in (Schwenk, 2007) and the adaptive importance sampling in (Bengio and Senecal, 2008) have exactly this intuition.
in the multi-class setup, subsampling like this has to be very careful.
contrasting
train_13762
During a perceptron update, an incorrect prediction, corresponding to the current best edge in the agenda, is penalized, and the corresponding gold edge is rewarded.
in our scenario it is not obvious what the corresponding gold edge should be, and there are many ways in which the gold edge could be defined.
contrasting
train_13763
The quality of the parse tree can reflect both the grammaticality of the surface string and the quality of the trained grammar model.
there is no direct way to automatically evaluate parse trees since output word choice and order can be different from the gold-standard.
contrasting
train_13764
The best lexical category accuracy of 77% is achieved when using a supertagger with a β level of 0.075, the level for which the least lexical category disambiguation is required.
compared to the 93% lexical category accuracy of a CCG parser (Clark and Curran, 2007), which also uses a β level of 0.075 for the majority of sentences, the accuracy of our grammaticality improvement system is much lower.
contrasting
train_13765
(2007), which maintains a queue of hypotheses during search, and performs learning to ensure that the highest scored hypothesis in the queue is correct.
in easy-first search, hypotheses from the queue are ranked by the score of their next action, rather than the hypothesis score.
contrasting
train_13766
(2006), who aimed at building a dialogue system for a situated agent giving instructions in a virtual 3D world.
their approach was concerned with choosing the type of reference to use (definite or indefinite, pronominal, bare or modified head noun), and not with the content of the reference; and their data set consisted of only 1242 referring expressions.
contrasting
train_13767
An early machine learning approach to content selection was presented by Jordan and Walker (2000); they were also interested in an exploration of the validity of different psycholinguistic models of reference production, including Grosz and Sidner's (1986) model of discourse structure, the conceptual pacts model of Clark and colleagues, and the intentional influences model developed by .
their data set consists of only 393 referring expressions, compared to our 16,358, and these expressions had functions other than identification; most importantly, the entities referred to were not part of a shared visual scene as is the case in our data.
contrasting
train_13768
The alignment approach would appear to be preferable on the grounds of computational cost: we would expect that retrieving a previously-used referring expression, or parts thereof, generally requires less computation than building a new referring expression from scratch.
if the context has changed in any way, then a previously-used form of reference may no longer be effective in identifying the intended referent, and recomputation may be required.
contrasting
train_13769
The smallest agreement lies at 3424 instances (68.2%) between TradREG (the least successful model) and Alignment+Ind (the most successful model).
they also each predict correct solutions that the other misses: 493 (10.0%) for TradREG and 1031 (20.8%) for Alignment+Ind.
contrasting
train_13770
First, we have demonstrated that a model using all these features to predict content patterns in subsequent references in shared visual scenes delivers an Accuracy of 58.8% and a DICE score of 0.81, outperforming models based only on features inspired by one of the two approaches.
we found that the features based on traditional REG considerations do not contribute as much to this score as those based on the alignment approach, and that dropping the traditional REG features does not significantly hurt the performance of a model based on alignment and theory-independent features.
contrasting
train_13771
Typically, POS tagging and dependency parsing are modeled in a pipelined way.
the pipelined method is prone to error propagation, especially for Chinese.
contrasting
train_13772
The second-order model of Carreras (2007) incorporates both sibling and grandparent parts, and needs O(n^4) parsing time.
the grandparent parts are restricted to those composed of outermost grandchildren.
contrasting
train_13773
Based on the above illustration, we can see that joint models of version 1 are more efficient with regard to the number of POS tags for each word, but fail to incorporate syntactic surrounding features and POS trigram features in the DP structures.
joint models of version 2 can incorporate both aforementioned feature sets, but have higher complexity.
contrasting
train_13774
From another perspective, the joint model is capable of preferring the right tag with the help of syntactic structures, which is impossible for the baseline sequential labeling model.
pairs like {NN, NR}, {VV, VA} and {NN, JJ} only slightly influence the syntactic structure when mis-tagged.
contrasting
train_13775
For patterns marked by ♡, the error rate of the joint model usually increases by large margin.
the proportion of these patterns is substantially decreased, since the joint model can better resolve these ambiguities with the help of syntactic knowledge.
contrasting
train_13776
He conducts primitive experiments on English Penn Treebank, and shows that parsing accuracy can be improved from 91.5% to 91.9%.
he finds that the model is unbearably time-consuming.
contrasting
train_13777
Each production in the optimum tree should satisfy this principle: the rule used in this production appears in the whole corpus as frequently as possible.
due to translation diversity and word alignment error, the real constituent tree of the target sentence may not be contained in the candidate projected constituents.
contrasting
train_13778
For example, we need to determine that the implicit determiner associated with biological products is universal, and hence, we have IMP ≫ Post.
the determiner "A" associated with general safety test is existential, and hence, we have Post ≫ A.
contrasting
train_13779
The most dramatic improvement is for determiners, and indeed, our features were designed for this case.
the performance gains are not very high for implicit determiners, and further investigation is needed.
contrasting
train_13780
Here, there is no direct method to evaluate the correctness of the translation.
indirect evaluations are possible, for example, by studying improvement in textual entailment tasks.
contrasting
train_13781
In practical applications where large PCFGs are empirically estimated from data sets, the standard conditions mentioned above for the polynomial time approximation of the partition function are usually met.
there are some degenerate cases for which these standard conditions do not hold, resulting in exponential time behaviour of the fixed-point iteration method.
contrasting
train_13782
This means that the running times in the last row of Table 1 can be reduced by treating C 3 differently from the other strongly connected components.
the running time for C 1 dominates the total time consumption.
contrasting
train_13783
In the work of Hall and Novák (2005) and of Attardi and Ciaramita (2007), D contains all nodes in the input parse tree.
one advantage of parse correction is its ability to focus on specific attachment types, so an additional criterion for choosing dependents is to look separately at those dependents that correspond to difficult attachment types.
contrasting
train_13784
Without any restriction on transition functions in T , these functions might have an infinite domain, and could thus encode even nonrecursively enumerable languages.
in standard practice for natural language parsing, transitions are always specified by some finite means.
contrasting
train_13785
Lateen strategies may seem conceptually related to co-training (Blum and Mitchell, 1998).
bootstrapping methods generally begin with some labeled data and gradually label the rest (discriminatively) as they grow more confident, but do not optimize an explicit objective function; EM, on the other hand, can be fully unsupervised, relabels all examples on each iteration (generatively), and guarantees not to hurt a well-defined objective, at every step.
contrasting
train_13786
Co-training classically relies on two views of the data: redundant feature sets that allow different algorithms to label examples for each other, yielding "probably approximately correct" (PAC)-style guarantees under certain (strong) assumptions.
lateen EM uses the same data, features, model and essentially the same algorithms, changing only their objective functions: it makes no assumptions, but guarantees not to harm the primary objective.
contrasting
train_13787
As in experiment #3 ( §4.1), we modified the base system in exactly one way: we swapped out gold part-of-speech tags and replaced them with a flat distributional similarity clustering.
to simpler models, which suffer multi-point drops in accuracy from switching to unsupervised tags (e.g., 2.6%), our new system's performance degrades only slightly, by 0.2% (see Tables 4 and 5).
contrasting
train_13788
One of the top performing models of spelling correction (Bergsma et al., 2010) is based on web-scale n-gram counts, which reflect both syntax and meaning.
even with a large-scale n-gram corpus, data sparsity can hurt performance in two ways.
contrasting
train_13789
This may only indicate that people who misrepresent their gender are simply consistent across different aspects of their online presence.
the effort involved in maintaining this deception in two different places suggests that the blog labels on the Twitter data are largely reliable.
contrasting
train_13790
Previous work focused on aggregation of sentiment from all users.
in this work we show that it is beneficial to distinguish expert users from non-experts.
contrasting
train_13791
As with the joint all model, test tweets are ranked according to the SVM's score.
the model considers only the tweets of expert users in the test set.
contrasting
train_13792
Most traditional summarization methods treat their outputs as static and plain texts, which fail to capture user interests during summarization because the generated summaries are the same for different users.
users have individual preferences on a particular source document collection and obviously a universal summary for all users might not always be satisfactory.
contrasting
train_13793
in (2003) pre-define several topic concepts, assuming users will foresee their topics of interest, and then generate the topic-biased summary.
such an assumption is not quite reasonable because user interests may not be forecasted or pre-defined accurately, as we have explained in the last section.
contrasting
train_13794
have proposed a summarization biased to neighboring reading context through anchor texts.
such a scenario does not apply to contexts without human-edited anchor texts like the Wikipedia articles they have used.
contrasting
train_13795
As presented so far, the search performed in Step 3 is admissible (or exact) -the true shortest path is found.
the search space in MT can be quite large.
contrasting
train_13796
§4 for definition) can take word-order into account (Smolensky, 1990) or even some more complex syntactic relations, as described in Clark and Pulman (2007).
the dimensionality of sentence vectors produced in this manner differs for sentences of different length, barring all sentences from being compared in the same vector space, and growing exponentially with sentence length hence quickly becoming computationally intractable.
contrasting
train_13797
(2008) or Zettlemoyer and Collins (2005) which rely on unambiguous training data where every sentence is paired only with its meaning.
Chen, Kim and Mooney allow their training examples to exhibit the kind of uncertainty about sentence meanings human learners are likely to have to deal with, by allowing for sentences to be associated with a set of candidate meanings, and the correct meaning might not even be in this set.
contrasting
train_13798
By using several heuristics to define an effective portion of constituent trees, and training the classifiers using ACE relation sub-types (rather than on types), they achieved an impressive 75.8% F-measure.
as pointed out in (Nguyen et al., 2009), such heuristics are tuned on the target relation extraction task and might not be appropriate to compare against the automatic learning approaches.
contrasting
train_13799
Unlike ACE, this allows evaluators to measure performance without exhaustively annotating documents, allows for balance between rare and common relations, and implicitly measures coreference without requiring explicit annotation of answer keys for coreference.
because the evaluation only measures performance on the set of queries, many relation instances will be unscored.
contrasting