Dataset schema (one record per id below):
  id         string, 7 to 12 characters
  sentence1  string, 6 to 1.27k characters
  sentence2  string, 6 to 926 characters
  label      string, 4 classes
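For reference, here is a minimal sketch of how records with this schema could be loaded and filtered in Python. The file name train.jsonl and the JSON Lines layout are assumptions for illustration, not part of this dump.

    import json

    # Minimal sketch: iterate records of the (id, sentence1, sentence2, label)
    # schema, assuming the split was exported as JSON Lines, one record per line.
    def load_rows(path="train.jsonl"):
        with open(path, encoding="utf-8") as f:
            for line in f:
                row = json.loads(line)
                yield row["id"], row["sentence1"], row["sentence2"], row["label"]

    if __name__ == "__main__":
        # Count records carrying the "contrasting" label
        # (every record shown below does).
        n = sum(1 for _id, s1, s2, label in load_rows() if label == "contrasting")
        print(f"contrasting records: {n}")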
train_13200
The lexicalized model in Charniak's parser was first optimized for English and required sophisticated smoothing to deal with sparseness; however, the lexicalized model developed for Chinese works less well.
the PCFG-LA parser learns the latent annotations from the data, without any specification of what precisely should be modeled and how it should be modeled.
contrasting
train_13201
The addition of self-labeled data helps on the test set initially but it provides little gain when the labeled training data becomes relatively large.
the PCFG-LA grammar is able to model the training data with different granularities.
contrasting
train_13202
It is clear from the figure that the improvement in parsing accuracy from self-training is the result of better bracketing across all span lengths.
even though the automatically labeled training data provides more improvement than the additional treebank labeled data in terms of parsing accuracy, this data is less effective at improving tagging accuracy than the additional treebank labeled training data.
contrasting
train_13203
One way to deal with ambiguity is by applying distributional methods, usually requiring a large single-language corpus or, more frequently, parallel corpora.
such corpora are not readily available for many languages and domains.
contrasting
train_13204
We can see that while the scores for original WN terms are not perfect (7/10), single-language and cross-lingual concept extension achieve nearly the same scores.
the latter discovers many more new concept terms without reducing quality.
contrasting
train_13205
We can also see a decrease in precision when the algorithm is provided with 50% of the concept terms as input and has to discover the remaining 50%.
careful examination of the results shows that this decrease is due to discovery of additional correct terms not present in WordNet.
contrasting
train_13206
Among these, only 1 of them (#1) is a correct translation; the rest have similar or totally different meanings.
with the combined scores the faulty translations were eliminated and a new, correct, but previously average scoring translation (#2) was selected (Table 1).
contrasting
train_13207
During pre-evaluation type A and type B translations received a score of above 75%, while type C, type D and type E scored low (see §5.2 for details).
type F translations scored close to 80%, therefore from the six translation methods presented above we chose only three (type A, B and F) to construct the dictionary, while the remaining three methods (type C, D and E) are used only indirectly for type F selection.
contrasting
train_13208
It is relatively easy to automatically generate the translations of low-frequency keywords, because they tend to be less ambiguous.
the ambiguity of the high frequency words is much higher than their low-frequency counterparts, and as a result conventional methods fail to translate a considerable number of them.
contrasting
train_13209
On the contrary, the ambiguity of the high frequency words is much higher than their low-frequency counterparts, and as a result conventional methods fail to translate a considerable number of them.
this discrepancy is not reflected in the traditional recall evaluation, since each word has an equal weight, regardless of its frequency of use.
contrasting
train_13210
Nouns, adjectives and adverbs had a relatively high degree of accuracy.
verbs proved to be the most difficult POS to handle.
contrasting
train_13211
Spectral methods such as Latent Semantic Analysis have been commonly applied to the MLDC task.
current techniques strongly rely on the presence of common words between different languages.
contrasting
train_13212
This is because there are already enough links between multilingual documents, so we do not necessarily build more links through similarity propagation anymore.
even when there are already many links, our model with propagation still outperforms the model without propagation.
contrasting
train_13213
, Φ_L and αm is intractable, due to the summation over an exponential number of topic assignments for these held-out documents.
recently developed methods provide efficient, accurate estimates of this probability.
contrasting
train_13214
The above results suggest that the Web-based suggestions system performs at least as well as the Aspell system.
it must be highlighted that results on the test set with artificial errors do not guarantee similar performance on real user data.
contrasting
train_13215
Explicit word acquisition data is based on interviewing adults about the ages at which they acquired words during childhood, and so may be unreliable and difficult to obtain for a large representative group of people.
it is possible to reliably collect large quantities of readability data defined as pairs of documents and ages of intended audience.
contrasting
train_13216
This raises some practical difficulties with respect to the computational maximization of the likelihood and subsequent estimation of (2).
for long documents containing a large number of words, q_d(s, r) is approximately smooth, which motivates a maximum likelihood procedure using gradient descent on a smoothed version of q_d(s).
contrasting
train_13217
As a result, using a very simple metric (a line-initial '>' character) to identify reply lines achieves more than 95% accuracy.
this same simple metric applied to the Enron email data we annotated detects less than 10% of actual reply or forward lines.
contrasting
train_13218
Table 2 shows that average performance without n-grams (across two-, three-, and nine-zone tasks) for line-based classification drops by 4.67%.
fragment-based classification accuracy drops by less than half this amount, an average of 2.26%.
contrasting
train_13219
Relations extracted from Wikipedia are relatively clean.
reliable distributional similarity can be calculated using a large number of documents on the Web.
contrasting
train_13220
This Wikipedia-based approach can extract a large volume of hyponymy relations with high accuracy.
it is also true that this approach does not account for many words that usually appear in Web documents; this could be because of the unbalanced topics in Wikipedia or merely because of the incomplete coverage of articles on Wikipedia.
contrasting
train_13221
1997) and Bunrui-Goi-Hyo (1996) contain approximately 300,000 words and 96,000 words, respectively.
the extracted hyponymy relations contain approximately 1.2 million hyponyms and are undoubtedly much larger than the existing taxonomies.
contrasting
train_13222
Note that the extracted relations have a hierarchical structure because one hypernym of a certain word may also be the hyponym of another hypernym.
we observed that the hierarchy is, on average, extremely shallow.
contrasting
train_13223
The numbers of common words that are also included in the Wikipedia relation database are as follows: hypernyms, 28,015 (common hypernyms); hyponyms, 175,022 (common hyponyms). These common hypernyms become candidates for hypernyms for a target word.
the common hyponyms are used as clues for identifying appropriate hypernyms.
contrasting
train_13224
WordNet contains 20% of the Animal concepts and 51% of the People concepts learned by our algorithm, which confirms that many of these concepts were considered to be valuable taxonomic terms by the WordNet developers.
our human annotators judged 57% of the Animal and 84% of the People concepts to be correct, which suggests that our algorithm generates a substantial number of additional concepts that could be used to enrich taxonomic structure in WordNet.
contrasting
train_13225
This poses a bottleneck, requiring expertise in both machine learning and the application domain.
domain experts often express their knowledge through text; one direct expression is through text designed to aid human learning.
contrasting
train_13226
We envision an application scenario in which a designer manually specifies a few glosses for each predicate.
for the purposes of evaluation, it would be unprincipled for the experimenters to handcraft the ideal set of glosses.
contrasting
train_13227
Also, we retain twinless system mentions that are nonsingletons, as the resolver should be penalized for identifying spurious coreference relations.
we do not remove twinless mentions in the key partition, as we want to ensure that the resolver makes the correct (non-)coreference decisions for them.
contrasting
train_13228
Knowledge of noun phrase anaphoricity might be profitably exploited in coreference resolution to bypass the resolution of non-anaphoric noun phrases.
it is surprising to notice that recent attempts to incorporate automatically acquired anaphoricity information into coreference resolution have been somewhat disappointing.
contrasting
train_13229
Experiments show that the proposed method improves the performance by 2.9 and 1.6 to 67.3 and 67.2 in F1-measure on the MUC-6 and MUC-7 corpora, respectively, due to a much larger gain in precision compared with the loss in recall.
surprisingly, their experiments also show that eliminating non-anaphors using an anaphoricity determination module in advance harms the performance.
contrasting
train_13230
Experiments on the NWIRE, NPAPER and BNEWS domains of the ACE 2003 corpus show that this joint anaphoricity-coreference ILP formulation improves the F1-measure by 0.7-1.0 over the coreference-only ILP formulation.
their experiments assume true ACE mentions (i.e.
contrasting
train_13231
This algorithm has been shown to converge to a unique solution (Zhu and Ghahramani 2002), which can be obtained without iteration in theory, and the initialization of Y_U^0 (the unlabeled data) is not important since Y_U^0 does not affect its estimation.
proper initialization of Y_U^0 actually helps the algorithm converge more rapidly in practice.
contrasting
train_13232
A successful PCDC must accurately extract the relevant context for coreference.
the context relevance is not absolute.
contrasting
train_13233
The existence of such rules makes it possible for the disambiguation decisions to be made considering the local context.
the distribution of the PNMs in a corpus is rather random and the relevant coreference context is a dynamic variable depending on the diversity of the corpus, that is, on how many different persons with the same name share a similar context.
contrasting
train_13234
A typical case for this situation is when there is a person that is very often mentioned, and few other persons having few mentions; when the number of clusters is passed in the input, the clusters representing the persons who are rarely mentioned are wrongly enriched.
this situation can be avoided if there is a measure of how probable it is to have a certain number of different persons with the same name, each being mentioned very often in a newspaper.
contrasting
train_13235
Finding such relationships is computationally very hard.
the analysis carried out further shows that we can avoid making such computations in most of the cases.
contrasting
train_13236
The number of different persons is a parameter that cannot be known beforehand.
not all the names behave alike with respect to coreference.
contrasting
train_13237
Considering any two PNMs of the same name, the similarity of their two professional contexts guarantees the correct coreference.
two professional contexts are present in only 4% of the cases.
contrasting
train_13238
The chance that many different persons carry this name is high.
as both "Barack" and "Obama" are rare American first and last names respectively, almost surely many mentions of this name refer only to one person.
contrasting
train_13239
If we knew the distribution function of Y, let's call it F, we would simply determine ξ_i from equation 1, where P_i = Σ p_k, k ≤ i, and we would know that in each partition p_i the name perplexity is between ξ_{i-1} and ξ_i, with ξ_0 = 0.
we do not know F. Fortunately, we can estimate ξ_i.
contrasting
train_13240
Combining all these pairwise preferences to find the best global reordering is NP-hard.
we present a non-trivial O(n^3) algorithm, based on chart parsing, that at least finds the best reordering within a certain exponentially large neighborhood.
contrasting
train_13241
The algorithm is based on CKY parsing.
a novelty is that the grammar weights must themselves be computed by O(n^3) dynamic programming.
contrasting
train_13242
During training, we iterate the local search as described earlier.
for decoding, we only do a single step of local search, thus restricting reorderings to the ITG neighborhood of the original German.
contrasting
train_13243
While discriminative methods show superior alignment accuracy in benchmarks, generative methods are still widely used to produce word alignments for large sentence-aligned corpora.
neither generative nor discriminative alignment methods are reliable enough to yield high quality alignments for SMT, especially for distantly-related language pairs such as Chinese-English and Arabic-English.
contrasting
train_13244
According to Och's algorithm, the target phrase "China" breaks the alignment consistency and therefore is not a valid candidate.
this is not true for using the weighted matrix shown in Figure 2(c).
contrasting
train_13245
This way, each sentence pair may generate any number of potentially overlapping biphrases.
when defining the phrase-based sentence level translation model, phrase overlaps are explicitly disallowed: The source sentence is segmented into disjoint phrases, which are translated independently using conditional phrase-level translation models that have been estimated from extracted biphrase counts.
contrasting
train_13246
This means it is straightforward to use probabilistic criteria in learning the model parameters.
systems modelling P (y|x) directly are often plagued by the reference reachability problem.
contrasting
train_13247
The prediction strategy outlined in the previous section is simple and conceptually clean.
biphrase overlaps alone may not be enough to enforce fluent output, especially given that bilingual data is typically more scarce than monolingual data.
contrasting
train_13248
A partial explanation for the good relative performance could be that the challenge participants had only a week to train their models on the full version of GigaFrEn data, so they may not have had time to take full advantage of it.
many of the top ranked systems relied on external resources that were not available for us.
contrasting
train_13249
the maximum number of nodes and the maximum height of a fragment, to limit the number of possible fragments.
these heuristics are very subjective and hard to optimize.
contrasting
train_13250
It means the pruned forest is able to at least keep all the top n best trees.
because of the sharing nature of the packed forest, it may still contain a large number of additional trees.
contrasting
train_13251
These resources are valuable for human consumption and can also be exploited in order to learn computational resources (Medelyan et al., 2008; Weld et al., 2008; Zesch et al., 2008b; Zesch et al., 2008a).
it is possible to acquire useful resources and knowledge from aggregating behavioral patterns of large groups of people, even in the absence of a conscious effort.
contrasting
train_13252
(2005) presented a semantic inference framework which "augments" the text representation with only the right-hand-side of an applied rule, and in this respect is similar to ours.
in their work, both rule application and the semantics of the resulting "augmented" structure were not fully specified.
contrasting
train_13253
Both approaches train using a structured perceptron, as we do here.
these models represent a dramatic departure from the existing literature, while ours has clear analogs to the well-known noisy-channel paradigm, which allows for useful comparisons and insights into the advantages of discriminative training.
contrasting
train_13254
Given the character domain's lack of sparsity, and the large amount of available training data, we had expected the hybrid generative system to behave only as a strong baseline; instead, it matched the performance of the indicator system.
this is not unprecedented: discriminatively weighted generative models have been shown to outperform purely discriminative competitors in various NLP classification tasks (Raina et al., 2004;Toutanova, 2006), and remain the standard approach in statistical translation modeling (Och, 2003).
contrasting
train_13255
Significant efforts have been made in an attempt to learn a generic ranking model which can appropriately rank documents for all queries.
web users' query intentions are extremely heterogeneous, which makes it difficult for a generic ranking model to achieve best ranking results for all queries.
contrasting
train_13256
There are a few related works that apply multiple ranking models to different query categories.
none of them takes click-through information into consideration.
contrasting
train_13257
For example, in Table 5 (a), the NDCG@5 values (0.7822 and 0.7834) are very close to each other.
in Figure 3, we find that with the same amount of pairs, when we use 30,000 or fewer pairs, using dedicated click pairs alone is always better than using generic click pairs alone.
contrasting
train_13258
Although the correct translation can also be composed of two phrases, 海水 ⇒ and 淡化 ⇒ , its overall translation score cannot beat the incorrect one because the combined phrase translation probability of these two phrases is much smaller than P(·|海水 淡化).
if we intentionally remove the P(·|·) feature from the model, the preferred translation can be generated as shown in the result of −, because in this way the bad estimation of P(·|·) for this phrase is avoided.
contrasting
train_13259
Our method is similar to the work proposed by Hildebrand and Vogel (2008).
except for the language model and translation length, we only use intra-hypothesis n-gram agreement features as Hildebrand and Vogel did, and additional intra-hypothesis n-gram disagreement features as Li et al.
contrasting
train_13260
As each edge in the confusion network only has a single word, it is possible to produce inappropriate translations such as "He is like of apples".
we allow many-to-many mappings in the hypothesis alignment shown in Figure 2(b).
contrasting
train_13261
One backbone arc in a lattice can only span one backbone word.
all hypothesis words in an alignment unit must be packed into one hypothesis arc.
contrasting
train_13262
Part-of-speech tags can be easily obtained for unannotated data using off-the-shelf POS taggers or PCFG parsers.
the amount of information these tags typically provide is very limited; e.g., while it is helpful to know whether fly is a verb or a noun, knowing that you is a personal pronoun does not carry the information whether it is a subject or an object (given the Penn Tree Bank tagset), which would certainly help to predict the following word.
contrasting
train_13263
<unk>, effectively changing the vocabulary and thus making perplexity incomparable to models without these factors, without improving WER noticeably.
we do plan to use more overt factors in Machine Translation experiments where a language model faces a wider range of OOV phenomena, such as abbreviations, foreign words, numbers, dates, time, etc.
contrasting
train_13264
It would seem that rare lexical items are indeed crucial for SVM classification performance.
in Goldberg and Elhadad (2007), we suggested that the SVM learner is using the rare lexical features for singling out hard cases rather than for learning meaningful generalizations.
contrasting
train_13265
This was followed by Bikel (2004), who showed that bilexical information is used in only 1.49% of the decisions in Collins' Model-2 parser, and that removing this information results in "an exceedingly small drop in performance".
uni-lexical information was still considered crucial.
contrasting
train_13266
Here, the learning objective is to minimize: Interestingly, for the linear kernel, SVM-anchoring reduces to L2-SVM with C=1.
for the case of non-linear kernels, anchored and L2-SVM produce different results, as the anchoring is applied prior to the kernel expansion.
contrasting
train_13267
As we show in Sect.5.4, fine-tuning the C parameter reaches better accuracy than L1-SVM with C=1.
as this fine-tuning is computationally expensive, we first report the comparison L1-SVM/C=1 vs. anchored-SVM, which consistently reached the best results, and was the quickest to train.
contrasting
train_13268
Either of the results is state-of-the-art for this task.
even modest pruning (k = 2) hurts the soft-margin model significantly.
contrasting
train_13269
Unlike the NPchunking case, here feature pruning has a relatively large impact on the results even for the anchored models.
the anchored models are still far more robust than the soft-margin ones.
contrasting
train_13270
When moving outside of the canonical training corpus, the fully lexicalized model has no advantage over the heavily pruned one.
the pruned models seem to have a small advantage in most cases (though it is hard to tell if the differences are significant).
contrasting
train_13271
The two predicate argument relations thus took the same word as their common arguments, and therefore the two errors co-occurred.
one-way inductive relations also exist among errors.
contrasting
train_13272
From the sentence in the figure, we can obtain two errors for "Prepositional attachment" around prepositions "to" and "for."
each "Predicate type selection" pattern collects errors around a word whose predicate type is erroneous.
contrasting
train_13273
Certainly these templates or word lattices are more useful in such NLP applications as Q&A than simple entailment relations between verbs.
our contention is that entailment certainly holds for some verb pairs (like snore → sleep) by themselves, and that such pairs constitute the core of a future entailment rule database.
contrasting
train_13274
Then, if we can directly estimate is large enough.
we cannot estimate P(v_r|v_l) directly since it is unlikely that we will observe the verbs v_r and v_l at the same time.
contrasting
train_13275
Thirdly, Shen et al. (2008) deploy the dependency language model to augment the lexical language model probability between two head words but never seek a full dependency graph.
our approach integrates an incremental parsing capability that produces the partial dependency structures incrementally while decoding, and thus provides better guidance for the search of the decoder for more grammatical output.
contrasting
train_13276
In these cases, the systems may choose equivalent paraphrases.
the translations using syntactic structures are rather similar.
contrasting
train_13277
For example, if in such a method Hypothesis 3 is first aligned to the backbone, followed by Hypothesis 1, we are likely to arrive at the CN in Figure 2(b), in which the two instances of Jeep are aligned.
if Hypothesis 1 is aligned to the backbone first, we would still get the CN in Figure 2(a).
contrasting
train_13278
Then it is marked as a finished path.
sometimes the state may contain a few input words that have not been visited.
contrasting
train_13279
If resilient changes function and becomes a noun modifier, its modifiers must change category too. There is often a way to analyse around the need for type-changing operations in CCG.
these solutions tend to cause new difficulties, and the resulting category ambiguity is quite problematic (Hockenmaier and Steedman, 2002).
contrasting
train_13280
So far, we have focused on replacing the phrasestructure rules added to CCGbank, which are not part of the CCG linguistic theory.
the theory does include some type-changing rules, referred to as type-raising.
contrasting
train_13281
Since each of the combinators is hard-coded for speed, this was time-consuming and error prone.
we created a detailed set of regression tests for the new versions which greatly reduced our development time.
contrasting
train_13282
The derivs model uses features calculated over the derivations, while the hybrid model uses features calculated on the dependency structures.
unlike the deps model Clark and Curran (2007) describe, the hybrid model uses two sets of derivation-based constraints.
contrasting
train_13283
The lexicalised type-changing scheme we have proposed offers many opportunities for favourable analyses, because it allows form and function to be represented simultaneously.
we have limited our changes to replacing the existing CCGbank non-combinatory rules.
contrasting
train_13284
In this case we say feature c(s_{t-1}, s_t) = +, which encourages "reduce".
in Figure 3(b), the source span is still [saw .. Bill], but this time maps onto a much longer span on the Chinese side.
contrasting
train_13285
WordNet is just one of the several knowledge sources which have been utilized.
the WordNet-based features are not informative compared to other features such as the semantic neighbor feature.
contrasting
train_13286
(2006) used speech acts to capture the intentional focus of emails and discussion boards.
they assume that enough labeled data are available for developing speech act recognition models.
contrasting
train_13287
(2006)'s work, Ravi and Kim (2007) applied speech act classification to detect unanswered questions.
none of these studies have focused on the semisupervised speech act recognition problem and examined their methods across different genres.
contrasting
train_13288
(2005) presented semi-supervised learning to employ auxiliary unlabeled data in call classification, and is closely related to our work.
our approach uses the most discriminative subtree features, which is particularly attractive for reducing the model's size.
contrasting
train_13289
Previous work in speech act recognition used a large set of lexical features, e.g., bag-of-words, bigrams and trigrams (Stolcke et al., 2000; Cohen et al., 2004; Ang et al., 2005; Ravi and Kim, 2007).
these methods create a large number of lexical features that might not be necessary for speech act identification.
contrasting
train_13290
We can find the correct parts of speech of the composite characters of a word when it is an example word in the dictionary.
not all words are listed in the corpus.
contrasting
train_13291
The performance of opinion extraction boosts to an f-score of 0.80 and the performance of polarity detection to an f-score of 0.54.
the utilization of structure trios needs the parse trees of sentences as prior knowledge.
contrasting
train_13292
Virtual evidence (VE), first introduced by Pearl (1988), offers a principled and convenient way of incorporating external knowledge into Bayesian networks.
to standard evidence (also known as observed variables), VE expresses a prior belief over values of random variables.
contrasting
train_13293
We let P_l denote a prototype list associated with the label l. If x_t belongs to P_l, we should prefer y_t = l as opposed to other values.
to this end, for cases where x_t ∈ P_l, we set s_1 accordingly. If x_t is not a prototype, we will always have s_1(y_t, v_t, t) = 0 for all hypotheses of y_t.
contrasting
train_13294
If it is determined that x_t is not the start of a sentence, we set s_2 accordingly. It is easy to see that this would penalize state transitions within a sentence.
if x_t is a sentence start, we set s_2(y_{t-1}, y_t, v_t, t) = 0 for all possible (y_{t-1}, y_t) pairs.
contrasting
train_13295
Lexicalized PCFGs use the structural features on the lexical head of a phrasal node in a tree, and achieve significant improvements for parsing (Collins, 1997; Charniak, 1997; Collins, 1999; Charniak, 2000).
they suffer from the problem of fundamental sparseness of the lexical dependency information.
contrasting
train_13296
By splitting and merging alternately, this method can refine the grammars step by step to mitigate the overfitting risk to some extent.
this data-driven method cannot solve this problem completely, and we need to find other external information to improve it.
contrasting
train_13297
To our knowledge, traditional indexing consistency metrics have not yet been applied to collaboratively tagged data.
experiments on determining tagging quality do follow the same idea.
contrasting
train_13298
(2006) automatically suggest tags previously assigned to similar documents.
in Maui (as in Kea) this feature is just one component of the overall model.
contrasting
train_13299
Supervised methods for automatic verb classification have been extensively investigated (Merlo and Stevenson, 2001; Joanis et al., 2008).
their focus has been limited to a small subset of verb classes, and a limited number of monosemous verbs.
contrasting