Schema (one record per example):
id: string, length 7-12
sentence1: string, length 6-1.27k
sentence2: string, length 6-926
label: string, 4 classes
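The records below follow this schema, one (id, sentence1, sentence2, label) tuple per example. As a minimal sketch of how such a dump can be consumed (the file name contrasting_pairs.jsonl and the JSON Lines layout are assumptions for illustration, not part of the dump itself):

```python
import json

# Minimal sketch: iterate over (id, sentence1, sentence2, label) records.
# Assumes each line of contrasting_pairs.jsonl is one JSON object with the
# fields from the schema above (file name and layout are hypothetical).
with open("contrasting_pairs.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        if record["label"] == "contrasting":
            # Print the pair id and the start of each sentence.
            print(record["id"], record["sentence1"][:60], "|", record["sentence2"][:60])
```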
train_1000
A standard approach is to use the EM algorithm to optimize the empirical likelihood log p_θ(x).
EM only finds a local maximum, which we denote θ_EM, so there is a discrepancy between what we get (p_{θ_EM}) and what we want (p*).
contrasting
train_1001
We cannot capture these problematic higher-order dependencies in M .
we can bound D_µ(θ || θ) as follows.
contrasting
train_1002
In the baseline POS tagger, candidates in the beam are tagged sequences ending with the current word, which can be compared directly with each other.
for the joint problem, candidates in the beam are segmented and tagged sequences up to the current character, where the last word can be a complete word or a partial word.
contrasting
train_1003
One likely reason is that Shi and Wang (2007) included knowledge about special characters and semantic knowledge from web corpora (which may explain the higher baseline accuracy), while our system is completely data-driven.
the comparison is indirect because our partitions of the CTB corpus are different.
contrasting
train_1004
and beam search for efficient decoding.
the application of beam search was far from trivial because of the size of the combined search space.
contrasting
train_1005
Besides the usual character-based features, additional features dependent on POS's or words can also be employed to improve the performance.
as such features are generated dynamically during the decoding procedure, two limitations arise: on the one hand, the number of parameters increases rapidly, which is apt to overfit the training corpus; on the other hand, exact inference by dynamic programming is intractable because the current prediction relies on the results of prior predictions.
contrasting
train_1006
Additional features most widely used are related to word or POS ngrams.
such features are generated dynamically during the decoding procedure so that the feature space enlarges much more rapidly.
contrasting
train_1007
This is a substitute method to use both local and non-local features, and it would be especially useful when the training corpus is very large.
can the perceptron incorporate all the knowledge used in the outside-layer linear model?
contrasting
train_1008
Effective rule-based approaches can be designed for some languages such as Spanish.
kominek and Black (2006) show that in languages with a less transparent relationship between spelling and pronunciation, such as English, Dutch, or German, the number of letter-to-sound rules grows almost linearly with the lexicon size.
contrasting
train_1009
These approaches include metadata extraction (Cao et al., 2005), expert profile building (Craswell, 2001; Fu et al., 2007), data fusion (Macdonald and Ounis, 2006), query expansion (Macdonald and Ounis, 2007), hierarchical language models (Petkova and Croft, 2006), and formal model generation (Balog et al., 2006; Fang et al., 2006).
all of them conduct expert search with what we call a coarse-grained approach.
contrasting
train_1010
Macdonald and Ounis (2006) investigate the effectiveness of the voting approach and the associated data fusion techniques.
such models are conducted at the coarse-grained document level, as discussed before.
contrasting
train_1011
The construction integration model described before is already making use of syntactic patterns to some extent, through the use of a shallow parser to identify noun phrases.
that approach does not cover patterns other than noun phrases.
contrasting
train_1012
Unary rules of the form N i → N j can form cycles, leading to infinite unary chains with infinite mass.
it is standard in the parsing literature to transform grammars into a restricted class of CFGs so as to permit efficient parsing.
contrasting
train_1013
The formalism was initially defined for context-free grammars and later applied to other constituencybased formalisms, such as tree-adjoining grammars (Alonso et al., 1999).
since parsing schemata are defined as deduction systems over sets of constituency trees, they cannot be used to describe dependency parsers.
contrasting
train_1014
Therefore, it is possible to define a variant of parsing schemata, where these structures can be defined as items and the strategies used for combining them can be expressed as inference rules.
in order to define such a formalism we have to tackle some issues specific to dependency parsers: traditional parsing schemata are used to define grammar-based parsers, in which the parsing process is guided by some set of rules which are used to license deduction steps; for example, an Earley Predictor step is tied to a particular grammar rule, and can only be executed if such a rule exists.
contrasting
train_1015
Therefore, as items for constituency parsers are defined as sets of partial constituency trees, it is tempting to define items for dependency parsers as sets of partial dependency graphs.
predictive grammar-based algorithms such as those of Lombardo and Lesmo (1996) and Kahane et al.
contrasting
train_1016
Parsing schemata are not suitable for directly describing deterministic parsers, since they work at a high abstraction level where a set of operations are defined without imposing order constraints on them.
many deterministic parsers can be viewed as particular optimisations of more general, nondeterministic algorithms.
contrasting
train_1017
For example, X → ⟨jingtian X_1, X_1 this year⟩ seems to capture the use of jingtian/this year as a temporal modifier when building linguistic constituents such as noun phrases (the election this year) or verb phrases (voted in the primary this year).
it is important to observe that nothing in the Hiero framework actually requires nonterminal symbols to cover linguistically sensible constituents, and in practice they frequently do not.
contrasting
train_1018
In detail, the monolingual parallel corpus is fairly small, thus automatic word alignment tools like GIZA++ may not work well on it.
the monolingual comparable corpus is quite large, hence we cannot conduct the time-consuming syntactic parsing on it as we do on the monolingual parallel corpus.
contrasting
train_1019
In (Bayraktar et al., 1998) the WSJ PennTreebank corpus (Marcus et al., 1993) is analyzed and a very detailed list of syntactic patterns that correspond to different roles of commas is created.
they do not study the extraction of entailed relations as a function of the comma's interpretation.
contrasting
train_1020
Pattern-based relation extraction methods (e.g., (Davidov and Rappoport, 2008;Davidov et al., 2007;Banko et al., 2007;Pasca et al., 2006;Sekine, 2006)) could in theory be used to extract relations represented by commas.
the types of patterns used in web-scale lexical approaches currently constrain discovered patterns to relatively short spans of text, so will most likely fail on structures whose arguments cover large spans (for example, appositional clauses containing relative clauses).
contrasting
train_1021
Note that in theory, the third relation will not be valid: one example is 'The brothers, all honest men, testified at the trial', which does not entail 'all honest men testified at the trial'.
we encountered no examples of this kind in the corpus, and leave this refinement to future work.
contrasting
train_1022
This gives an average precision 0.85 and an average recall of 0.36 for identifying the comma type.
this baseline does not help in identifying relations.
contrasting
train_1023
According to the Relation metric, there is no difference between them.
there is a semantic difference between the two sentences -the ATTRIBUTE relation says that being 59 is an attribute of John Smith while the SUBSTITUTE relation says that John Smith is the number 59.
contrasting
train_1024
One standard is to adopt a strict logical definition of contradiction: sentences A and B are contradictory if there is no possible world in which A and B are both true.
for contradiction detection to be useful, a looser definition that more closely matches human intuitions is necessary; contradiction occurs when two sentences are extremely unlikely to be true simultaneously.
contrasting
train_1025
The second sentence posits a similar relationship that includes one of the entities involved in the original relationship as well as an entity that was not involved.
different outcomes result because a tunnel connects only two unique locations whereas more than one entity may purchase food.
contrasting
train_1026
For instance, we detect examples 5 and 6 in table 1.
creating features with sufficient precision is an issue for these types of contradictions.
contrasting
train_1027
Even on the web, the doublyanchored hyponym pattern eventually ran out of steam and could not produce more instances.
all of our experiments were conducted using just a single hyponym pattern.
contrasting
train_1028
Results reveal the expected increase in performance, especially in terms of recall.
these results cannot be directly compared with previous work on this subject, because of the different corpora used.
contrasting
train_1029
In this case, uncertainty does not necessarily imply subjectivity.
people sometimes explicitly indicate uncertainty to avoid being subjective.
contrasting
train_1030
In this study, unsupervised MFCC-based GMM classifiers are employed for pronunciation modeling.
english dialects differ in many ways other than pronunciation, like Word Selection and Grammar, which cannot be modeled using frame-based GMM acoustic information.
contrasting
train_1031
These are both serious challenges for data-driven methods and could be addressed with the integration of linguistic resources.
there is more work to be done on data-driven methods.
contrasting
train_1032
Similar to standard boosting, this bound shows that the training score can be improved exponentially in the number of iterations.
we found that the conditions under which this bound is applicable are rarely satisfied in our experiments.
contrasting
train_1033
These models typically view a sentence either as a bag of words (Foltz et al., 1998) or as a bag of entities associated with various syntactic roles .
a mention of an entity contains more information than just its head and syntactic role.
contrasting
train_1034
Their method of combination is quite different from ours; they use the system's judgements to define the "entities" whose repetitions the system measures.
we do not attempt to use any proposed coreference links; as point out, these links are often erroneous because the disordered input text is so dissimilar to the training data.
contrasting
train_1035
The b 3 scorer (Amit and Baldwin, 1998) was proposed to overcome several shortcomings of the MUC scorer.
coreference resolution is a clustering task, and many cluster scorers already exist.
contrasting
train_1036
When using the MUC scorer, the ILP system always did worse than the D&B-STYLE baseline.
this is precisely because the transitivity constraints tend to yield smaller clusters (which increase precision while decreasing recall).
contrasting
train_1037
Proper name transliteration is primarily handled by TRANSEX.
an OOV with a different spelling of an INV name can be handled by SPELLEX.
contrasting
train_1038
Our average baseline BLEU score went up from 42.60 to 45.00.
using the ALL combination, we still increase the scaled-up system's score to an average BLEU of 45.28 (0.61% relative).
contrasting
train_1039
The comparison of the SVM results to the results of previous work (Section 2) shows that our system achieves relatively high accuracy.
most previous systems researched ambiguous abbreviations in the English language, as well as different abbreviations and texts.
contrasting
train_1040
However, the purpose of this paper is to create a framework for accounting for cost in AL algorithms.
to the model presented by Ngai and Yarowsky (2000), which predicts monetary cost given time spent, this model estimates time spent from characteristics of a sentence.
contrasting
train_1041
Efficiency is maintained because such arbitrary disjunction is not needed to encode the most common forms of uncertainty, and thus the number of MDP states in the set can be kept small without losing accuracy.
allowing multiple MDP states provides the representational mechanism necessary to incorporate multiple speech recognition hypotheses into the belief state representation.
contrasting
train_1042
Lexical ambiguity resolution is an important research problem for the fields of information retrieval and machine translation (Sanderson, 2000;Chan et al., 2007).
making fine-grained sense distinctions for words with multiple closelyrelated meanings is a subjective task (Jorgenson, 1990;Palmer et al., 2005), which makes it difficult and error-prone.
contrasting
train_1043
Currently, the only indicators under consideration are "and" and "or".
more patterns can be included in the future.
contrasting
train_1044
The shared components in these groups are radicals of the characters, so we can find the characters of the same group in the same section in a Chinese dictionary.
information about radicals as they are defined by the lexicographers is not sufficient.
contrasting
train_1045
Certainly, more sophisticated personalization models and user clustering methods could be devised.
as we show next, even the simple models described above prove surprisingly effective.
contrasting
train_1046
Also note that the baseline ASP classifier is not able to achieve higher accuracy even for users with a large amount of past history.
the ASP Pers+Text classifier, trained only on the past question(s) of each user, achieves surprisingly good accuracy -often significantly outperforming the ASP and ASP Text classifiers.
contrasting
train_1047
While self-training has worked in several domains, the early results on self-training for parsing were negative (Steedman et al., 2003;Charniak, 1997).
more recent results have shown that it can indeed improve parser performance (Bacchiani et al., 2006;McClosky et al., 2006a;McClosky et al., 2006b).
contrasting
train_1048
The Brown corpus has an out-of-vocabulary rate of approximately 6% when given WSJ training as the lexicon.
the out-of-vocabulary rate of biomedical abstracts given the same lexicon is significantly higher at about 25% (Lease and Charniak, 2005).
contrasting
train_1049
The other parsers were not close.
several very good current parsers were not available when this paper was written (e.g., the Berkeley Parser (Petrov et al., 2006)).
contrasting
train_1050
Clegg and Shepherd (2005) do not provide separate precision and recall numbers.
we can see that the reranker (modified to use an in-domain tagger).
contrasting
train_1051
At 80.4, it is clearly the worst of the lot.
it is already better than the 80.2% best previous result for biomedical data.
contrasting
train_1052
In this paper, similarly to our previous approach, we design an SVM-based answer extractor, that selects the correct answers from those provided by a basic QA system by applying tree kernel technology.
we also provide: (i) a new kernel to process PASs based on the partial tree kernel algorithm (PAS-PTK), which is far more efficient and more accurate than the SSTK, and (ii) a new kernel called the Part of Speech sequence kernel (POSSK), which proves very accurate in representing shallow syntactic information in the learning algorithm.
contrasting
train_1053
MorphAll is the hardest of the three morphological tagging tasks, subsuming MorphPart and MorphPOS, and DiacFull is the hardest lexical task, subsuming DiacPart, which in turn subsumes LexChoice.
morphAll and DiacFull are (in general) orthogonal, since MorphAll has no lexemic component, while DiacFull does.
contrasting
train_1054
When designing a dialog manager for a spoken dialog system, we would ideally like to try different dialog management strategies on the actual user population that will be using the system, and select the one that works best.
users are typically unwilling to endure this kind of experimentation.
contrasting
train_1055
The usual method of building a user model is to estimate it from transcribed corpora of human-computer dialogs.
manually transcribing dialogs is expensive, and consequently these corpora are usually small and sparse.
contrasting
train_1056
All of them require a corpus whose NEs are annotated properly as training data.
it is difficult to obtain a sufficient corpus in the real world, because the number of NEs such as personal names and company names keeps increasing.
contrasting
train_1057
These are then used as input to the summarisation process.
modelling user needs is a difficult task.
contrasting
train_1058
This is similar to the elaborative goal of our summary in the sense that one could answer the question: "What else can I say about topic X (that hasn't already been mentioned in the reading context)".
whereas DUC focused on unlinked news wire text, we explore a different genre of text.
contrasting
train_1059
It is difficult to beat the first-5 baseline, which attains the best recall of 0.52 and a precision of 0.2, with all other strategies falling behind.
we believe that this may be due to the presence of some types of Wikipedia articles that are narrow in scope and centered on specific events.
contrasting
train_1060
The important idea in Kneser-Ney is to let the probability of a back-off n-gram be proportional to the number of unique words that precede it.
we do not need to use the absolute discount form for the estimates.
contrasting
train_1061
The hierarchical phrase-based model (Chiang, 2005) used hierarchical phrase pairs to strengthen the generalization ability of phrases and allow long distance reorderings.
the huge grammar table greatly increases computational complexity.
contrasting
train_1062
For this task, the BLEU score of the baseline is 30.45.
for the partial matching method with α=0.5, the BLEU score is 30.96, achieving an absolute improvement of 0.51.
contrasting
train_1063
The improvement on large-scale task is less than that on small-scale task since larger corpus relieves data sparseness.
the partial matching approach can also improve translation quality by using long phrases.
contrasting
train_1064
The results for the probabilistic models for inferring lexical semantic roles are shown in Table 3, where the term naive means that no WordNet features were included in the training of the models, but only simple features like noun rel for nouns.
when mode is complete, WordNet hypernyms up to the 5th level in the hierarchy were used.
contrasting
train_1065
Which type is more useful is likely to depend on the kind of application and information needs of the user, but this is essentially still an open question.
there is a complication.
contrasting
train_1066
In the news article domain, ROUGE scores have been shown to be generally highly correlated with human evaluation in content match (Lin, 2004).
there are many differences between written texts (e.g., news wire) and spoken documents, especially in the meeting domain, for example, the presence of disfluencies and multiple speakers, and the lack of structure in spontaneous utterances.
contrasting
train_1067
Inspired by automatic speech recognition (ASR) evaluation, (Hori et al., 2003) proposed the summarization accuracy metric (SumACCY) based on a word network created by merging manual summaries.
(Zhu and Penn, 2005) found a statistically significant difference between the ASR-inspired metrics and those taken from text summarization (e.g., RU, ROUGE) on a subset of the Switchboard data.
contrasting
train_1068
There have been efforts on the study of the impact of disfluencies on summarization techniques (Liu et al., 2007;Zhu and Penn, 2006) and human readability (Jones et al., 2003).
it is not clear whether disfluencies impact automatic evaluation of extractive meeting summarization.
contrasting
train_1069
Our analysis suggests that simple word frequency computations of these clusters and the documents alone can produce reasonable summaries.
the human selecting the relevant documents may have already influenced the way summaries can automatically be generated.
contrasting
train_1070
Both classifiers misclassify approximately 45% of the Noisy00 sentences.
the sentences misclassified by the E2prob classifier are those that are handled well by the E0 parser, and this is reflected in the parsing results for Noisy00.
contrasting
train_1071
Most previous research on speakers' intentions has focused on intention identification techniques.
intention prediction techniques have not been studied enough, although there are many practical needs, as shown in Figure 1.
contrasting
train_1072
Reithinger showed that his model can reduce the searching complexity of an ASR to 19~60%.
his model did not achieve good performance because the input features were not rich enough to predict the next speech acts.
contrasting
train_1073
This technique reduces the number of support vectors in each classifier (because each classifier was trained on only a portion of the data).
it relies on human intuition on the way the data should be split, and usually results in a degradation in performance relative to a single classifier trained on all the data points.
contrasting
train_1074
It is because of common features that the PKI reverse indexing method does not yield great improvements: if at least one of the features of the current instance is active in a support vector, this vector is taken into account in the sum calculation, and the common features are active in many support vectors.
the long tail of rare features is the reason the Kernel Expansion method requires so much space: every rare feature adds many possible feature pairs.
contrasting
train_1075
Our approach is similar to the PKE approach (Kudo and Matsumoto, 2003), which used a basket mining approach to prune many features from the expansion.
we use a simpler approach to choose which features to include in the expansion, and we also compensate for the features we did not include by the PKI method.
contrasting
train_1076
Similarly, the noun phrases the pattern of pigments and the bunch of leaves typically result in identical dependency parses.
the word pattern is considered the governor of pigments, whereas, conversely, the word leaves is treated as the governor of bunch because it carries more semantics.
contrasting
train_1077
While this algorithm works well with underspecified semantic representations in semantics, it is too slow for the larger discourse graphs, as we will see in Section 5.
we will now optimise it for the special case of constrained chains.
contrasting
train_1078
In fact, WordNet, with its broad coverage and easy accessibility, has become the resource of choice for WSD.
some have questioned whether WordNet's fine-grained sense distinctions are appropriate for the task (Ide & Wilks, 2007;Palmer et al., 2007).
contrasting
train_1079
The geometric mean of the frequencies of compound parts and the probability estimated from the language model usually attain a high recall, given they are based on unigram features which are easy to collect, but they have some weaknesses, as mentioned above.
while Mutual Information is a much more precise metric, it is less likely to have evidence about every single possible pair of compound parts from a corpus, so it suffers from low recall.
contrasting
train_1080
Thus it cannot capture the unbalanced information.
meta-learning is able to learn the imbalance automatically by training the meta-classifier on the development data.
contrasting
train_1081
keystroke savings = (keys_normal − keys_with_prediction) / keys_normal × 100%. A word prediction system that offers higher savings will benefit a user more in practice.
the equation for keystroke savings has two major deficiencies.
contrasting
train_1082
In our case, eliminating the top 1% of terms reduces the number of document pairs by several orders of magnitude.
the impact of this technique on effectiveness (e.g., in a query-by-example experiment) has not yet been characterized.
contrasting
train_1083
Traditional machine learning relies on the availability of a large amount of data to train a model, which is then applied to test data in the same feature space.
labeled data are often scarce and expensive to obtain.
contrasting
train_1084
In addition, most of them are designed for supervised learning.
in practice, we often face the problem where the labeled data are scarce in their own feature space, whereas there may be a large amount of labeled heterogeneous data in another feature space.
contrasting
train_1085
In principle, it should be possible to use existing sense-annotated data to explore this question: almost all sense annotation efforts have allowed annotators to assign multiple senses to a single occurrence, and the distribution of these sense labels should indicate whether annotators viewed the senses as disjoint or not.
the percentage of markables that received multiple sense labels in existing corpora is small, and it varies massively between corpora: In the SemCor corpus (Landes et al., 1998), only 0.3% of all markables received multiple sense labels.
contrasting
train_1086
In addition, there is a strong correlation between WSsim and Usim, which indicates that the potential bias introduced by the use of dictionary senses in WSsim is not too prominent.
we note that WSsim contained only a small set of 3 lemmas (30 sentences and 135 SPAIRs) in common with Usim, so more annotation is needed to be certain of this relationship.
contrasting
train_1087
Some previous studies have employed this idea to remedy the data sparseness problem in the training data (Gildea and Jurafsky, 2002).
we cannot apply this approach when multiple roles in Y f are contained in the same class.
contrasting
train_1088
Swier and Stevenson (2004) and Swier and Stevenson (2005) presented the first model that does not use an SRL annotated corpus.
they utilize the extensive verb lexicon Verb-Net, which lists the possible argument structures allowable for each verb, and supervised syntactic tools.
contrasting
train_1089
Notable examples include (Manning, 1993;Briscoe and Carroll, 1997;Korhonen, 2002) who all used statistical hypothesis testing to filter a parser's output for arguments, with the goal of compiling verb subcategorization lexicons.
these works differ from ours as they attempt to characterize the behavior of a verb type, by collecting statistics from various instances of that verb, and not to determine which are the arguments of specific verb instances.
contrasting
train_1090
Next, during labeling, the precise verb-specific roles for each word are determined.
to the approach in (Punyakanok et al., 2008), which tags constituents directly, we tag headwords and then associate them with a constituent, as in a previous CCG-based approach (Gildea and Hockenmaier, 2003).
contrasting
train_1091
In the sentence the company stopped using asbestos in 1956 (figure 7), the correct Arg1 of stopped is using asbestos.
because in 1956 is erroneously modifying the verb using rather than the verb stopped in the treebank parse, the system trusts the syntactic analysis and places Arg1 of stopped on using asbestos in 1956.
contrasting
train_1092
It is important to acquire additional labeled data for the target grammar parsing through exploitation of existing source treebanks since there is often a shortage of labeled data.
to our knowledge, there is no previous study on this issue.
contrasting
train_1093
A possible solution is to simply concatenate the two treebanks as training data.
this method may lead to a problem: if the size of C_PS is significantly less than that of the converted C_DS, the converted C_DS may weaken the effect C_PS might have.
contrasting
train_1094
The reason is straightforward: syntactic structure is too complicated to be properly translated, and the cost of translation cannot be afforded in many cases.
we empirically find this difficulty may be dramatically alleviated as dependencies rather than phrases are used for syntactic structure representation.
contrasting
train_1095
Among of existing works that we are aware of, we regard that the most similar one to ours is (Zeman and Resnik, 2008), who adapted a parser to a new language that is much poorer in linguistic resources than the source language.
there are two main differences between their work and ours.
contrasting
train_1096
In an early version of our approach, the former was implemented.
it proved to be quite inefficient in computation.
contrasting
train_1097
Table 5 shows the results achieved by other researchers and ours (UAS with p), which indicates that our parser outperforms all the others.
our result is only slightly better than that of (Chen et al., 2008), as only sentences whose length is less than 40 are considered.
contrasting
train_1098
Sister-head dependencies are useful in this case because of the flat structure of NEGRA's trees.
to the deeper approaches to parsing described above, topological field parsing identifies the major sections of a sentence in relation to the clausal main verb and subordinating heads, when present.
contrasting
train_1099
(Kübler et al., 2006; Dubey and Keller, 2003) in the case of German.
topological fields explain a higher level of structure pertaining to clause-level word order, and we hypothesize that lexicalization is unlikely to be helpful.
contrasting