id
stringlengths
7
12
sentence1
stringlengths
6
1.27k
sentence2
stringlengths
6
926
label
stringclasses
4 values
train_7900
", the BA is a negative response.
if we consider all answers, 7 Inquirer dictionary http://www.wjh.harvard.edu/~inquirer.
contrasting
train_7901
Our baseline uses a tight heuristic, requiring aligned words at phrase edges.
ayan and Dorr (2006a) showed that a loose heuristic, allowing unaligned words at the phrase edges, improved accuracy by 3.7 BLEU with some alignments, again with much less training data.
contrasting
train_7902
The situation worsens with discontiguous phrases.
with the loose heuristic, we see the opposite effect.
contrasting
train_7903
Increasing the maximum phrase length in standard phrase-based translation does not improve BLEU (Koehn et al., 2003;Zens and Ney, 2007).
this effect has not yet been evaluated in hierarchical phrase-based translation.
contrasting
train_7904
(2008) address this bottleneck with a promising approach based on parallel processing, showing reductions in real time that are linear in the number of CPUs.
they do not reduce the overall CPU time.
contrasting
train_7905
Strictly speaking, the effective projectivity for a particular edit should be computed based on the intermediate form upon which the edit operates, since the projectivity properties of this form can depend on preceding edits.
the Nat-Log system minimizes the need to compute projectivity in intermediate forms by reordering the edits in an alignment in such a way that effective projectivity can, in most cases, simply be taken from the projectivity marking of P and H performed during the linguistic analysis stage.
contrasting
train_7906
NatLog sanctions the deletion of a restrictive modifier and an appositive from the premise, and recognizes that deleting a negation generates a contradiction; thus it correctly answers no.
there are many RTE problems where NatLog's precision works against it.
contrasting
train_7907
Similarly to the PTB-CFG trees, the simplified HPSG trees do not include empty categories, co-indexing, and function-tags.
we cannot attain a PTB-CFG tree by simply mapping those atomic symbols to the corresponding PTB non-terminal symbols, because the analyses by the PTB-CFG and the HPSG yield different tree structures for the same sentence.
contrasting
train_7908
will call the direction of offset (here, respectively, past, past and future).
in other cases there is no explicit indication of the direction of offset from the temporal focus.
contrasting
train_7909
(4) Jones met with Defense Minister Paulo Portas on Tuesday and will meet Foreign Minister Antonio Martins da Cruz before leaving Portugal Wednesday.
this approach will not correctly interpret example (10): (10) Still a decision has to made on what, if any, punishment he will face in the wake of that incident Tuesday night.
contrasting
train_7910
As the parsing chart is filled in from the bottom up, each rule applied is essentially coming out of a special repair rule set, and so at the top of the tree the EDITED hypothesis is much more likely.
this requires that several fluent speech rules from the data set be modified for use in a special repair grammar, which not only reduces the amount of available training data, The right corner transform works in a different way, by building up constituent structure from left to right.
contrasting
train_7911
Extending this model to actual speech adds some complexity, since disfluency phenomena are difficult to detect in an audio signal.
there are also advantages in this extension, since the extra phonological variables and acoustic observations contain information that can be useful in the recognition of disfluency phenomena.
contrasting
train_7912
Assessing language knowledge and competence quantitatively is not a novel concept in second language learning assessment.
the application of data mining methods to automatically assess language proficiency in a discourse setting is novel.
contrasting
train_7913
In the STD group, we see that four feature sets, all, turns, wordBased and patient equal or surpass the baseline F-score.
to this, upon examination of the performance of the classifiers built over the BC dataset, we do not observe any improvements over the baseline and the results are markedly worse than those for the combined dataset.
contrasting
train_7914
The examiners' subjectivity of overall performance is minimised by the highly structured examination setup and well-defined assessment criteria.
as shown in Table 3, the communicative style of the SP is a contributing factor to the perception of successful clinical and communication skills.
contrasting
train_7915
Powell's method is not designed to reach a better local optimum than co-ordinate ascent, but does have convergence guarantees under certain idealized conditions.
we have observed informally that, in MERT, co-ordinate ascent always seems to converge relatively quickly, with Powell's method offering no clear advantage.
contrasting
train_7916
Without random restarts, this will guarantee convergence because the last set of feature weights selected will still be a local optimum.
2 if we go through the coordinate ascent procedure without finding a better set of feature weights, then we do not have to perform the last iteration of nbest decoding, because it will necessarily produce the same n-best lists as the previous iteration, as long as the decoder is deterministic.
contrasting
train_7917
This is due to the double matching problem: for example, pair "Tehran Times (Tehran)"/"Inter Press Service" (from INT) is scored more than 1.0 because "Tehran" matches "Inter" twice: even with a low score as a coefficient, "Inter" has a high IDF compared to "Press" and "Service", so counting it twice makes normalization wrong.
this problem may be solved by choosing a more adequate sub-measure: experiments show that using the CosTFIDF measure with bigrams or trigrams outperforms standard CosT-FIDF.
contrasting
train_7918
Although Model II reproduces the structural properties of PlaNet and PhoNet quite accurately, as we shall see shortly, it fails to generate inventories that closely match the real ones in terms of feature entropy.
at this point, recall that Model II assumes that the consonant nodes are unlabeled; therefore, the inventories that are produced as a result of the synthesis are composed of consonants, which unlike the real inventories, are not marked by their distinctive features.
contrasting
train_7919
and "A young boy and his mother were found dead on Wednesday evening.".
it also needs to detect complex cases like: "An ambulance rushed the soldier to hospital, but efforts to save him failed."
contrasting
train_7920
It seems intuitive that a naïve system that selects only sentences that contain terms with senses connected with death like "kill", "die" or "execute" as positive instances would catch many positive cases.
there are instances where this approach would fail.
contrasting
train_7921
The overall goal of the TDT Event Tracking task was to track the development of specific events over time.
these TDT tasks were somewhat restrictive in the sense that detection is carried out at document level.
contrasting
train_7922
As a result, the probability of a sentence is the product of the probabilities of its terms.
we calculate the probability that a given test sentence s k belongs to class c i as follows: this model will generally underestimate the probability of any unseen word in the sentence, that is terms that do not appear in the training data used to build the language model.
contrasting
train_7923
In general, the language modeling based tech-niques are not as effective as the SVM approach for this classification task.
from Table 4 we see that all language models achieve approx.
contrasting
train_7924
Thus, a term like "professor" that may only occur once in the dataset has the same likelihood of occurring in an "on-event" sentence as a term like "kill" that has a very high frequency in the dataset.
the Jelinek-Mercer and Absolute Discounting smoothing methods estimate the probability of unseen terms according to a background model built using the entire collection.
contrasting
train_7925
96%, 95%, 86% and 60% "off-event" F1 scores respectively across all event types.
figure 2 demonstrates that the performance of each approach for the "onevent" class varies considerably across the event types.
contrasting
train_7926
For instance, the trigger-based classification baseline out-performs all other approaches achieving over 60% F1 score for the "Meet", "Die" and "Charge-Indict" types.
for events like "Attack" and "Transport" this baselines F1 score drops to approx.
contrasting
train_7927
Event types where it achieves poor results are broader types like "Transport" and "Attack" that cover a larger spectrum of event instances from heterogeneous contexts and situations.
we see from Figure 2 that the SVM performs well on such event types and as a result out-performs the trigger-based selection process by approximately a factor of 4 for the "Attack" event and a factor of 2 for the "Transport" event.
contrasting
train_7928
We find that the F1 scores of the "off-event" class are not affected much.
the F1 scores for the "on-event" class for the SVM and trigger-based baseline are reduced by margins of approx.
contrasting
train_7929
"), these cases are in the minority.
the success of this baseline is somewhat dependent on the nature of the event in question.
contrasting
train_7930
This system resolves anaphoric pronouns by using heuristic rules and seven patterns for parallelism.
the sizes of the data sets used in their experiments were small.
contrasting
train_7931
Such methods were claimed to be comparable with traditional methods.
the problems caused by domain differences, which strongly affect a deep-semantics related task like pronoun resolution, have not yet been studied well enough.
contrasting
train_7932
In addition to these types of pronouns, the annotated corpora contain other types of pronominal anaphora, including "both," "one," numeric mentions (GE-NIA), and bound anaphora (ACE).
analysis statistics show that such pronouns occupy less than 5% of the total pronouns in the GENIA corpus, thus we have ignored them.
contrasting
train_7933
Some support for this view can be found in the results from the CoNLL shared tasks on dependency parsing in 2006 and 2007, where a variety of data-driven methods for dependency parsing have been applied with encouraging results to languages of great typological diversity (Buchholz and Marsi, 2006;Nivre et al., 2007a).
there are still important differences in parsing accuracy for different language types.
contrasting
train_7934
Comparing the use of raw word forms (LEX) and lemmas (LEM) as lexical features, we see a slight advantage for the latter, at least for labeled accuracy.
it must be remembered that the experiments are based on gold standard input annotation, which probably leads to an overestimation of the value of LEM features.
contrasting
train_7935
It is also worth mentioning that phantom tokens, i.e., empty tokens inserted for the analysis of certain elliptical constructions (see section 2), have a labeled precision of 82.4 and a labeled recall of 82.8 (89.2 and 89.6 unlabeled), which is very close to the average accuracy, despite being very infrequent.
it must be remembered that these tokens were given as part of the input in these experiments.
contrasting
train_7936
The feasibility of this approach was demonstrated by Gordon and Swanson (2007) for syntactically similar verbs.
their approach requires at least one annotated instance of each new predicate, limiting its practicability.
contrasting
train_7937
Therefore, it is necessary to consider syntactic features as well.
these vary substantially between verbs and nouns.
contrasting
train_7938
Example 3 shows that an "external" NP like Bill can be analysed as filling the HELPER role of the noun help.
the overall proportion of non-local roles is still fairly small in our data (around 10%).
contrasting
train_7939
Caution should be taken when interpreting the results for n < 3 since the annotator agreement for these was very low.
as shown in Figure 1, human preference for the JIR model was higher at n ≥ 3.
contrasting
train_7940
This dependency relationship offers a very condensed representation of the information needed to assess the relationship in the forms of the dependency tree (Culotta and Sorensen, 2004) or the shortest dependency path (Bunescu and Mooney, 2005) that includes both entities.
when the parse tree corresponding to the sentence is derived using derivation rules from the bottom to the top, the wordword dependencies extend upward, making a unique head child containing the head word for every non-terminal constituent.
contrasting
train_7941
It shows that their model can guess the POS for disyllabic words with a relatively good F-measure (83.60%).
the recall is not high for disyllabic (79.11%) and tri-syllabic (82.70%) words, and quite low for four-character (20.95%) and fivecharacter (0%) words.
contrasting
train_7942
Notice that the training data was formed by segmenting and tagging POS of each word in a dictionary using an existing tool like ICTCLAS.
these tools usually generate quite a few errors on the words, because they are designed to handle sentence but not word.
contrasting
train_7943
In fact, Fellbaum (1998) allows for more than one unique beginner per verb category.
cases where there is a large number of unique beginners in one category merit investigation.
contrasting
train_7944
Besides the sheer size of these data sets, the main attraction of user logs lies in the possibility to capitalize on users' input, either in form of user-generated query reformulations, or in form of user clicks on presented search results.
noisy, sparse, incomplete, and volatile these data may be, recent research has presented impressive results that are based on simply taking the majority vote of user clicks as a signal for the relevance of results.
contrasting
train_7945
SMT-based expansions such as henry viii restaurant portland, maine, or ladybug birthday ideas, or top ten restaurants, vancouver achieve a change in retrieval results that does not result in a query drift, but rather in improved retrieval results.
the terms introduced by the correlation-based system are either only vaguely related or noise.
contrasting
train_7946
Furthermore, each component of the SMT model takes great care to avoid sparse data problems by various sophisticated smoothing techniques.
the correlation-based model relies on pure counts of term frequencies.
contrasting
train_7947
If w i ∈ S 1 , then under the assumption that incomplete edges are extended from left-toright (see footnote 1), the incomplete edge should be discarded, because any completed edges that could result from extending that incomplete edge would have the same start position, i.e., the chart cell would be (i, k) for some k>i, which is closed to the completed edge.
if w i ∈ S 1 , then w j ∈ E 1 .
contrasting
train_7948
Informally, we define a projective DAG to be a DAG where all arcs can be drawn above the sentence (written sequentially in its original order) in a way such that no arcs cross and there are no covered roots (although a root is not a concept associated with DAGs, we borrow the term from trees to denote words with no heads in the sentence).
nonprojectivity is predictably more wide-spread in DAG representations, since there are at least as many arcs as in a tree representation, and often more, including arcs that represent non-local relationships.
contrasting
train_7949
Agent, Patient, Location and Instrument.
propBank and FrameNet use a mixture of role types: some are common amongst a number of frames; others are specific to particular frames.
contrasting
train_7950
Biology-specific extensions have been attempted both for PropBank (Wattarujeekrit et al., 2004) and FrameNet (Dolbey et al., 2006).
to our knowledge, there has been no such attempt at extending VerbNet into the biological domain.
contrasting
train_7951
(2008) uses frameindependent roles.
only a few semantic argument types are annotated.
contrasting
train_7952
Our event patterns are similar to information extraction rules used in conventional IE systems.
the goal of this paper is not event instance extraction but event (or semantic) frame extraction.
contrasting
train_7953
They use the case frames as selectional restriction for zero pronoun resolution, but do not utilize the frequency of each example of case slots.
since the frequency is shown to be a good clue for syntactic and case structure analysis (Kawahara and Kurohashi, 2006), we consider the frequency also can benefit zero pronoun detection.
contrasting
train_7954
Pronoun P (n jm |CF l , s j , A =1) is similar to P (n j |CF l , s j , A=1) and can be estimated approximately from case frames using the frequencies of case slot examples.
while A (s j ) = 1 means s j is not filled with input case component but filled with an entity as the result of zero anaphora resolution, case frames are constructed by extracting only the input case component.
contrasting
train_7955
Kawahara and Kurohashi's model achieved almost 50% as F-measure against newspaper articles.
as a result of our experiment against web documents, it achieved only about 20% as F-measure.
contrasting
train_7956
Although compiling such resources is labor intensive and achieving wide coverage is difficult, these resources to some extent explicitly capture semantic structures of concepts and words.
corpus statistics achieve wide coverage, but the semantic structure of a concept is only implicitly represented in the context.
contrasting
train_7957
Actually, same relations holding between entities often involve co-reference (where co-reference is broadly conceived to include relations such as part-whole listed above).
there are no morphosyntactic constraints on what targets may be.
contrasting
train_7958
For instance, Example 4 is characterized by mixtures of opinion types, polarities, and target relations.
the opinions are still unified in the intention to argue for a particular type of shape.
contrasting
train_7959
The opinion frames flesh out the discourse relations: we have lists specifically of positive sentiments toward related objects.
opinion-frame and discourse-relation schemes are not redundant.
contrasting
train_7960
Our results are encouraging -even using simple features to capture target relations achieves considerable improvement over the baselines.
there is much room for improvement.
contrasting
train_7961
Evidence from the surrounding context has been used previously to determine if the current sentence should be subjective/objective (Riloff et al., 2003;Pang and Lee, 2004) and adjacency pair information has been used to predict congressional votes (Thomas et al., 2006).
these methods do not explicitly model the relations between opinions.
contrasting
train_7962
Parallel text is used as training data with the alternative translations serving as sense labels.
disadvantages of this approach are that the alternative translations do not always correspond to the sense distinctions in the original language and parallel text is not always available.
contrasting
train_7963
For 11 terms training using the additional examples alone is more effective than using the NLM-WSD corpus.
there are several words for which the performance using the automatically acquired examples is considerably worse than using the NLM-WSD corpus.
contrasting
train_7964
We show that results achieved with existing resources that are not tailored towards word sense subjectivity classification can rival results achieved with supervision on a manually annotated training set.
results with different resources vary substantially and are dependent on the different definitions of subjectivity used in the establishment of the resources.
contrasting
train_7965
It is also true that the sentences (plot descriptions) in its "objective" data set relatively rarely contain opinions about the movie.
they still contain other opinionated content like opinions and emotions of the characters in the movie such as the obsession of a character with John Lennon in the beatles fan is a drama about Albert, a psychotic prisoner who is a devoted fan of John Lennon and the beatles.
contrasting
train_7966
The effectiveness of the resulting algorithms depends greatly on the generated training data, more specifically on the different definitions of subjectivity used in resource creation.
we were able to show that at least one of these methods (based on the SL word list) resulted in a classifier that performed on a par with a supervised classifier that used dedicated training data developed for this task (CV).
contrasting
train_7967
Compared with table 1 on page 4, table 5 and table 6 both indicate the predicted MP can help to label semantic roles.
there is an interesting phenomenon.
contrasting
train_7968
For testing, given a new test sequence x, we want to estimate the most probable label sequence (Best Label Path), y * , that maximizes our conditional model: In the CRF model, y * can be simply searched by using the Viterbi algorithm.
for latent conditional models like LDCRF, the Best Label Path y * cannot directly be produced by the Viterbi algorithm because of the incorporation of hidden states.
contrasting
train_7969
Most unsupervised rule learning algorithms focused on learning binary entailment rules.
using binary rules for inference is not enough.
contrasting
train_7970
Given a dependency parse tree, any sub-tree can be a candidate template, setting some of its nodes as variables (Sudo et al., 2003).
the number of possible templates is exponential in the size of the sentence.
contrasting
train_7971
They suggest that for certain error annotation tasks, such as preposition usage, it may not be appropriate to use only one rater and that if one uses multiple raters for error annotation, there is the possibility of creating an adjudicated set, or at least calculating the variability of the system's performance.
annotation with multiple raters has its own disadvantages as it is much more expensive and timeconsuming.
contrasting
train_7972
, x S }, which is a literal representation of the document.
it can also be defined in terms of its information content.
contrasting
train_7973
For example, birthday appears more often with joy, while war appears more often with fear.
the accuracy achieved by their method is not practical in applications assumed in this paper.
contrasting
train_7974
For example, in the case of There are a lot of enemies in Table 4, the "Polarity" is Correct because it represents a negative emotion.
its emotion class unpleasantness is judged Context-dep.
contrasting
train_7975
From these results, we speculate that, as far as the two-class sentiment polarity problem is concerned, word n-gram features might be sufficient if a very large set of labelled data are available.
table 8 indicates that the three-class problem is much harder than the twoclass problem.
contrasting
train_7976
(2007)'s method is that it does not require any manual supervion.
our models, which rely on emotion-provoking event instances, are also totally unsupervised -no supervision is required to collect emotion-provoking event instances.
contrasting
train_7977
Most of the corpora that are well-known in NLP communities are completely-annotated in general.
it is quite common that the available annotations are partial or ambiguous in practical applications.
contrasting
train_7978
This feature is suited to many NLP tasks that include correlations between elements in the output structure, such as the interrelation of part-of-speech (POS) tags in a sentence.
conventional CRF algorithms require fully annotated sentences.
contrasting
train_7979
A CRF that was trained using only the source domain corpus (A), CRF S , achieved F =96.84 in the source domain validation data (B).
it showed the need for the domain adaptation that this CRF S suffered severe performance degradation (F =92.3) on the target domain data.
contrasting
train_7980
In this experiment, we used 118 sentences in which some words (82 distinct words) are annotated with ambiguous POS tags, and these sentences are called the POS ambiguous sentences.
we call sentences in which the POS tags of these terms are uniquely annotated as the POS unique sentences.
contrasting
train_7981
It might seem that a lexicon, such as Word-Net (Fellbaum, 1998), contains all the information we need to handle these four tasks.
we prefer to take a corpus-based approach to semantics.
contrasting
train_7982
For the second two experiments, PairClass performs significantly above the baselines.
the strength of this approach is not its performance on any one task, but the range of tasks it can handle.
contrasting
train_7983
• The morphological processing in PairClass (Minnen et al., 2001) is more sophisticated than in Turney (2006).
we believe that the main contribution of this paper is not PairClass itself, but the extension of supervised word pair classification beyond the classification of noun-modifier pairs and semantic relations between nominals, to analogies, synonyms, antonyms, and associations.
contrasting
train_7984
As the marker-based classification only used overuse and the necessary degree of overuse is lower for Linguistic Profiling, the latter pays attention to many more features and should be able to attain a better classification rate.
if we want to determine which features are most powerful, interpreting the workings of Linguistic Profiling will be more difficult than for the marker-based approach.
contrasting
train_7985
The Dutch (or Flemish) parliamentarians announce in some way that they are nearing the end of their speech.
if we examine the original Dutch text, we observe a much more varied phrasing.
contrasting
train_7986
We have not examined POS classes, but only specific words.
we do see various adverbs in prominent positions, especially in Table 7, which indeed shows overuse.
contrasting
train_7987
In all, we extract talking points of the form is adj:noun for over 40,000 WordNet concepts, and talking points of the form verb:noun for over 50,000 concepts.
the real power of talking points emerges when they are connected to form a slipnet, as we discuss in section 4.
contrasting
train_7988
There are eleven questions for which the reference answer was ranked in the top 10 by the baseline system but it drops out of the top 10 by re-ranking.
there are 22 questions for which the reference answer enters the top 10 by re-ranking the answers, leading to an overall improvement in success@10.
contrasting
train_7989
68.1%, for a resulting F-measure of 63.1%.
adding the binding kernel (PK+STK) leads to a significant improvement in 17% precision for MUC-6 with a small gain (1%) in recall, whereas on the ACE data set, it also helps to increase the recall by 7%.
contrasting
train_7990
We can see that when λ∈(0.5,1), the RankFusion methods with high-quality clusters can outperform both the corresponding SingleRank and the corresponding CollabRank.
the performance improvements of RankFusion over CollabRank are not significant.
contrasting
train_7991
Due to the limitations in natural language processing technology, abstractive approaches are restricted to specific domains.
extractive approaches commonly select sentences that contain the most significant concepts in the documents.
contrasting
train_7992
However, only moderate results were reported.
dejong (1978) represented documents using predefined templates.
contrasting
train_7993
The trained SMT systems are suitable for translating texts in the same domain as the training corpus.
for some specific domains, it is difficult to obtain a bilingual corpus.
contrasting
train_7994
It has been shown that there is no faster way for solving the all-pairs shortest paths problem in general graphs.
the structure of wordnets is somewhat different to that of a general directed graph.
contrasting
train_7995
For example, (Gao et al., 2005) described an adaptive CWS system, and (Andrew, 2006) employed a conditional random field model for sequence segmentation.
these methods are not specifically developed for the MT application, and significant improvements in translation performance need to be shown.
contrasting
train_7996
In practice we do not re-normalize the probabilities and our model is thus deficient because it does not sum to 1 over valid observations.
we found the model work very good in our experiments.
contrasting
train_7997
Domain specific term and after these term succ sors of the terms, are likely to be either functional words or other general substantives cting terms successors can be considered as markers of terms, and are referred to as term delimiters in this paper.
to terms, delimiters are relatively stable and domain independent.
contrasting
train_7998
Thus, TF-IDF performs relatively well because of the high-quality domain corpus.
tF-IDF, as a statistics based algorithm suffers from similar problem as others based on statistics.
contrasting
train_7999
Thus, new terms are actually ranked higher than other terms in TV_ConSem which explains its higher ability to identify new terms in the low range of N TCList .
its performance drops in the high range of N TCList because the influence of context words diminishes in terms of percentage in the domain lexicon to distinguish terms from non-terms.
contrasting