id: string, lengths 7-12
sentence1: string, lengths 6-1.27k
sentence2: string, lengths 6-926
label: string, 4 classes
train_11600
Also, new information on POS correspondences and syntactic functions will be put in the dynamic resources.
if a user rejects a proposal this information is stored as negative data in the dynamic resources on all applicable levels.
contrasting
train_11601
The apparently high number of queries actually corresponds to a moderate size dataset, given that the space of parameters includes one parameter for each word-category combination.
assuming the SVM does not run out of memory, using the entire dataset for training and testing is extremely expensive.
contrasting
train_11602
As we have defined it, OP shows a strong similarity with Named Entity Recognition and Classification (NERC).
a major difference is that in NERC each occurrence of a recognized term has to be classified separately, while in OP it is the term itself, independently of the context in which it appears, that has to be classified.
contrasting
train_11603
Building a classic index at word level was not an option, since we have to search for syntactic structures, not words.
indexing syntactic relations (i.e.
contrasting
train_11604
(Fleischman, 2001), have similar accuracy.
the presented weakly supervised Class-Example approach requires as training data only a list of terms for each class under consideration.
contrasting
train_11605
Here, we propose an alignment procedure that explicitly models reordering of words in the hypotheses.
to existing approaches, the context of the whole document rather than a single sentence is considered in this iterative, unsupervised procedure, yielding a more reliable alignment.
contrasting
train_11606
The training corpus for alignment is created from a test corpus of N sentences (usually a few hundred) translated by all of the involved MT engines.
the effective size of the training corpus is larger than N , since all pairs of different hypotheses have to be aligned.
contrasting
train_11607
Most current coreference resolution systems for written text include some means for the detection of nonreferential it.
evaluation figures for this task are not always given.
contrasting
train_11608
In view of these results, it would be interesting to see similar annotation experiments on written texts.
a study of the types of confusions that occur showed that quite a few of the disagreements arise from confusions of sub-categories belonging to the same super-category, i.e.
contrasting
train_11609
To our knowledge, this task has not been tackled before.
the still fairly good results obtained by only using automatically determined features (P:71.9% / R:55.1% / F:62.4%) show that a practically usable filtering component for nonreferential it can be created even with rather simple means.
contrasting
train_11610
Remember that these two models have the same vocabulary and are both derived from the same GF interpretation grammar.
the flexibility of the SLM gives a relative improvement of 37% over the Nuance grammar.
contrasting
train_11611
This implies that the evaluation carried out is not strictly fair considering the possible task improvement.
a fair automatic evaluation of dialogue move error rate will be possible only when we have a way to do semantic decoding that is not entirely dependent on the GF grammar rules.
contrasting
train_11612
The SR dialogues received on average slightly higher scores for understandability (question 1), which can be explained by the shorter length of the system turns for that system.
the difference is not statistically significant (p = 0.97 using a two-tailed paired t-test).
contrasting
train_11613
When we get to PROJECTIVITY, the quadratic coefficient b is so small that the average running time is practically linear for the great majority of sentences.
the complexity is not much worse for the bounded degrees of non-projectivity (d ≤ 1, d ≤ 2).
contrasting
train_11614
Past work on tree-structured outputs has used constraints for the k-best scoring tree (McDonald et al., 2005b) or even all possible trees by using factored representations (Taskar et al., 2004; McDonald et al., 2005c).
we have found that a single margin constraint per example leads to much faster training with a negligible degradation in performance.
contrasting
train_11615
These approximations work because the freer-word-order languages we studied are still primarily projective, making the approximate starting point close to the goal parse.
we would like to investigate the benefits for parsing of more principled approaches to approximate learning and inference techniques such as the learning as search optimization framework of (Daumé and Marcu, 2005).
contrasting
train_11616
paired data, would therefore be achievable using standard GHA as follows (equations omitted); in them, c_a and c_b are the left and right singular vectors.
to be able to feed the algorithm with rows of the matrices MM^T and M^TM, we would need to have the entire training corpus available simultaneously, and square it, which we hoped to avoid.
contrasting
train_11617
Sense dominance may be determined by simple counting in sense-tagged data.
dominance varies with domain, and existing sensetagged data is largely insufficient.
contrasting
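The counting approach described in train_11617 (sense dominance as relative frequency in sense-tagged data) can be sketched as follows; the word, sense labels, and counts are invented for illustration:

```python
from collections import Counter

def sense_dominance(tagged_instances):
    """Relative frequency of each sense among the sense-tagged
    occurrences of one word (given as a list of sense labels)."""
    counts = Counter(tagged_instances)
    total = sum(counts.values())
    return {sense: n / total for sense, n in counts.items()}

# Hypothetical sense-tagged occurrences of "bank":
tags = ["bank/finance"] * 7 + ["bank/river"] * 3
dominance = sense_dominance(tags)
```

As the record notes, this only works where sense-tagged data exists, and the estimates shift with domain.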
train_11618
If α is close to 0.5, then even if the system correctly identifies the predominant sense, the naive disambiguation system cannot achieve accuracies much higher than 50%.
if α is close to 0 or 1, then the system may achieve accuracies close to 100%.
contrasting
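The accuracy bound discussed in train_11618 follows from a one-line rule: a naive system that always predicts the predominant of two senses is correct on a max(alpha, 1 - alpha) fraction of instances. A minimal sketch (function name ours):

```python
def naive_accuracy(alpha):
    """Accuracy of a naive disambiguator that always predicts the
    predominant of two senses, where alpha is the proportion of
    instances carrying sense 1."""
    return max(alpha, 1.0 - alpha)

near_even = naive_accuracy(0.5)   # balanced senses: no better than 50%
skewed = naive_accuracy(0.05)     # highly skewed: close to 100%
```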
train_11619
On the other hand, using the most significant sentence co-occurrences results in mostly semantic similarity (Curran, 2003).
whereas various context representations, similarity measures and clustering methods have already been compared against each other (Purandare, 2004), there is no evidence so far, whether the various window sizes or other parameters have influence on the type of ambiguity found, see also (Manning and Schütze, 1999, p. 259).
contrasting
train_11620
This threshold and all others to follow were chosen after experimenting with the algorithm.
as will be shown in section 4, the exact set-up of these numbers does not matter.
contrasting
train_11621
Unfortunately, the usefulness of their beam search solution is limited: potential alignments are constructed explicitly, which prevents a perfect search of alignment space and the use of algorithms like EM.
the cohesion constraint is based on a tree, which should make it amenable to dynamic programming solutions.
contrasting
train_11622
They concluded that methods like ITGs, which create a tree during alignment, perform better than methods with a fixed tree established before alignment begins.
the use of a fixed tree is not the only difference between (Yamada and Knight, 2001) and ITGs; the probability models are also very different.
contrasting
train_11623
According to the Gold Standard used for evaluation in the ACL 2005 shared task, this interpretation was correct, and therefore, for the example in Figure 3, the F-measure for the YAWA alignment was 100%.
Romanian is a pro-drop language and although the translation of the English pronoun is not lexicalized in Romanian, one could argue that the auxiliary "veți" should also be aligned to the pronoun "you", as it incorporates the grammatical information carried by the pronoun.
contrasting
train_11624
This is because the relative positions of the two words are the same and the POS-affinity of the English personal pronouns and the Romanian auxiliaries is significant.
the SVM-based combiner deleted this link, producing the result shown in Figure 3.
contrasting
train_11625
Viewed globally, both words are likely to belong to the long tail of the Zipf distribution, having almost indistinguishable logarithmic IDF.
in the encyclopedia entry describing the city, the city's name is likely to appear in many sentences, while the building name may appear only in the single sentence that refers to it, and thus the latter should be scored higher.
contrasting
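The contrast in train_11625 between global IDF and within-document salience can be illustrated by computing an IDF over the sentences of a single document; the toy document and words below are invented:

```python
import math

def sentence_idf(word, sentences):
    """IDF computed over the sentences of one document instead of a
    global corpus; assumes the word occurs somewhere in the document."""
    n = len(sentences)
    df = sum(1 for s in sentences if word in s)
    return math.log(n / df)

# Toy encyclopedia entry as bags of words (invented):
doc = [
    {"paris", "is", "the", "capital", "of", "france"},
    {"paris", "has", "many", "museums"},
    {"the", "louvre", "is", "in", "paris"},
]
city_score = sentence_idf("paris", doc)       # appears in every sentence
building_score = sentence_idf("louvre", doc)  # appears in one sentence
```

The city name, frequent within its own entry, scores low; the building name, confined to one sentence, scores higher, even though both may have near-identical global IDF.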
train_11626
The DRIU (1.3) indicates that V failed to identify U's intended object in utterance (1.1).
(1.3) does not explicitly mention the repair target, i.e., either book or shelf in this case.
contrasting
train_11627
If (1.3) is uttered when V is reaching for a book, it would be natural to consider that (1.3) is aimed at repairing V's interpretation of "the book".
if (1.3) is uttered when V is putting the book on a shelf, it would be natural to consider that (1.3) is aimed at repairing V's interpretation of "the shelf to the right".
contrasting
train_11628
In Traum's grounding model, the content of a DU is uniformly grounded.
things in the same DU should be more finely grounded at various levels individually.
contrasting
train_11629
Although Traum admitted these problems existed in his model, he retained it for the sake of simplicity.
such partial and mid-DU grounding is necessary to identify repair targets.
contrasting
train_11630
In brief, when level 3 evidence is presented by the follower and negative feedback (i.e., DRIUs) is not provided by the commander, only propositions supported by the evidence are considered to be grounded, even though the DU has not yet reached state F. In general, past work on discourse has targeted dialogue consisting only of utterances, or has considered actions as subsidiary elements.
this paper targets action control dialogue, where actions are considered to be primary elements of dialogue as well as utterances.
contrasting
train_11631
In this case, the repair target of (5.5) is "the left box", i.e., #Dst1.
the pronoun "that" cannot be resolved by anaphora resolution using only textual information.
contrasting
train_11632
There are two propositions concerned with #Dst1: destination(content(α)) = #Dst1 and referent(#Dst1) = Box1.
if destination(content(α)) = #Dst1 is not correct, this means that V grammatically misinterpreted (8.1).
contrasting
train_11633
In a dialogue where participants are paying attention to each other, the lack of negative feedback can be considered as positive evidence (see (9d)).
it is not clear how long the system needs to wait to consider the lack of negative feedback as positive evidence.
contrasting
train_11634
This action will present evidence for "who is the intended agent (#Agt)" at the beginning.
the evidence for "where is the intended position (#Dst)" will require the action to be completed.
contrasting
train_11635
However, the evidence for "where is the intended position (#Dst)" will require the action to be completed.
if the position intended by the follower is in a completely different direction from the one intended by the commander, his misunderstanding will be evident at a fairly early stage of the action.
contrasting
train_11636
Their model could also handle misunderstanding regarding domain level actions.
we think that their model using coherence to detect and resolve misunderstandings cannot handle DRIUs such as (8.5), since both possible repairs for #Obj1 and #Dst1 have the same degree of coherence in their model.
contrasting
train_11637
The semantic orientation classification of words has been pursued by several researchers (Hatzivassiloglou and McKeown, 1997; Turney and Littman, 2003; Kamps et al., 2004).
no computational model for semantically oriented phrases has been proposed to date, although research with a similar purpose has been reported.
contrasting
train_11638
With these models, the nouns (e.g., "risk" and "mortality") that become positive by reducing their degree or amount would make a cluster.
the adjectives or verbs (e.g., "reduce" and "decrease") that are related to reduction would also make a cluster.
contrasting
train_11639
To work with numerical scales of the rating variable (i.e., the difference between c = −1 and c = 1 should be larger than that of c = −1 and c = 0), Hofmann (2004) also used a Gaussian distribution for P (c|az) in collaborative filtering.
we do not employ a Gaussian, because in our dataset, the number of rating classes is only 3, which is so small that a Gaussian distribution cannot be a good approximation of the actual probability density function.
contrasting
train_11640
We test our procedure to assess Web-corpus randomness on corpora built using seeds chosen following different strategies.
the method per se can also be used to assess the randomness of corpora built in other ways; e.g., by crawling the Web.
contrasting
train_11641
We are also interested in evaluating the effect that different seed selection (or, more generally, corpus building) strategies have on the nature of the resulting Web corpus.
rather than performing a qualitative investigation, we develop a quantitative measure that could be used to evaluate and compare a large number of different corpus building methods, as it does not require manual intervention.
contrasting
train_11642
The bootstrap estimate of δ_i, called δ̂_i, is the mean of the B estimates on the individual datasets; bootstrap estimation can also be used to compute the standard error of δ_i (the original equations are omitted here). Instead of building one matrix of average distances over N trials, we could build N matrices and compute the variance from there rather than with bootstrap methods.
this second methodology produces noisier results.
contrasting
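The bootstrap procedure described in train_11642 (B resamples with replacement, the estimate as the mean of the per-resample statistics, the standard error as their spread) can be sketched as follows, with invented data:

```python
import random
import statistics

def bootstrap(sample, stat=statistics.mean, B=1000, seed=0):
    """Bootstrap a statistic: draw B resamples with replacement,
    compute the statistic on each, and return the mean of the B
    estimates together with their standard deviation (the bootstrap
    standard error)."""
    rng = random.Random(seed)
    n = len(sample)
    estimates = [stat([rng.choice(sample) for _ in range(n)])
                 for _ in range(B)]
    return statistics.mean(estimates), statistics.stdev(estimates)

est, se = bootstrap([2.0, 4.0, 4.0, 5.0, 7.0, 9.0])
```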
train_11643
Also note that, of the 24.3 pairs/seed output, 5.25 are listed in the French-Japanese Scientific Dictionary.
only 3.9 of those pairs are included in M'*.
contrasting
train_11644
For the WSJ testing set, the 2 billion word Web Corpus does not achieve the performance of the Gigaword (see Table 4).
the 10 billion word Web Corpus results approach that of the Gigaword.
contrasting
train_11645
The InvR values differ by a negligible 0.05 (out of a maximum of 5.92).
on a per-word basis one corpus can significantly outperform the other.
contrasting
train_11646
Therefore, the long jump distance between the sentences is five.
the best Levenshtein path contains one deletion edge, four identity and five consecutive substitution edges; the Levenshtein distance between the two sentences is six.
contrasting
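The word-level Levenshtein distance referred to in train_11646 can be computed with the standard dynamic program over substitution, insertion, and deletion edges; the example sentences are invented:

```python
def levenshtein(a, b):
    """Word-level Levenshtein distance: minimum number of
    substitutions, insertions, and deletions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (wa != wb)))    # substitution
        prev = cur
    return prev[-1]

hyp = "the cat sat on the mat".split()
ref = "a cat sat on that mat quietly".split()
dist = levenshtein(hyp, ref)  # two substitutions plus one insertion
```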
train_11647
However, this is counter-intuitive, as replacing a word with another one which has a similar meaning will rarely change the meaning of a sentence significantly.
replacing the same word with a completely different one probably will.
contrasting
train_11648
The same holds for different cases, numbers and genders of most nouns and adjectives.
it does not hold if verb prefixes are changed or removed.
contrasting
train_11649
On the Chinese-English task, the smoothed BLEU score has a higher sentence-level correlation than WER.
this is not the case for the Arabic-English task (Table 3: correlation (r) between human evaluation (adequacy + fluency) and automatic evaluation with BLEU, WER, and CDER; NIST 2004 evaluation, sentence level).
contrasting
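The sentence-level correlation reported in train_11649 is typically a Pearson r between metric scores and human judgments over the same sentences; a minimal sketch with invented scores:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length
    score lists (e.g. metric scores vs. human judgments)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented sentence-level scores:
human = [3.0, 4.5, 2.0, 5.0, 3.5]           # adequacy + fluency
metric = [0.21, 0.40, 0.15, 0.48, 0.30]     # e.g. smoothed BLEU
r = pearson_r(human, metric)
```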
train_11650
For instance a comma and a period may have different functionalities when tagging the dictionary.
when transformations are allowed to make reference to tokens, i.e., when lexicalized transformations are allowed, some relevant information may be lost because of sparsity.
contrasting
train_11651
(2004) report an error rate (Pk) of 0.25 on segmenting broadcast news stories using unsupervised lexical cohesion-based approaches.
topic segmentation of multiparty dialogue seems to be a considerably harder task.
contrasting
train_11652
(2003) have shown that a model integrating lexical and conversation-based features outperforms one based on solely lexical cohesion information.
the automatic segmentation models in prior work were developed for predicting toplevel topic segments.
contrasting
train_11653
This is suggested by the fact that absolute performance on subtopic prediction degrades when any of the interactional features are combined with the lexical cohesion features.
the interactional features slightly improve performance when predicting top-level segments.
contrasting
train_11654
predicting from ASR output Features extracted from ASR transcripts are distinct from those extracted from human transcripts in at least three ways: (1) incorrectly recognized words incur erroneous lexical cohesion features (LF), (2) incorrectly recognized words incur erroneous cue phrase features (CUE), and (3) the ASR system recognizes less overlapping speech (OVR).
to the finding that integrating conversational features with lexical cohesion features is useful for prediction from human transcripts, Table 3 shows that when operating on ASR output, neither adding interactional nor cue phrase features improves the performance of the model using only lexical cohesion features.
contrasting
train_11655
The intuition here is that if both of those parameters vary when moving from a corpus of 19 students to one of 20 students, then we can't assume that our policy is stable, and hence it is not reliable.
if these parameters converged as more data was added, this would indicate that the MDP is reliable.
contrasting
train_11656
Note that these features are meant to capture the same information in both the source and channel models of Knight and Marcu (2000).
here they are merely treated as evidence for the discriminative learner, which will set the weight of each feature relative to the other (possibly overlapping) features to optimize the model's accuracy on the observed data.
contrasting
train_11657
This may seem problematic since longer compressions might contribute more to the score (since they contain more bigrams) and thus be preferred.
in Section 3.2 we define a rich feature set, including features on words dropped from the compression, which will help disfavor compressions that drop very few words, since this is rarely seen in the training data.
contrasting
train_11658
For instance, dropping verbs is not that uncommon -a relative clause for instance may be dropped during compression.
dropping the main verb in the sentence is uncommon, since that verb and its arguments typically encode most of the information being conveyed.
contrasting
train_11659
These parsers have been trained out-of-domain on the Penn WSJ Treebank and as a result contain noise.
we are merely going to use them as an additional source of features.
contrasting
train_11660
It is not unique to use soft syntactic features in this way, as it has been done for many problems in language processing.
we stress this aspect of our model due to the history of compression systems using syntax to provide hard structural constraints on the output.
contrasting
train_11661
"VP→VBD NP PP PP ⇒ VP→VBD NP PP".
we cannot necessarily calculate this feature since the extent of the production might be well beyond the local context of the first-order feature factorization.
contrasting
train_11662
During system development, we found this measure to be effective because it was sensitive to the number of CFs mentioned in a given sentence as well as to the strength of the evaluation for each CF.
many sentences may have the same CF sum score (especially sentences which contain an evaluation for only one CF).
contrasting
train_11663
Finally, some users found the editing/viewing interface to be good despite the fact that several customers really disliked the viewfinder.
there were some negative evaluations.
contrasting
train_11664
The pCRU choices reflect frequency in the SUMTIME corpus: later (837 instances) and by late evening (327 instances) are more common than by midnight (184 instances).
forecast readers dislike this use of later (because later is used to mean something else in a different type of forecast), and also dislike variants of by evening, because they are unsure how to interpret them; this is why SUMTIME uses by midnight.
contrasting
train_11665
Traditionally, these principles have been defined via an interpretation of the Gricean maxims (Dale, 1989;Reiter, 1990;Dale and Reiter, 1995;van Deemter, 2002) 1 .
little attention has been paid to contextual or intentional influences on attribute selection (but cf.
contrasting
train_11666
spatial distance, colour, and shape) and then seeking to merge identical groups determined on the basis of these different qualities (see Thorisson (1994)).
the grouping strategy can still return groups which do not conform to human perceptual principles.
contrasting
train_11667
In Figure 1, for example, the pairs {e 1 , e 2 } and {e 5 , e 6 } could easily be consecutively ranked, since the distance between e 1 and e 2 is roughly equal to that between e 5 and e 6 .
they would not naturally be clustered together by a human observer, because grouping of objects also needs to take into account the position of the surrounding elements.
contrasting
train_11668
There was a significant main effect of domain type (F = 6.399, p = .01), while the main effect of algorithm was marginally significant (F = 3.542, p = .06).
there was a reliable type × algorithm interaction (F = 3.624, p = .05), confirming the finding that the agreement between target and human output differed between domain types.
contrasting
train_11669
Some idioms, such as by and large, contain syntactic violations; these are often completely fixed and hence can be listed in a lexicon as "words with spaces" (Sag et al., 2002).
among those idioms that are syntactically well-formed, some exhibit limited morphosyntactic flexibility, while others may be more syntactically flexible.
contrasting
train_11670
The main clause continuation is syntactically more likely.
there is a second, semantic clue provided by the high plausibility of deer being shot and the low plausibility of them shooting.
contrasting
train_11671
data, so we can assume consistency of the ratings.
in comparison to the McRae data set, the data is impoverished as it lacks ratings for plausible agents (in terms of the example in Table 1, this means there are no ratings for hunter).
contrasting
train_11672
Maximising the data likelihood during λ estimation does not approximate our final task well enough: The log likelihood of the test data is duly improved from −797.1 to −772.2 for the PropBank data and from −501.9 to −446.3 for the FrameNet data.
especially for the FrameNet training data, performance on the correlation task diminishes as data probability rises.
contrasting
train_11673
Such implementations do not make direct use of any recorded human motions; this means that they generate average behaviours from a range of people, but it is difficult to adapt them to reproduce the behaviour of an individual.
other ECA implementations have selected non-verbal behaviour based directly on motion-capture recordings of humans.
contrasting
train_11674
The findings from the corpus analysis generally agree with those of previous studies (e.g., the predicted pitch accent was correlated with nodding and eyebrow raises), and the corpus as it stands has proved useful for the task for which it was created.
to get a more definitive picture of the patterns in the corpus, it should be re-annotated by multiple coders, and the inter-annotator agreement should be assessed.
contrasting
train_11675
In (Johnston and Bangalore, 2005), we have shown that such grammars can be compiled into finite-state transducers, enabling effective processing of lattice input from speech and gesture recognition and mutual compensation for errors and ambiguities.
like other approaches based on handcrafted grammars, multimodal grammars can be brittle with respect to extra-grammatical, erroneous and disfluent input.
contrasting
train_11676
The aspectual marker is present on the verb byHbw in the LA example in Figure 1. lys Construction (LYS): In the MSA data, lys is interchangeably marked as a verb and as a particle.
in the LA data, lys occurs only as a particle.
contrasting
train_11677
They should capture more specific phenomena.
they are not always applicable as we never apply a decision tree when there is a time expression between any of the events involved.
contrasting
train_11678
This can produce a training overfit.
C4.5, to some extent, makes provision for this and prunes the decision trees.
contrasting
train_11679
Longer paths typically impose stricter constraints on the slot fillers.
they tend to have fewer occurrences, making them more prone to errors arising from data sparseness.
contrasting
train_11680
Finally, we note that our particular figures are specific to this dataset and the biological abstracts domain.
the annotation and analysis methodologies are general and are suggested as highly effective tools for further research.
contrasting
train_11681
Using (simplified) RMRS representations, this might amount to: (3) l:a:boil v(e), a:ARG1(k), a:ARG2(x), water(x) (4) l:a:boil v(e), a:ARG2(x), water(x) Such an approach was used for a time in the ERG with unaccusatives.
it turns out to be impossible to carry through consistently for causative alternations.
contrasting
train_11682
Navigli and Lapata don't report overall results and therefore, we can't directly compare our results with theirs.
we can see that on a PoS-basis evaluation our results are consistently better for nouns and verbs (especially the Ppr w2w method) and rather similar for adjectives.
contrasting
train_11683
Thus, it must respect the following properties (list omitted). As the objective function is linear with respect to X, and as the constraints that X must respect are linear equations, we can solve the clustering problem using an integer linear programming solver.
this problem is NP-hard.
contrasting
train_11684
Much previous work exists on NE recognition and classification.
most of it does not build an NE resource but exploits external gazetteers (Bunescu and Pasca, 2006), (Cucerzan, 2007).
contrasting
train_11685
From a methodological point of view, our proposal is also close to (Ehrmann and Jacquet, 2007), as the latter proposes a system for fine-grained NE annotation, which is also corpus dependent.
in the present paper we use all syntactic relations for measuring the similarity between NEs whereas in the previous mentioned work, only specific syntactic relations were exploited.
contrasting
train_11686
These approaches may be more appropriate for users who are MT researchers themselves.
our approach focuses on providing intuitive visualization of a variety of information sources for users who may not be MT-savvy.
contrasting
train_11687
Then, the clause is incorporated into the subtask tree.
for agent utterances, a dialog system starts planning an agent utterance by identifying the subtask to contribute to next, st_i^a, based on the subtask tree so far, as shown in Equation 3 (Table 1).
contrasting
train_11688
This method is more likely to mislabel tree-internal nodes than those immediately above the leaves.
the same non-terminals show up as error-prone for this method as for the others: out-of-domain, checkavailability, order-problem and summary.
contrasting
train_11689
A key element in these previous attempts at adapting LDA for WSD is the tendency to remain at a high-level, document-like setting.
we make use of much smaller units of text (a few sentences, rather than a full document), and create an individual model for each (ambiguous) word type.
contrasting
train_11690
In 70% of all items, the human judges chose the same string as the original author.
the remaining 30% of the time, the human judges picked an alternative as being the best.
contrasting
train_11691
On the one hand, although each of the 4 alternatives was chosen at least once from Table 4, there is a clear preference for one string (and this is also the original string from the TIGER Corpus).
there is no clear preference for any one of the alternatives in Table 5, and, in fact, the alternative that was selected most frequently by the participants is not the original string.
contrasting
train_11692
Such rules have been formalised and implemented for the 56 productive prefixes of Italian (Iacobini 2004) 1 , with their French translation equivalent.
finding the translation equivalent for each rule requires specific studies. (The prefixes are: a, ad, anti, arci, auto, co, contro, de, dis, ex, extra, in, inter, intra, iper, ipo, macro, maxi, mega, meta, micro, mini, multi, neo, non, oltre, onni, para, pluri, poli, post, pre, pro, retro, ri, s, semi, sopra, sotto, sovra, stra, sub, super, trans, ultra, vice, mono, uni, bi, di, tri, quasi, pseudo.)
contrasting
train_11693
As we stated, we chose two morphologically related languages on purpose: they present less divergences to deal with and allow concentrating on the method.
the proposed method (especially the contrastive knowledge acquisition) can clearly be ported to other pairs of languages (at least inflectional languages).
contrasting
train_11694
Similarly, the metrics proposed for text generation by (simple accuracy, generation accuracy) are based on string-edit distance from an ideal output.
the work of (Wan et al., 2005) and (Mutton et al., 2007) directly sets as a goal the assessment of sentence-level fluency, regardless of content.
contrasting
train_11695
We noticed that if constant relevance values are used, the top ranked queries will consist of a rather small set of top ranked n-grams that are paired with each other in all possible combinations.
it is likely that each time an n-gram is used in a query, the need for finding more occurrences of this particular n-gram decreases.
contrasting
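The discounting idea in train_11695 (each use of an n-gram in a query lowers its subsequent relevance, so later queries favour n-grams not yet searched for) can be sketched greedily; the n-grams, relevance values, and discount factor below are invented:

```python
def pick_query_ngrams(relevance, k=2, discount=0.5):
    """Greedy query construction: take the k currently most relevant
    n-grams, then discount each one used, so that constant relevance
    values do not keep recycling the same small top-ranked set."""
    chosen = sorted(relevance, key=relevance.get, reverse=True)[:k]
    for ng in chosen:
        relevance[ng] *= discount
    return chosen

relevance = {"machine translation": 1.0, "parallel corpus": 0.9,
             "word alignment": 0.8}
q1 = pick_query_ngrams(relevance)  # the two initially top-ranked n-grams
q2 = pick_query_ngrams(relevance)  # discounting lets a fresh n-gram in
```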
train_11696
We have tested an approximate solution that allows for fast computing.
the real effect of this addition was insignificant, and a further description is omitted in this paper.
contrasting
train_11697
In the above experiments, Good-Turing (GT) smoothing with Katz backoff was used, although modified Kneser-Ney (KN) interpolation has been shown to outperform other smoothing methods (Chen and Goodman, 1999).
as demonstrated by Siivola et al.
contrasting
train_11698
A greedy Viterbi training is then applied to improve this initial guess.
our BP/EM training does not need to compute correlation scores and starts the training with uniform parameters.
contrasting
train_11699
In fact, brief examination shows that fewer than half of the source language terms successfully pass the translation and disambiguation stage.
more than 80% of terms which were skipped due to lack of available translations were re-discovered in the target language during the extension stage, along with the discovery of new correct terms not existing in the given source definition.
contrasting