Dataset columns: id (string, length 7-12) · sentence1 (string, length 6-1.27k) · sentence2 (string, length 6-926) · label (string, 4 classes)
train_10500
As one example, the cake spine shown above can be s-adjoined into the VP node of the ate spine, to form the tree shown in figure 1(a).
if we use the r-adjunction operation to adjoin the cake tree into the VP node, we get a different structure, which has an additional VP level created by the r-adjunction operation: the resulting tree is shown in figure 1(b).
contrasting
train_10501
Note that the mapping from parse trees to derivations is many-to-one: for example, the example trees in section 2.3 have structures that are as "flat" (have as few levels) as is possible, given the set D that is involved.
other similar trees, but with more VP levels, will give the same set D. This issue appears to be benign in the Penn WSJ treebank.
contrasting
train_10502
[Figure 3 (pseudocode): Find optimal feature-sets with given weights. Definitions: F_k, a set of k-feature-sets; R_o, the ν optimal rules (feature-sets); R_{k,ω}, the ω k-feature-sets for generating candidates; selectNBest(R, n, S, W_r), the n best rules from R with gain on {w_{i,r}}_{i=1}^{m} and training samples S; procedure weak-learner(F_k, S, W_r), returning the ν best feature-sets as rules.] F-dist distributes features to subsets of features, called buckets, based on frequencies of features.
we guess that training using a subset of features depends on how features are distributed to buckets, much as online learning algorithms generally depend on the order of the training examples (Kazama and Torisawa, 2007).
contrasting
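To make the bucketing idea in the record above concrete, here is a minimal Python sketch; the function name, the round-robin assignment rule, and the sample data are illustrative assumptions, not the paper's F-dist implementation.

```python
from collections import Counter

def distribute_to_buckets(samples, k):
    # Count feature frequencies over all training samples, then deal
    # features out round-robin in descending frequency so every bucket
    # mixes frequent and rare features (hypothetical assignment rule).
    freq = Counter(f for sample in samples for f in sample)
    buckets = [[] for _ in range(k)]
    for i, (feature, _) in enumerate(freq.most_common()):
        buckets[i % k].append(feature)
    return buckets

samples = [{"w=cat", "pos=NN"}, {"w=cat", "pos=VB"}, {"w=dog", "pos=NN"}]
print(distribute_to_buckets(samples, 2))
```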
train_10503
SVM learns a final hypothesis as a linear combination of the training examples using some coefficients.
this boosting-based rule learner learns a final hypothesis that is a subset of candidate rules (Kudo and Matsumoto, 2004).
contrasting
train_10504
Texts mixing a high number of topics (e.g., Sentence Salads) are almost as likely as natural texts that address only a few topics.
the former has much higher entropy of the topic distribution due to a large number of topics being active in such texts (see also Figure 1).
contrasting
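The entropy measure contrasted in the record above is, presumably, the standard Shannon entropy of a text's topic distribution; a minimal sketch with invented toy distributions:

```python
import math

def topic_entropy(topic_probs):
    # Shannon entropy of a document's topic distribution:
    # many active topics -> high entropy, few topics -> low entropy.
    return -sum(p * math.log(p) for p in topic_probs if p > 0)

print(topic_entropy([0.9, 0.05, 0.05]))  # natural text: ~0.39 nats
print(topic_entropy([0.1] * 10))         # "sentence salad": ~2.30 nats
```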
train_10505
When only a few words are changed in a true document, it retains the properties of a true document (high LLPW and low entropy).
as more words are changed in a true document, it starts showing the characteristics of a false document (low LLPW and high entropy).
contrasting
train_10506
Indeed, through experimental results, it was shown that for true documents the entropy of the topic distribution is lower and the average LLPW is higher, and the former measure was found to be more effective.
the poor performance of this method on out-of-domain data suggests that we need to use a much larger training corpus to build a robust fake document detector.
contrasting
train_10507
These studies also show that children are generally good at referent selection, given a novel target.
there is not consistent evidence regarding whether children actually learn the novel word from one or a few such exposures (retention).
contrasting
train_10508
In all of the reported experimental results, children can readily pick the correct referent for a familiar or a novel target word in such a setting (Golinkoff et al., 1992;Halberda and Goldman, 2008;Halberda, 2006;Horst and Samuelson, 2008).
Halberda's eye-tracking experiments on both adults and pre-schoolers suggest that the processes involved for referent selection in the familiar target situation may be different from those in the novel target situation.
contrasting
train_10509
As discussed in the previous section, results from the human experiments as well as our computational simulations show that the referent of a novel target word can be selected based on the previous knowledge about the present objects and their names.
the success of a subject in a referent selection task does not necessarily mean that the child/model has learned the meaning of the novel word based on that one trial.
contrasting
train_10510
Moreover, our retention experiments show that the model can map a recently heard novel word to its recently seen novel referent (among other novel objects) after only one exposure.
the strength of the association of a novel pair after one exposure shows a notable difference compared to the association between a "typical" familiar word and its meaning.
contrasting
train_10511
Venkataraman (2001) notes that considering the generation of the text as a single event is unlikely to be how infants approach the segmentation problem.
MBDP-1 uses an incremental search algorithm to segment one utterance at a time, which is more plausible as a model of infants' word segmentation.
contrasting
train_10512
We also tentatively conclude that phonotactic patterns can be learned from unsegmented text.
the phonotactic patterns learned by our model ought to be studied in detail to see how well they match the phonotactic patterns of English.
contrasting
train_10513
This process appears quite similar to the decision lists technique in machine learning (Rivest, 1987) which has already been combined with the MDL principle (Pfahringer, 1997).
we are not committed to this formalism: we are more interested in the content of the model than in its knowledge representation.
contrasting
train_10514
Thus, in training the VPosition features become dominant as the SRL learns to recognize more verbs.
the VPosition features are inactive when the Baby SRL encounters the novel-verb test sentences.
contrasting
train_10515
The bootstrap model learns much more slowly, which is unsurprising, given that it depends on having some reasonable category knowledge in order to develop its clusters, leading to a chicken-and-egg problem.
once started, its performance improves well beyond the word-based model's plateau.
contrasting
train_10516
In the 'unambiguous' and the 'biased' conditions, the head words' lexical biases are too strong for the model to overcome.
the results show a realistic effect of the lexical bias.
contrasting
train_10517
The algorithm is currently only implemented as an incremental process.
comparison with a batch version of the algorithm, such as by using a Gibbs sampler (Sanborn et al., 2006), would help us further understand the effect of incrementality on language fidelity.
contrasting
train_10518
They are in fact important because they allow information to be structured, thus fostering its search and reuse.
it is well known that any knowledge-based system suffers from the so-called knowledge acquisition bottleneck, i.e.
contrasting
train_10519
Moreover, we propose an unsupervised methodology for acquiring general-specific noun relationships.
it is clear that deep evaluation is needed.
contrasting
train_10520
On the one hand, catchwords represent the mass value orientation for a period.
they have high timeliness.
contrasting
train_10521
These two synonyms are treated as one term in manual evaluation, which correspondingly promotes their usage frequency.
the relationship between the two synonyms is not considered in automatic extraction.
contrasting
train_10522
However, this rule does not generalize to other verbs, such as "drive," as in the sentence "The boy drives his parents crazy," which also has three core arguments "Arg0: driver," "Arg1: thing in motion," and "Arg2: secondary predication on Arg1."
here the action is figurative, and we would expect a layout rule that puts Arg1 in position C. In addition, while modifier arguments have the same meaning across verbs, their pictorial representation may differ based on context.
contrasting
train_10523
Thus native speakers do not get as many syntactic hints.
non-native speakers do not have the same degree of built-in English syntactic knowledge.
contrasting
train_10524
The resulting system outperformed the original Arabic system trained on 3.3 million sentence pairs corpora when using monotone decoding.
an improvement in monotone decoding is no guarantee for an improvement over the best baseline achievable with full word forms.
contrasting
train_10525
There now exist numerous general-purpose algorithms for attribute selection (e.g., (Dale and Reiter, 1995;Krahmer et al., 2003;Belz and Gatt, 2007;Siddharthan and Copestake, 2004)).
these algorithms by-and-large focus on the algorithmic aspects of referring expression generation rather than on psycholinguistic factors that influence language production.
contrasting
train_10526
The most relevant previous research is the work of (Gupta and Stent, 2005), who modified Dale and Reiter's algorithm to model speaker adaptation in dialog.
this corpus does not involve dialog so there are no cross-speaker constraints, only within-speaker constraints (speaker style and priming).
contrasting
train_10527
For both domains, for automatic and human attribute selection, the speaker-dependent Template-based approach seems to perform the best, then the speaker-independent Template-based approach, and then the Permute&Rank approach.
we find automatic metrics for evaluating generation quality to be unreliable.
contrasting
train_10528
PropBank annotation includes the whole chain of empty categories, as well as the antecedent of the empty category (the filler of the gap).
the CoNLL-2005 version only includes the filler of the gap and if there is no filler, the argument is omitted, e.g., no ARG0 (subject) for leave would be included in I said to leave because the subject of leave is unspecified.
contrasting
train_10529
Linguistic features can often be detected with greater reliability than deep (semantic) features.
deep features can cover more ground because they regularize across differences in surface strings.
contrasting
train_10530
This algorithm can cover 98.5% of arguments while reducing the training samples by about 60%, according to our statistics.
this is achieved at the price of including a syntactic parse tree as part of the input for semantic parsing.
contrasting
train_10531
It should be made clear that the ordinal numbers are only used to distinguish different meanings of a predicate.
if these numbers are treated as tags for predicates, some statistical properties will be obtained as illustrated in Figure 1.
contrasting
train_10532
The main reason why we used a sequence labeling method for predicate identification was to mitigate the effect of the tagging errors of PPOS and PPOSS.
we will show later that this module degrades the overall performance.
contrasting
train_10533
For example, voice (active or passive) is essential for verb predicate's argument classification.
presence of a genitive word is useful for noun predicate's argument classification.
contrasting
train_10534
The argument identifier regards a predicate and its nearest neighbors -its parent and children -as argument candidates.
if the POS tag of a nearest neighbor is IN, MD, or TO, it will be ignored and the next nearest candidates will be used.
contrasting
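A sketch of the candidate-selection heuristic described in the record above; the tree encoding (parent/children maps keyed by token index) and the function name are assumptions for illustration.

```python
SKIP_TAGS = {"IN", "MD", "TO"}

def argument_candidates(pred, parent_of, children_of, pos_of):
    # Start from the predicate's nearest neighbors (parent and children);
    # when a neighbor's POS is IN, MD, or TO, skip it and take its own
    # nearest neighbors instead, as the record describes.
    candidates, frontier, seen = [], [pred], {pred}
    while frontier:
        node = frontier.pop(0)
        neighbors = children_of.get(node, [])[:]
        if parent_of.get(node) is not None:
            neighbors.append(parent_of[node])
        for n in neighbors:
            if n in seen:
                continue
            seen.add(n)
            if pos_of[n] in SKIP_TAGS:
                frontier.append(n)  # transparent node: use next nearest
            else:
                candidates.append(n)
    return candidates
```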
train_10535
The roleset matching method is devised to select the most appropriate role sequence and to identify the correct roleset.
the current features for roleset matching seem to be insufficient, and other useful features remain to be found in future work.
contrasting
train_10536
There is no significant difference in score for left and right dependencies, presumably because of the bi-directional parsing.
the system overpredicts dependencies to the root.
contrasting
train_10537
We address each of these subtasks with separate components without backward feedback between sub-tasks.
the use of multiple parsers at the beginning of the process, and re-ranking at the end, contribute beneficial stochastic aspects to the system.
contrasting
train_10538
Better models, features, and learning algorithms have allowed systems to perform many of these tasks with 90% accuracy or better.
success in integrated, end-to-end natural language understanding remains elusive.
contrasting
train_10539
In our case, I is the set of words that occur at least t times in the unlabelled pool, b_i = t ∀i ∈ I, the multisets are the sentences in that pool, and w_j = 1 ∀j ∈ J. Multiset multicover is NP-hard.
there is a good greedy approximation algorithm for it.
contrasting
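A sketch of the greedy approximation alluded to above, instantiated for this setting (cover every word in I at least t times by selecting whole sentences); the function name and toy data are illustrative.

```python
def greedy_multicover(sentences, targets, t):
    # Classic greedy rule for multiset multicover: repeatedly pick the
    # sentence that reduces the remaining coverage deficit the most.
    deficit = {w: t for w in targets}
    pool, chosen = list(sentences), []
    def gain(sent):
        return sum(min(sent.count(w), deficit[w]) for w in deficit)
    while any(deficit.values()) and pool:
        best = max(pool, key=gain)
        if gain(best) == 0:
            break  # remaining targets cannot be covered
        chosen.append(best)
        for w in deficit:
            deficit[w] = max(0, deficit[w] - best.count(w))
        pool.remove(best)
    return chosen

print(greedy_multicover([["the", "cat", "sat"], ["the", "dog"],
                         ["cat", "dog", "dog"]], {"cat", "dog"}, 2))
```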
train_10540
1b would be represented as follows (details omitted): (1) [1 U pron 0 FRAG | 2 heeft verb 0 ROOT | 3 volkomen adj 4 mod | 4 gelijk noun 0 FRAG]. In the training phase, the MSTParser tries to maximize the scoring margin between the correct parse and all other valid dependency trees for the sentence.
in the case of fragmented trees, the training example is not strictly speaking correct, in the sense that it does not coincide with the desired parse tree.
contrasting
train_10541
We have seen that the graph- and the transition-based parsers react similarly to the various filtering methods.
there are interesting differences in the magnitude of the performance changes.
contrasting
train_10542
Negation signals in the BioScope corpus always have one consecutive block of scope tokens, including the signal token itself.
the classifiers only predict the first and last element of the scope.
contrasting
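A sketch of how one consecutive scope block can be recovered from the two token-level classifiers mentioned above; the score interface and the constraint that the block contains the signal token are assumptions consistent with the record.

```python
def predict_scope(first_scores, last_scores, signal_index):
    # Pick the highest-scoring (first, last) token pair such that
    # first <= signal_index <= last, yielding a single consecutive
    # block of scope tokens that includes the negation signal itself.
    best, best_score = None, float("-inf")
    for i in range(signal_index + 1):
        for j in range(signal_index, len(last_scores)):
            score = first_scores[i] + last_scores[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best  # inclusive token span
```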
train_10543
The evaluation of this ratio is difficult because not all true interactions are known.
the Disorder module does not contribute significantly to the prediction.
contrasting
train_10544
The simplest implementation would be to compare the most recent model with the previous model and stop if their agreement exceeds the intensity cutoff.
independent of wanting to provide users with a longevity control, this is not an ideal approach because there's a risk that these two models could happen to highly agree but then the next model will not highly agree with them.
contrasting
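A sketch of a stopping check that addresses the risk described above by requiring agreement across a window of recent models rather than just the latest two; the agreement measure and names are assumptions.

```python
def should_stop(model_outputs, window=3, cutoff=0.95):
    # Stop only if the newest model highly agrees with *each* of the
    # previous `window` models, guarding against the failure mode where
    # two adjacent models happen to agree by chance.
    def agreement(a, b):
        return sum(x == y for x, y in zip(a, b)) / len(a)
    if len(model_outputs) < window + 1:
        return False
    latest = model_outputs[-1]
    return all(agreement(latest, prev) >= cutoff
               for prev in model_outputs[-window - 1:-1])
```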
train_10545
Such fully unsupervised methods do not incorporate any language-specific parsers or taggers, so can be successfully applied to diverse languages.
unsupervised pattern-based methods suffer from several weaknesses.
contrasting
train_10546
This framework allows a fully unsupervised seed-less discovery of concepts without relying on language-specific tools.
it completely ignores potentially useful syntactic or morphological information.
contrasting
train_10547
As discussed in Section 5, windowing is required for successful noise reduction.
due to the increase in pattern quality with parser data, it is likely that less noise will be captured by the discovered patterns.
contrasting
train_10548
In the distributed model, x will be considered a member if the majority of its k nearest neighbors are members.
it is important that both models can also be used to represent regions with soft boundaries.
contrasting
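A minimal sketch of the distributed model's membership test described above (majority vote among the k nearest neighbors); the Euclidean metric and data layout are assumptions.

```python
def is_member(x, examples, k=5):
    # `examples` is a list of (vector, is_member) pairs; x is a member
    # iff a majority of its k nearest neighbors are members.
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(examples, key=lambda e: dist(x, e[0]))[:k]
    return sum(1 for _, m in nearest if m) > k / 2
```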
train_10549
This approach is similar to that of seed words (e.g., (Hearst, 1998)) or hook words (e.g., (Davidov and Rappoport, 2008)) in previous work.
in our case they are fixed and rich in grammatical information in the sense that they have to correspond to subject -object pairs in consecutive clauses.
contrasting
train_10550
This procedure can be partially solved by identifying complex verbs such as "take in".
we leave this improvement for future work.
contrasting
train_10551
Certainly, we are missing important information by excluding phrases and ignoring modality.
these features can be difficult to capture accurately, and since inaccurate input could degrade the clustering accuracy, in this research we stick with the important and easily-obtainable features.
contrasting
train_10552
And most of these constructions do not focus so much on the magnitude, but on the order in which one eventuality (the effect) is a reaction to the other (the cause).
a closer look at our data shows that there are also constructions which indicate this property more precisely.
contrasting
train_10553
Instead, it relied on built-in syntactic knowledge and a rich hierarchical representation of semantic knowledge to learn links between sentence structure and meaning.
our previous experimental design has a serious drawback that limits its relevance to the study of how children learn their first language.
contrasting
train_10554
In the present paper this error is over-determined, because the classifier learns only an agent/non-agent contrast, and the linguistic constraints forbid duplicate agents in a sentence.
for comparison to the earlier paper we test our system on the 'A and B gorp' sentences as well.
contrasting
train_10555
As expected, the NPattern feature makes the same overgeneralization error seen by children and the system in (Connor et al., 2008).
when the VPosition feature is added, different results are obtained for the 'A gorp B' and 'A and B gorp' sentences.
contrasting
train_10556
The models appear reasonably good at adapting to the empirical fixation distribution of individual readers.
the models typically tend to look at more words than the readers, as noted above.
contrasting
train_10557
Because MERT ignores the denominator in Equation 1, it is invariant with respect to the scale of the weight vector θ: the Moses implementation simply normalises the weight vector it finds by its 1-norm.
when we use these weights in a true probabilistic model, the scaling factor affects the behaviour of the model since it determines how peaked or flat the distribution is.
contrasting
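A small numeric illustration of the point above: rescaling θ leaves the argmax (all MERT needs) unchanged, but sharpens or flattens the normalized distribution. The toy feature vectors are invented.

```python
import math

def posterior(theta, feature_vectors):
    # Log-linear model: p(y) proportional to exp(theta . f(y)).
    scores = [sum(w * f for w, f in zip(theta, fv)) for fv in feature_vectors]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

feats = [[1.0, 0.0], [0.0, 1.0]]
print(posterior([2.0, 1.0], feats))  # peaked: ~[0.73, 0.27]
print(posterior([0.2, 0.1], feats))  # same argmax, flatter: ~[0.52, 0.48]
```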
train_10558
The TYPE-AUTO predictions contribute little information, only fragmenting the data and leading to over-training and lower accuracy.
the CONTEXT-AUTO predictions improve accuracy, as these scores provide orthogonal and hence helpful information for the semi-supervised classifier.
contrasting
train_10559
When the members of the committee are in complete agreement, AL should be aborted since it no longer contributes to the overall learning process; in this case, AL is but a computationally expensive counterpart of random sampling.
as pointed out by Tomanek et al.
contrasting
train_10560
We analyze several approaches for modeling non-local dependencies proposed in the literature and find that none of them clearly outperforms the others across several datasets.
as we show, these contributions are, to a large extent, independent, and the approaches can be used together to yield better results.
contrasting
train_10561
The Viterbi algorithm has the limitation that it does not allow incorporating some of the non-local features which will be discussed later, therefore, we cannot use it in our end system.
it has the appealing quality of finding the most likely assignment to a second-order model, and since the baseline features only have second order dependencies, we have tested it on the baseline configuration.
contrasting
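For reference, a minimal Viterbi decoder over a chain model with adjacent-label dependencies, the setting the record above describes; the log-score interfaces emit[t][y] and trans[p][y] are assumptions.

```python
def viterbi(emit, trans, labels):
    # Exact argmax decoding for a chain with pairwise label dependencies.
    # Non-local, longer-range features cannot be folded into this DP,
    # which is the limitation the record points out.
    score = {y: emit[0][y] for y in labels}
    backptrs = []
    for t in range(1, len(emit)):
        new_score, ptr = {}, {}
        for y in labels:
            prev = max(labels, key=lambda p: score[p] + trans[p][y])
            new_score[y] = score[prev] + trans[prev][y] + emit[t][y]
            ptr[y] = prev
        score = new_score
        backptrs.append(ptr)
    best = max(labels, key=lambda y: score[y])
    path = [best]
    for ptr in reversed(backptrs):
        path.append(ptr[path[-1]])
    return path[::-1]
```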
train_10562
This separation "breaks" the Viterbi decision process to independent maximization of assignment over short chunks, where the greedy policy performs well.
dependencies between isolated named entity chunks have longer-range dependencies and are not captured by second-order transition features, therefore requiring separate mechanisms, which we discuss in Section 5.
contrasting
train_10563
The sample text discussed in the introduction shows one such example, where all occurrences of "blinker" are assigned the PER label.
in general, this is not always the case; for example we might see in the same document the word sequences "Australia" and "The bank of Australia".
contrasting
train_10564
Both context aggregation and two-stage prediction aggregation treat all tokens in the text similarly.
we observed that the named entities in the beginning of the documents tended to be more easily identifiable and matched gazetteers more often.
contrasting
train_10565
Short constituents are more frequent and are generally more likely to be correct.
the correctness of long constituents is an indication that the parser has a correct interpretation of the tree structure and that it is likely to create a high quality tree.
contrasting
train_10566
(2008) are supervised, performing semantic analysis of the parse tree and gold-standard-based classification, respectively.
the SEPA algorithm of Reichart and Rappoport (2007), an algorithm for supervised constituency parsers, can be applied to unsupervised parsers in a way that preserves the unsupervised nature of the selection task.
contrasting
train_10567
It could be argued that the total number of 'real' labels in the data is indeed large (e.g., because every verb exhibits its own syntactic patterns) and that a small number of labels is just an arbitrary decision of the corpus annotators.
most linguistic theories agree that there is a prototypical level of generalization that uses concepts such as Noun Phrase and Verb Phrase, a level which consists of at most dozens of labels and is strongly manifested by real language data.
contrasting
train_10568
There have been earlier studies of hypernym extraction with either lexical or dependency extraction patterns.
these studies applied the techniques to a variety of different data sets and used different evaluation techniques.
contrasting
train_10569
In the example sentence, no lexical pattern will associate city with Liverpool because there are too many words in between.
a dependency pattern will create a link between these two words, via the word as.
contrasting
train_10570
Precision will be computed against all chosen candidate hypernyms.
recall will only be computed against the positive noun pairs which occur in the phrases selected by the examined method.
contrasting
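A sketch of the asymmetric evaluation described above, with set-valued inputs; the argument names are illustrative.

```python
def precision_recall(chosen, gold_pairs, pairs_in_selected_phrases):
    # Precision: fraction of all chosen candidate hypernym pairs that
    # are gold. Recall: computed only against the gold pairs occurring
    # in the phrases the examined method selected.
    chosen, gold = set(chosen), set(gold_pairs)
    precision = len(chosen & gold) / len(chosen) if chosen else 0.0
    pool = gold & set(pairs_in_selected_phrases)
    recall = len(chosen & pool) / len(pool) if pool else 0.0
    return precision, recall
```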
train_10571
The parser also performs part-of-speech tagging and lemmatization, tasks which are useful for the lexical preprocessing methods.
taking future real-time applications in mind, we did not want the lexical processing to be dependent on the parser.
contrasting
train_10572
Dependency patterns still find more positive targets and obtain a larger recall score, but their precision score is disappointing.
when we examine the precision-recall plots of the two methods ( Figure 1, obtained by varying the acceptance threshold of the machine learner), they are almost indistinguishable.
contrasting
train_10573
The dependency parser ignored punctuation signs and therefore the dependency pattern covers both phrases with and without punctuation.
these phrase variants result in different lexical patterns.
contrasting
train_10574
The simplest method to estimate the similarity between two multi-contexts is to compute the inner product of their vector representations in the VSM.
formally, we define a space of dimensionality N in which each dimension is associated with one word from the dictionary, and the multi-context m is represented by a row vector in which the function f(t_i, m) records whether a particular token t_i is used in m. Using this representation we define the bag-of-words kernel between multi-contexts as the inner product of these vectors; the bag-of-words representation, however, does not deal well with lexical variability.
contrasting
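A minimal sketch of the bag-of-words kernel as reconstructed above (inner product of binary occurrence vectors); variable names are illustrative.

```python
def bow_kernel(m1_tokens, m2_tokens, dictionary):
    # f(t_i, m) = 1 iff token t_i occurs in multi-context m, so the
    # inner product counts shared dictionary words.
    s1, s2 = set(m1_tokens), set(m2_tokens)
    v1 = [1 if t in s1 else 0 for t in dictionary]
    v2 = [1 if t in s2 else 0 for t in dictionary]
    return sum(a * b for a, b in zip(v1, v2))
```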
train_10575
Moreover, further experiments have shown that even a larger number of Wikipedia articles (600,000) does not help.
the latent semantic kernels outperform all the other methods, and their composites (K_BOW + K_W and K_BOW + K_NYT) perform the best on every configuration, demonstrating the effectiveness of latent semantic kernels in fine-grained classification of named entities.
contrasting
train_10576
A named entity is then classified according the similarity between the word-domain lists and the global context in which the entity appears.
the evaluation was performed only on 6 person names using two categories.
contrasting
train_10577
For the SRL-only task, participants have been provided with all the data but the PRED and APREDs, which they were supposed to fill in with their correct values.
they did not have to determine which tokens are predicates (or more precisely, which are the argument-bearing tokens), since they were marked by 'Y' in the FILLPRED field.
contrasting
train_10578
closed challenge, Macro F1 score.
scores can also be computed for a number of other conditions: Task (Joint or SRL-only), Challenge (open or closed), and Domain (in-domain data (IDD, separated from training corpus) or out-of-domain data (OOD)). Joint task participants are also evaluated separately on the syntactic dependency task (labeled attachment score, LAS).
contrasting
train_10579
The largest number of systems can be compared in the SRL results table (Table 6), where all the systems have been evaluated solely on the SRL performance regardless of whether they participated in the Joint or SRL-only task.
since the results might have been influenced by the supplied parser, separate ranking is provided for both types of the systems.
contrasting
train_10580
When labeling the semantic role, we use a similar approach as we did in CoNLL Shared Task 2008.
as the frames information is not supplied for all languages, we do not use it in this task.
contrasting
train_10581
More specifically, the lack of coverage of hand-crafted linguistic grammars is a major concern.
the CoNLL task is also a good opportunity for the deep processing community to (re-)evaluate their resources and software.
contrasting
train_10582
As indicated above in table 1, only 2% of sentences are unparsable, despite 16% requiring the Swap action.
this argument does not explain why our parser did relatively poorly on German semantic dependencies.
contrasting
train_10583
The model of (Che et al., 2008) decided one label for each arc before decoding according to unigram features, which caused lower labeled attachment score (LAS).
keeping all possible labels for each arc made the decoding inefficient.
contrasting
train_10584
We can see that as the development data are added into the training data, the performance on the in-domain test data increases.
it is interesting that the additional data is harmful to the OOD test.
contrasting
train_10585
Both LEGA and MEGA were used for this task.
training and testing for the final submission of Chinese, Czech and English were run in MEGA, and the rest in LEGA.
contrasting
train_10586
The complexity is therefore O(n^4).
they could show that a system can gain about 2-4% in accuracy, which is a lot.
contrasting
train_10587
We will refer to predicates such as word/2 as observed because they are known in advance.
role/3 is hidden because we need to infer it at test time.
contrasting
train_10588
For five of the languages the difference to the F1 scores of the best system is 3%.
for German it is 6.19% and for Czech 10.76%.
contrasting
train_10589
As could be expected the parser performs much better for the languages for which a large training set is provided.
as discussed in the next section, simply adding more training data does not seem to solve the problem.
contrasting
train_10590
The improvement in scores are modest for Chinese, Czech, English and Japanese, while Catalan, German and Spanish stand out by vast improvements with additional training data.
the improvement when going from 75% to 100% of the training data is only modest for all languages.
contrasting
train_10591
Ideally, we would like to choose the most plausible syntactic-semantic structure among all possible structures in that syntactic dependencies and semantic dependencies are correlated.
solving this problem is too difficult because the search space of the problem is extremely large.
contrasting
train_10592
In order to avoid losing the benefits of higher-order parsing, we considered applying pseudo-projective transformation (Nivre and Nilsson, 2005).
growth of the number of dependency labels by pseudo-projective transformation increases the dependency parser training time, so we did not adopt transformations.
contrasting
train_10593
As for short sentences, the recall of our approach is of the same order as for the baseline.
precision is increased by a factor of two in comparison to the baseline, which is also similar to short sentences.
contrasting
train_10594
Due to space limitations we refer the reader to (Zelle and Mooney, 1996) for a detailed description of the Geoquery domain.
recall that the goal of semantic parsing is to produce the function in Equation (1); given that y and z are complex structures, it is necessary to decompose the structure into a set of smaller decisions to facilitate efficient inference.
contrasting
train_10595
Precision and recall are typically used as evaluation metrics in semantic parsing.
as our model inherently has the ability to map any input sentence into the space of meaning representations, the trade-off between precision and recall does not exist.
contrasting
train_10596
Feedback Recall that our learning framework does not require meaning representation annotations.
we do require a Feedback function that informs the learner whether a predicted meaning representation when executed produces the desired response for a given input sentence.
contrasting
train_10597
Here we clearly need to identify sets of strings corresponding to the two parts of this language, but it is easy to see that no one context will suffice.
note that the first part is defined by the two contexts (λ, λ), (a, b) and the second by the two contexts (λ, λ), (a, bb).
contrasting
train_10598
In English, the capitalization of the phrase European Court of Auditors helps identify the span as a named entity.
in German, all nouns are capitalized, and capitalization is therefore a less useful cue.
contrasting
train_10599
Better POS induction algorithms yield lower perplexity language models.
Clark did not study the correlation between the perplexity measure and the gold-standard tagging.
contrasting