Columns: id (string, length 7–12), sentence1 (string, length 6–1.27k), sentence2 (string, length 6–926), label (string class, 4 values)
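Each record below gives an id, sentence1, sentence2, and label on consecutive lines. As a minimal sketch of how such records could be consumed (assuming they have been exported as a JSON-lines file named pairs.jsonl, a name introduced here for illustration only), they can be read and the label classes tallied like this:

import json
from collections import Counter

def read_pairs(path="pairs.jsonl"):
    # Yield (id, sentence1, sentence2, label) tuples, one per JSON line.
    # The file name and JSONL layout are assumptions, not part of this listing.
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            yield row["id"], row["sentence1"], row["sentence2"], row["label"]

if __name__ == "__main__":
    label_counts = Counter(label for *_, label in read_pairs())
    print(label_counts)  # tallies the 4 label classes, e.g. "neutral"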
train_92600
For example, consider the task of annotating WN with the labeled class renaissance painters containing the class instances Pisanello, Hieronymus Bosch, and Jan van Eyck and associated with the attributes "famous works" and "style."
there is also a pressure for these two occurrences to attach to a single concept.
neutral
train_92601
Regression fit and regression-based classification rank accuracy of the adjective, noun, additive, and multiplicative models for phrase stimuli.
nothing was done to elicit consistency across participants.
neutral
train_92602
Classification accuracies were significantly higher (p < 0.05) for the nouns, calculated with a paired t-test.
meaning associated with the noun should be more evoked.
neutral
train_92603
We quantitatively measure the impact of each of these subproblems on coreference resolution performance as a whole.
we see that focusing attention on all and only the annotated CEs leads to (often substantial) improvements in performance on all metrics over all data sets, especially when measured using the MUC score.
neutral
train_92604
Bengtson and Roth (2008) simply discard twinless CEs, but this solution is likely too lenient -it doles no punishment for mistakes on twinless annotated or extracted CEs and it would be tricked, for example, by a system that extracts only the CEs about which it is most confident.
we would expect a coreference resolution system to depend critically on its Named Entity (NE) extractor.
neutral
train_92605
Just as for the polarity features, we include features for both each tag and its negation.
since the learning curve is steeper when function words were removed, they hypothesize that using only non-function words will outperform using all words once enough training data is available.
neutral
train_92606
Our last baseline implements the active learning procedure as described in Tong and Koller (2002).
given that we now have a labeled set (composed of 100 manually labeled points selected by active learning and 500 unambiguous points) as well as a larger set of points that are yet to be labeled (i.e., the remaining unlabeled points in the training folds and those in the test fold), we aim to train a better classifier by using a weakly supervised learner to learn from both the labeled and unlabeled data.
neutral
train_92607
In contrast, the goal of this paper is to model and classify such speaker attributes from only the latent information found in textual transcripts.
to assess the potential gains from full exploitation of partner-sensitive modeling, we first report the result from an oracle experiment, where we assume we know whether the conversation is homogeneous (same gender) or heterogeneous (different gender).
neutral
train_92608
% of pronoun usage: Macaulay (2005) argues that females tend to use more third-person male/female pronouns (he, she, him, her and his) as compared to males.
on top of the standard Boulis and Ostendorf (2005) model, we also investigated the following features motivated by the sociolinguistic literature on gender differences in discourse (Macaulay, 2005): 1.
neutral
train_92609
Table 2 that, when the number of labeled data is small (n_L^i < 10% * n_L), graph based SSL, gSum SSL, has a better performance compared to SVM.
here we give brief descriptions of only the major modules of our QA due to space limitations.
neutral
train_92610
If its immediate k neighbors, dark blue colored nodes, have the same label, the algorithm continues to search for the secondary k neighbors, the light blue colored nodes, i.e., the neighbors of the neighbors, to find out if there are any opposite labeled nodes around.
each q/a pair is represented as a feature vector x_i ∈ ℝ^d characterizing the entailment information between them.
neutral
train_92611
Glosses and definitions for the same lexeme in different lexical semantic and encyclopedic resources can actually be considered as near-paraphrases, since they define the same terms and hence have We use glosses and definitions contained in the following resources to build a parallel corpus: • WordNet (Fellbaum, 1998).
this method offers several advantages.
neutral
train_92612
Translation-based retrieval models have been widely used in practice by the IR and QA community.
the correct answer to the first question "Who invented Halloween?"
neutral
train_92613
Our proposed method performs better than the best method in the TAC 2008 competition.
topic and opinion are not independent in reality.
neutral
train_92614
We thus consider two path simplifications as well: • compressed: only the source, target, and root nodes are preserved in the path (so the path above becomes [VBP ↑ S ↓ NP]) • POS class clusters: rather than distinguish, for example, between different tenses of verbs in a path, we consider only the first letter of each NT.
• Given a Levenshtein alignment between altered rules, the most common changes within a given NT phrase are detailed in column five of Table 1.
neutral
train_92615
We begin by reviewing previous work in the automatic labeling of structural semantics and motivating the analysis not only in terms of discovery but also regarding its potential application to automatic speech reconstruction research.
only these utterances (about 72% of the annotated SSR data) can be given a semantic analysis in the following sections.
neutral
train_92616
The language model is also re-trained as described in Section 3.4.
first, we assume P_LM(c_1) ≈ P_LM(W).
neutral
train_92617
Some new words are clearly good for human sense and definitely convey novel semantic information, but they can be useless for speech recognition.
the proposed LAICA approach tries to focus on those new words which can not be handled well by simple character n-grams.
neutral
train_92618
This procedure can be iterated to give Lex_i and LM_i until convergence.
it's almost impossible to include all words into a lexicon both due to the technical difficulty and also the fact that new words are created continuously.
neutral
train_92619
Our own lectures consist of eleven lectures of approximately 50 minutes each, recorded in three separate courses, each taught by a different instructor.
one of the main challenges to integrating text transcripts into archives of webcast lectures is the poor performance of ASR systems on lecture transcription.
neutral
train_92620
These results are found to be statistically very significant (p ≤ .01).
For tuning and testing, we use the official NIST MT evaluation data for Chinese from 2002 to 2008 (MT02 to MT08), which all have four English references for each input sentence.
neutral
train_92621
In our experiments, we use a re-implementation of the Moses phrase-based decoder.
the difference on MT08 is significant in terms of TER.
neutral
train_92622
Moreover, the inference procedure for each sentence pair is non-trivial, proving NP-complete for learning phrase-based models or a high-order polynomial (O(|f|^3 |e|^3)) for a sub-class of weighted synchronous context free grammars (Wu, 1997).
, r n ), and z i denotes the root node of r i .
neutral
train_92623
In this way each process is sampling according to a slightly 'out-of-date' distribution.
although efficient, the sheer number of somewhat arbitrary heuristics makes this approach overly complicated.
neutral
train_92624
In the example sentence, the whole clause "that he was hit by Jack" forms the object of the verb said, and hence is represented in a scope.
training, tuning, and decoding were performed using the Moses toolkit.
neutral
train_92625
While translating from a language of moderate case marking and morphology (English) to one with relatively richer case marking and morphology (Hindi), we are faced with the problem of extracting information from the source language sentence, transferring the information onto the target side, and translating this information into the appropriate case markers and morphological affixes.
it is imperative to make it possible for the system to learn general rules for morphology and case marking.
neutral
train_92626
The paradigm is characterized by a separation between realization and selection, in which rule-based methods are used to generate a space of possible paraphrases, and statistical methods are used to select the most likely realization from the space.
this kind of news-source-explanation is customary to place at the beginning of a sentence in Chinese.
neutral
train_92627
Humans usually compress sentences by dropping the intermediate nodes in the dependency tree.
in order to consider the interdependence of words, we employ the Minimum Classification Error (MCE) learning framework (Juang and Katagiri, 1992), which was proposed for learning the goodness of a sequence.
neutral
train_92628
When using the MCE framework, the misclassification measure is defined as the difference between the score of the reference sentence and that of the best non-reference output and we optimize the parameters by minimizing the measure.
the differences in the scores are not significant.
neutral
train_92629
The meaning is generally preserved.
sentence preprocessing mainly includes POS tagging and dependency parsing for the input sentences, as POS tags and dependency information are necessary for matching the paraphrase pattern and collocation resources in the following stages.
neutral
train_92630
We will define a set of probabilistic context-free rules, which generates bags (i.e.
despite this importance, there has been relatively little published work on semantic tagging of web search queries.
neutral
train_92631
This combination provides a framework that benefits from the advantages of both generative and discriminative models.
we used the optional non-terminals to make the task of defining the grammar easier.
neutral
train_92632
Moreover, based on the assumption that anchor texts in different languages referring to the same web page are possibly translations of each other, (Lu et al., 2004) propose a novel approach to construct a multilingual lexicon by making use of web anchor texts and their linking structure.
the second is that the pronunciation similarity of the two words is above a certain threshold so that one can be considered the transliteration of the other.
neutral
train_92633
That means "[N]", "[P]" and "[S]" will be transformed into "[\d]+", "[\p{P}]+" and "[\s]+" respectively.
of the segmentation, the inner text of every node will look like "…ECECC 5 EC…".
neutral
train_92634
Here, we use a very simple rule: in the English snippet, we regard all characters within (and including) the first and the last English letter in the snippet as the English content; similarly, in the Chinese snippet we regard all characters within (and including) the first and the last Chinese character in the snippet as the Chinese content; b) Word segmentation of the Chinese content.
our approach contains four steps: 1) Preprocessing: parse the web page into a DOM tree and segment the inner text of each node into snippets; 2) Seed mining: identify potential translation pairs (seeds) using an alignment model which takes both translation and transliteration into consideration; 3) Pattern learning: learn generalized patterns with the identified seeds; 4) Pattern based mining: extract all bilingual data in the page using the learnt patterns.
neutral
train_92635
When evaluating this new system, we will include similar measures to those described here to enable the evaluations of the two systems to be compared.
automated evaluation metrics that rate system behaviour based on automatically computable properties have been developed in a number of other fields: widely used measures include BLEU (Papineni et al., 2002) for machine translation and ROUGE (Lin, 2004) for summarisation, for example.
neutral
train_92636
The overall correct-assembly rate was correlated with the overall rate of remembering objects: R² = 0.20, p < 0.005.
in this paper, we apply a PARADISE-style process to the results of a user study of a human-robot dialogue system.
neutral
train_92637
In future studies, we will also gather data on these additional non-verbal behaviours, and we expect to find higher correlations with subjective judgements.
subjects who said that they did remember how to build a snowman or an L shape the second time around were no more likely to do it correctly than those who said that they did not remember.
neutral
train_92638
Some studies look into the impact of training data on user simulations.
since most of these current user simulation techniques use probabilistic models to generate user actions, how to set up the probabilities in the simulations is another important problem to solve.
neutral
train_92639
Therefore, this time the simulation gives a correct student answer based on the probability P (c|3rdLaw, ic).
since it is hard to generate a natural language utterance for each tutor's question, we use the student answers in the human user corpus as the candidate answers for the simulated students.
neutral
train_92640
The accuracy of the learned probabilities becomes more questionable when the collected human corpus is small.
most current simulation models are probabilistic models in which the models simulate user actions based on dialog context features (Schatzmann et al., 2006).
neutral
train_92641
• One utterance must contain four words or less.
this group had an average WD score of .199, better than the rest of the group at .268.
neutral
train_92642
The expressiveness of the model also slacks due to their weak ability of generalization.
as discussed above, the problem lies in that the non-contiguous phrases derived from the contiguous tree sequence pairs demand greater reliance on the context.
neutral
train_92643
Additionally, for STSSG we set , and for SncTSSG, we set .
this distortional operation, like phrase-based models, is much more flexible in the order of the target constituents than the traditional syntax-based models which are limited by the syntactic structure.
neutral
train_92644
As with general one-to-one matchings, we can optimize margin-based objectives.
the first type comprises bias features for each block length.
neutral
train_92645
This suggests that the intersection word alignment-based expansion method is more effective than the commonly used direct word-alignment-based hypothesis alignment method in confusion network-based MT system combination.
the aligned words are reordered according to their alignment indices.
neutral
train_92646
The hypothesis is modified to match the reference, where a greedy search is used to select the set of shifts because an optimal sequence of edits (with shifts) is very expensive to find.
to reduce the possibility of breaking a syntactic phrase, we extend to choose one of the two above operations depending on which one has the higher likelihood with the current null-aligned word.
neutral
train_92647
The column 'reversal' shows the impact of deliberately bad order, viz.
in distortion model 2 the word sequences are those sequences available in one of the component translations in the CN.
neutral
train_92648
In conventional IHMM, the transition from state i to j has probability: It is tempting to apply the same formula to the transitions in incremental IHMM.
alignment over multiple sequential patterns has been investigated in different contexts.
neutral
train_92649
The alignment of hypothesis 2 against the backbone cannot be considered an error if we consider only these two translations; nevertheless, when added with the alignment of another hypothesis, it produces the low-quality CN in Figure 1b, which may generate poor translations like "he bought a laptop laptop".
the transition from i to i is in fact composed of the first transition from i to δ(i, k) and the second transition from δ(i, k) to the null state at i.
neutral
train_92650
Note also that E(k) represents a word sequence with inserted empty symbols; the sequence with all inserted symbols removed is known as the compact form of E(k).
all the cell sequences S(i, i ) can be classified, with respect to the length of corresponding n-grams, into a set of parameters where each element (with a particular value of n) has the probability The probability of the transition from i to i is: the transition probability of incremental IHMM is a weighted sum of probabilities of 'ngram jumping', defined as conventional IHMM distortion probabilities.
neutral
train_92651
Word alignment between a backbone (or skeleton) translation and a hypothesis translation is a key problem in this approach.
a length limit L is imposed such that for all state transitions where |i′ − i| ≤ L, the transition probability is calculated as equation 4, otherwise it is calculated by: for some q between i and i′.
neutral
train_92652
The last inequality characterizes the amount of work done in the bottom-up pass.
it is very inefficient on its own, but it leads to the full algorithm.
neutral
train_92653
Consistency also implies that items are popped off the agenda in increasing order of bounded Viterbi scores β(e) + h(e). We will refer to this monotonicity as the ordering property of A* (Felzenszwalb and McAllester, 2007).
if all inside items are processed before any derivation items, the subsequent number of derivation items and outside items popped by KA* is nearly identical to the number popped by EXH in our experiments (both algorithms have the same ordering bounds on which derivation items are popped).
neutral
train_92654
Example 3 Let p be defined as in example 2 and let X = {X_A1, X_A2, X_A3}.
at each iteration, the algorithm performs a reduction by arbitrarily choosing a pair of adjacent endpoint sets from the agenda and by merging them.
neutral
train_92655
Since X has fan-out ≤ 2, E(X) contains at most 4 endpoints.
lCFRS productions with a relatively large number of nonterminals are usually observed in real data.
neutral
train_92656
where V N and V T are disjoint alphabets of non-terminal and terminal symbols, respectively, S ∈ V N is the start symbol, and I and A are finite sets of initial and auxiliary trees, respectively.
the definition of TT-MCTAG imposes that the head tree of each tuple contains at least one lexical element.
neutral
train_92657
For each sentence we extract a dependency path between each pair of entities.
whereas the supervised training paradigm uses a small labeled corpus of only 17,000 relation instances as training data, our algorithm can use much larger amounts of data: more text, more relations, and more instances.
neutral
train_92658
The kd-tree algorithm (Bentley 1980) aims at speeding up nearest neighbor search.
for each input token sequence, we identify all sequences of tokens that are found in the phrase clusters.
neutral
train_92659
As Figure 2 reveals, the learning curves of SeSAL stop early (on MUC7 after 12,800 tokens, on PENNBIOIE after 27,600 tokens) because at that point the whole corpus has been labeled exhaustively -either manually, or automatically.
we propose a semisupervised AL approach for sequence labeling where only highly uncertain subsequences are presented to human annotators, while all others in the selected sequences are automatically labeled.
neutral
train_92660
Intuitively, the former should be given more importance than its constituents 'world' and 'bank', since the meaning of the original phrase cannot be predicted from the meaning of either constituent.
when performing experiments on each query set with the one-parameter and the multiparameter models, the other two query sets have been used for learning the optimal parameters.
neutral
train_92661
We start the construction of our data set by retrieving the queries, together with the clicked urls, from the Yahoo!
(Kruengkrai et al., 2005) proposed a feature based on alignment of string kernels using suffix trees, and used it in two different classifiers.
neutral
train_92662
Furthermore, it is suggested in (Levering and Cutler, 2006) that the average textual content in a web page is 474 words.
this is due to the characteristics of those languages, which allow the construction of composite words from multiple words, or have a richer morphology.
neutral
train_92663
In Section 4, we implement some of the existing models and compare their performance on our test set.
of these operations, we have, for each query q ∈ Q, a set of triplets (l, f_l, c_{u,l}) where l is a language, f_l is the count of clicks for l (which we obtained through the urls in language l), and c_{u,l} is the count of unique urls in language l. The resulting table T_3 associates queries with languages, but also contains a lot of noise.
neutral
train_92664
Web search quality can vary widely across languages, even for the same information need.
this section addresses these steps in turn.
neutral
train_92665
We find that our bilingual rankings have good monolingual ranking properties.
for these queries, relevant information at Chinese side may be relatively poorer, so the English ranking can be more reliable.
neutral
train_92666
English is a right-branching language, and so dependents tend to occur after their heads.
for single word connectives, this might correspond to the POS tag of the word, however for multi-word connectives it will not.
neutral
train_92667
It is possible that different connectives have different syntactic contexts for discourse usage.
in sentence (2b), once occurs with a non-discourse sense, meaning "formerly" and modifying "used".
neutral
train_92668
For instance, while the Brazilian striker Ronaldo is rendered as 朗拿度 long5-naa4-dou6 in Cantonese, other phonetically similar candidates like 朗娜度 long5-naa4-dou6 or 郎拿刀 long4-naa4-dou1 are least likely.
hence naming is more of an art than a science, and automatic transliteration should avoid over-reliance on the training data and thus missing unlikely but good candidates.
neutral
train_92669
From the research point of view, it suggests more should be considered in addition to grapheme mapping for handling Cantonese data.
if there happens to be a skewed distribution of a certain Chinese character, the model might preclude other acceptable transliteration alternatives.
neutral
train_92670
Among these, 7.2% involved rarely used characters, and 98.4% were assigned common classifications of their causes by human subjects.
the incorrect words were entered into computers based on students' writings, ignoring those characters that did not actually exist and could not be entered.
neutral
train_92671
Experimental results show that using intuitive Web-based statistics helped us capture only about 75% of these errors.
the Web statistics are good for comparing the attractiveness of incorrect characters for computer assisted item authoring.
neutral
train_92672
Because it is nearly impossible to obtain enough test data for any error rate, we generate pseudo test data in the same way that we generate development data.
we feel that this strategy can model both the intentional and un-intentional human error patterns.
neutral
train_92673
We hypothesize that the performance degradation probably results from the spelling errors of the test data, and the inconsistencies that exist between the training data and the test data.
after completing the two steps described in Section 2.1 and 2.2, we have acquired the new spacing states for the user input generated by the baseline WS model, and the threshold measuring the word spacing quality of the user input.
neutral
train_92674
• Depending on the context, even a common word may have different POS tags.
for unknown words, we perform simple morphological analysis to determine probable tags.
neutral
train_92675
Generally, most POS taggers for Indian langauages use morphological analyzer as a module.
another problem was that a clearly defined POS tagset for Assamese was unavailable to us.
neutral
train_92676
Many words that occur in natural language texts are not listed in any catalog or lexicon.
building morphological analyzer of a particular Indian language is a very difficult task.
neutral
train_92677
An auxiliary tree β is monotonic if adjoining β to any partial parse tree is monotonic.
the first and second approaches can prevent the parser from infinitely producing partial parse trees, but the parser has to produce partial parse trees as shown in Figure 1.
neutral
train_92678
Our standard CKY parser and Gibbs sampler were both written in Perl.
this compels heuristic methods of subtree extraction, or maximum likelihood estimators which tend to extract large subtrees that overfit the training data.
neutral
train_92679
We examined sentence-level F1 scores and found that the use of larger subtrees did correlate with accuracy; however, the low overall accuracy (and the fact that there are so many of these large subtrees available in the grammar) suggests that such rules are overfit.
tSGs extend CFGs (and their probabilistic counterparts, which concern us here) by allowing nonterminals to be rewritten as subtrees of arbitrary size.
neutral
train_92680
The morpheme-based representation above cannot explicitly state the boundaries of bunsetsus.
we cannot directly compare results with ours.
neutral
train_92681
The CCG parser has been extensively evaluated elsewhere (Clark and Curran, 2007), and arguably GRs or predicate-argument structures provide a more suitable test set for the CCG parser than PTB phrase-structure trees.
clark and Curran (2007) shows that converting gold-standard CCG derivations into the GRs in DepBank resulted in an F-score of only 85%; hence the upper bound on the performance of the CCG parser, using this evaluation scheme, was only 85%.
neutral
train_92682
Whether PTB parsers could be competitive on alternative parser evaluations, such as those using GR schemes, for which the CCG parser performs very well, is an open question.
there are a number of sentences which are correct, or almost correct, according to EVALB after the conversion, and we are able to use those for a fair comparison.
neutral
train_92683
PTE-relaxed drops the single-predicate constraint, and can be thought of as a 'bag-of-constituents' model.
in the task of Entailed Relation Recognition, a corpus and an information need are specified.
neutral
train_92684
In our problem setting, however, we have no negative examples at the initial stage.
uncertainty sampling without EM (Takayama et al.
neutral
train_92685
We can retrieve positive examples from Web archive with high precision (but low recall) by manually augmenting queries with hypernyms or semantically related words (e.g., "Loft AND shop" or "Loft AND stationary").
we select several example data sets from Japanese blog data crawled from web.
neutral
train_92686
We applied 8 graph connectivity measures (weighted and unweighted versions of average degree, cluster coefficient, graph entropy and edge density) separately on each of the clusters (resulting from the application of the Chinese Whispers algorithm).
similarly, 19 negative examples that were judged as compositional were collected (Table 1).
neutral
train_92687
The problem is indeed qualitatively different: we do not have to choose among the head words competing for a role (as in the papers above) but among selectional preferences competing for a head word.
distributional similarity has also been used to tackle syntactic ambiguity.
neutral
train_92688
Together with a set of constants, it defines a Markov network with one node per ground atom and one feature per ground clause.
as telephone-based information access systems become more commonly available to the general public, the inability to deal with non-native speakers is becoming a serious limitation since, at least for some applications, (e.g.
neutral
train_92689
Because we adopted discriminative models for argument identification, we can easily add new features.
features used in this paper are shown in Table 1.
neutral
train_92690
This is because our method preferred NULL phrases over unreliable phrases appearing before the predicate sentence.
we evaluated the precision and recall rates, and F scores, all of which were computed by comparing system output and the correct answer of each argument.
neutral
train_92691
In another aspect in dialogue systems, certain dialogue patterns indicate that ASR results in certain positions are reliable.
when the window width is N , the rates are calculated by using only the last N utterances, and the previous utterances are discarded.
neutral
train_92692
These features attempt to approximately encode changes in the grammar rules between source and target sentences.
here, Examples has been dropped, while for editors which has Examples as a head is retained.
neutral
train_92693
This clearly indicates that there is a strong negative correlation between likelihood of occurrence of a non-query-term and ROUGE-2 score.
most of the current automatic summarization systems rely on a sentence extractive paradigm, where key sentences in the original text are selected to form the summary based on the clues (or heuristics), or learning based approaches.
neutral
train_92694
To help the grammar developer decide the priority of problems to fix, we also output the count of items observed with the given CP and RP.
this is generally at the cost of reduced coverage, due both to the difficulty of providing analyses for all phenomena, and the complexity of implementing these analyses.
neutral
train_92695
We then compare our results to the DUC participating systems.
both of the abovementioned "combine-then-rank" and "rank-then-combine" approaches have a common drawback.
neutral
train_92696
From combination point of view, the newly proposed model can be considered as a novel method going beyond the conventional post-decoding style combination methods.
the grammar formalism determines the intrinsic capacities and computational efficiency of the SMT systems.
neutral
train_92697
Experimental results show that the SSG-based model achieves significant improvements over the FSCFG-based model and LSTSSG-based model.
from expressiveness point of view, the former usually results in more ambiguities than the latter.
neutral
train_92698
In the case where two intermediate nodes share the same intermediate rule anchored to the same forest nodes, they can be shared.
we found that on average, intermediate nodes introduced in the forest are used in 4.5 different rules, which accounts for the speed increase.
neutral
train_92699
We would like to draw attention to Hidden Markov Tree Models (HMTM), which are to our knowledge still unexploited in the field of Computational Linguistics, in spite of highly successful Hidden Markov (Chain) Models.
although Hidden Markov Models belong to the most successful techniques in Computational Linguistics (CL), the HMTM modification remains to the best of our knowledge unknown in the field.
neutral