Columns:
  id: string, 7 to 12 characters
  sentence1: string, 6 to 1.27k characters
  sentence2: string, 6 to 926 characters
  label: string, 4 classes
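The preview below lists one record per block in the column order given above (id, sentence1, sentence2, label). For orientation, here is a minimal sketch of iterating over records with this schema; it assumes a Hugging Face-style dataset, and the dataset identifier "user/this-dataset" and the "train" split name are hypothetical placeholders, not taken from this preview.

```python
# Minimal sketch, assuming a Hugging Face-style dataset with the schema above.
# The dataset identifier "user/this-dataset" is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("user/this-dataset", split="train")

for record in ds.select(range(3)):
    # Each record holds a string id, two sentences, and one of 4 label classes.
    print(record["id"], "->", record["label"])
    print("  sentence1:", record["sentence1"][:80])
    print("  sentence2:", record["sentence2"][:80])
```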
train_96500
Average F-measure favors labels with a small number of words, which complements word accuracy.
three prior distributions that have a shape similar to this empirical distribution are the Gaussian prior, exponential prior, and hyperbolic-L1 prior, each shown in Figure 2.
neutral
train_96501
The three methods below all provide ways to alter this rate by changing the variance of the Gaussian prior dependent on feature counts.
as we can see, word accuracy can be misleading since the HMM model even has a higher word accuracy than SVM, although it performs much worse than SVM in most individual fields except abstract.
neutral
train_96502
At this operating point, the discriminative tagger achieves an F-score of 96.08 compared to 94.72 for the HMM, a 25% reduction in error.
instead, we simply iterated for 5 epochs in all cases, regardless of the training set size or number of features used.
neutral
train_96503
At 1,000,000 words of training, the final combination continues to exhibit a 25% reduction in error over the baseline system (because of limitations in the experimental framework discussed earlier, active learning can provide no additional gain at this operating point).
we did not implement cross-validation to determine when to stop training.
neutral
train_96504
A set of 16 tags was used to tag 8 name classes (the seven MUC classes plus the additional null class).
we picked up where Spatter left off, with the clustering algorithm of (Brown et al., 1990).
neutral
train_96505
Chelba and Acero (2004): given a lowercased sentence e, they find the label sequence T that maximizes p(T|e).
kim and Woodland (2004) and Roark et al.
neutral
train_96506
We showed that using treelets and a tree-based ordering model results in significantly better translations than a leading phrase-based system (Pharaoh, Koehn 2004), keeping all other models identical.
alignments are seldom purely one-to-one and monotone in practice; Figure 1b displays common behavior such as one-to-many alignments, inserted words, and non-monotone translation.
neutral
train_96507
Source language phrases that included names and numbers were not paraphrased.
Koehn and Knight (2003) show how monolingual texts and parallel corpora can be used to figure out appropriate places to split German compounds.
neutral
train_96508
We were able to measure a translation improvement for all sizes of training corpora, under both the single word and multi-word conditions, except for the largest Spanish-English corpus.
we can extend the definition of the paraphrase probability to include multiple corpora, as follows: where c is a parallel corpus from a set of parallel corpora C. Multiple corpora may be used by summing over all paraphrase probabilities calculated from a single corpus (as in Equation 1) and normalizing by the number of parallel corpora.
neutral
train_96509
The training corpus comprised 62,000 FBIS segment alignments, and the development "dev" corpus comprised a disjoint set of 2,306 segment alignments from the same FBIS corpus.
the approach yields trainable, probabilistic distortion models that are global: they assign a probability to each possible phrase reordering.
neutral
train_96510
The way to show that one graph element does not follow from another is to make the cost of aligning them high.
the best-known such work has occurred within the field of question answering (Pasca and Harabagiu, 2001; Moldovan et al., 2003); more recently, such work has continued with greater focus in addressing the PASCAL Recognizing Textual Entailment (RTE) Challenge (Dagan et al., 2005) and within the U.S. Government AQUAINT program.
neutral
train_96511
The central assumption behind the method is that verb entailment relations manifest themselves in the regular co-occurrence of two verbs inside locally coherent text.
this seems to suggest that several initial and final clauses of adjacent paragraphs are also likely to contain information useful to the model.
neutral
train_96512
On the one hand, the task mimics many possible practical applications of the entailment resource, such as sentence ordering, where, given a sentence, it is necessary to identify among several alternatives another sentence that either entails or is entailed by the given sentence.
in order to score the double-slot templates in the evaluation material, we used the following procedure.
neutral
train_96513
On the other hand, we frequently observe sentences of the second type in corpora, and our method generates the paraphrases from the verb-verb cooccurrences taken from such sentences.
examples of the inference rules acquired by Score are shown in Figure 7 along with the positions in the ranking and the numbers of judges who judged the rule as being proper.
neutral
train_96514
For all of these corpora, we generated realistic substitutes for the [REMOVED] tokens using dictionaries (e.g., a dictionary of names from the US Census Bureau) and patterns (e.g., names of people could be of the formats "Mr. F. Lastname", "Firstname Lastname", "Lastname", "F. M. Lastname", etc.).
we built an SVM-based system that, given a target word (TW), would accurately predict whether the TW was part of PHI.
neutral
train_96515
We further propose a domain-aware cross validation strategy to help choose an appropriate parameter for the rank-based prior.
NER is a fundamental task in many natural language processing applications, such as question answering, machine translation, text mining, and information retrieval (Srihari and Li, 1999; Huang and Vogel, 2002).
neutral
train_96516
The smaller the rank r T (f ) is, the more important the feature f is in the training set T .
we use FEX for feature extraction and BBR for logistic regression in our experiments.
neutral
train_96517
In general, when the test data is similar to the training data, IG (or CHI) is advantageous over F (Yang and Pedersen, 1997).
we used a simple string matching method with slight relaxation to tag the gene mentions in the abstracts.
neutral
train_96518
We thank Richard Sproat, ChengXiang Zhai, and Kevin Small for their useful feedback during this work, and the anonymous referees for their helpful comments.
identification of the entity's equivalence class of transliterations is important for obtaining its accurate time sequence.
neutral
train_96519
Both (Cucerzan and Yarowsky, 1999) and (Collins and Singer, 1999) present algorithms to obtain NEs from untagged corpora.
in keeping with our objective to provide as little language knowledge as possible, we introduced a simplistic approach to identifying transliteration equivalence classes, which sometimes produced erroneous groupings (e.g.
neutral
train_96520
In this work, we make two independent observations about Named Entities encountered in such corpora, and use them to develop an algorithm that extracts pairs of NEs across languages.
in order to reduce running time, some limited preprocessing was done on the Russian side.
neutral
train_96521
For time sequence matching, we used a scoring metric novel in this domain.
starting at the 6th iteration, the three are within 3% of one another.
neutral
train_96522
We have demonstrated that using two independent sources of information (transliteration and temporal similarity) together to guide NE extraction gives better performance than using either of them alone (see Figure 3).
many languages lack such resources.
neutral
train_96523
We ranked the bad examples by their credit_inst values and their frequency of decreasing the learner's performance in the 10 rounds.
a feature is regarded as good if its credit_feat value is positive.
neutral
train_96524
Using either method of uncertainty sampling, the computational cost of picking an example from T candidates is: O(TD) where D is the number of model parameters.
this sample selection criterion was enforced by calculating a training utility function.
neutral
train_96525
An example can be bad for many reasons: conflicting features (indicative of different senses), misleading features (indicative of non-intended senses), or just containing random features that are incorrectly incorporated into the model.
when the training set was extended, the learner's performance dropped and eventually returned to the same level of the upper bound.
neutral
train_96526
This difference is perhaps partly attributable to the use of oracle part-of-speech tags.
czech, a morphologically complex language in which root identification is far from straightforward, exhibits the worst performance at small sample sizes.
neutral
train_96527
The best-first search method estimates Equation 1.
each constituent type found in "S → NP VP ."
neutral
train_96528
We call these probabilities generative probability of a case slot, and they are estimated from case structure analysis results of a large corpus.
they made use of a probabilistic model defined by the product of a probability of having a dependency between two cooccurring words and a distance probability.
neutral
train_96529
The proposed method gives a probability to each possible syntactic structure T and case structure L of the input sentence S, and outputs the syntactic and case structure that have the highest probability.
when b_{h_i} is a noun bunsetsu, C_i is an embedded clause in b_{h_i}.
neutral
train_96530
The similarity is measured using a thesaurus (Ikehara et al., 1997).
the experimental results for syntactic analysis on web sentences show that the proposed model significantly outperforms known syntactic analyzers.
neutral
train_96531
For example, consider the sentence "In relation to Bush's axis of evil remarks, the German Foreign Minister also said, Allies are not satellites, and the French Foreign Minister caustically criticized that the United States' unilateral, simplistic worldview poses a new threat to the world".
after parsing the sentence, it extracts features such as the syntactic path information between each candidate <H> and the expression <E> and a distance between <H> and <E>.
neutral
train_96532
They define the opinion holder identification problem as a sequence tagging task: given a sequence of words in a sentence, they generate a sequence of labels indicating whether each word is a holder or not.
they define the task as identifying opinion sources (holders) given a sentence, whereas we define it as identifying opinion sources given an opinion expression in a sentence.
neutral
train_96533
For example, "I think it is an outrage" or "I believe that he is smart" carry both a belief and a judgment.
this is the smallest unit of opinion that can thereafter be used as a clue for sentence-level or text-level opinion detection.
neutral
train_96534
We then read each thread and choose the message that contained the best answer to the initial query as the gold standard.
what makes threaded discussions unique is that users participate asynchronously and in writing.
neutral
train_96535
We again conduct contrastive experiments using both the clean focused read speech and the more challenging broadcast news data.
in the first experiment with the cleanest data, we used only focused syllables from the read Mandarin speech dataset.
neutral
train_96536
Recognition of tone and intonation is essential for speech recognition and language understanding.
since these approaches are integrated with HMM speech recognition models, standard HMM training procedures which rely upon large labeled training sets are used for tone recognition as well.
neutral
train_96537
Carrying complete information with each letter allows the LTS system to be constructed directly and without mistake.
the third (rightmost) column is the ratio of the second divided by the first.
neutral
train_96538
Syntax-based MT offers the potential advantages of enforcing syntaxmotivated constraints in translation and capturing long-distance/non-contiguous dependencies.
we can sidestep this mistake through sisterhood-annotation, which yields the relabeled rules 3 and 4 in Figure 7.
neutral
train_96539
The PTB purposely eliminated such distinctions; here we seek to recover them.
section 3 explores different relabeling approaches and their impact on translation quality.
neutral
train_96540
Section 3 explores different relabeling approaches and their impact on translation quality.
this can lead to tags that are overly generic.
neutral
train_96541
Table 1 shows results for IBM model 4, phrase-based SMT, and LFG-based SMT, where examples that are in coverage of the LFG-based systems are evaluated separately.
the second step is based on features 11-13, which are computed on the strings that were actually generated from the selected n-best f-structures.
neutral
train_96542
It prevents the extraction of a transfer rule that would translate dankbar directly into appreciation since appreciation is aligned also to zutiefst and its f-structure would also have to be included in the transfer.
we compared our system to IBM model 4 as produced by GIZA++ (Och et al., 1999) and a phrase-based SMT model as provided by Pharaoh (2004).
neutral
train_96543
By imagining the left-hand-side trees as special nonterminals, we can virtually create an SCFG with the same generative capacity.
it has been shown by Shapiro and Stephens (1991) and Wu (1997, Sec.
neutral
train_96544
Our work shows how to convert it back to a computationally friendly form without harming much of its expressiveness.
in the first case, we first combine NP with PP, with score pq, where p and q are the scores of antecedent items.
neutral
train_96545
As shown in Section 3.2, terminals do not play an important role in binarization.
the algorithm will scan through the whole c as if from the empty stack.
neutral
train_96546
• We examine the effect of this binarization method on end-to-end machine translation quality, compared to a more typical baseline method.
we build synchronous trees when parsing the source-language input, as shown in Figure 1.
neutral
train_96547
Second, students write and then may modify their physics essay at least once during each dialogue with ITSPOKE.
once our user affect annotations are complete, we can further investigate their use to predict student learning and user satisfaction.
neutral
train_96548
As shown in Table 1, all subjects in our 3 corpora took the pretest and posttest.
for these 2 corpora, ITSPOKE used an updated speech recognizer further trained on the SYN03 corpus.
neutral
train_96549
User affect parameters further improved the predictive power of one student learning model for both training and testing.
the combined PR05+SYN03 corpora contain subjects drawn from the same subject pool (2005) as the SYN05 test data, and also contain subjects who interacted with the same tutor voice (synthesized) as this test data.
neutral
train_96550
The state with the plusses is the positive final state, and the one at the bottom is the negative final state.
the third column shows the weighted figures of % Policy Change.
neutral
train_96551
So at the end, 20 random orderings with 20 cuts each provides 400 MDP trials.
counting the number of differences does not completely describe the effect of the feature on the policy.
neutral
train_96552
In all our models, to simplify we assume that the sentence change information is known (as is common with this corpus (Shriberg et al., 2004)).
questions and statements typically have longer, and more complex, discourse structures.
neutral
train_96553
", "merge" is the context word), the context information is helpful.
bunescu and Mooney (2005) propose a shortest path dependency kernel for relation extraction.
neutral
train_96554
Corpus: we use the official ACE corpus for 2003 evaluation from LDC as our test corpus.
although this kernel shows a non-trivial performance improvement over that of Culotta and Sorensen (2004), the constraint makes the two dependency kernels share similar behavior: good precision but much lower recall on the ACE corpus.
neutral
train_96555
The ACE corpus is gathered from various newspaper, newswire and broadcasts.
the effective scope of context is hard to determine.
neutral
train_96556
A system that can accurately discover knowledge that is only implied by the text will dramatically increase the amount of information a user can uncover, effectively providing access to the implications of a corpus.
low precision patterns may have lower weights than high precision patterns, but they will still influence the extractor.
neutral
train_96557
One recent and much more successful approach to part-of-speech learning is contrastive estimation, presented in Smith and Eisner (2005).
unsupervised learning, while minimizing the usage of labeled data, does not necessarily minimize total effort.
neutral
train_96558
For this domain, we utilized a slightly different notion of distributional similarity: we are not interested in the syntactic behavior of a word type, but its topical content.
our general approach is to use distributional similarity to link any given word to similar prototypes.
neutral
train_96559
For example, in "mavi masalı oda" (= the room with a blue table) the adjective "mavi" (= blue) modifies the noun root "masa" (= table) even though the final part of speech of "masalı" is an adjective.
examples of statistical and machine learning approaches that have been used for tagging include transformation based learning (Brill, 1995), memory based learning (Daelemans et al., 1996), and maximum entropy models (Ratnaparkhi, 1996).
neutral
train_96560
An important property for a probabilistic context-free grammar is that it be consistent, that is, the grammar should assign probability of one to the set of all finite strings or parse trees that it generates.
let G = (N, Σ, S, R), and assume that G is not consistent.
neutral
train_96561
Like the algorithm of (Mohri, 1997), this algorithm will terminate for automata that recognize finite tree languages. (Figure 4: a) Portion of a transducer before determinization; b) The same portion after determinization.)
if we choose v then a valid vector pair q w is q , w .
neutral
train_96562
When the top 500 derivations of the translations of our test corpus are summed, only 50.6% of them yield an estimated highest-weighted tree that is the same as the true highest-weighted tree.
for the finite case the previous method produces an automaton with size on the order of the number of derivations, so the technique is limited when applied to real world data.
neutral
train_96563
This is achieved by limiting the number of entry pairs with positive labels for each document: x_{e_i,e_j} ≤ m (5). Notice that the number m is not known in advance.
baseline Clustering is a natural baseline model for our partitioning problem.
neutral
train_96564
One possibility relates to the extreme brevity of the summaries: because the summaries are only 350 words in length, it is possible to have two summaries of the same meeting which are equally good but completely non-overlapping in content.
to further gauge speaker activity, we located areas of high speaker interaction and indicated whether or not a given dialogue act immediately preceded this region of activity, with the motivation being that informative utterances are often provocative in eliciting responses and interaction.
neutral
train_96565
We would also like to more closely investigate the relationship between areas of high speaker activity and informative utterances.
they both extract informative dialogue acts, but not the same ones.
neutral
train_96566
The Bayesian model promises settings free of overtraining, and thus more accurate judgements in terms of √ mse and individual nugget classification accuracy.
in fact, we were often tempted to add new nuggets!
neutral
train_96567
As previously discussed, a strict vital/okay split translates into a score of zero for systems that do not return any vital nuggets.
", which cannot be answered by simple named-entities.
neutral
train_96568
We present a methodology for creating a test collection of scientific papers that is based on the Cranfield 2 methodology but uses a current conference as the main vehicle for eliciting relevance judgements from users, i.e., the authors.
or the reference could have been dropped without damaging the informativeness of your paper.
neutral
train_96569
These matter for any statistical inference on shallow pools.
rSVM is the best at small qrels of less than 100 documents, whilst MTF is the best at qrels of more than 150 documents.
neutral
train_96570
We now discuss all these in detail.
we thank Jing Jiang and Azadeh Shakery for helping improve the paper writing, and thank the anonymous reviewers for their useful comments.
neutral
train_96571
None of these capabilities are provided by text search engines.
it is obviously an approximative representation.
neutral
train_96572
C := C − {C}
in what follows, we will assume that there is always at least one path in the lattice that satisfies all of the constraints.
neutral
train_96573
Because of the infrequency, the hard constraints still help most of the time.
formally, a soft constraint C: Y* → R⁻ is a mapping from a label sequence to a non-positive penalty.
neutral
train_96574
To deal with this, we present a constraint relaxation algorithm.
constraint relaxation is more than sixteen times faster than ILP despite running on a slower platform.
neutral
train_96575
Another is a CRF, where M(x) is a lattice with sums of log-potentials for arc weights.
we show that in practice, the method is quite effective at rapid decoding under global hard constraints.
neutral
train_96576
Careful error analysis shows that one important cause for this degradation in performance is the fact that there is insufficient training data for the system to reliably separate support verbs from other verbs and determine whether the constituents outside the NP headed by the nominalized predicate are related to the predicate or not.
the best the system can do is to correctly label all arguments that have a constituent with the same text span in the parse tree.
neutral
train_96577
A semantic parser is learned given a set of sentences annotated with their correct meaning representations.
these methods are mostly based on deterministic parsing (Zelle and Mooney, 1996;Kate et al., 2005), which lack the robustness that characterizes recent advances in statistical NLP.
neutral
train_96578
Human efforts are preferred if the evaluation task is easily conducted and managed, and does not need to be performed repeatedly.
the first two levels employ a paraphrase table.
neutral
train_96579
For years, the summarization community has been actively seeking an automatic evaluation methodology that can be readily applied to various summarization tasks.
it is unknown how large a parallel corpus is sufficient in providing a paraphrase collection good enough to help the evaluation process.
neutral
train_96580
Candidate Selection We assume that words from the reference sentence that already occur in the system generated sentence should not be considered for substitution.
for someone born here but has been sentimentally attached to a foreign country far from home, it is difficult to believe this kind of changes.
neutral
train_96581
The name "Buchanan" is represented in Arabic as "by-wkAnAn" and "Richard" is "rytshArd."
in this simple approach, longer pairs of strings are more likely to be matched than shorter pairs of strings with the same number of different characters.
neutral
train_96582
3 is the general smoothing parameter that takes different forms in various smoothing methods.
vector length is assumed to be a constant factor in analyzing the complexity of the clustering algorithms.
neutral
train_96583
This is expected with such a sparse vector representation for documents.
the generation graph can be viewed as a model where documents cite each other.
neutral
train_96584
Vector length is assumed to be a constant factor in analyzing the complexity of the clustering algorithms.
if we use Equation 1, we end up having unnatural probabilities which are irrepresentably small and cause floating point underflow.
neutral
train_96585
The reason for this is not only the language modeling difficulties, but, of course, the lack of suitable speech and text training data resources.
our work was supported by the Academy of Finland in the projects New information processing principles, Adaptive Informatics and New adaptive and learning methods in speech recognition.
neutral
train_96586
For Turkish a conventional n-gram was built by SRILM similarly as for the morphs.
a common approach to find the subword units is to program the language-dependent grammatical rules into a morphological analyzer and utilize that to then split the text corpus into morphemes as in e.g.
neutral
train_96587
Both methods can also be viewed as choosing the correct model complexity for the training data to avoid over-learning.
the memory requirements of network optimization become prohibitive for large lexicon and language models as presented in this paper.
neutral
train_96588
Corresponding growing n-gram language models as in Finnish were trained from the Estonian corpora, resulting in the n-grams in Table 4.
for a subword lexicon suitable for language modeling applications such as speech recognition, several properties are desirable: 1.
neutral
train_96589
The smooth predictor function learned by NLMs can provide good generalization if the test set contains n-grams whose individual words have been seen in similar context in the training data.
though worse in isolation, the word-based NLMs reduce perplexity considerably when interpolated with Model 1.
neutral
train_96590
Interpolation of Model 1, FLM and FNLM yields a further improvement.
there is also no principled way of handling out-of-vocabulary (OOV) words.
neutral
train_96591
TextTiling wrongfully classifies all of these as starts of new topics.
note that lower values of P k are preferred over higher ones.
neutral
train_96592
Following (Olney and Cai, 2005), we built our LSA space using dialogue contributions as the atomic text unit.
existing topic segmentation approaches can be loosely classified into two types: (1) lexical cohesion models, and (2) content-oriented models.
neutral
train_96593
Table 4 gives the WER for the two systems, on the test set.
we use simple heuristic rules for this purpose; 10 rules for reordering and 15 for merging.
neutral
train_96594
Moreover, recognition accuracy is still around 30% on spontaneous speech tasks, in contrast to speech read from text such as broadcast news.
sparsity of the data available for adaptation makes it difficult to obtain reliable estimates of word n-gram probabilities.
neutral
train_96595
Then the task of relation extraction can be formulated as a form of propagation on a graph, where a vertex's label propagates to neighboring vertices according to their proximity.
especially, with small labeled dataset (percentage of labeled data ≤ 25%), this merit is more distinct.
neutral
train_96596
Temporal information is presently underutilised for document and text processing purposes.
this work uses techniques adapted from Seasonal Auto-Regressive Integrated Moving Average models (SARIMA).
neutral
train_96597
This analysis of the input question affects the subset of documents that will be examined and ultimately plays a key role in determining the answers the system chooses to produce.
we initially formulated paraphrase selection as a three-way classification problem, with an attempt to label each paraphrase as being "worse," the "same," or "better" than the original question.
neutral
train_96598
These steps are necessary to avoid decreasing performance with respect to the original question, as we will show in the next section.
the priors for these classes are roughly 30% for "worse," 65% for "same," and 5% for "better".
neutral
train_96599
We plan to study additional variants that these results suggest may be helpful.
arabic is a morphologically complex language with a large set of morphological features.
neutral