| id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (string, 4 classes) |
|---|---|---|---|
| train_20400 | The algorithm they propose to compute this term is O(n²s³) as it requires an extra nested loop in forward/backward. | the above form of the gradient is not the only possibility. | contrasting |
| train_20401 | (2006), there are two terms, and the second is easily computable given the feature expectations obtained by forward/backward and the entropy for the sequence. | unlike the previous method, here the first term can be efficiently calculated as well. | contrasting |
| train_20402 | 2006) suggest this may no longer be the case. | the "additional" goals still favor symbolic parsing. | contrasting |
| train_20403 | A syntactic variant of this method (denoted as GI-AMTI) has been used in this work in order to infer the models from training samples as it is summarized in section 3. | speech translation has been already carried out by integrating acoustic models into a SFST. | contrasting |
| train_20404 | So far, the capability of the systems have been assessed in terms of time and spatial costs. | the quality of the translations they provide is, doubtless, the most relevant evaluation criterion. | contrasting |
| train_20405 | This analysis reveals that both systems produce the same kind of errors in general. | some differences were identified. | contrasting |
| train_20406 | Missing words is also an important problem. | most of them (approximately two thirds for both systems) are filler words (i.e. | contrasting |
| train_20407 | The N-gram based system seems to be able to produce more accurate translations (reflected by a lower percentage of translation errors). | it generates too many additional (and incorrect words) in the process. | contrasting |
| train_20408 | Given an N-best list of the phrase-based (N-gram-based) system, we compute the cost of each target sentence of this N-best list for the N-gram-based (phrase-based) system. | this computation is not possible in all cases. | contrasting |
| train_20409 | All the participants successfully detected "summary" as an error. | "mod pcomp-n" was difficult to detect. | contrasting |
| train_20410 | Extraction using Semi-supervised Parsing: Experiments with purely supervised learning show that our generative model requires a large curated set to minimize the sparse data problem, but domain-specific annotated corpora are always rare and expensive. | there is a huge source of unlabeled MEDLINE articles available that may meet our needs, by assuming that any sentence containing BACTERIUM, PROTEIN and LOCATION NEs has the BPL relation. | contrasting |
| train_20411 | Moreover, speech recognition systems can benefit from being trained on hand-transcribed data where all the appropriate word level segmentations (i.e., the exact time of the word boundaries) are known. | with increasing amounts of raw speech data being made available, it is both time consuming and expensive to accurately segment every word for every given sentence. | contrasting |
| train_20412 | As the TIMIT corpus provides phone level segmentations, P_t is observed during training. | for reasons that will become clear in the next section, we treat P_t as hidden but make it the parent of a rv C_t, with p(c_t = 1 \| p_t) = δ_{l_t = p_t}, where l_t is obtained from the transcriptions (l_t ∈ D_{P_t}). | contrasting |
| train_20413 | From the perspective of a transcriber, this simulates the task of going through an utterance and identifying only one frame that belongs to each particular phone without having to identify the phone boundary. | to the task of determining the phone boundary, identifying one frame per word unit is much simpler, less prone to error or disagreement, and less costly (Greenberg, 1995). | contrasting |
| train_20414 | Most high-tech AAC devices provide the user with an electronic letter and word board to input messages which are output via speech synthesis. | even with substantial user interface optimization, communication rate is often less than 10 words per minute (Newell et al., 1998) as compared to about 150-200 words per minute for unimpaired speech. | contrasting |
| train_20415 | However, we have shown that despite significant cognitive load, the reduction in keystroke savings dominates the effect on output rate. | to earlier studies, our basic method showed a significantly improved communication rate over no prediction. | contrasting |
| train_20416 | We focus on pitch resets in the ROIs and thus obtain pitch contours for the left and right tonal syllables for each ROIs. | the corpus transcription does not provide time annotations for those tonal syllables. | contrasting |
| train_20417 | Considering the difficulty in obtaining high quality transcriptions, some researchers proposed speech summarization systems with non-lexical features (Inoue et al., 2004; Koumpis and Renals, 2005; Maskey and Hirschberg, 2003; Maskey and Hirschberg, 2006). | there does not exist any empirical study on speech summarization without lexical features for Mandarin Chinese sources. | contrasting |
| train_20418 | Most commonly, we use sentences to model individual pieces of information. | more NLP applications require us to define text units smaller than sentences, essentially decomposing sentences into a collection of phrases. | contrasting |
| train_20419 | In the PYRAMID scheme for manual evaluation of summaries (Nenkova and Passonneau, 2004), machine-generated summaries were compared with human-written ones at the nugget level. | automatic creation of the nuggets is not trivial. | contrasting |
| train_20420 | The process of creating nuggets has been automated and we can assume a certain level of consistency based on the usage of the syntactic parser. | a more important issue emerges. | contrasting |
| train_20421 | The first round initially appears successful because the two annotators had 100% agreement on nugget groups and their corresponding scores. | c2, the novice nuggetizer, was much more conservative than c1, because only 10 nugget groups were created. | contrasting |
| train_20422 | Esuli and Sebastiani (2006) determine the polarity (positive/negative/objective) of word senses in WordNet. | there is no evaluation as to the accuracy of their approach. | contrasting |
| train_20423 | Not all WordNet relations we use are subjectivity-preserving to the same degree: for example, hyponyms (such as simpleton) of objective senses (such as person) do not have to be objective. | we aim for high graph connectivity and we can assign different weights to different relations to reflect the degree to which they are subjectivity-preserving. | contrasting |
| train_20424 | Many approaches to opinion, sentiment, and subjectivity analysis rely on lexicons of words that may be used to express subjectivity. | words may have both subjective and objective senses, which is a source of ambiguity in subjectivity and sentiment analysis. | contrasting |
| train_20425 | These methods acquire contextual information directly from unannotated raw text, and senses can be induced from text using some similarity measure (Lin, 1997). | automatically acquired information is often noisy or even erroneous. | contrasting |
| train_20426 | Comparing with tagging or chunking, parsing is relatively expensive and time-consuming. | in our method parsing is not performed in real time when we disambiguate words. | contrasting |
| train_20427 | names (Chen Yao) Shijiazhuang-city Branch, the second sales department) and (Science and Technology Commission of China, National Institution on Scientific Information Analysis). | it sometimes over-generalized to long words. | contrasting |
| train_20428 | To borrow broad terminology from the Optimality Theory literature (Prince and Smolensky, 1993), such models incorporated faithfulness features, capturing the ways in which successive forms remained similar to one another. | each language has certain regular phonotactic patterns which constrain these changes. | contrasting |
| train_20429 | Only the words in the language at the root of the tree, if any, are explicitly encouraged to be well-formed. | we incorporate constraints on markedness for each language with both general and branch-specific constraints on faithfulness. | contrasting |
| train_20430 | Our system's reconstruction had an edit distance of 3.02 to the truth against 3.10 for BCLKG. | this difference was not significant (p = 0.15). | contrasting |
| train_20431 | Correlation among θ_i and θ_j, i ≠ j, cannot be modeled directly, only through the normalization in step 2. | lN distributions (Aitchison, 1986) provide a natural way to model such correlation. | contrasting |
| train_20432 | In MCTAG the derivation trees are often drawn with identifiers of entire tree sets as the nodes of the tree because the lexical locality constraints require that each elementary tree set be the derivational child of only one other tree set. | if we elaborate the derivation tree to include a node for each tree in the grammar rather than only for each tree set, we can see a stark contrast in the derivational distance (Figure 3: an example SL-MCTAG grammar that generates the language ww, and an associated derivation tree demonstrating an arbitrarily long derivational distance between the trees of a given tree set and their nearest common ancestor). | contrasting |
| train_20433 | The same complexity analysis applies for restricted V-TAG. | we can provide a somewhat tighter bound by noting that the rank r of the grammar (how many tree sets adjoin in a single tree) and the fan-out f of the grammar (how many trees may be in a single tree set) are limited by t. That is, a complete derivation containing \|D\| tree sets can contain no more than t\|D\| individual trees and also no more than rf\|D\| individual trees. | contrasting |
| train_20434 | DMV was the first unsupervised dependency grammar induction system to achieve accuracy above a right-branching baseline. | dMV is not able to capture some of the more complex aspects of language. | contrasting |
| train_20435 | The primary difference between EVG and DMV is that DMV uses valence information to determine the number of arguments a head takes but not their categories. | eVG allows different distributions over arguments for different valence slots. | contrasting |
| train_20436 | Maximum likelihood estimation provides a point estimate of θ. | often we want to incorporate information about θ by modeling its prior distribution. | contrasting |
| train_20437 | Transductive graph-based regularization has been applied to large-margin learning on structured data (Altun et al., 2005). | scalability quickly becomes a problem with these approaches; we solve that issue by working on transitive closures as opposed to entire graphs. | contrasting |
| train_20438 | Matches can be exact or fuzzy; the latter is similar to the identification of graph neighborhoods in our approach. | our GBL scheme propagates similarity scores not just from known to unknown sentences but also indirectly, via connections through other unknown sentences. | contrasting |
| train_20439 | Various filtering techniques, such as (Johnson et al., 2007) and (Chen et al., 2008), have been applied to eliminate a large portion of the translation rules that were judged unlikely to be of value for the current translation. | these approaches were only able to improve the translation quality slightly. | contrasting |
| train_20440 | The phrase pairs "ihre kinder nicht → their children are not" and "ihre kinder nicht → their children" are both likely also to appear in the phrase table and the former has greater estimated probability. | the language model would prefer the latter in this example because the sentence "They love their children are not." | contrasting |
| train_20441 | It is difficult to adjust the system to work differently. | as the triangulated filtering procedure does not consider probability distributions in the models, it is possible to further filter the tables according to the probabilities. | contrasting |
| train_20442 | Methodology for eliciting judgments: The obvious way to evaluate the precision of our algorithm is to have human annotators judge each output item as to whether it is a DE operator or not. | there are some methodological issues that arise. | contrasting |
| train_20443 | We therefore encouraged the judges to try to construct sentences wherein the arguments for candidate DE operators were drawn from a set of phrases and restricted replacements we specified (example: 'singing' vs 'singing loudly'). | improvisation was still required in a number of cases; for example, the candidate 'act', as either a noun or a verb, cannot take 'singing' as an argument. | contrasting |
| train_20444 | One final remark regarding the annotation: some decisions still seem uncertain, since various factors such as context, Gricean maxims, what should be presupposed and so on come into play. | we take comfort in a comment by Eugene Charniak (personal communication) to the effect that if a word causes a native speaker to pause, that word is interesting enough to be included. | contrasting |
| train_20445 | We will refer predicates such as word as observed because they are known in advance. | is-Predicate is hidden because we need to infer it at test time. | contrasting |
| train_20446 | This type of algorithm could also be realised for an ILP formulation of SRL. | it would require us to write a dedicated separation routine for each type of constraint we want to add. | contrasting |
| train_20447 | 33% of the test items in Collins and Singer (1999) were people, as opposed to 21% of ours. | even without the pronoun features, that is, using the same feature set, our system scores equivalently to the EM model, at 83% (this score is on dev, 25% people). | contrasting |
| train_20448 | An interesting avenue of research is to construct the vocabulary tree based on WordNet, as a way to inject independent prior knowledge into the model. | wordNet has a low coverage problem, i.e. | contrasting |
| train_20449 | Because the CMQD model can easily hypothesize implausible degradations, we see the MAP increases modestly with a few degradations, but then MAP decreases. | the MAP of the phrase-based system (PBQD-Fac) increases through to 500 query degradations using multigrams. | contrasting |
| train_20450 | These features provide the means to link messages that may not have sufficient lexical overlap but are nevertheless likely to be topically related. | our work is different from them in several aspects: (1) They treat individual messages as the basic elements for clustering, and ignore the social and temporal contexts of the messages. | contrasting |
| train_20451 | The lack of supervised labels makes it even more important to leverage rich features and global dependencies. | existing systems use directed generative models (Creutz and Lagus, 2007; Snyder and Barzilay, 2008b), making it difficult to extend them with arbitrary overlapping dependencies that are potentially helpful to segmentation. | contrasting |
| train_20452 | In this way, we can simultaneously emphasize that a lexicon should contain few unique morphemes, and that those morphemes should be short. | the lexicon prior alone incorrectly favors the trivial segmentation that shatters each word into characters, which results in the smallest lexicon possible (single characters). | contrasting |
| train_20453 | In this regard, DELORTRANS1 is suitable for POS tagging since deleting a word often results in an ungrammatical sentence. | in morphology, a word less a character is often a legitimate word too. | contrasting |
| train_20454 | This makes sense: if a rule has a variable that can be filled by any English preposition, there is a risk that an incorrect preposition will fill it. | splitting at a period is a safe bet, and frees the model to use rules that dig deeper into NP and VP trees when constructing a top-level S. Table 5 shows weights for generated English nonterminals: SBAR-C nodes are rewarded and commas are punished. | contrasting |
| train_20455 | The work in this phase is cubic in sentence length. | lexical rules in LNF can be applied without binarization, because they only apply to particular spans that contain the appropriate lexical items. | contrasting |
| train_20456 | We can have a simple rule to achieve this. | in reality, there are many possible children for a verb. | contrasting |
| train_20457 | Therefore, they would need to rely on reorder units that are likely not violating "phrase" boundaries. | since we reorder both training and test data, our system operates in a matched condition. | contrasting |
| train_20458 | Although we use manually written rules in this study, it is possible to learn our rules automatically from alignments, similarly to Habash, 2007. | unlike Habash, 2007, our manually written rules handle unseen children and their order naturally because we have a default precedence weight and order type, and we do not need to match an often too specific condition, but rather just treat all children independently. | contrasting |
| train_20459 | Intuitively, one solution is to extend the feature set by considering both boundary words, forming a more complete boundary description. | this method is still based on lexicalized features, which causes data sparseness problem and fails to generalize. | contrasting |
| train_20460 | "plans", "events" or "meetings"). | such features would be treated as unseen by the current ME model, since the training data can not possibly cover all such similar cases. | contrasting |
| train_20461 | For example, in phrase-based SMT systems (Koehn et al., 2003; Koehn, 2004), distortion model is used, in which reordering probabilities depend on relative positions of target side phrases between adjacent blocks. | distortion model can not model long-distance reordering, due to the lack of context information, thus is difficult to predict correct orders under different circumstances. | contrasting |
| train_20462 | Boundary POS is considered in LABTG only when source phrases are not syntactic phrases. | to the previous works, we present a reordering model for BTG that uses bilingual information including class-level features of POS and word classes. | contrasting |
| train_20463 | Moreover, compared with the Treebank Chinese tagset, the CKIP tagset provides more fine-grained tags, including many tags with semantic information (e.g., Nc for place nouns, Nd for time nouns), and verb transitivity and subcategorization (e.g., VA for intransitive verbs, VC for transitive verbs, VK for verbs that take a clause as object). | using the POS features in combination with the lexical features in target language will cause another sparseness problem in the phrase table, since one source phrase would map to multiple target ones with different POS sequences. | contrasting |
| train_20464 | The first sentence contains positive opinion, the second negative opinion. | wishful statements like the third sentence are often annotated as non-opinion-bearing in sentiment analysis corpora (Hu and Liu, 2004; Ding et al., 2008), even though they clearly contain important information. | contrasting |
| train_20465 | The 11 topics in Section 2.1 were manually predefined based on domain knowledge. | in this section we applied Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to identify the latent topics in the full set of 89,574 English wishes in an unsupervised fashion. | contrasting |
| train_20466 | In terms of average AUC across folds (Table 5), [Words + Templates] is also the best. | due to the small size of this corpus, the AUC values have high variance, and the difference between [Words + Templates] and [Words] is not statistically significant under a paired t-test (p = 0.16). | contrasting |
| train_20467 | Many researchers have focused the related problem of predicting sentiment and opinion in text (Pang et al., 2002; Wiebe and Riloff, 2005), sometimes connected to extrinsic values like prediction markets (Lerman et al., 2008). | to text regression, text classification comprises a widely studied set of problems involving the prediction of categorial variables related to text. | contrasting |
| train_20468 | In order to minimize the efforts required in the domain transfer, we often expect to use p_s(x, y) to approximate p_t(x, y). | data distribution are often varied with the domains. | contrasting |
| train_20469 | FMM outperforms the SIM method by an average of 4% increase in performance (13% improvement after 10 iterations). | both the FMM and the SIM method are able to outperform the baseline method. | contrasting |
| train_20470 | For example, voiceless plosive sounds such as p, t in English, tend to map to both voiced (such as b, d) and voiceless sounds in Chinese. | if the sound is voiceless in Chinese, its backtrack English sound must be voiceless. | contrasting |
| train_20471 | Unfortunately, we could not find a published Chinese dataset. | our system achieved similar results to other systems, over a different dataset with similar number of training examples. | contrasting |
| train_20472 | Consequently, in a word like vintage [vɪntɪdʒ], we can rule out a syllabification like [vɪ-ntɪdʒ] because [n] is more sonorant than [t]. | ssP does not tell us whether to prefer [vɪn-tɪdʒ] or [vɪnt-ɪdʒ]. | contrasting |
| train_20473 | Both the Legality Principle and SSP tell us which onsets and codas are permitted in legal syllables, and which are not. | neither theory gives us any guidance when deciding between legal onsets. | contrasting |
| train_20474 | The results of the competitive approaches that have been quoted so far (with the exception of tsylb) are not directly comparable, because neither the respective implementations, nor the actual train-test splits are publicly available. | we managed to obtain the English and German data sets used by Goldwater and Johnson (2005) in their study, which focused primarily on unsupervised syllabification. | contrasting |
| train_20475 | Ideally, a named entity should correspond to a phrase in the constituency tree. | parse trees will occasionally lack some explicit structure, such as with right branching NPs. | contrasting |
| train_20476 | Because we only allow named entity derivations which we have seen in the data, nested entities are impossible. | there is clear benefit in a representation allowing nested entities. | contrasting |
| train_20477 | Because the underlying grammar (ignoring the additional named entity information) was the same for both the joint and baseline parsers, it is the case that whenever a sentence is unparseable by either the baseline or joint parser it is in fact unparsable by both of them, and would affect the parse scores of both models equally. | the CRF is able to named entity tag any sentence, so these unparsable sentences had an effect on the named entity score. | contrasting |
| train_20478 | Previous work on linguistic annotation pipelines (Finkel et al., 2006; Hollingshead and Roark, 2007) has enforced consistency from one stage to the next. | these models are only used at test time; training of the components is still independent. | contrasting |
| train_20479 | As pointed out by Gildea and Temperley (2007), however, finding the unconstrained minimal-length linearization is a well-studied problem with an O(n^1.6) solution (Chung, 1984). | this approach does not take into account constraints of projectivity or mild context-sensitivity. | contrasting |
| train_20480 | In the unsupervised setting, a variety of successful systems have leveraged lexical cohesion (Halliday and Hasan, 1976), the idea that topically-coherent segments display consistent lexical distributions (Hearst, 1994; Utiyama and Isahara, 2001; Eisenstein and Barzilay, 2008). | such systems almost invariably focus on linear segmentation, while it is widely believed that discourse displays a hierarchical structure (Grosz and Sidner, 1986). | contrasting |
| train_20481 | Ideally we would like to choose the segmentation y = argmax_y p(w\|y)p(y). | we must deal with the hidden language models Θ and scale-level assignments z. | contrasting |
| train_20482 | Given language models Θ, each w_t can be thought of as a draw from a Bayesian mixture model, with z_t as the index of the component that generates w_t. | as we are marginalizing the language models, standard mixture model inference techniques do not apply. | contrasting |
| train_20483 | Baseline systems: As noted in Section 2, there is little related work on unsupervised hierarchical segmentation. | a straightforward baseline is a greedy approach: first segment at the top level, and then recursively feed each top-level segment to the segmenter again. | contrasting |
| train_20484 | When BACKGROUND or DOCSPECIFIC topics are chosen, the model works exactly as in TOPICSUM. | when the CONTENT topic is drawn, we must decide whether to emit a general content word (from φ^C_0) or from one of the specific content distributions (from one of φ^C_i for i = 1, . | contrasting |
| train_20485 | However, in LDA each word's topic assignment is conditionally independent, following the bag of words view of documents. | our constraints on how topics are assigned let us connect word distributional patterns to document-level topic structure. | contrasting |
| train_20486 | The normalization constant of this distribution is unknown, meaning that we cannot directly compute and invert the cumulative distribution function to sample from this distribution. | the distribution itself is univariate and unimodal, so we can expect that an MCMC technique such as slice sampling (Neal, 2003) should perform well. | contrasting |
| train_20487 | Thus, we expect to reduce the edit overhead in proportion with ∆. | allowing the use of a right context leads to the current hypothesis lagging behind the gold standard. | contrasting |
| train_20488 | This gives the user more flexibility to "say anything at any time". | in recent evaluations of one-exchange LBVS we have found that locations are recognized with much higher accuracy than listing names. | contrasting |
| train_20489 | designed to produce a minimal number of local area LMs. | if the user is near the edge of the pre-defined local area, the selected LM may exclude businesses close to the user and include businesses far away from the user. | contrasting |
| train_20490 | This approach has the advantage that listings included in the language model will certainly be close to the user. | on-the-fly computation of geo-centric language models for large numbers of users is too computationally demanding given current database and processing technology. | contrasting |
| train_20491 | When we combine local and nationwide LMs using LM union, we get small increases in sentence accuracy for nationwide queries compared to local LMs alone. | sentence accuracy for local listings decreases. | contrasting |
| train_20492 | For languages with complex letter-to-sound mappings, such dictionaries are typically written by hand. | for morphologically rich languages, such as MSA, pronunciation dictionaries are difficult to create by hand, because of the large number of word forms, each of which has a large number of possible pronunciations. | contrasting |
| train_20493 | Therefore, entries in the decoding pronunciation dictionary consist of undiacritized words that are mapped to a set of phonetically represented diacritizations. | every entry in the training pronunciation dictionary is a fully diacritized word mapped to a set of possible context-dependent pronunciations. | contrasting |
| train_20494 | As noted in Section 1, pronunciation dictionaries for ASR systems are usually written by hand. | arabic's morphological richness makes it difficult to create a pronunciation dictionary by hand since there are a very large number of word forms, each of which has a large number of possible pronunciations. | contrasting |
| train_20495 | Figure 1 shows two lattices that encode the most linguistically plausible ways of segmenting two prototypical German compounds with compositional meanings. | while these words are structurally quite similar, translating them into English would seem to require different amounts of segmentation. | contrasting |
| train_20496 | For each translation, we have access to the phrases used by the decoder to produce that output. | there may be islands of out-of-vocabulary (OOV) words that were not in the phrase table and not translated by the decoder as a phrase. | contrasting |
| train_20497 | It has more than 200 million speakers around the world. | bangla has few available language resources, and lacks resources for machine translation. | contrasting |
| train_20498 | We gain 35.6%, 5.2%, and 9.4% relative improvements, respectively. | the results tend to be worse when 20% and 80% training data were used initially, with 11.6% and 3.0% minimal relative loss. | contrasting |
| train_20499 | With the backpointer (N′, x′, y′) = BP(N, x, y, r, i), these special arcs are introduced as: are a mix of target language words and lattice pointers (Figure 4, top). | each still represents the entire search space of all translation hypotheses covering the span. | contrasting |
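
The preview above follows a simple schema: an `id`, a `sentence1`/`sentence2` pair, and a 4-class `label` (only `contrasting` appears in these rows; most `sentence2` values begin lowercase, consistent with a discourse connective having been stripped during dataset construction). As a minimal sketch of how a split with this schema might be loaded and filtered with the Hugging Face `datasets` library; the repository id `your-org/contrasting-pairs` is a hypothetical placeholder, not the dataset's actual path:

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the dataset's real path.
ds = load_dataset("your-org/contrasting-pairs", split="train")

# `label` is a 4-class string field; keep only the pairs labeled
# "contrasting", as shown in the preview above.
contrasting = ds.filter(lambda ex: ex["label"] == "contrasting")

# Inspect a few id / sentence1 / sentence2 triples.
for ex in contrasting.select(range(3)):
    print(ex["id"], "::", ex["sentence1"][:60], "->", ex["sentence2"][:60])
```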