id: string (lengths 7 to 12)
sentence1: string (lengths 6 to 1.27k)
sentence2: string (lengths 6 to 926)
label: string (4 classes)
train_600
The paucity of the resources suggests that statistical techniques are not suitable for the task.
lexicon-based approaches are highly resource-dependent.
contrasting
train_601
Existing methods that are used to extract loanwords from Korean corpora (Myaeng and Jeong, 1999; Oh and Choi, 2001) use the phonetic differences between conventional Korean words and loanwords.
these methods require manually tagged training corpora, and are expensive.
contrasting
train_602
In phrase (a), there is no inflection, and the suffix is easily segmented.
in phrase (b), a consonant insertion has occurred.
contrasting
train_603
If both of these are consonants, we determine that a vowel was eliminated.
a number of nouns end with two consonants inherently, and therefore, we referred to a textbook on Mongolian grammar (Bayarmaa, 2002) to produce 12 rules to determine when to insert a vowel between two consecutive consonants.
contrasting
train_604
For example, if any of "м", "г", "л", "б", "в", or "р" are at the end of a noun, a vowel is inserted.
if any of "ц", "ж", "з", "с", "д", "т", "ш", "ч", or "х" are the second to last consonant in a noun, a vowel is not inserted.
contrasting
train_605
We used the Hepburn system, because its representation is similar to that used in Mongolian, compared to the Kunrei system.
we adapted 11 Mongolian romanization expressions to the Japanese Hepburn romanization.
contrasting
train_606
Because the N-gram retrieval method does not consider the order of the characters in a target word, the accuracy of matching two words is low, but the computation time is fast.
because DP matching considers the order of the characters in a target word, the accuracy of matching two words is high, but the computation time is slow.
contrasting
train_607
Second, the factorization into four separate steps makes it theoretically possible to modify each step independently in order to investigate the effects of the various modeling assumptions.
the mathematical statement of the model and the approximations necessary for the search procedure make it unclear how to modify the model in any interesting way.
contrasting
train_608
Morphological Behavior Classes: The MBCs are variant-independent, so in theory no changes needed to be implemented.
as Levantine is our first dialect, we expand the MBCs to include two AMs not found in MSA: the aspectual particle and the postfix negation marker.
contrasting
train_609
Previous studies have shown that allowing the parser to resolve POS tag ambiguity does not improve performance.
for grammar formalisms which use more fine-grained grammatical categories, for example TAG and CCG, tagging accuracy is much lower.
contrasting
train_610
This leaves the parser the task of managing the very large parse space resulting from the high degree of lexical category ambiguity (Hockenmaier and Steedman, 2002; Hockenmaier, 2003).
one of the original motivations for supertagging was to significantly reduce the syntactic ambiguity before full parsing begins (Bangalore and Joshi, 1999).
contrasting
train_611
This is because at the lowest levels of ambiguity the extra POS tags can be treated as being of similar reliability.
at higher levels of ambiguity many POS tags are added which are unreliable and should not be trusted equally.
contrasting
train_612
suffixes and character types) of the unknown words.
this approach has limitations in available information.
contrasting
train_613
The optimal Λ can be obtained by quasi-Newton methods using the above L_Λ and ∂L_Λ/∂λ_{i,j}, and we use L-BFGS (Liu and Nocedal, 1989) for this purpose.
the calculation is intractable because Z_Λ(w_l) (see Equation 9) in Equation (16) and a term in Equation (17) contain summations over all the possible POS tags.
contrasting
train_614
This procedure is iterated for all the words in the corpus.
this approach is not applicable to our experiments because those words that appear only once in the corpus do not have global information and are useless for learning the global model, so we use the two-fold cross validation method.
contrasting
train_615
Most speech recognition systems perform well when trained for a particular accent (Lawson et al., 2003).
with call centers now being located in different parts of the world, the requirement of handling different accents by the same speech recognition system further increases word error rates.
contrasting
train_616
In the absence of tagged dataset we could not quantify our observation.
when we compared the automatically generated topic specific information to the extracted information from the hand labeled calls, we noted that almost all the issues have been captured.
contrasting
train_617
Being able to bootstrap more training data is of course very useful.
we need to dig deeper to investigate how the increase in data affected the machine learning.
contrasting
train_618
We assume the one with the higher confidence is chosen.
we don't have enough data to reliably estimate rule confidences for the original GTag rules; so, for the purposes of VerbOcean rule integration, we treated the original VerbOcean rules as having greater confidence than the original GTag rules in case of a conflict (i.e., a preference for the more specific rule), or vice versa.
contrasting
train_619
Discriminative learning from reference translations is inherently problematic because standard discriminative methods need to know which outputs are correct and which are not.
a proposed translation that differs from a reference translation need not be incorrect.
contrasting
train_620
In principle, w could have been tuned by maximizing conditional probability or maximizing margin.
these two options require either marginalization or numerical optimization, neither of which is tractable over the space of output sentences y and correspondences h. In contrast, the perceptron algorithm requires only a decoder that computes f (x; w).
contrasting
train_621
The problem with (c) is that the correspondence h contains an incorrect alignment (', a).
since h is unobserved, the training procedure has no way of knowing this.
contrasting
train_622
(2005) compared their results with Model 4 using "intersection" by looking at AER (with the "Sure" versus "Possible" link distinction), and restricted themselves to considering 1-to-1 alignments.
"union" and "refined" alignments, which are many-to-many, are what are used to build competitive phrasal SMT systems, because "intersection" performs poorly, despite having been shown to have the best AER scores for the French/English corpus we are using (Och and Ney, 2003).
contrasting
train_623
In this paper, we propose a measure of similarity to capture this intuition.
to anchoring, our second algorithm, called the clustering approach, takes a top-down view.
contrasting
train_624
Current approaches often employ machine learning techniques and require supervised data.
many languages lack such resources.
contrasting
train_625
Most successful approaches to NER employ machine learning techniques, which require supervised training data.
for many languages, these resources do not exist.
contrasting
train_626
Moreover, it is often difficult to find experts in these languages both for the expensive annotation effort and even for language specific clues.
comparable multilingual data (such as multilingual news streams) are becoming increasingly available (see section 4).
contrasting
train_627
Transliteration based approaches require a good model, typically handcrafted or trained on a clean set of transliteration pairs.
time sequence similarity based approaches would incorrectly match words which happen to have similar time signatures (e.g., Taliban and Afghanistan in recent news).
contrasting
train_628
In turn, the chosen positive examples contain other characteristic substring pairs, which will be used by the model to select more positive examples on the next round, and so on.
if the initial set is too small, too few of the characteristic transliteration features are extracted to select a clean enough training set on the next round of training.
contrasting
train_629
Moreover, the difference between our method and Bunescu and Mooney (2005) is that their kernel is defined on the shortest path between two entities instead of the entire subtrees.
the path does not maintain the tree structure information.
contrasting
train_630
The small form factor of such devices limits the amount of text that can be displayed.
conciseness exists in tension with completeness.
contrasting
train_631
In particular contexts, it contributes also to the induction of the entailment relation between win and play, as John McEnroe has the property of playing.
as the example shows, classes relevant for acquiring selectional preferences (such as human) are explicit, as they do not depend on the context.
contrasting
train_632
Indeed, this latter property may be relevant only in the context of the previous sentence.
there is a way to overcome this limitation: agentive nouns such as runner make explicit this kind of property and often play subject roles in sentences.
contrasting
train_633
suggests that compose entails write.
it may happen that these correctly detected entailments are accidental, that is, the detected relation is only valid for the given text.
contrasting
train_634
1 reports the happens-before lexico-syntactic patterns (P hb ) as proposed in (Chklovski and Pantel, 2004).
to what is done in (Chklovski and Pantel, 2004) we decided to directly count patterns derived from different verbal forms and not to use an estimation factor.
contrasting
train_635
The results for the "Adventure" corpus are in general better than the results for the "Thief" corpus.
this is due to the "Thief" corpus being smaller and having an infrequent number of "Excellent" and "Poor" stories, as shown in Table 1.
contrasting
train_636
Parallelization might not uniformly reduce training time because different label classifiers train at different rates.
parallelization uniformly reduces memory usage because each label classifier trains only on inferences whose consequent item has that label.
contrasting
train_637
To our knowledge, most top ranked QA systems in TREC are supported by effective NER modules which may identify and classify more than 20 types of named entities (NE), such as abbreviation, music, movie, etc.
developing such a named entity recognizer is not trivial.
contrasting
train_638
This may be also used to explain why CorME outperforms ApprMatch in Table 1.
removing answer re-ranking doesn't affect the results much.
contrasting
train_639
Between the heads of two argument structures there can exist lexical chains of size 0, meaning that the heads of the two structures are in the same synset.
the type of the start structure can be different than the type of the target structure.
contrasting
train_640
Computational systems that learn to transform natural language sentences into formal meaning representations have important practical applications in enabling user-friendly natural language communication with computers.
most of the research in natural language processing (NLP) has been focused on lower-level tasks like syntactic parsing, word-sense disambiguation, information extraction etc.
contrasting
train_641
It is difficult for rule-based methods or even statistical featurebased methods to capture the full range of NL contexts which map to a semantic concept because they tend to enumerate these contexts.
kernel methods allow a convenient mechanism to implicitly work with a potentially infinite number of features which can robustly capture these range of contexts even when the data is noisy.
contrasting
train_642
In each iteration, the positive examples from the previous iteration are first removed so that new positive examples which lead to better correct derivations can take their place.
negative examples are accumulated across iterations for better accuracy because negative examples from each iteration only lead to incorrect derivations and it is always good to include them.
contrasting
train_643
Similarly, in the task of automatic routing of customer emails and automatic answering of some of these, the detection of threats of legal action could be useful.
systems that use cue phrases usually rely on manually compiled lists, the acquisition of which is time-consuming and error-prone and results in cue phrases which are genre-specific.
contrasting
train_644
frame usage improves the generalization ability of the learning algorithm).
the results obtained without the frame information are very poor.
contrasting
train_645
The expected behaviour may only be achieved if : • the system knows in which volume the search is to be performed, • the system knows where, in the volume entry, the headword is to be found, • the system is able to produce a presentation for the retrieved XML structures.
as the Jibiki platform is entirely independent of the underlying dictionary structure (which makes it highly adaptable), the expected result may only be achieved if additional metadata is added to the system. (Figure 9: Excerpt of a volume descriptor.)
contrasting
train_646
After each update, the resulting XML structure is stored in the dictionary database.
it is not available to other users until it is marked as finished by the contributor (by clicking on the save button).
contrasting
train_647
We evaluated translation quality using a relatively simple translation system.
more sophisticated systems can draw equal benefit from the same lexical resources.
contrasting
train_648
This approach is based on the simple assumption that if two words are mutual translations, then their most frequent collocates are likely to be mutual translations as well.
the approach requires large comparable corpora, the collection of which presents non-trivial challenges.
contrasting
train_649
The operational value of these keyword phrases was determined by the access they provide to video segments in a large archive of oral histories.
our technique is not limited to this application.
contrasting
train_650
The hybrid method we developed relies on the parser Fips (Wehrli, 2004), which implements the Government and Binding formalism and supports several languages (besides the ones mentioned in Section 2). "Ideally, in order to identify lexical relations in a corpus one would need to first parse it to verify that the words are used in a single phrase structure."
in practice, freestyle texts contain a great deal of nonstandard features over which automatic parsers would fail.
contrasting
train_651
A rule like (e) is particularly unfortunate, since it allows the word were to be added without any other evidence that the VP should be in passive voice.
the composed-rule derivation of C 4 incorporates more linguistic evidence in its rules, and re-orderings are motivated by more syntactic context.
contrasting
train_652
Also, on the one hand, our AlTemp system represents quite mature technology, and incorporates highly tuned model parameters.
our syntax decoder is still work in progress: only one model was used during search, i.e., the EM-trained root-normalized SBTM, and as yet no language model is incorporated in the search (whereas the search in the AlTemp system uses two phrase-based translation models and 12 other feature functions).
contrasting
train_653
These were used to model generic word-alignment patterns such as noun-adjective re-ordering between English and French (Och, 1998).
we induce fine-grained partitions of the lexicon, conceptually closer to automatic lemmatisation, optimised specifically to assign translation probabilities.
contrasting
train_654
(2004) used an MRF to embody hard constraints within semi-supervised clustering.
we use an iterative EM algorithm to learn soft constraints within the 'prior' monolingual space based on the results of clustering with bilingual statistics.
contrasting
train_655
The disparate elements of such constituents would usually be aligned to the same word in a translation.
when our hierarchical aligner saw two words linked to one word, it ignored one of the two links.
contrasting
train_656
This statement is consistent with our findings.
most of the knowledge loss could be prevented by allowing a gap.
contrasting
train_657
Zhang and Gildea (2004) found that their alignment method, which did not use external syntactic constraints, outperformed the model of Yamada and Knight (2001).
yamada and Knight's model could explain only the data that would pass the nogap test in our experiments with one constraining tree (first column of Table 3).
contrasting
train_658
There is in general no known analytic form for the density of PY(d, θ, G_0) when the vocabulary is finite.
this need not deter us as we will instead work with the distribution over sequences of words induced by the Pitman-Yor process, which has a nice tractable form and is sufficient for our purpose of language modelling.
contrasting
train_659
Chat language text appears frequently in chat logs of online education (Heard-White, 2004), customer relationship management (Gianforte, 2003), etc.
web-based chat rooms and BBS systems are often abused by solicitors of terrorism, pornography and crime (McCullagh, 2004).
contrasting
train_660
Performance equivalent to the methods in existence is achieved consistently.
the issue of normalization is addressed in their work.
contrasting
train_661
This proves that phonetic mapping models used in XSCM are helpful in addressing the dynamic problem.
quality of XSCM in this experiment still drops by 0.05 on the six time-varying test sets.
contrasting
train_662
In these statistical models, language models are essential for word segmentation disambiguation.
an uncompressed language model is usually too large for practical use since all realistic applications have memory constraints.
contrasting
train_663
Compared to M1, M2 decreases 0.98% of F-score.
in the two web data sets (i.e., TW and CN), M2 is much better than M1.
contrasting
train_664
Comparing Tables 4 and 6 shows that while partial output almost doubles coverage, this comes at a price of a severe drop in quality (BLEU score drops from 0.7147 to 0.5590).
comparing Tables 5 and 6 shows that lexical smoothing achieves a similar increase in coverage with only a very slight drop in quality.
contrasting
train_665
Using hand-crafted grammar-based generation systems (Langkilde-Geary, 2002; Callaway, 2003), it is possible to achieve very high results.
hand-crafted systems are expensive to construct and not easily ported to new domains or other languages.
contrasting
train_666
Several GRE algorithms have addressed the issue of generating locative expressions (Dale and Haddock, 1991; Horacek, 1997; Gardent, 2002; Krahmer and Theune, 2002; Varges, 2004).
all these algorithms assume the GRE component has access to a predefined scene model.
contrasting
train_667
We can apply a description like the circle near the square to either circle if none other were present.
if both are present we can interpret the reference based on relative proximity to the landmark the square.
contrasting
train_668
Usually when unlucky errors occur, the system generates a reasonable query and an appropriate answer type, and at least one passage containing the right answer is returned.
there may be returned passages that have a larger number of query terms and an incorrect answer of the right type, or the query terms might just be physically closer to the incorrect answer than to the correct one.
contrasting
train_669
If the scores associated with candidate answers (in both directions) were true probabilities, then a Bayesian approach would be easy to develop.
they are not in our system.
contrasting
train_670
The Natural Language Generation (NLG) community has produced over the years a considerable number of generic sentence realization systems: Penman (Matthiessen and Bateman, 1991), FUF (Elhadad, 1991), Nitrogen (Knight and Hatzivassiloglou, 1995), Fergus (Bangalore and Rambow, 2000), HALogen (Langkilde-Geary, 2002), Amalgam (Corston-Oliver et al., 2002), etc.
when it comes to end-to-end, text-to-text applications - Machine Translation, Summarization, Question Answering - these generic systems either cannot be employed, or, in instances where they can be, the results are significantly below that of state-of-the-art, application-specific systems (Hajic et al., 2002; Habash, 2003).
contrasting
train_671
In the first post-hoc test, we found a significant difference between BEST and WORDS-TRI (t a V:H,p < I:VTxIH 12 ), indicating that there is room for improvement of our ranker.
in considering the top scoring feature sets, we did not find a significant difference between WORDS-TRI and WORDS-BI (t a P:Q, p < H:HPP), from which we infer that the difference among all of WORDS-TRI, ALL-BI, ALL-TRI and WORDS-BI is not significant also.
contrasting
train_672
In this paper, we have presented a method for adapting a language generator to the strengths and weaknesses of a particular synthetic voice by training a discriminative reranker to select paraphrases that are predicted to sound natural when synthesized.
to previous work on this topic, our method can be employed with any speech synthesizer in principle, so long as features derived from the synthesizer's unit selection search can be made available.
contrasting
train_673
MT and CLIR systems rely heavily on bilingual lexicons, which are typically compiled manually.
in view of the current information explosion, it is labor intensive, if not impossible, to compile a complete proper nouns lexicon.
contrasting
train_674
Although word-sense disambiguation methods can be applied, these are not free of errors.
methods based on language-independent representation also have limitations.
contrasting
train_675
(2000) aimed at annotating newswire text for analyzing temporal information.
these previous works are different from ours, because they only dealt with newswire text including a lot of explicit temporal expressions.
contrasting
train_676
They use the handcrafted dictionary and some inference rules to determine the time periods of events.
we do not resort to such a hand-crafted material, which requires much labor and cost.
contrasting
train_677
We used YamCha, the multi-purpose text chunker using Support Vector Machines, as an experimental tool.
no tagging direction or window size improved the performance of classification.
contrasting
train_678
In future work, we will investigate minimal tree edit distance (Bille, 2005) and related formalisms which are defined on tree structures and can therefore model divergences explicitly.
it is an open question whether cross-linguistic syntactic analyses are similar enough to allow for structure-driven computation of alignments.
contrasting
train_679
For example, the correct answers combination to questions showed in figure-1 is "August 12; 118; Barents Sea".
there is also a combination of "Aug. 12, two; U.S." which has higher pointwise mutual information due to the frequently occurring noisy information of "two U.S. submarines" and "two explosions in the area Aug. 12 at the time".
contrasting
train_680
P(F = E) >> P(F ≠ E), proper names with low counts then are encouraged to link to proper names during training; and consequently, conditional probability mass would be more focused on correct name translations.
names are discouraged to produce non-names.
contrasting
train_681
The semantic constraints between "NULL" and any target words can be derived in the same way.
this is chosen for mostly computational convenience, and is not the only way to address the empty word issue.
contrasting
train_682
The advantages of modeling how a target language syntax tree moves with respect to a source language syntax tree are that (i) we can capture the fact that constituents move as a whole and generally respect the phrasal cohesion constraints (Fox, 2002), and (ii) we can model broad syntactic reordering phenomena, such as subject-verb-object constructions translating into subject-object-verb ones, as is generally the case for English and Japanese.
there is also significant amount of information in the surface strings of the source and target and their alignment.
contrasting
train_683
Syntactic methods are an increasingly promising approach to statistical machine translation, being both algorithmically appealing (Melamed, 2004; Wu, 1997) and empirically successful (Chiang, 2005; Galley et al., 2006).
despite recent progress, almost all syntactic MT systems, indeed statistical MT systems in general, build upon crude legacy models of word alignment.
contrasting
train_684
Under certain precise conditions, as described in (Abney, 2004), we can analyze Algorithm 1 as minimizing the entropy of the distribution over translations of U .
this is true only when the functions Estimate, Score and Select have very prescribed definitions.
contrasting
train_685
full re-training on the NIST data).
these were run on the smaller EuroParl corpus.
contrasting
train_686
Experiments on the EuroParl corpus show a decrease in WER.
the selection algorithm applied there is actually supervised because it takes the reference translation into account.
contrasting
train_687
matchWSD is then invoked for c2, which is aligned to only one chunk e3e4e5.
since this chunk has already been examined by c1, with which it is considered as a phrase, no further matching is done for c2.
contrasting
train_688
These approaches have shown good results; particularly those using supervised learning (see Mihalcea et al., 2004 for an overview of state-of-the-art systems).
current approaches rely on limited knowledge representation and modeling techniques: traditional machine learning algorithms and attribute-value vectors to represent disambiguation instances.
contrasting
train_689
Results in Figure 3 show that a-truePred starts off at a higher accuracy and performs consistently better than the a curve.
though a-truePrior starts at a high accuracy, its performance is lower than a-truePred and a after 50% of adaptation examples are added.
contrasting
train_690
In (McCarthy et al., 2004), a method was presented to determine the predominant sense of a word in a corpus.
in (Chan and Ng, 2005), we showed that in a supervised setting where one has access to some annotated training data, the EM-based method in section 5 estimates the sense priors more effectively than the method described in (McCarthy et al., 2004).
contrasting
train_691
(2006) used active learning for 5 verbs using coarse-grained evaluation, and H. T. Dang (2004) employed active learning for another set of 5 verbs.
their work only investigated the use of active learning to reduce the annotation effort necessary for WSD, but did not deal with the porting of a WSD system to a different domain.
contrasting
train_692
Compared to the well known N-gram language models, discriminative language models can achieve more accurate discrimination because they can employ overlapping features and nonlocal information.
discriminative language models have been used only for re-ranking in specific applications because negative examples are not available.
contrasting
train_693
Since the number of parameters in NLM is still large, several smoothing methods are used (Chen and Goodman, 1998) to produce more accurate probabilities, and to assign nonzero probabilities to any word string.
since the probabilities in NLMs depend on the length of the sentence, two sentences of different length cannot be compared directly.
contrasting
train_694
The DLM-PN can be trained by using any binary classification learning methods.
since the number of training examples is very large, batch training has suffered from prohibitively large computational cost in terms of time and memory.
contrasting
train_695
For example, the N-gram language model is considered to be effective in writing evaluation (Burstein et al., 1998; Corston-Oliver et al., 2001).
it becomes very expensive if N > 3, and N-grams only consider continuous sequences of words, which makes them unable to detect the above error "if...will...will".
contrasting
train_696
Some example LSPs discovered from erroneous sentences are <a, NNS> (support:0.39%, confidence:85.71%), <to, VBD> (support:0.11%, confidence:84.21%), and <the, more, the, JJ> (support:0.19%, confidence:0.93%); similarly, we also give some example LSPs mined from correct sentences: <NN, VBZ> (support:2.29%, confidence:75.23%), and <have, VBN, since> (support:0.11%, confidence:85.71%).
other features are abstract and it is hard to derive some intuitive knowledge from the opaque statistical values of these features.
contrasting
train_697
The performance of the statistical language models is often evaluated by perplexity or cross-entropy.
we decided to only report the real ASR performance, because perplexity is not well suited to comparing models that use different lexica, have different OOV rates, and have lexical units of different lengths.
contrasting
train_698
LERs are more comparable across some languages than WERs, as WER depends more on factors such as length, morphological complexity, and OOV of the words.
for within-language and between-model comparisons, the RWERR should still be a valid metric, and is also usable in languages that do not use a phonemic writing system.
contrasting
train_699
Both WER and LER are high considering the task.
standard methods such as adaptation were not used, as the intention was only to study the RWERR of the different approaches.
contrasting