id: string (7–12 characters)
sentence1: string (6–1.27k characters)
sentence2: string (6–926 characters)
label: string (4 classes)
train_13500
The baseline method was not able to insert commas right after the bunsetsu glossed "are floated" or "not decided", but inserted commas at unnatural positions, such as between the bunsetsu glossed "calling himself" and "the vice commander".
our method was able to insert commas properly at such bunsetsu boundaries.
contrasting
train_13501
Unknown words are a hindrance to the performance of hand-crafted computational grammars of natural language.
words with incomplete and incorrect lexical entries pose an even bigger problem because they can be the cause of a parsing failure despite being listed in the lexicon of the grammar.
contrasting
train_13502
For example, the word afwater (to drain) is listed as a first person singular present verb in the Alpino lexicon.
the error miner identifies this word as the reason for the parsing failure of 9 sentences.
contrasting
train_13503
If there is no analysis spanning the whole sentence, the parser finds all parses for each substring and returns what it considers to be the best sequence of non-overlapping parses.
in the context of this experiment, a sentence will be considered successfully parsed only if it receives a full-span analysis.
contrasting
train_13504
That subcategorization frame requires a noun phrase ('the soup' in (5-c)) and a locative NP ('the bowl' in (5-c)).
in some cases, the entries generated by this lexical rule cannot account for other possible usages of the verbs in question.
contrasting
train_13505
Clearly, this baseline is expected to perform worse than both our model and the universal types one since those are able to cover most of the sentences and thus, they are likely to produce more correct dependency relations.
it gives us an idea how much extra quality is gained when coverage improves.
contrasting
train_13506
It is important to note that this paper should be viewed as a case study where we illustrate the results of the application of what we believe to be a good algorithm for dealing with incomplete or incorrect lexical entries-namely, the combination of error mining and LA.
our method is general enough to be applied to other large-scale grammars and languages.
contrasting
train_13507
TIMEX2 and TimeML), publicly available annotated corpora (ACE, TimeBank) and a number of automatic taggers (see, for example, (Mani and Wilson, 2000; Schilder, 2004; Hacioglu et al., 2005; Negri and Marseglia, 2005; Saquete, 2005; Han et al., 2006; Ahn et al., 2007)).
existing corpora have their limitations.
contrasting
train_13508
These schemes were used to annotate corpora that are often used in research on temporal expression recognition and normalisation: the series of corpora used for training and evaluation in the Automatic Content Extraction (ACE) program run in 2004 and 2005. The ACE corpora were prepared for the development and evaluation of systems participating in the ACE program.
the evaluation corpora have never been publicly released, and thus are currently, for all practical purposes, unavailable.
contrasting
train_13509
This is not to say that Wikipedia content is necessarily of low quality; this is an encyclopedia with many people and bots controlling its quality, and there exist manuals of style for authors to help them avoid errors and ambiguity and to ensure maximum consistency.
given the large number of editors with various degrees of fluency and experience in writing and editing, it would not be surprising if some parts of the texts are not perfect.
contrasting
train_13510
Various evaluation metrics have been proposed, and BLEU is now used as the de facto standard metric.
when we consider translation between distant language pairs such as Japanese and English, most popular metrics (e.g., BLEU, NIST, PER, and TER) do not work well.
contrasting
train_13511
(2009) showed that the popular BLEU and NIST do not work well by using the system outputs of the NTCIR-7 PATMT (patent translation) JE task (Fujii et al., 2008).
ROUGE-L (Lin and Hovy, 2003), Word Error Rate (WER), and IMPACT (Echizen-ya and Araki, 2007) worked better.
contrasting
train_13512
Since Pearson's correlation metric assumes linearity, nonlinear monotonic functions can change its score.
Spearman's ρ and Kendall's τ use ranks instead of raw evaluation scores, and simple application of monotonic functions cannot change them (use of other operations such as averaging sentence scores can change them).
contrasting
train_13513
We can combine the above word order metrics with BP, e.g., NKT × BP and NSR × BP.
we cannot expect much from this solution because BP scores do not correlate well with human judgments.
contrasting
train_13514
Perhaps, NSR penalized alternative good translations too much.
one of the NSR-based metrics, NSRP_{1/4}, gave the best Spearman score of 0.947, and the difference between NSRP_α and NKTP_α was small.
contrasting
train_13515
We preferred NSR because it penalizes global word order mistakes much more than does NKT, and as discussed above, global word order mistakes often lead to incomprehensibility and misunderstanding.
on the other hand, they also tried Hamming distance, and summarized their experiments as follows: the Hamming distance seems to be more informative than Kendall's tau for small amounts of reordering.
contrasting
train_13516
LEXUS offers an interface that permits the user to define formats of lexical bases, with a view to enabling the construction of lexical bases according to the LMF model.
it does not allow verification of compliance between the model and the norm.
contrasting
train_13517
These APIs offer the possibility to work on the structure of the LMF base by adding, modifying or deleting components of the LMF model.
there is no interface which facilitates the use of these APIs.
contrasting
train_13518
Rozovskaya and Roth (2010b) show that by introducing artificial article errors into native training data it is possible to improve the performance of the article correction system, when compared to a classifier trained on native data.
to Gamon (2010) and Han et al.
contrasting
train_13519
For every preposition p_i, we train a classifier using only examples that are in L1ConfSet(p_i).
to NegAll, for each source preposition, the negative examples are not all other nine types, but only those that belong in L1ConfSet(p_i).
contrasting
train_13520
This setting is realistic since most words in each sentence were already classified (correctly).
when moving to active learning, the situation changes.
contrasting
train_13521
(i) the greatest improvement has been obtained when the TL uses a poor feature set; and (ii) when the TL baseline model is rich in resources, we still obtain 0.45 points of absolute improvement when using n-Pars.
features extracted from automatically-aligned data, in comparison with the ones extracted from the hand-aligned data, have helped the MD model to correct many of the TL baseline model's false negatives.
contrasting
train_13522
In machine learning-based systems, adapting to a new domain has traditionally involved acquiring additional labeled data and learning a new model from scratch.
recent work has proposed more sophisticated approaches that learn a domain-independent base model, which can later be adapted to specific domains (Florian et al.; Blitzer et al., 2006; Jiang and Zhai, 2006; Arnold et al., 2008; Wu et al., 2009).
contrasting
train_13523
C_DD and C_DSD identify additional instances for an existing type and therefore mainly rely on FD and CD rules.
the customizations that modify existing instances (C_EB, C_ATA, C_G) require CR and CO rules.
contrasting
train_13524
We demonstrated that a complex NER annotator built using NERL can be effectively customized for different domains, achieving extraction quality superior to the state-of-the-art numbers.
our experience also indicates that the process of designing the rules themselves is still manual and time-consuming.
contrasting
train_13525
A simple way to address this is a pipeline: first predict entity types, and then condition on these when predicting relations.
this neglects the fact that relations could as well be used to help entity type prediction.
contrasting
train_13526
This allows them to work with exact optimization techniques such as (Integer) Linear Programs and still remain efficient.
when working on a sentence level they fail to exploit the redundancy present in a corpus.
contrasting
train_13527
When we compare to an isolated baseline that makes no use of entity types, our joint model improves average precision by 4%.
it does not outperform a pipelined system.
contrasting
train_13528
We also heuristically align our knowledge base to text by making the distant supervision assumption (Bunescu and Mooney, 2007;Mintz et al., 2009).
in contrast to these previous approaches, and other related distant supervision methods (Craven and Kumlien, 1999;Weld et al., 2009;Hoffmann et al., 2010), we perform relation extraction collectively with entity type prediction.
contrasting
train_13529
They also perform cross-document probabilistic inference based on Markov Networks.
they do not infer the types of entities and work in an open IE setting.
contrasting
train_13530
The joint model fails to correctly predict some entity types that the pipeline gets right, but these tend to appear in contexts where relation instances are easy to extract without considering entity types. Manually evaluated precision for New York Times data can be seen in Table 2.
to the Wiki setting, here modelling entity types and relations jointly makes a substantial difference.
contrasting
train_13531
(2007) use a context-sensitive kernel in conjunction with features they used in their earlier publication (GuoDong et al., 2005).
we take an approach similar to Nguyen et al.
contrasting
train_13532
Specifically, the "meeting" event, which is an ACE CONTACT event and an INR event according to our definition, is the major cause of overlap.
our type INR has a broader definition than ACE type CONTACT.
contrasting
train_13533
PT is a relaxed version of the SST; SST measures the similarity between two PSTs by counting all subtrees common to the two PSTs.
there is one constraint: all daughter nodes of a node must be included.
contrasting
train_13534
This regularity in the syntactic predicate-argument structure allows us to overcome lexical sparseness.
in order to exploit such regularities, we need to have access to a representation which makes the predicate-argument structure clear.
contrasting
train_13535
When using dependency structures, the SST kernel is far less appealing, since it forces us to always consider all daughter nodes of a node.
as we have seen, it is certain daughter nodes, such as the presence of a to PP and a about PP, which are important, while other daughters, such as temporal or locative adjuncts, should be disregarded.
contrasting
train_13536
First, we aim to show the importance of using data sampling when evaluating on F-measure; specifically, we expect under-sampling to outperform no sampling, over-sampling to outperform under-sampling, and over-sampling with transformations to outperform over-sampling without transformations.
the social event classification task does not suffer from data skewness because the INR and COG relations both occur almost the same number of times.
contrasting
train_13537
Hence the effects of training and testing a machine learning algorithm for sentiment analysis on data from different domains have been analyzed in previous research.
to the best of our knowledge, these effects have not been investigated regarding the extraction of opinion targets.
contrasting
train_13538
We investigated whether the CRF algorithm was overfitting to the training datasets by reducing their size to the size of the cameras dataset.
the reduction of the training data sizes never improved the extraction results regarding F-measure for the movies, web-services and cars datasets.
contrasting
train_13539
Most prior work on the U.S. Congressional Floor Debates dataset focused on using relationships between speakers such as agreement (Thomas et al., 2006;Bansal et al., 2008), and used a global mincut inference procedure.
they require all test instances to be known in advance (i.e., their formulations are transductive).
contrasting
train_13540
Our approach can, in principle, be applied to any classification task that is well modeled by jointly solving an extraction subtask.
as evidenced by our experiments, proper training does require a reasonable initial guess of the extracted explanations, as well as ways to mitigate the risk of the extraction subtask suppressing too much information (such as via feature smoothing).
contrasting
train_13541
Many statistical machine translation systems such as IBM models (Brown et al., 1993) learn word translation probabilities from millions of parallel sentences which are mutual translations.
large scale parallel corpora rarely exist for most language pairs.
contrasting
train_13542
In order to classify documents in the target language, a straightforward approach to transferring the classification model learned from the labeled source language training data is to translate each feature from the bag-of-words model according to the bilingual lexicon.
because of the translation ambiguity of each word, a model in the source language could be potentially translated into many different models in the target language.
contrasting
train_13543
Note that if the average number of translations for a word w is n and v is the number of words in the vocabulary, there are n^v possible models m'_t translated from m_s.
we can do the following mathematical transformation on the equation which leads to a polynomial time complexity algorithm.
contrasting
train_13544
"UNIGRAM" shows significant improvement over "EQUAL" as the occurrence count of the translation words in the target language can help disambiguate the translations.
occurrence count in a monolingual corpus may not always be the true translation probability.
contrasting
train_13545
For instance, the English word "work" can be translated into "工作 (labor)" and "工厂 (factory)" in Chinese.
in our Chinese monolingual news corpus, the count for "工厂 (factory)" is higher than that of "工作 (labor)", even though "工作 (labor)" should be a more likely translation for "work".
contrasting
train_13546
Both of the verbs can be followed by a clause.
the SRL system regards "is", the predicate of the clause, as the patient, resulting in features like "doubt_A1_is" and "suspect_A1_is", which capture nothing about verb usage context.
contrasting
train_13547
For example, (Adar et al., 2007) did a comprehensive correlation study among queries, blogs, news and TV results.
different from the content-free analysis above, our work compares the sources based on the content.
contrasting
train_13548
For these unfamiliar topics, users possibly search the web "after" they read the news articles and express their diverse opinions in the blog.
on topics like "insurance rate" or "consolidation loans," Blog is similar to the queries while News is not.
contrasting
train_13549
News or blog articles consist of completed sentences and paragraphs which would contain plenty of meaningful bigrams.
search queries consist of keywords -relatively discrete and regardless of order.
contrasting
train_13550
HOLMES's distinction is that it is domain independent and that its inference time is linear in the size of its input corpus, which enables it to scale to the Web.
HOLMES's Achilles heel is that it requires hand-coded, first-order Horn clauses as input.
contrasting
train_13551
Learning Horn clauses has been studied extensively in the Inductive Logic Programming (ILP) literature (Quinlan, 1990;Muggleton, 1995).
learning Horn clauses from opendomain theories is particularly challenging for several reasons.
contrasting
train_13552
Prior work in relation discovery (Shinyama and Sekine, 2006) has investigated the problem of finding relationships between different classes.
the goal of this work is to learn rules on top of the discovered typed relations.
contrasting
train_13553
For extraction terms with multiple senses (e.g., New York), we split their weight based on how frequently they appear with each class in the Hearst patterns.
many discovered relations are rare and meaningless, arising from either an extraction error or word-sense ambiguity.
contrasting
train_13554
On an arbitrary day, the probability of having a storm is fairly low (p(S) ≪ 1).
if we know that the atmospheric pressure on that day is low, this substantially increases the probability of having a storm (although that absolute probability may still be small).
contrasting
train_13555
The more frequently a fact is extracted from the Web, the more likely it is to be true.
facts in E should have a confidence bounded by a threshold p_max < 1.
contrasting
train_13556
Setting the weights involves counting the number of true groundings for each rule in the data (Richardson and Domingos, 2006).
the noisy nature of Web extractions will make this an overestimate.
contrasting
train_13557
weights, and performing the inference took 50 minutes on a 72-core cluster.
we note that for half of the relations SHERLOCK accepts no inference rules, and remind the reader that the performance on any particular relation may be substantially different, and depends on the facts observed in the corpus and on the rules learned.
contrasting
train_13558
TAREC overcomes these limitations by searching and selecting the top relevant articles in Wikipedia for each input term; taxonomic relations are then recognized based on the features extracted from these articles.
information extraction bootstrapping algorithms, such as (Pantel and Pennacchiotti, 2006;Kozareva et al., 2008), automatically harvest related terms on large corpora by starting with a few seeds of pre-specified relations (e.g.
contrasting
train_13559
A variety of NLP tasks, including inference, textual entailment (Glickman et al., 2005;Szpektor et al., 2008), and question answering (Moldovan et al., 1999), rely on semantic knowledge derived from term taxonomies and thesauri such as Word-Net.
the coverage of WordNet is still limited in many regions (even well-studied ones such as the concepts and instances below Animals and People), as noted by researchers such as (Hovy et al., 2009) who perform automated semantic class learning.
contrasting
train_13560
The most productive ones are: "X are Y that" and "X including Y".
the highest yield is obtained when we combine evidence from all patterns.
contrasting
train_13561
For a long period of time, research on question answering mainly focused on finding short and concise answers from plain text for factoid questions, driven by annual tracks such as CLEF, TREC and NTCIR.
people usually ask more complex questions in real world which cannot be handled by these QA systems tailored to factoid questions.
contrasting
train_13562
Since the code is not publicly available, we followed the same strategy as in the original paper and share 0.1K topics across ideologies and then divide the rest of the topics between ideologies.
unlike our model, there are no internal relationships between these two sets of topics.
contrasting
train_13563
Spoken dialect ID relies on speech recognition techniques which may not cope well with dialectal diversity.
the acoustic signal is also available as input.
contrasting
train_13564
Again, we compute the pointwise product of all word maps.
to 5.1, we performed some smoothing in order to prevent erroneous word derivations from completely zeroing out the probabilities.
contrasting
train_13565
This approach is fairly common for language ID and has also been successfully applied to dialect ID (Biadsy et al., 2009).
it requires a certain amount of training data that may not be available for specific dialects, and it is uncertain how it performs with very similar dialects.
contrasting
train_13566
The n-gram system presented above has no geographic knowledge whatsoever; it just consists of six distinct language models that could be located anywhere.
our model yields probability maps of German-speaking Switzerland.
contrasting
train_13567
Another reason could be that Wikipedia articles use a proportionally larger amount of proper nouns and low-frequency words which cannot be found in the lexicon and which therefore reduce the localization potential of a sentence.
one should note that the word-based dialect ID model is not limited to the six dialect regions used for evaluation here.
contrasting
train_13568
In the evaluation presented above, the task consisted of identifying the dialect of single sentences.
one often has access to longer text segments, which makes our evaluation setup harder than necessary.
contrasting
train_13569
Erk and Padó (2010) propose an exemplar-based model for capturing word meaning in context.
to the prototype-based approach, no clustering takes place; it is assumed that there are as many senses as there are instances.
contrasting
train_13570
One potential explanation for the superior performance of the tiered model vs. the DPMM multiprototype model is simply that it allocates more clusters to represent each word (Reisinger and Mooney, 2010).
we find that decreasing the hyperparameter β (decreasing vocabulary smoothing and hence increasing the effective number of clusters) beyond β = 0.1 actually hurts multiprototype performance.
contrasting
train_13571
Whether we synthesize an AN for generation or decoding purposes, we would want the synthetic AN to look as much as possible like a real AN in its natural usage contexts, and cooccurrence vectors of observed ANs are a summary of their usage in actual linguistic contexts.
it might be the case that the specific resources we used for our vector construction procedure are not appropriate, so that the specific observed AN vectors we extract are not reliable (e.g., they are so sparse in the original space as to be uninformative, or they are strictly tied to the domains of the input corpora).
contrasting
train_13572
One of the major transformations used in Linguistic Steganography is synonym substitution.
few existing studies have examined the practical application of this approach.
contrasting
train_13573
For example, the words in the WordNet synset {bridge, span} share the meaning of "a structure that allows people or vehicles to cross an obstacle such as a river or canal or railway etc.".
bridge and span cannot be substituted for each other in the sentence "suspension bridges are typically ranked by the length of their main span", and doing so would likely raise the suspicion of an observer due to the resulting anomaly in the text.
contrasting
train_13574
The disadvantage of Bolshakov's system is that all words in a synonym transitive closure chain need to be considered, which can lead to very large sets of synonyms, and many which are not synonymous with the original target word.
our proposed method operates on the original synonym sets without extending them unnecessarily.
contrasting
train_13575
In particular, the constituent labels are highly ambiguous: firstly, we don't know a priori how many there are, and secondly, labels that appear high in a tree (e.g., an S category for a clause) rely on the correct inference of all the latent labels below them.
recent work on the induction of dependency grammars has proved more fruitful (Klein and Manning, 2004).
contrasting
train_13576
The IBM models look for an equivalence relationship between lexical items in two languages, whereas DMV addresses functional relationships between two elements with distinct meanings.
both attempt to model a similar set of factors, which they posit will be important to their respective tasks.
contrasting
train_13577
On the latter dataset, even Model 1 outperforms the right-branching baseline.
the Danish dataset is unusual (see Buchholz and Marsi 2006) in that the alternate adjacency baseline of left-branching (also mentioned by Klein and Manning 2004) is extremely strong and achieves 48.8% directed accuracy.
contrasting
train_13578
These approaches used language-specific templates to propose new lexical items and also required as input a set of hand-engineered lexical entries to model phenomena such as quantification and determiners.
the use of higher-order unification allows UBL to achieve comparable performance while automatically inducing these types of entries.
contrasting
train_13579
As with their work, we also use nonparametric priors for category refinement and employ variational methods for inference.
our goal is to apply category refinement to dependency parsing, rather than to PCFGs, requiring a substantially different model formulation.
contrasting
train_13580
For instance, in the following discussion Alice's sentence expresses her opinion against something, yet no attitude toward the recipient of the sentence, Bob.
alice: "You know what, he turned out to be a great disappointment" Bob: "You are completely unqualified to judge this great person" Bob shows strong attitude toward alice.
contrasting
train_13581
The fragments we extracted earlier are more relevant to our task and are more suitable for further analysis.
these fragments are completely lexicalized and consequently the performance of any analysis based on them will be limited by data sparsity.
contrasting
train_13582
For example, in some applications we might be interested in very high precision even if we lose recall, while in other applications we might sacrifice precision in order to get high recall.
we notice that the baselines always have low precision regardless of recall.
contrasting
train_13583
Clearly, the source language (English in our experiments) benefits from being paired with any target language.
some languages seem to give substantially better results than others when used as the conjugate language.
contrasting
train_13584
Previous work on this problem has relied on either counting methods or lexico-syntactic patterns.
determining whether a relation is functional, by analyzing mentions of the relation in a corpus, is challenging due to ambiguity, synonymy, anaphora, and other linguistic phenomena.
contrasting
train_13585
Overall, CLEANLISTS finds the very high precision points, because of its use of clean data.
it is unable to make 23.1% of the predictions, primarily because the intersection between the corpus and Freebase entities results in very little data for those relations.
contrasting
train_13586
<Harry Potter 5, was published in, 2003>.
'was published in (Language)' is not functional, e.g.
contrasting
train_13587
We deterministically compute multinomial parameters β by exponentiating and normalizing: β_k = exp(η_k) / Σ_{k'} exp(η_{k'}). This normalization could introduce identifiability problems, as there are multiple settings for η that maximize P(w|η) (Blei and Lafferty, 2006a).
this difficulty is obviated by the priors: given µ and σ², there is only a single η that maximizes P(w|η)P(η|µ, σ²); similarly, only a single µ maximizes P(η|µ)P(µ|a, b²).
contrasting
train_13588
Within this paradigm, computational techniques are often applied to post hoc analysis: logistic regression (Sankoff et al., 2005) and mixed-effects models (Johnson, 2009) are used to measure the contribution of individual variables, while hierarchical clustering and multidimensional scaling enable aggregated inference across multiple variables (Nerbonne, 2009).
in all such work it is assumed that the relevant linguistic variables have already been identified, a time-consuming process involving considerable linguistic expertise.
contrasting
train_13589
In the general case, finding y* = argmax_{y ∈ Y} f(y) under this definition of f(y) is an NP-hard problem.
for certain definitions of f_i, it is possible to efficiently compute argmax_{y|i ∈ Z_i} f_i(y|i) for any value of i, typically using dynamic programming.
contrasting
train_13590
Joint models have been proposed to overcome this problem (Poon and Vanderwende, 2010).
besides not being as accurate as their pipelined competitors, mostly because they do not yet exploit the rich set of features used by Miwa et al.
contrasting
train_13591
An advantage of ILP inference is guaranteed certificates of optimality.
in practice we also gain certificates of optimality for a large fraction of the instances we process.
contrasting
train_13592
It can be verified that this derivation is a valid member of Y.
y(i) ≠ 1 for several values of i: for example, words 1 and 2 are translated 0 times, while word 3 is translated twice.
contrasting
train_13593
This means that a mixture of transition matrices will not necessarily yield a meaningful transition matrix.
for dependency grammar, there are certain universal dependencies which appear in all helper languages, and therefore, a mixture between multinomials for these dependencies still yields a useful multinomial.
contrasting
train_13594
The previous section focused on transferring an English parser to a new target language.
there are over 20 treebanks available for a variety of language groups including Indo-European, Altaic (including Japanese), Semitic, and Sino-Tibetan.
contrasting
train_13595
(2010) also used bilingual treebanks and made use of tree structures on the target side.
the bilingual treebanks are hard to obtain, partly because of the high cost of human translation.
contrasting
train_13596
The above case uses the human translation on the target side.
there are few human-annotated bilingual treebanks and the existing bilingual treebanks are usually small.
contrasting
train_13597
Our method performed better in terms of grammaticality, equally well in meaning preservation, and worse in diversity, but it could be tuned to obtain higher diversity at the cost of lower grammaticality, whereas it is unclear how the system we compare against could be tuned this way.
an advantage of the paraphraser we compared against is that it always produces paraphrases; by contrast, our system does not produce paraphrases when no paraphrasing rule applies to the source sentence.
contrasting
train_13598
Towards automatically deriving an aspect hierarchy from the reviews, we could refer to traditional hierarchy generation methods in ontology learning, which first identify concepts from the text, then determine the parent-child relations between these concepts using either pattern-based or clustering-based methods (Murthy et al., 2010).
pattern-based methods usually suffer from inconsistency of parent-child relationships among the concepts, while clustering-based methods often result in low accuracy.
contrasting
train_13599
Thus, by directly utilizing these methods to generate an aspect hierarchy from consumer reviews, the resulting hierarchy is usually inaccurate, leading to unsatisfactory review organization.
domain knowledge of products is now available on the Web.
contrasting