id: string (length 7–12)
sentence1: string (length 6–1.27k)
sentence2: string (length 6–926)
label: string (4 classes)
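Each record below pairs two sentences, typically consecutive in a source paper, with a discourse label; every row in this slice is labeled "contrasting", and sentence2 starts lowercase, apparently because a leading contrast connective was stripped. As a minimal sketch of how such records could be represented and tallied, the Python below uses an illustrative PairExample class and sample values; neither the class nor the loader-free setup is part of the dataset itself.

```python
from dataclasses import dataclass

# Illustrative container matching the schema above (the class name and
# sample values are assumptions, not an official loader for this dataset).
@dataclass
class PairExample:
    id: str         # e.g. "train_18100"; 7-12 chars per the schema
    sentence1: str  # 6 to 1.27k chars
    sentence2: str  # 6 to 926 chars
    label: str      # one of 4 classes; every row shown here is "contrasting"

example = PairExample(
    id="train_18100",
    sentence1="Decaying Recursive Similarity: We considered the neighborhood ...",
    sentence2="an appropriate weight which decays with distance is set ...",
    label="contrasting",
)

# Tally records by label to inspect the class balance of a slice.
records = [example]
by_label: dict[str, int] = {}
for r in records:
    by_label[r.label] = by_label.get(r.label, 0) + 1
print(by_label)  # {'contrasting': 1}
```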
train_18100
Decaying Recursive Similarity: We considered the neighborhood of an interpretation (hyperlinked entities, parent categories, subcategories, and grandparent pages) in the similarity measurement.
an appropriate weight which decays with distance is set to avoid the influence of farther neighborhood nodes.
contrasting
train_18101
Mentioned language is a common form of metalanguage, used to perform the full variety of language tasks discussed in the introduction.
other metalinguistic constructions draw attention to tokens outside of the referring sentence.
contrasting
train_18102
A lack of phrase-level annotations in their corpus, as well as substantial noise, made it suboptimal for the present effort.
it is possible (if not likely) that indicators of metalanguage differ between written and spoken English, lending importance to the Anderson corpus as a resource.
contrasting
train_18103
This work is applicable to languages with concatenative morphology where suffixes stack one after another.
problems arise when there are phonemic changes in the boundaries of the stems and suffixes (sandhi).
contrasting
train_18104
Dependency parsing based methods have achieved much success in SRL.
due to errors in dependency parsing, there remains a large performance gap between SRL based on oracle parses and SRL based on automatic parses in practice.
contrasting
train_18105
The bulk of previous work on automatic SRL has primarily focused on using full constituent parse of sentences to define argument boundaries and to extract relevant information for training classifiers.
there have been some attempts at relaxing the necessity of using syntactic information derived from full parse trees.
contrasting
train_18106
Table 4 shows the Chinese SRL results after adding the N-best dependency parsing related features.
it is not surprising that SRL can get better performance when N > 1, because the larger N is, the more accurate the dependency parsing results can be, though the improvement declines when N = 10.
contrasting
train_18107
This again works well for some entities, where Word-Net contains reasonably specific concepts (e.g., occupations and nationalities for people, industries for organizations) but not too well for specialized concepts in specific domains.
in our approach, WordNet is only used to provide the labels for very few relations (5, 000) that are used in training and (separately) in evaluation.
contrasting
train_18108
The DP is usually placed before the hour, such as ling-chen san-dian (3:00am), wu-ye shier-dian (0:00).
the boundaries of different phases are not clear, such as xia-wu/wan-shang liu-dian (6:00 in the afternoon/evening) .
contrasting
train_18109
Chinese lunar time system uses a similar way to denote time as the Gregorian system.
it refers to the movement of the moon to count months.
contrasting
train_18110
The time words are tagged as 'Nd'.
there is no entity information.
contrasting
train_18111
While this improvement is modest, it suggests that features aside from the mathematical term itself can be helpful.
the system works well even without this feature.
contrasting
train_18112
The data generated can then be used to train a classifier that allows automatic sense-tagging of mathematical expressions.
to natural language text, mathematical expressions require specific processing methods.
contrasting
train_18113
For training and development datasets, anaphoric annotations were provided by the organizers.
for the test set there was no annotation available.
contrasting
train_18114
(2009), Recasens and Hovy (2010), Chen and Ng (2012a)).
no attempts have been made to address these questions in the context of event coreference.
contrasting
train_18115
Keyphrases are useful for a variety of tasks such as summarization (Zha, 2002), information retrieval (Jones and Staveley, 1999) and document clustering (Han et al., 2007).
many documents do not come with manually assigned keyphrases.
contrasting
train_18116
(Litvak et al., 2011) is one of them, where degree centrality is used to select keyphrases.
they evaluate their method indirectly through a summarization task, and to our knowledge there are no published experiments using other centrality measures for keyphrase extraction.
contrasting
train_18117
(2011) in which TextRank and degree centrality are compared.
both works were evaluated against a summarization dataset by checking whether extracted keyphrases appear in reference summaries.
contrasting
train_18118
A prerequisite of the above methods is that the unknown words must have paraphrases (or full-forms).
many types of unknown words do not have paraphrases (full-forms) naturally.
contrasting
train_18119
(2011) developed a sublexical translation method that translates an unknown word by combining the translations of its sublexicals.
to deal with the reordering problem, the model combines the translations of sublexicals by considering both straight and inverse directions and uses a language model to select the better one.
contrasting
train_18120
In recent years, web services are generating more and more short texts including micro-blogs, customer reviews, chat messages and so on.
a user is often interested in only a very small part of these data.
contrasting
train_18121
In English it is easy to determine which words are within a specific window since each word split by spaces usually corresponds to one POS tag.
unlike English, Korean is an agglutinative language and most words consist of more than one morpheme, each with their own part of speech.
contrasting
train_18122
Assuming that tweets are retrieved by a query, such as "apple", the task is to classify whether each retrieved tweet is relevant to the target organization ("Apple Inc.") or not.
constructing such a classifier is a challenging task, as tweets are short and informal.
contrasting
train_18123
Kim and Hovy (2006) proposed a method to identify a reason for the evaluation in an opinion, such as "the service was terrible because the staff was rude" and "in a good location close to the station".
their purpose is to identify grounds that justify the evaluation, which are different from evaluative conditions.
contrasting
train_18124
The two words have different roots and are therefore genetically unrelated.
for language learners the similarity is more evident than for example the English-Italian genetic cognate father-padre.
contrasting
train_18125
For these reasons, both types need to be considered when constructing teaching materials.
existing lists of cognates are usually limited in size and only available for very few language pairs.
contrasting
train_18126
As we have seen in the experiments in Section 4.1, COP is able to learn a production rule from only a few training instances.
the test dataset contains a variety of cognates following many different production processes.
contrasting
train_18127
Thus, COP is able to learn stable patterns from relatively few training instances.
even a list of 1,000 cognates is a hard constraint for some language pairs.
contrasting
train_18128
It is clear that a different set of features could be used for learning the NB classifier at the first classification stage in our system.
as mentioned in section 3.4, it is sufficient to have a good NB classifier learned from a unique DiffPosNeg feature.
contrasting
train_18129
(2012) developed a universal Part-of-Speech (POS) tag-set for twenty-five different languages.
at the phrasal level, disagreements between the languages remain undefined.
contrasting
train_18130
Most keyword tools can easily identify keywords such as the old house (謝宅) and the historical city (台南).
article readers might also be interested in less frequent words like life style (生活) and traditional market (市場), and single-occurrence words like rental fees (費用), which are also mentioned in most reader feedback.
contrasting
train_18131
This problem can be traditionally formulated as a text classification task and solved by annotating the data and building a supervised learning system.
rare classes might render annotation even more difficult and expensive.
contrasting
train_18132
Most basic features consider that word pairs are much less likely to have a dependency relation when there is punctuation between them.
based on the fact that dependencies with longer distance always show worse parsing performance (McDonald and Nivre, 2007), distance is another important factor that reflects the difficulty of judging whether two words have a dependency relation.
contrasting
train_18133
This difficulty interferes with efficient language resource preparation and reduces domain portability.
the accuracy of P-A structure analysis increases in accordance with the data size.
contrasting
train_18134
As we can see from Table 3, we still obtain significantly inferior results compared to the original translation if we replace all the Google translations by the most similar examples, which is reflected by an absolute 8.55 point drop on the test set in BLEU score.
our repairing method, which can repair the original translation result automatically at the word level, leads to an increase of 0.64 absolute BLEU point on the test set.
contrasting
train_18135
The work of (Riesa and Marcu, 2012) detected parallel fragments using the hierarchical alignment model.
this approach obtains fragments from parallel sentence pairs, which limits its application in comparable corpora.
contrasting
train_18136
Parallel data in the real world is increasing continually.
we cannot always improve translation performance by simply enlarging our training data.
contrasting
train_18137
In this paper, we explore the possibility of using dependency structure for anaphora resolution in Hindi.
we do not intend either to propose dependency as an alternative to phrase structure or to compare the usability of the two frameworks.
contrasting
train_18138
For example, the future prefix (سـ, s, "I will") is transformed in TD to (باش, bA$, "I will").
the interrogation prefix "أ" is transformed to a suffix (شي, $y, "what").
contrasting
train_18139
Thus, on the P&N model, the average conditional entropy per feature given the class (how surprising the feature is when we know the answer) increases by 8.8% when the oracle is unavailable.
there is almost no difference between the conditional entropy of the POS model with oracle features and without, indicating that the errors made by the tagger are not confusing in the disambiguation task.
contrasting
train_18140
Predicted parse features also contribute to feature sparsity, because of the greater variability of automatic parses.
they are more expressive than part of speech, and in the example below, where only Lin correctly identifies 'and' as a discourse connective, part of speech simply does not contain enough information.
contrasting
train_18141
These can be better predicted if dependency relations are given as input.
the standard natural language analysis pipeline forbids using parse information during morphological analysis.
contrasting
train_18142
Given that fact, (Burchardt et al., 2009) pointed out that using shallow semantic representations based on predicate-argument structures and frame knowledge is an intuitive and straightforward approach to textual inference tasks.
to previous work which integrates predicate-argument structures as features in machine learning-based systems (Harabagiu et al., 2006; De Marneffe et al., 2008), this paper combines shallow semantic representations derived from semantic role labeling with binary relations extracted from sentences for the CD task.
contrasting
train_18143
The obtained results indicate that for semantic trees with at least 500 nodes, the performance of our method increases consistently.
the F-Score reaches its peak and becomes stable for semantic tree sizes between 2,000 and 3,000 nodes.
contrasting
train_18144
(2010) using a phrase-based approach.
incremental speech translation has been addressed in simultaneous translation of lectures and speeches (Hamon et al., 2009;Fügen et al., 2007).
contrasting
train_18145
Processing pipeline and service composition are two approaches for sharing and combining language resources.
each approach has its drawback.
contrasting
train_18146
(2012) proposed a statistical model that was capable of learning how to pre-order word sequences from human annotated or automatically generated alignment data.
this method has very high computational complexity when modeling long-distance reordering.
contrasting
train_18147
In the example, since there is no subject and object, the verbal head "示す show(s)" is moved to immediately before its second dependent.
v has been incorrectly placed in the rightmost position in Komachi et al.
contrasting
train_18148
There is still a minor verb agreement error, in which the verb "represent" is translated as "represents".
most of the errors are introduced by the parsing process.
contrasting
train_18149
Comparative experiments show that our approach could efficiently reduce feature dimensionality and enhance the final F1 value.
for automatic grammatical error correction, there is still a long way to go.
contrasting
train_18150
(2010), with per-outer-iteration example caching (LCLR); we use a PA large-margin classifier instead of an SVM.
we found that this algorithm severely overfits to our task.
contrasting
train_18151
In general, the larger the size of the monolingual corpus, the better and more detailed the context, or context vectors, we can extract for each relevant word (Curran and Moens, 2002).
in a specific domain, the given monolingual corpus might be limited to a small size which leads to sparse context vectors.
contrasting
train_18152
Given a text snippet in which the ambiguous word occurs, their methods select the appropriate sense by finding an appropriate translation.
our method does not use a text snippet to disambiguate the meaning of the query word.
contrasting
train_18153
We could now combine all context vectors additively, similar to monolingual disambiguation like in (Schütze, 1998).
this would ignore the fact that some dimensions are difficult to compare across the two languages.
contrasting
train_18154
A variety of methods have been proposed for attribute-value extraction from semistructured text with consistent templates (strict semi-text).
when the templates in semi-structured text are inconsistent (weak semi-text), these methods will work poorly.
contrasting
train_18155
Semistructured text (strict semi-text) often has distinctive HTML tags and consistent templates like HTML tables (eg: Wikipedia infoboxes).
a lot of user-generated semi-structured text with weak structures exists, where the templates generating records are inconsistent and the HTML tags in these templates are less distinctive.
contrasting
train_18156
Traditionally, researchers have tried to estimate the sentiment polarity from only the textual content of the review (Pang and Lee, 2004;.
since reviews are written by a user to express his/her emotion toward a particular product, taking the users and products into consideration would play an important role in solving this task.
contrasting
train_18157
They then classify a given review by referring to ratings given for the same product by other users who are similar to the user in question.
such user networks are not always available in the real world.
contrasting
train_18158
In the example, we want the annotations, (0, 8, Protein) from A and (0, 10, Signaling_molecule) from B, to be transferable to C, or to each other.
the variation of text poses a challenge: we need to compute the mapping between variations of text.
contrasting
train_18159
Hewavitharana and Vogel (2011) propose a method that calculates both the inside and outside probabilities for fragments in a comparable sentence pair, and show that the context of the sentence helps fragment extraction.
the proposed method can only be efficient in a controlled setting that assumes the source fragment is known and searches for the target fragment.
contrasting
train_18160
So far, the induced transduction grammars have only been used to derive Viterbi-style word alignments to feed into existing translation systems, and there has been no evaluation of the grammars actually learned.
we directly evaluate the grammars that we induce.
contrasting
train_18161
The extended set reaches the best accuracy score (76.8%) for the system trained on SYSTRAN translations, with an 8.8pt absolute improvement over the baseline set.
statistical significance testing shows that none of the improvements over the baseline are statistically significant.
contrasting
train_18162
Perhaps the baseline feature set is already diverse enough (surface, LM, word alignment, etc.).
an error analysis shows that including the UGC features does bring useful information, especially when the source segments contain URLs, as shown in Table 5.
contrasting
train_18163
Given a parallel corpus between the source and target language, combining a direct model based on this parallel corpus with a pivot model could lead to better coverage and overall translation quality.
the combination approach needs to be optimized in order to maximize the information gain.
contrasting
train_18164
That is why NE schemes do not show any improvement using static integration.
s_NN shows some inconsistent improvements depending on the data sparsity.
contrasting
train_18165
this at the cost of relatively low precision (around 70% even when using the information about the correct word sense).
we propose to use a closed word class -i.e.
contrasting
train_18166
Conditional Random Fields were found to provide the best overall results in cue detection.
the relative advantage of sequence taggers. In our classification setup, each token is a separate instance, described by a set of features collected from a window of size 2 around the token.
contrasting
train_18167
Chinese Input Method Engine (IME) plays an important role in Chinese language processing.
it has long lacked a proper evaluation metric.
contrasting
train_18168
The frequencies of the most recently mined entries can simply be accumulated onto the existing entries.
we allow users to upload their input logs to the cloud and execute the mining process to extract single-user-oriented personalized entries.
contrasting
train_18169
From the definition of coherent interaction type, one would expect a higher percentage of links than strong links.
we found almost equal percentage of strong links and links.
contrasting
train_18170
The higher the entropy, the more uncertain the oracle is about the prediction.
there is no readily available provision indicating the magnitude of confusion the oracle encounters during prediction.
contrasting
train_18171
(2012) built supervised POS taggers for 22 European languages using the TNT tagger (Brants, 2000), with an average accuracy of 95.2%.
creating annotated linguistic resources is expensive and time-consuming.
contrasting
train_18172
English is commonly used, because parallel data which has English on one side is often most readily available.
the appropriate source language might depend on the target language.
contrasting
train_18173
We have shown that our predictive model can select a source language -based on only monolingual features of the source and target languages -that improves tagger accuracy compared to choosing the single, best-overall source language.
if parallel data is available, our predictive model is able to leverage this to select a more appropriate source language and obtain further improvements in accuracy.
contrasting
train_18174
This makes it central to the semantics-syntax-intonation interface (Lambrecht, 1994;Hajičová et al., 1998;Steedman, 2000;Mel'čuk, 2001;Erteschik-Shir, 2007) and therefore also to NLP.
despite its prominence, IS has been largely ignored so far in the context of the reference treebanks for data-driven NLP: Penn Treebank (Marcus et al., 1993) and its semantic counterpart PropBank (Palmer et al., 2005) for English, Tiger (Thielen et al., 1999) for German, Ancora (Taulé et al., 2008) for Spanish, etc.
contrasting
train_18175
This is not to say that no proposals have been made for the annotation of IS in general; see, e.g., (Calhoun et al., 2005) for English, (Dipper et al., 2004) for German, (Paggio, 2006) for Danish, etc.
in the light of the above mentioned interface, it is crucial to have the same corpus annotated with semantic, syntactic and IS structures.
contrasting
train_18176
A direct comparison with other works on automatic annotation with IS, as e.g., (Postolache et al., 2005) with TFA, is not possible since the data sets and the annotation schemata are different; see, e.g., (Hajičová, 2007) for a precise outline of the criteria for the annotation of TFA in the Prague school and a juxtaposition of TFA and the CommStr.
it is instructive to observe that the AS we achieve with the transition parser is about the same as Postolache et al.
contrasting
train_18177
At the end of this procedure, a segmentation result that fully matches the demand of the user is returned.
in some complicated cases where segmentation ambiguities exist, the Kalman filter will not converge and keeps swapping between two or more states.
contrasting
train_18178
It matches the problem setting in this paper because it can train a new model by altering the original model to suit the additional data.
it usually loses information about old samples (in our case, the original data).
contrasting
train_18179
(Final update formulae are provided in (Wang et al., 2012)).
there are some problems in implementing Equation (3) directly.
contrasting
train_18180
Focusing on case (c), in which training uses both the original and the additional datasets, the advantages of cases (a) and (b) are secured.
although we applied domain adaptation, the improvements from cases (a) and (b) were small.
contrasting
train_18181
This result shows that model adaptation worked effectively.
focusing on the accuracies of Test set 1, Online learning exhibited a smaller degradation from the original model (a) than Transfer.
contrasting
train_18182
The maximum entropy method optimizes parameters based on the maximum a posteriori (MAP), and it is sensitive to probability distribution.
SCW-I used in Online is based on margin criteria, and ignores data outside the margin.
contrasting
train_18183
In the normal case (e), the accuracies of Test set 2 improved with both the Transfer and the Online cases along with dataset size.
the accuracy of Test set 1 with Transfer degraded faster than Online, as described in Section 3.2.
contrasting
train_18184
Here, hyperparameter C_OR was set when the original model was trained. In the normal case, while the changes to the existing parameters were suppressed (small C_AD), the accuracy of Test set 2 decreased.
it was higher than that of the original model (61.49% → 67.26%), and the accuracy on Test set 1 was almost constant (70.90% → 70.33%).
contrasting
train_18185
For instance, French lacks a case system (Dryer and Haspelmath, 2011), and makes instead use of prepositions.
polish and Czech most extensively use (inflectional) affixes (Kulikov et al., 2006).
contrasting
train_18186
These methods will not detect similarity of sentences that use different words to convey the same meaning.
they achieve improved results by examining word pairs instead of single words (Okazaki et al., 2003).
contrasting
train_18187
They found that the use of deep syntactic features reduces pattern ambiguity and dramatically increases overall relation extraction f-measure by 65%.
they do not model selectional restrictions in their pattern generation step.
contrasting
train_18188
The tagging performance seems low for practical purposes.
this reflects the lexicon quality (i.e., the bootstrapping performance) only partially.
contrasting
train_18189
We already explained that running original Basilisk on the whole EPO corpus is not feasible due to its size and the length of the sentences.
for a comparison of Basilisk-G and Basilisk-C with the original Basilisk, we parsed sentences up to a size of 100 tokens from 25,000 sample patents.
contrasting
train_18190
In predominant classes such as SUBSTANCE, shorter coordinations do not harm precision.
for classes like DISEASE, precision decreases when shorter coordinations are used, as illustrated in Table 3.
contrasting
train_18191
Previous work in TempEval-2 (Verhagen et al., 2010) and our work (Hovy et al., 2012) have shown that accurate relation classifiers can be modeled with supervised approaches, provided that the expressions are limited to be in the same sentence.
there is almost no previous work on inter-sentence TERE (ISTERE), for three main reasons: across a document, the number of time-event pairs to consider is large, as it is quadratic in the number of time and event expressions.
contrasting
train_18192
In particular, product attributes and their values (PAVs) are crucial for many applications such as faceted navigation and recommendation.
since structured information is not always provided by the merchants, it is important to build technologies to create this structured information (such as PAVs) from unstructured data (such as a product description).
contrasting
train_18193
Instead, the grape variety, production area, and vintage of the wine would be of greater interest.
(2) Freebase contains PAVs for limited types of products such as digital cameras.
contrasting
train_18194
(2) On the other hand, Freebase contains PAVs for limited types of products such as digital cameras.
since Freebase is currently only available in English, we cannot use Freebase in a distant supervision method for other languages.
contrasting
train_18195
The goal of these approaches is to extract information from documents semi-structured by a markup language such as HTML.
our method aims at extracting (product) information from full texts although the method leverages semi-structured documents to induce KBs.
contrasting
train_18196
For instance, for the Abortion domain, the phrase I support abortion indicates the author's support for abortion.
i think abortion should be banned is indicative of the author's stance against abortion.
contrasting
train_18197
For ABO and GAY, the improvement that we obtain out of the noisy data decreases as we increase the number of (cleanly labeled) debate posts.
for OBA and MAR, we do not see such diminishing returns.
contrasting
train_18198
For example, the Halo project (Angele et al., 2003) targeted Chemical tests, while IBM's Deep QA (Ferrucci, 2012) employed factoid-style quizzes.
their benchmark data sets are not open, and therefore collaborative research based on shared standard data cannot be pursued.
contrasting
train_18199
This makes sense, because in a linguistically agnostic situation, all the links have the same weight, and hence the weight assigned to C_ij will be the same regardless of which |C_ij| − 1 links in C_ij are chosen.
the same is no longer true in a linguistically aware setting: since the links may not necessarily have the same weight, the weight assigned to C_ij depends on which |C_ij| − 1 links are chosen.
contrasting