id: string, length 7 to 12
sentence1: string, length 6 to 1.27k
sentence2: string, length 6 to 926
label: string, 4 classes
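To make the schema concrete, here is a minimal Python sketch for loading and inspecting rows of this shape; the file name contrast_train.jsonl is a placeholder, and only the four fields listed above are assumed.

import json
from collections import Counter

def load_rows(path="contrast_train.jsonl"):
    # Read one JSON object per line; each row is expected to carry the
    # fields id, sentence1, sentence2, and label described above.
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            rows.append(json.loads(line))
    return rows

if __name__ == "__main__":
    rows = load_rows()
    print(len(rows), "rows loaded")
    print(Counter(row["label"] for row in rows))  # distribution over the 4 label classes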
train_8100
Although the acquisition of bilingual data also targets news text, the noisy mined corpus can never compete with the well prepared B1 dataset.
the NIST 2008 test set contains a large portion of out-of-domain text, and so the B1 set does not gain any advantage over Web mined corpora.
contrasting
train_8101
In this case, the CL-HYB approach simply chose to group together entities having clicks to 'aaa.com' and appearing in contexts as 'auto club'.
CL-Web grouped according to contexts such as 'selling' and 'company'.
contrasting
train_8102
All our systems return a high number of nonobvious suggestions (all above 50%).
GOO and YAH show low performance, as both systems are heavily based on the substring matching technique.
contrasting
train_8103
Examples (1) and (2) show two sentences from the MPQA corpus where DSEs are marked. The task of marking up these expressions has usually been approached with straightforward sequence labeling techniques using simple features in a small contextual window (Choi et al., 2006; Breck et al., 2007).
due to the simplicity of the feature sets, this approach fails to take into account the fact that the semantic and pragmatic interpretation of sentences is not only determined by words but also by syntactic and shallow-semantic relations.
contrasting
train_8104
CONTEXT WORDS AND POS FOR HOLDER.
there are also differences compared to typical argument extraction in SRL.
contrasting
train_8105
The most visible effect of the reranker is that the recall is greatly improved.
this does not seem to have an adverse effect on the precision until the candidate set size goes above 16 -in fact, the precision actually improves over the baseline for small candidate set sizes.
contrasting
train_8106
We can see that the estimates lead to a slightly lower F-score (Figure 13: Items produced by the parser).
while the losses in terms of F 1 are small, the gains in parsing time are substantial, as Fig.
contrasting
train_8107
Secondly, we note that all of the above approaches that use language models train a language model for each difficulty level using the training data for that level.
since the amount of training data annotated with levels is limited, they can not train higher-order language models, and most just use unigram models.
contrasting
train_8108
(2009) has shown that it is possible to project the parameters learnt from the annotation work of one language to another language provided aligned Wordnets for two languages are available.
their work does not address the question of further improving the accuracy of WSD by using a small amount of training data from the target language.
contrasting
train_8109
The simplest strategy is to randomly annotate text from the target language and use it as training data.
this strategy of random sampling may not be optimal in terms of cost.
contrasting
train_8110
As per their model the best F-score achieved using manual cross-linking for ALL words was 73.34% for both Tourism and Health domain at a cost of 36K and 18K respectively.
using our model we obtain higher accuracies of 76.96% in the Tourism domain (using 1/3rd manual cross-links and 2K injection) at a lower total cost (32K rupees) and 75.57% in the Health domain (using only 1/3rd cross-linking and 1K injection) at a lower cost (16K rupees).
contrasting
train_8111
Many projected instances were filtered out by heuristics, and only 32.6% of the instances were left.
several instances were rescued by dictionary-based alignment correction and the number of projected instances increased from 31,652 to 39,891.
contrasting
train_8112
• The heuristic-based alignment filtering helps to improve the performance.
it is much worse than the baseline performance because of a falling-off in recall.
contrasting
train_8113
88% of the edges contain the correct semantic relations among the alternatives.
the baseline has pruned away 24% of the correct types and 26% of the correct semantic relations.
contrasting
train_8114
The previous section provided evidence that the document-to-document linking algorithm is capable of achieving high performance when parameters α, β are well selected.
section 5 indicated that it is more difficult to discover links across long document pairs.
contrasting
train_8115
Experiments on the ACE 2003 corpus showed that their method improved the overall performance by 2.8, 2.2 and 4.5 to 54.5, 64.0 and 60.8 in F1-measure on the NWIRE, NPAPER and BNEWS domains, respectively.
he did not look into the contribution of anaphoricity determination on coreference resolution of different NP types.
contrasting
train_8116
2003) and improved the performance by 2.9 and 1.6 to 67.3 and 67.2 in F1-measure on the MUC-6 and MUC-7 corpora, respectively.
their experiments show that eliminating non-anaphors using an anaphoricity determination module in advance harms the performance.
contrasting
train_8117
Experiments on the ACE 2003 corpus showed that this joint anaphoricity-coreference ILP formulation improved the F1-measure by 3.7-5.3 on various domains.
their experiments assume true ACE mentions (i.e.
contrasting
train_8118
The phrase clustering algorithm in this paper outputs groups of source-language and targetlanguage phrases with similar meanings: paraphrases.
previous work on paraphrases for SMT has aimed at finding translations for source-language phrases in the system's input that weren't seen during system training.
contrasting
train_8119
Count-based metrics can deduce from the similar translations of two phrases that they have similar meanings, despite dissimilarity between the two word sequences; e.g., they can deduce that "red" and "burgundy" belong in the same cluster.
these metrics are unreliable when total counts are low, since phrase co-occurrences are determined by a noisy alignment process.
contrasting
train_8120
Edit-based metrics are independent of how often phrases were observed.
sometimes they can be fooled by phrases that have similar word sequences but different meanings (e.g., "the dog bit the man" and "the man bit the dog", or "walk on the beach" and "don't walk on the beach").
contrasting
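As a small illustration of this failure mode (not taken from the paper), a character-level similarity ratio from Python's standard difflib rates the two example phrase pairs as highly similar even though their meanings differ:

from difflib import SequenceMatcher

# Surface similarity is high for phrases with near-identical word sequences
# but different (here: reversed or negated) meanings.
pairs = [
    ("the dog bit the man", "the man bit the dog"),
    ("walk on the beach", "don't walk on the beach"),
]
for a, b in pairs:
    print(a, "|", b, "->", round(SequenceMatcher(None, a, b).ratio(), 2))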
train_8121
4In our initial experiments, APL worked better than PL.
APL had a strange side-effect.
contrasting
train_8122
Only 2-4% of the total phrases in each language end up in a cluster (that's 6.5-9% of eligible phrases, i.e., of phrases that aren't "count 1").
about 20-25% of translation probabilities are smoothed for both language pairs.
contrasting
train_8123
There are several possibilities for future work based on new applications for phrase clusters: • In the experiments above, we used phrase clusters to smooth P(t|s) and P(s|t) when the pair (s,t) was observed in training data.
the phrase clusters often give non-zero probabilities for P(t|s) and P(s|t) when s and t were both in the training data, but didn't co-occur.
contrasting
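A rough sketch of the kind of cluster-based smoothing this suggests, assuming a hypothetical phrase_table of directly estimated probabilities and cluster_of / cluster_table lookups derived from the phrase clusters; the interpolation weight is illustrative, not the paper's.

def smoothed_p(s, t, phrase_table, cluster_of, cluster_table, lam=0.8):
    # Back off to a cluster-level probability when the phrase pair (s, t)
    # was never observed together in the training data.
    direct = phrase_table.get((s, t))
    cluster = cluster_table.get((cluster_of.get(s), cluster_of.get(t)), 0.0)
    if direct is None:
        return cluster                           # unseen pair: rely on the clusters
    return lam * direct + (1.0 - lam) * cluster  # seen pair: interpolate

# Toy usage with invented numbers: "red" and "burgundy" share a cluster.
print(smoothed_p("red", "burgundy",
                 phrase_table={},
                 cluster_of={"red": "COLOR", "burgundy": "COLOR"},
                 cluster_table={("COLOR", "COLOR"): 0.05}))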
train_8124
As Figure 5 shows for the variant LO cosine sentence, terms that are more frequent have a greater chance of being correctly translated at better ranks.
the relative performance of the different parametric configurations still holds (Figure 5: Average rank of correct translation according to average source term frequency).
contrasting
train_8125
The step for choosing similar document pairs in this work resembles some of our steps.
their work focuses on high quality and specific documents pairs, as opposed to the entire corpus of guaranteed quality we want to build.
contrasting
train_8126
We also notice that there is a relationship "conj" in the syntactic dependency tree.
we find that it only connects two head words for a few coordinating conjunctions, such as "and", "or", "but".
contrasting
train_8127
For the rule-based method, the seeds are selected in the review domain, which is more suitable for a domain-specific task.
both methods achieve low performance.
contrasting
train_8128
The MaxEnt classifier is a discriminative model, which can incorporate various features.
it independently classifies each word, and ignores the dependency among successive words.
contrasting
train_8129
Table 3: Performance improvement (%) of including the additional features in an incremental way on the development data (of the abstracts subcorpus).
table 3 shows that the additional features behave quite differently in terms of PCLB and PCRB measures.
contrasting
train_8130
2) autoparse(test) consistently outperforms autoparse(t&t) on both the abstracts and the full papers subcorpora.
it is surprising to find that autoparse(t&t) achieves better performance on the clinical reports subcorpus than autoparse(test).
contrasting
train_8131
An effective semi-supervised extractor will have good performance over a range of extraction tasks and corpora.
many of the learning procedures just cited have been tested on only one or two extraction tasks, so their generality is uncertain.
contrasting
train_8132
S&G used WordNet to provide word similarity information.
in the similarity-centric approach, lexical polysemy can lead the bootstrapping down false paths.
contrasting
train_8133
The document relevance score is first applied to rank the patterns in relevant documents, then the patterns with lexical similarity scores below a similarity threshold will be removed from the ranking; only patterns above threshold will be added to the seeds.
if in the current iteration, no pattern meets the threshold, the threshold will be lowered until new patterns can be found.
contrasting
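A schematic sketch of the selection loop just described, assuming relevance and similarity scores are available as plain dictionaries; the decay step and the floor are illustrative choices, not values from the paper.

def select_patterns(candidates, relevance, similarity, threshold=0.8, decay=0.05, floor=0.3):
    # Rank candidate patterns by document relevance, keep those whose lexical
    # similarity clears the threshold, and lower the threshold whenever no
    # pattern qualifies in the current iteration.
    ranked = sorted(candidates, key=lambda p: relevance[p], reverse=True)
    while threshold >= floor:
        accepted = [p for p in ranked if similarity[p] >= threshold]
        if accepted:
            return accepted, threshold
        threshold -= decay
    return [], threshold

candidates = ["X attacked Y", "X visited Y", "X met Y"]
relevance = {"X attacked Y": 0.9, "X visited Y": 0.6, "X met Y": 0.4}
similarity = {"X attacked Y": 0.55, "X visited Y": 0.35, "X met Y": 0.20}
print(select_patterns(candidates, relevance, similarity))  # threshold relaxes until a pattern passes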
train_8134
By then, "Earthquake" was trending on Twitter Search with thousands of updates 2 .
it is a daunting task for people to find out information they are interested in from such a huge number of news tweets, thus motivating us to conduct some kind of information extraction such as event mining, where SRL plays a crucial role (Surdeanu et al., 2003).
contrasting
train_8135
Given the simplicity of the baseline, the results obtained are quite high.
our approach significantly improves the baseline F_β=1 by 19% for recognition and 30% for classification.
contrasting
train_8136
On the one hand, it consists of a set of manually encoded rules based on morphosyntactic information.
it includes a Bayesian learned disambiguation module to identify nominal events.
contrasting
train_8137
It is especially interesting to integrate and summarize scattered opinions in blog articles and forums as they tend to represent the general opinions of a large number of people and get refreshed quickly as people dynamically generate new content, making them valuable for understanding the current views of a topic.
opinions in blogs and forums are usually fragmental, scattered around, and buried among other off-topic content, so it is quite challenging to organize them in a meaningful way.
contrasting
train_8138
Intuitively, the aspects should be concise phrases that can both be easily interpreted in the context of the topic under consideration and capture the major opinions.
where can we find such phrases and which phrases should we select as aspects?
contrasting
train_8139
(Zhao and He, 2006;Mei et al., 2007).
these methods suffer from the problem of producing trivial aspects.
contrasting
train_8140
The closest work to ours are (Lu and Zhai, 2008;Sauper and Barzilay, 2009); both try to use well-written articles for summarization.
(Lu and Zhai, 2008) assumes the well-written article is structured with explicit or implicit aspect information, which does not always hold in practice, while (Sauper and Barzilay, 2009) needs a relatively large amount of training data in the given domain.
contrasting
train_8141
This indicates that people vary a lot in their preferences as to which aspects should be presented first.
in cases when the random baseline outperforms others the margin is fairly small, while Freebase order and coherence-based order have a much larger margin of improvement when showing superior performance.
contrasting
train_8142
In our example, the morpheme-level intersection alignment is better as it has no misalignments and adds new alignments.
it misses some key links.
contrasting
train_8143
POMDPs, which make it possible to learn a policy that can maximize the averaged reward in partially observable environments (Pineau et al., 2003), have been successfully adopted in task-oriented dialogue systems for learning a dialogue control module from data (Williams and Young, 2007).
no work has attempted to use POMDPs for less (or non-) task-oriented dialogue systems, such as listening agents, because user goals are not as well-defined as task-oriented ones, complicating the finding of a reasonable reward function.
contrasting
train_8144
The perplexity of the human dialogues is less than that of the random system, but humans also exhibit a certain degree of freedom.
POMDP's perplexity is less than that of the human dialogues; they still have some freedom, which probably led to their reasonable evaluation scores.
contrasting
train_8145
The parser outputs in SD and CoNLL can be assumed to be trees, so each node in the tree has only one parent node.
in the converted tree, nodes can have more than one parent.
contrasting
train_8146
These results serve as a reference point for extrinsic evaluation results.
it should be … (Table 2: Comparison of the F-score results with different SD variants on the development data set with the MC parser).
contrasting
train_8147
The lexicons that we use to perform lookups are collected by mining Wikipedia and other online resources (Mukund et al., 2010).
lexicon lookups will fail for Out-Of-Vocabulary words.
contrasting
train_8148
Light verbs are those which contribute to the tense and agreement of the verb (Butt and Geuder, 2001).
despite the existence of a light verb tag, it is noticed that in several sentences, verbs followed by auxiliary verbs need to be grouped as a single predicate.
contrasting
train_8149
In the case of judgment and appreciation, the use of the polarity reversal rule is straightforward ('POS jud' <=> 'NEG jud', 'POS app' <=> 'NEG app').
it is not trivial to find pairs of opposite emotions in the case of a fine-grained classification, except for 'joy' and 'sadness'.
contrasting
train_8150
Since long dependencies and those near to the root are typically the last constructed in transition-based parsing systems, it was concluded that MaltParser does suffer from some form of error propagation.
the richer feature representations of MaltParser led to improved performance in cases where error propagation has not occurred.
contrasting
train_8151
Since MSTParser and Malt-Parser produced Stanford dependencies for this experiment, evaluation required less manual examination than for some of the other parsers, as was also the case for the output of the Stanford parser in the original evaluation.
a manual evaluation was still performed in order to resolve questionable cases.
contrasting
train_8152
Genuine common instances are hyponymy relation candidates found in both S and U (G = X_S ∩ X_U).
term pairs are obtained as virtual common instances when (1) they are extracted as hyponymy relation candidates in either S or U, and (2) they do not seem to be a hyponymy relation in the other text. The first condition corresponds to X_S ⊕ X_U.
contrasting
train_8153
Thus many virtual common instances would be a negative example for hyponymy relation acquisition.
genuine common instances (hyponymy relation candidates found in both S and U ) are more likely to hold a hyponymy relation than virtual common instances.
contrasting
train_8154
• B2 is the same as B1, except that both classifiers are trained with all available training data -WikiSet and WebSet are combined (27,500 training instances in total).
each classifier only uses its own feature set (WikiFeature or WebFeature).
contrasting
train_8155
Computing T (s) as an average of belief in its claims overestimates the trustworthiness of a source with relatively few claims; certainly a source with 90% accuracy over a hundred examples is more trustworthy than a source with 90% accuracy over ten.
summing the belief in claims allows a source with 10% accuracy to obtain a high trustworthiness score by simply making many claims.
contrasting
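The two failure modes can be made concrete with a toy computation (all numbers invented): averaging ignores how many claims support the estimate, while summing rewards sheer volume.

def average_trust(beliefs):
    return sum(beliefs) / len(beliefs)

def summed_trust(beliefs):
    return sum(beliefs)

small_accurate = [0.9] * 10     # 90% belief over ten claims
large_accurate = [0.9] * 100    # 90% belief over a hundred claims
prolific_sloppy = [0.1] * 1000  # 10% belief, but a thousand claims

# Averaging: both accurate sources look identical, ignoring sample size.
print(average_trust(small_accurate), average_trust(large_accurate))
# Summing: the sloppy but prolific source outranks the accurate one.
print(summed_trust(prolific_sloppy), summed_trust(large_accurate))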
train_8156
Then, for each disjunctive clause consisting of a set P of positive literals (claims) and a set N of negations of literals, we add the constraint Σ_{c∈P} B(c) + Σ_{c∈N} (1 − B(c)) ≥ 1, where B(c) denotes the belief in claim c. The left-hand side is the union bound of at least one of the claims being true (or false, in the case of negated literals); if this bound is at least 1, the constraint is satisfied.
this optimism can dilute the strength of our constraints by ignoring potential dependence among claims: x ⇒ y together with x ∨ y implies y is true, but we demand only that each union bound be at least 1; when the claims are mutually exclusive, the union bound is exact; a common constraint is of the form q ⇒ r_1 ∨ r_2 ∨ …
contrasting
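A minimal sketch of how such a union-bound constraint could be assembled, assuming the belief in each claim c is a variable B(c) in [0, 1]; the (coefficients, constant) representation is purely illustrative and not the paper's ILP encoding.

def union_bound_constraint(positives, negatives):
    # Encode  sum_{c in P} B(c) + sum_{c in N} (1 - B(c)) >= 1  as a
    # coefficient map plus a constant, i.e.  sum(coeffs[c] * B(c)) + constant >= 1.
    coeffs, constant = {}, 0.0
    for c in positives:
        coeffs[c] = coeffs.get(c, 0.0) + 1.0
    for c in negatives:
        coeffs[c] = coeffs.get(c, 0.0) - 1.0  # (1 - B(c)) contributes -B(c) ...
        constant += 1.0                       # ... and a constant 1
    return coeffs, constant

# x => y is the clause (not x) or y, i.e. P = {y}, N = {x}:
print(union_bound_constraint(["y"], ["x"]))  # ({'y': 1.0, 'x': -1.0}, 1.0), i.e. B(y) >= B(x)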
train_8157
Wikipedia Infoboxes (Wu and Weld, 2007) are a semi-structured source covering many domains with readily available authorship, and we produced our city population and basic biographic datasets from the most recent full-history dump of the English Wikipedia (taken January 2008).
attribution is difficult: if an author edits the page but not the claim within the infobox, is the author implicitly agreeing with (and asserting) the claim?
contrasting
train_8158
More importantly, Figure 1 also indicates that this method shows more stable results and low variation in summary quality when keyphrases of size 3 or smaller are employed.
MMR shows high variation in summary quality, producing summaries with pyramid scores as low as 0.15.
contrasting
train_8159
A promising way of overcoming this weakness is to include n-grams, generalizing the bag-ofwords model into a bag-of-phrases model (Baccianella et al., 2009;Pang and Lee, 2008).
regression models over the feature space of all n-grams (for either fixed maximal n or variable-length phrases) are computationally expensive in their training phase.
contrasting
train_8160
As shown in Table 2, when we only use the Word Feature (WF), the F-value of task (a) achieved a high value (96.3).
the F-values of tasks (b) and (c) are relatively low, which means that the problem of recognizing the eight basic emotions for emotion words is a lot more difficult than the problem of recognizing emotion and unemotion words, so we focus on tasks (b) and (c).
contrasting
train_8161
", it would be helpful for a QA system to know which NEs are cities.
virtually all of the existing NE recognizers and mention detectors can only determine whether an NE is a location or not.
contrasting
train_8162
To do so, we create a pairwise factor node that connects two variable nodes if the aforementioned relation between the underlying NPs is satisfied.
to implement this idea, we need to address two questions.
contrasting
train_8163
Ravi and Knight (2009) mention that it is possible to interrupt the IP solver and obtain a suboptimal solution faster.
the IP solver did not return any solution when provided the same amount of time as taken by MIN-GREEDY for any of the data settings.
contrasting
train_8164
The figure shows that the greedy approach can scale comfortably to large data sizes, and a complete run on the entire Penn Treebank data finishes in just 1485 seconds.
the IP method does not scale well: on average, it takes 93 seconds to finish on the 24k test (versus 34 seconds for MIN-GREEDY), and on the larger PTB test data the IP solver runs for much longer. It is interesting to see that for the 24k dataset, the greedy strategy finds a grammar set (containing only 478 tag bigrams).
contrasting
train_8165
The intuition behind "more informative" is that these instances support the learning process, so we might need fewer annotated instances to achieve a comparable classifier performance, which could decrease the cost of annotation.
"more informative" also means that these instances might be more difficult to annotate, so it is only fair to assume that they might need more time for annotation, which increases annotation cost.
contrasting
train_8166
Overall, the sentences selected by the classifier during AL are longer (26.2 vs. 28.1 tokens per sentence), and thus may take the annotators more time to read.
we could not find a significant correlation (Spearman rank correlation test) between sentence length and annotation time, nor between sentence length and classifier confidence.
contrasting
train_8167
Various aspects of route directions have been subject of research in computational linguistics, ranging from instructional dialogues in MapTask (Anderson et al., 1991) to recent work on learning to follow route directions (Vogel and Jurafsky, 2010).
little work has been done on generating NL directions based on data from Geographical Information Systems (Dale et al., 2005;Roth and Frank, 2009).
contrasting
train_8168
In addition, (semi-)supervised models could be used to assess the gain we may achieve in comparison to the minimally supervised setting.
we still see potential for improving our current models by integrating refinements based on the observations outlined above: Missing alignment targets on the linguistic side -especially due to anaphora, elliptical or aggregating constructions -constitute the main error source.
contrasting
train_8169
That was because the number of resulting clusters should be known as a parameter in the latter.
the number of corpus domains might be unknown in our case.
contrasting
train_8170
In comparison to Turkish, isiZulu is a tonal language.
to East Asian languages, in isiZulu there are three steps for tone assignment: lexical, morphemic and terraced.
contrasting
train_8171
Similar to Harris (Harris, 1955), the algorithm is based on letter frequencies.
while Harris uses successor and predecessor frequencies, they use position-independent n-gram statistics to merge single letters into morphemes until a stopping criterion is fulfilled.
contrasting
train_8172
Therefore the summarization problem can be formulated as the minimum dominating set problem.
usually there is a length restriction for generating the summary.
contrasting
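A rough greedy sketch of this budgeted dominating-set view (not the paper's exact algorithm): repeatedly pick, within the remaining length budget, the sentence that dominates the most uncovered nodes of a sentence graph.

def greedy_dominating_summary(graph, lengths, budget):
    # graph maps each sentence index to its neighbours; a chosen sentence
    # dominates itself and its neighbours. Stop when everything is covered
    # or no affordable sentence adds coverage.
    uncovered = set(graph)
    summary, used = [], 0
    while uncovered:
        best, best_gain = None, 0
        for v in graph:
            if v in summary or used + lengths[v] > budget:
                continue
            gain = len(({v} | set(graph[v])) & uncovered)
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            break
        summary.append(best)
        used += lengths[best]
        uncovered -= {best} | set(graph[best])
    return summary

graph = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2], 4: []}
lengths = {0: 12, 1: 8, 2: 10, 3: 9, 4: 7}
print(greedy_dominating_summary(graph, lengths, budget=20))  # [0, 4] for this toy input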
train_8173
Progress has been made and promising results have been reported in the past years for both DS and PB approaches.
most previous research work (some exceptions are discussed in related work) involves solely one category of approach.
contrasting
train_8174
The consensus is that the lexical items exposing similar behavior in a large body of text most likely have the same meaning.
the concepts of marriage and political regime, which are also observed in similar lexico-syntactic environments albeit having quite distinct meanings, are likewise assigned by such methods to the same cluster.
contrasting
train_8175
Since there is no large metaphor-annotated corpus available, it was impossible for us to reliably evaluate the recall of the system.
the system identified a total number of 4456 metaphorical expressions in the BNC starting with a seed set of only 62, which is a promising result.
contrasting
train_8176
Thanks to its simplicity, services with social tagging features have attracted a lot of users and have accumulated a huge amount of annotations.
compared to taxonomies, social tagging has an inherent shortcoming: there are no explicit hierarchical relations between tags.
contrasting
train_8177
One of TAG-TAG's benefits is that it does not rely on the content of the annotated document, thus it can be applied to tags for non-text objects, such as images and music.
when it comes to text documents, this benefit is also a shortcoming: TAG-TAG makes no use of the content when it is available.
contrasting
train_8178
A person searching for forms containing the potential aspect would have to search for 'nga<asp> + ng<asp>'.
there should be no ambiguity, as the orthographic form would eliminate this.
contrasting
train_8179
On the one hand, the restrictions thus imposed by bipartite matching penalize sets of proposed analyses that do not differentiate between surface-identical syncretic morphemes.
the same one-to-one matching restrictions penalize proposed analyses that do not conflate allomorphs of the same underlying morpheme, whether those allomorphs are phonologi-cally induced or not.
contrasting
train_8180
For example, the LTI index, computed over automatically extracted local topics, produces Topic Control assignments with the average precision of 80% when compared to assignments derived from human-annotated data using the strict accuracy metric.
automated prediction of Involvement based on NPI index is far less reliable, although we can still pick the most involved speaker with 67% accuracy.
contrasting
train_8181
Recent research tries to automatically align the bilingual syntactic sub-trees.
most of these works suffer from the following problems.
contrasting
train_8182
There are cases that the correctly-aligned tree pairs have very few links, while we have a bunch of candidates with lower alignment probabilities.
the sum of the lower probabilities is larger than that of the correct links', since the number of correct links is much smaller.
contrasting
train_8183
On one hand, GIZA++ is offline trained on a large amount of bilingual sentences to compute the lexical and word alignment features.
the tree structural features, similar to word and phrase penalty features in phrase based SMT models, are computed online for both training and testing.
contrasting
train_8184
Also, SCFs perform considerably better than COs in the English experiment (we only have the result for F4 available, but it is considerably lower than the result for F3).
earlier English studies have reported contradictory results (e.g.
contrasting
train_8185
When considering the general level of performance, our best performance for French (65.4 F) is lower than the best performance for English in the experiment of Sun and Korhonen (2009).
it does compare favourably to the performance of other stateof-the-art (even supervised) English systems (Joanis et al., 2008;Li and Brew, 2008;Ó Séaghdha and Copestake, 2008;Vlachos et al., 2009).
contrasting
train_8186
Our results contrast with those of Ferrer who showed that a clustering approach does not transfer well from English to Spanish.
she used basic SCF and named entity features only, and a clustering algorithm less suitable for high dimensional data.
contrasting
train_8187
A dependency forest has a structure of a hypergraph such as packed forest (Klein and Manning, 2001;Huang and Chiang, 2005).
while each hyperedge in a packed forest naturally treats the corresponding PCFG rule probability as its weight, it is challenging to make a dependency forest a weighted hypergraph because dependency parsers usually only output a score, which can be either positive or negative, for each edge in a dependency tree rather than for a hyperedge in a dependency forest.
contrasting
train_8188
Input: a source sentence ψ, a forest F, an alignment a, and k
Output: minimal initial phrase set R
for each node v ∈ V in bottom-up order do
  for each hyperedge e ∈ E with head(e) = v do
    W ← ∅
    fixs ← EnumFixed(v, modifiers(e))
    floatings ← EnumFloating(modifiers(e))
    add the structures fixs and floatings to W
    for each ω ∈ W do
      if ω is consistent with a then
        generate a rule r
        R.append(r)
  keep the k-best dependency structures for v
In tree-based rule extraction, one just needs to first enumerate all bilingual phrases that are consistent with the word alignment and then check whether the dependency structures over the target phrases are well-formed.
this algorithm fails to work in the forest scenario because there are usually exponentially many well-formed structures over a target phrase.
contrasting
train_8189
(2008) show that the string-to-dependency system achieves a 1.48-point improvement in BLEU with a dependency language model, while there is no improvement without it.
the string-to-dependency system still commits to using a dependency language model from noisy 1-best trees.
contrasting
train_8190
The Prague Dependency Treebank also contains annotation for light verb constructions (Cinková and Kolářová, 2005) and NomBank (Meyers et al., 2004b) provides the argument structure of common nouns, paying attention to those occurring in support verb constructions as well.
Zarrieß and Kuhn (2009) make use of translational correspondences when identifying multiword expressions (among them, light verb constructions).
contrasting
train_8191
The recognition of light verb constructions cannot be solely based on syntactic patterns for other (productive or idiomatic) combinations may exhibit the same verb + noun scheme (see section 2).
in agglutinative languages such as Hungarian, nouns can have several grammatical cases, some of which typically occur in a light verb construction when paired with a certain verb.
contrasting
train_8192
For the above reasons, a single light verb construction manifests in several different forms in the corpus.
each occurrence is manually paired with its prototypical (i.e.
contrasting
train_8193
occurs 5.8 times in the corpus on average.
the participle form irányadó occurs in 607 instances (e.g.
contrasting
train_8194
In this case, the computer must first recognize that the parts of the collocation form one unit (Oravecz et al., 2004), for which the multiword context of the given word must be considered.
the lack (or lower degree) of compositionality blocks the possibility of word-by-word translation (Siepmann, 2005;Siepmann, 2006).
contrasting
train_8195
Clearly, for the time being this process cannot be done by a computer at the level of a human expert.
important tasks may be automated such as market forecasting, which relies on identifying and aggregating relevant information from the World Wide Web (Berekoven et.
contrasting
train_8196
Under a market we unite branches, products, and technologies, because the distinction between these is not clear in general (e.g., for semiconductors).
we define a criterion to be a metric attribute that can be measured over time.
contrasting
train_8197
decreasing or increasing) cannot be derived from the given values.
this information can mostly be obtained from a nearby indicator word (e.g.
contrasting
train_8198
W_C: This affinity matrix aims to reflect the cross-document relationships between sentences in the document set.
the relationships in this matrix are used for carrying the influences of the sentences in other documents on the local saliency of the sentences in a particular document.
contrasting
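A small sketch of how a cross-document affinity matrix of this kind could be built with TF-IDF cosine similarity, zeroing within-document entries; the toy documents and the use of scikit-learn are assumptions made for illustration.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    ["the parliament passed the budget", "opposition parties criticised the vote"],
    ["the budget vote drew sharp criticism", "markets reacted calmly to the news"],
]
sentences = [s for d in docs for s in d]
doc_id = [i for i, d in enumerate(docs) for _ in d]

# Cosine similarities between all sentences, then keep only the entries that
# cross document boundaries so W_C carries cross-document influence.
sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
W_C = sim.copy()
for i in range(len(sentences)):
    for j in range(len(sentences)):
        if doc_id[i] == doc_id[j]:
            W_C[i, j] = 0.0
print(np.round(W_C, 2))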
train_8199
Overall, the proposed unified graph-based approach is effective for both single document summarization and multi-document summarization.
the performance improvement for single-document summarization is more significant than that for multi-document summarization, which shows that the global information in a document set is very beneficial to the summarization of each single document in the document set.
contrasting