Dataset columns:
id: string, length 7 to 12
sentence1: string, length 6 to 1.27k
sentence2: string, length 6 to 926
label: string, 4 classes
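Each record below occupies four consecutive lines in the order id, sentence1, sentence2, label. The following is a minimal sketch of how such a flat dump could be parsed back into structured records; the file name records.txt and the helper parse_records are illustrative assumptions, not part of the dataset.

```python
# Minimal sketch: parse the flattened dump into dicts.
# Assumes four non-empty lines per record (id, sentence1, sentence2, label);
# "records.txt" is a hypothetical file holding the rows below.
from typing import Dict, List


def parse_records(path: str) -> List[Dict[str, str]]:
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    records = []
    # Each record occupies four consecutive lines.
    for i in range(0, len(lines) - 3, 4):
        records.append({
            "id": lines[i],
            "sentence1": lines[i + 1],
            "sentence2": lines[i + 2],
            "label": lines[i + 3],
        })
    return records


if __name__ == "__main__":
    for rec in parse_records("records.txt")[:3]:
        print(rec["id"], "->", rec["label"])
```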
train_8200
A distinct advantage of rule-based bots is that they have very high precision.
they suffer from fixed-size knowledge bases and use only rigid rules.
contrasting
train_8201
are highly reliable to be real questions.
the initial training data still contain many sentences ending with "?"
contrasting
train_8202
Work on answer patterns includes web-based pattern mining (Zhang and Lee, 2002; Du et al., 2005) and a combination of syntactic and semantic elements (Soubbotin and Soubbotin, 2002), etc.
to previous work, we do not only focus on standard language corpus, but extensively explore characteristics of online questions.
contrasting
train_8203
Since the vocabulary size of possible character-tag pairs is limited, the character-based models can tolerate out-of-vocabulary (OOV) words and have become the dominant technique for CWS in recent years.
statistical approaches can also be classified as either adopting a generative model or adopting a discriminative model.
contrasting
train_8204
Having analyzed the scores of the model scanning from both directions, we found that the original scores (from the left-to-right scan) at the stages "者" and "宿" indeed get better if the model scans from right-to-left.
the score at the stage "露" deteriorates because the useful feature "者" (a past non-adjacent character for "露" when scanning from right-to-left) still cannot be utilized when the past context "宿者" as a whole is unseen, when the related probabilities are estimated via the modified Kneser-Ney smoothing technique (Chen and Goodman, 1998).
contrasting
train_8205
The two scanning modes seem not to complement each other, which is contrary to our original expectation.
we found that the character-based generative model and the discriminative one complement each other much more than the two scanning modes do.
contrasting
train_8206
Generally speaking, if the "G(or D)+" has a strong preference for the desired candidate, but the "D(or G)-" has a weak preference for its top-1 incorrect candidate, then this combining method would correct most "G+D-" (also "G-D+") errors.
the advantage of combining two models would vanish if the "G(or D)+" has a weak preference while the "D(or G)-" has a strong preference over their top-1 candidates.
contrasting
train_8207
From the results, it can be seen that the generative model achieves comparable results with the discriminative one, and they outperform each other on different corpora.
the generative model exceeds the discriminative one on R IV (0.973 vs. 0.956) but loses on R OOV (0.511 vs. 0.680).
contrasting
train_8208
Last, it is found that weighting various features differently would give a better result.
further study is required to find out the true reason for this strange but interesting phenomenon.
contrasting
train_8209
They adopt a holistic lexicon-based approach to solve this problem, which exploits external information and evidences in other sentences and other reviews.
in this paper, we obtain the prior knowledge of a product by mining the web, and then use such knowledge to determine the SO of DSAAs.
contrasting
train_8210
(2006) is similar to ours because they also define the sentiment score of a word by its composite characters.
their algorithm is based only on frequency, while we exploit pointwise mutual information, which can capture the character-sentiment association.
contrasting
train_8211
For a positive expectation noun, people usually expect the thing referred to by the noun to be bigger, higher or happen frequently.
for a negative expectation noun, people usually expect the thing referred to by the noun to be smaller, lower or not to happen.
contrasting
train_8212
Obviously, we can construct the potential tokenizations and translations by only using the extracted rules, in line with traditional translation decoding.
it may limit the potential tokenization space.
contrasting
train_8213
Using the full rule table, our joint method significantly outperforms the best single system SF by +1.96 and +1.66 points on MT04 and MT05 respectively, and also outperforms the lattice-based system by +1.46 and +0.93 points.
the 8 tokenization features have a small impact on the lattice system, probably because of the limited tokenization space (Table 3: effect of tokenization features on the Chinese-English translation task).
contrasting
train_8214
With high-quality emotion lexicons, systems using simple methods can achieve competitive performance.
to manually build an emotion lexicon is time-consuming.
contrasting
train_8215
The above section introduced a method to rank words with a few seed emotion words.
to build emotion lexicons requires that we manually remove the noise incurred by the automatic ranking method.
contrasting
train_8216
In fact, most of the words among the extracted emotion nouns can be used as verbs or adjectives in Chinese.
since CCD is not designed for emotion analysis, words which are expressions of emotions, such as 哭泣 (cry), or of evaluation, such as 胆小 (cowardice), were included.
contrasting
train_8217
The side-effect of introducing false positives for AM-1 is to lower accuracy.
the AM-1 model outperforms both AM-2 and BL-2 models on F-measure ( Figure 6), with an average increase of 5.2 and 3.4 percentage points respectively.
contrasting
train_8218
Recently, contextual source-language features have been incorporated into translation models to predict translation phrases for traveling domain tasks (Stroppa et al., 2007;Haque et al., 2009).
we are not aware of any work addressing contextual modeling for statistical translation of spoken meeting-style interactions, not least due to the lack of a relevant corpus.
contrasting
train_8219
System 6 shows a more even performance across dev and eval sets than our trained system, which may reflect some degree of overtuning of our systems to the relatively small development set (about 7K words).
the PER scores of System 6 are significantly worse compared to our in-house systems.
contrasting
train_8220
After extracting the phrasal (rule) tables for each data source, they were combined into a single phrasal (rule) table using the same combination approach as for the basic phrase-based system.
the translation results (BLEU/PER of 24.0/46.6 (dev) and 20.8/47.6 (eval), respectively) did not show any improvement over the basic phrase-based system.
contrasting
train_8221
Our goal was not to determine how translation of meeting style data can be improved in general -better translations could certainly be generated by better syntactic modeling, addressing morphological variation in German, and generally improving phrasal coverage, in particular for sentences involving colloquial expressions.
these are fairly general problems of SMT that have been studied previously.
contrasting
train_8222
Because the training takes polynomial time in the number of common features in x (|x ∩ F_C|) at each round, we need to set N to a smaller value when we take higher-order conjunctive features into consideration.
since the margin computation takes linear time in the number of support vectors |S_R| relevant to rare features f_R ∈ F \ F_C, we need to set N to a larger value when we handle a larger number of training examples.
contrasting
train_8223
Near-synonyms are useful knowledge resources for many natural language applications such as query expansion for information retrieval (IR) and paraphrasing for text generation.
near-synonyms are not necessarily interchangeable in contexts due to their specific usage and syntactic constraints.
contrasting
train_8224
In this paper, we consider the near-synonym substitution task as a classification task, where a classifier is trained for each near-synonym set to classify test examples into one of the near-synonyms in the set.
near-synonyms share more common context words (features) than semantically dissimilar words in nature.
contrasting
train_8225
Ideally, the correct answers should be provided by human experts.
such data is usually unavailable, especially for a large set of test examples.
contrasting
train_8226
Table 1 shows some examples, where Example 3 is an example of misclassification.
although Example 2 is a correct classification, it might be an ambiguous case to classifiers since the scores are close among classes.
contrasting
train_8227
A zero weight indicates that the dimension word does not occur in the input sentence, thus the corresponding dimension of each column vector will not be adjusted.
the corresponding dimension of the column vector of the correct class (k = j) is adjusted by adding a value, while those of the competing classes (k ≠ j) are adjusted by subtracting a value from them.
contrasting
train_8228
Let the general distributional meaning of the word w be w. Their model computes a different vector w_s that represents the specific distributional meaning of w with respect to s, i.e.
in general, this operator gives different vectors for each word w_i in the sequence. The model of Erk and Padó (2008) was designed to disambiguate the distributional meaning of a word w in the context of the sequence s. Substituting the word w with the semantic head h of s allows computing the distributional meaning of the sequence s as shaped by the word that is governing the sequence (c.f.
contrasting
train_8229
To suit our need, we tested the k-means clustering with distributional similarity.
it does not perform as well as the proposed method.
contrasting
train_8230
In the first iteration of EM, soft-labeled examples SL are treated in the same way as the labeled examples in L. Thus both SL and L are used as labeled examples to learn the initial classifier f 0 .
in the subsequent iterations, SL is treated in the same way as any examples in U.
contrasting
train_8231
Note that, in the baseline system, all the new entities are found via the empty candidate set of the name variation process, while the disambiguation component makes no contribution.
our approach finds the new entities not only via the empty candidate set, but also by leveraging the disambiguation component, which also contributes to the performance improvement.
contrasting
train_8232
Traditionally, without any training data available, the solution is to rank the candidates based on similarity.
it is difficult for the ranking approach to detect a new entity that is not present in KB, and it is also difficult to combine different features.
contrasting
train_8233
A large body of prior research on coreference resolution recasts the problem as a two-class classification problem.
standard supervised machine learning algorithms that minimize classification errors on the training instances do not always lead to maximizing the F-measure of the chosen evaluation metric for coreference resolution.
contrasting
train_8234
An SVM-based ranker then picks the output that is likely to have the highest F-measure.
this approach is time-consuming during testing, as F-measure maximization is performed during testing.
contrasting
train_8235
Second, unlike the incremental local loss in Daume III (2006), we evaluate the metric score globally.
to Ng (2005), Ng and Cardie (2002a) proposed a rule-induction system with rule pruning.
contrasting
train_8236
Ng (2004) varied different components of coreference resolution, choosing the combination of components that results in a classifier with the highest F-measure on a held-out development set during training.
our proposed approach employs instance weighting and beam search to maximize the F-measure of the evaluation metric during training.
contrasting
train_8237
All our results in this paper were obtained using this reduced feature set and J48 decision tree learning.
given sufficient computational resources, our proposed approach can be applied to any supervised machine learning algorithm.
contrasting
train_8238
The pair is classified wrongly and none of the other pairs in the article can link the two NPs together through clustering.
with MMST, this probability increases to 0.54, which leads to the correct classification.
contrasting
train_8239
This paper is partly inspired by their studies.
we do not simply use click information as clues for mining similar queries.
contrasting
train_8240
Note that, based on H2 and H3, paraphrase Q-Q and T-T can be directly extracted from raw Q-T pairs.
in consideration of precision, we extract them from paraphrase Q-T. We call our paraphrase Q-Q and T-T extraction approach a pivot approach, since we use titles as pivots (queries as targets) when extracting paraphrase Q-Q and use queries as pivots (titles as targets) when extracting paraphrase T-T. Our paraphrase extraction algorithm is shown in Table 2.
contrasting
train_8241
As expected, the paraphrases we extract cover a variety of domains.
around 50% of them are in the 7 most popular domains, including: (1) health and medicine, (2) documentary download, (3) entertainment, (4) software, (5) education and study, (6) computer game, (7) economy and finance.
contrasting
train_8242
Milne and Witten's work is most related to what we propose here in that we also employ features similar to their relatedness and commonness features.
we add to this a much richer set of features which are extracted from Web-scale data sources beyond Wikipedia, and we develop a machine learning approach to automatically blend our features using completely automatically generated training data.
contrasting
train_8243
There often exist multiple corpora for the same natural language processing (NLP) tasks.
such corpora are generally used independently due to distinctions in annotation standards.
contrasting
train_8244
At first sight, a direct combination of multiple corpora is a way to this end.
corpora created for the same NLP tasks are generally built by different organizations.
contrasting
train_8245
Consensus information can be incorporated during the combination of the output (n-best list of full parse trees following distinct annotation standards) of individual parsers.
despite the success of n-best combination methods, they suffer from the limited scope of the n-best list.
contrasting
train_8246
It is therefore necessary to consider the simplification process as a combination of different operations and treat them as a whole.
most of the existing models only consider one of these operations.
contrasting
train_8247
And previous research on English SRL shows that combination is a robust and effective method to alleviate SRL's dependency on parsing results (Màrquez et al., 2005;Koomen et al., 2005;Pradhan et al., 2005;Surdeanu et al., 2007;Toutanova et al., 2008).
the effect of combination for the Chinese SRL task is still unknown.
contrasting
train_8248
(2009) used sentences with golden segmentation and POS tags as input to their SRL system.
we use sentences with only golden segmentation as input.
contrasting
train_8249
Therefore, SO is more complementary to FO1 than other outputs.
FO2 is least complementary to FO1.
contrasting
train_8250
Each token is only assessed once given a set value of n, so we do not suffer from early prefixes being assessed more often.
larger values of n do not take all tokens into account, since the last y tokens of an utterance will not play a part in the accuracy when y < n. Since we evaluate given a gold standard disfluency, this measurement has recall-like properties.
contrasting
train_8251
The model considers whether the utterance continuation after the disfluency is probable given the language model; the relevant bigram here is p(rr_{i+3} | w_i), continuing with p(rr_{i+4} | rr_{i+3}).
under the incremental model, it is possible the utterance has only been read as far as token i+3, in which case the probability p(w_{i+4} | w_{i+3}) is undefined.
contrasting
train_8252
The GENIA tagger (Tsuruoka et al., 2005) is particularly relevant in this respect (as could be the GENIA Treebank proper).
we found that GENIA tokenization does not match the PTB conventions in about one out of five sentences (for example, wrongly splitting tokens like '390,926' or 'Ca(2+)'); also in tagging proper nouns, GENIA systematically deviates from the PTB.
contrasting
train_8253
Existing systems which evaluate each passage separately against the question would view each passage as having a similar degree of support for either hypertrophic cardiomyopathy or aortic stenosis as the answer to the question.
these systems lose sight of a crucial fact, namely, that even though each passage covers half of the facts in the question, (2.1 a) and (2.1 b) cover disjoint subsets of the facts, while (2.2 a) and (2.2 b) address the same set of facts.
contrasting
train_8254
It will give a very small score (typically 0) to x 4 , x 5 , and x 6 , because the passage says nothing about elephants having large ears.
some passage scorers may be misled by the fact that the term "large" appears twice in the question and either occurrence could align to the one occurrence in the passage.
contrasting
train_8255
Watson's final answer merging and ranking component considers a pre-defined set of features and applies a machine learned model to score each candidate answer.
since each candidate has multiple, and generally a varying number of supporting passages, we use a merger to combine passage scores for < candidate answer, passage > pairs into a fixed set of features.
contrasting
train_8256
A merger strategy that takes maximum across passages will choose M AX (s 1.1 , s 1.2 ) as the optimal supporting passage.
since these passages have complementary information to offer, it would be better to somehow aggregate this information.
contrasting
train_8257
In this paper, we only considered merging evidence across passages and question terms.
this may be easily extended to merging evidence across passage scorers.
contrasting
train_8258
POV differences are also related to work on sentiment analysis in natural language processing (NLP).
to most prior work in sentiment analysis, we are concerned only with objective language in this paper.
contrasting
train_8259
Both articles are objective and meet the neutral POV criteria of Wikipedia.
there is a POV difference between them: the French article is more positive than the Spanish article.
contrasting
train_8260
Second, reading and evaluating an entire document takes a long time for an annotator and would make gold standard creation expensive.
our units cannot be too small (e.g., words or phrases) because POV is a complex phenomenon that cannot be judged reliably at such a low level; the sentences about Napoleon in the introduction are examples of this.
contrasting
train_8261
For many objective statements, it is clear which POV class (positive, neutral or negative) applies to them.
there is a certain subset of statements for which the decision is difficult.
contrasting
train_8262
We use light stemming to extract the stem: only frequent suffixes/prefixes are removed.
a word is reduced to its corresponding root by removing all affixes, not just frequent affixes (Al Ameed et al., 2005).
contrasting
train_8263
All differences in accuracy and F1 between the classifier and the baseline are statistically significant at p < .01.
the differences in accuracy and F1 between using BOW and n-gram features are not significant.
contrasting
train_8264
The immediate effect of the shortcomings of a BOW-based feature representation is an incorrect estimation of absolute POV.
since these effects are somewhat random and will in most cases not affect Arabic and English to the same extent, the BOW problem can also give rise to incorrect POV differences.
contrasting
train_8265
Both machine learning and knowledge-base approaches lie at the foundation of contemporary named entities extraction systems.
the progress in deploying these approaches at web scale has been hampered by the computational cost of NLP over massive text corpora.
contrasting
train_8266
Figure 1a shows that Stanford POS tagger has improved throughout the years, increasing its speed by more than 10 times between 2006 and 2012.
the current speed is still only half that of the SENNA POS tagger.
contrasting
train_8267
For example, we do not need to split a number like 1,000.54 into more units, whereas we need to split a comma-separated list of words.
tokenization is important as it reduces the size of the vocabulary and improves the accuracy of the taggers by producing a vocabulary similar to the one used for training.
contrasting
train_8268
The first row shows that, given chunked input, the classification phase is able to achieve scores close to the state-of-the-art classifiers.
given the chunks generated by SpeedRead, the scores drop by around 9.5% in F1.
contrasting
train_8269
In their compound translation task, they use a dictionary to avoid out-of-domain translation.
to address this problem, which frequently arises in domain-specific translation, we decided to generate our own customised lexicon, which we constructed from the multilingual Wikipedia and its dense inter-article link structure.
contrasting
train_8270
do not mention unknown words specifically, the fact that they use a character-based classification model and tokenization indicates that they can handle unknown words and perform stemming on them.
they do not present any evaluation on unknown words specifically.
contrasting
train_8271
The calculation follows the formula of the Workshop in Machine Translation (Callison-Burch et al., 2012), in order to be comparable with other methods: Pairwise comparisons with reference translations and pairwise ties in the human-annotated test-set are ignored.
every tie on the machine-predicted rankings is penalized by being counted as a discordant pair.
contrasting
train_8272
In this respect, the two knowledge sources of the UMLS, semantic network and Metathesaurus, are semantically linked to structure the semantics of biomedicine.
the integration of several vocabulary sources into the UMLS has been carried out by experts with the goal of creating a semantic link among the different biomedical resources while preserving the semantics and terms of the original resources.
contrasting
train_8273
Moreover, these forums have also become effective sources of self-service, thus providing an alternative to traditional customer service options (Roturier and Bensadoun, 2011).
a major challenge in building a system for forum content translation is the lack of parallel forum data for training.
contrasting
train_8274
The phrase-table merging (PTM) technique outlined in Section 3 was developed to rapidly combine incremental and baseline TrMs to aid our iterative data selection method.
here we use it as an alternative technique to combine the in-domain and out-of-domain phrase-tables.
contrasting
train_8275
The results show weighted linear interpolation to be the best-performing system for different datasets and language pairs.
the differences in the evaluation scores between the different combination techniques are mostly statistically insignificant.
contrasting
train_8276
Since perplexity or cross-entropy have low correlation with actual translation quality, sentences selected using such techniques are not guaranteed to improve translation quality.
the TQS method only selects groups of sentences which improve translation quality, which is our overall objective.
contrasting
train_8277
A multitude of text similarity measures have been proposed for computing similarity based on surface-level and/or semantic features (Mihalcea et al., 2006;Landauer et al., 1998;Gabrilovich and Markovitch, 2007).
existing similarity measures typically exhibit a major limitation: They compute similarity only on features which can be derived from the content of the given texts.
contrasting
train_8278
By following this approach, they inherently imply that the similarity computation process does not need to take any other text characteristics into account.
we propose that text reuse detection indeed benefits from also assessing similarity along other text characteristics (dimensions, henceforth).
contrasting
train_8279
Various parts of the source text have been reused, either verbatim (underlined) or using similar words or phrases (wavy underlined).
the editor has split the source text into two individual sentences and changed the order of the reused parts.
contrasting
train_8280
In this paper, we thus overcome the traditional limitation of text similarity measures to content features.
we adopt ideas of seminal studies by cognitive scientists (Tversky, 1977;Goodman, 1972;Gärdenfors, 2000) and discuss the role of three similarity dimensions for the task of text reuse detection: content, structure, and style, as proposed in our previous work (Bär et al., 2011).
contrasting
train_8281
In total, 50 texts out of 253 have been classified incorrectly: 30 instances of text reuse have not been identified by the classifier, and 20 non-reused texts have been mistakenly labeled as such.
the original annotations have been carried out by only a single annotator (Gaizauskas et al., 2001) which may have resulted in subjective judgments.
contrasting
train_8282
The newspaper article about the English singer-songwriter Liam Gallagher, for example, is originally labeled as text reuse.
our classifier falsely assigned the label no reuse.
contrasting
train_8283
Out of these, our classifier mistakenly labeled 759 instances of negative samples as true paraphrases, while 413 cases of true paraphrases were not recognized.
in our opinion the 759 false positives are less severe errors in our envisioned semi-supervised application setting, as user intentions and the current task at hand may highly influence a user's decision to consider texts as reused or not.
contrasting
train_8284
For the latter, we see great potential for improvements by including, for example, measures for grammar analysis, lexical complexity, or measures assessing text organization with respect to the discourse elements.
each task exhibits particular characteristics which influence the choice of a suitable set of similarity dimensions.
contrasting
train_8285
Note that, since we are using the contextsensitive lemmas for matching, one can think of that as matching words on the sense level.
aMIRaN was trained mostly with morpho-syntactic features and therefore achieves good performance in identifying the common lemma of a context-sensitive part-of-speech tag for every word.
contrasting
train_8286
They submit carefully constructed queries to a search engine that might yield potential parallel webpages.
such a procedure depends on the quality of the query strategy as well as the Web index provided by the search engine.
contrasting
train_8287
In the previous section, we demonstrated the utility of our multilingual crawler for languages with reasonable resources.
such data may not be present for several other language pairs.
contrasting
train_8288
Even though the RBMT system was tuned to the domain of the TM via domain specific lexical resources (Section 3.2), most of the errors appear to be due to the RBMT system's inability to pick the right term for the technical domain data set.
compared to SMT and SMT+SPE, both the RBMT and RBMT+SMT system seem to produce a significantly lower number of grammatical errors, according to our evaluators.
contrasting
train_8289
If they become popular, various additional, more-or-less ad hoc linked projects end up being piled upon the original simple design.
there are very complex theories that have been under development for decades to take into account all of the possible phenomena of natural language but have not yet undergone the ultimate test of large-scale treebanking (e.g.
contrasting
train_8290
It is available for use through the Grammar Matrix customization system's web-based interface. The Grammar Matrix core grammar provides support for a wide variety of semantic valences.
the present system only exposes simple intransitive and transitive valences to the customization page.
contrasting
train_8291
This work is similar in spirit to Bender's (2008) development of an implemented grammar for Wambaya (ISO639-3: wmb) based on the Grammar Matrix and a descriptive grammar.
that work focused on hand-development of the grammar and included a manually entered lexicon, in contrast to our work on automatically populating the lexicon for the implemented grammar.
contrasting
train_8292
In fact, it turns out that this indicator conforms well to what one would expect: higher-order n-gram models generate a better approximation of real texts.
no n-gram model is able to capture long-range semantic dependencies well, which we will exploit in our analysis techniques.
contrasting
train_8293
Given a tree T, this allows us to calculate up the tree, even though it is through a downwards-pointing conditional model, so the i-th element of p can be predicted by: It is important to note that here c_1, c_2 and p are binary sampled values, not their probabilities or averages, whereas the resulting probabilities are distributions.
used as a mean-field approximation, Equation 12 is essentially the same as that of the RNN.
contrasting
train_8294
Although there are only 22 species of bumblebees in the UK, this means that there are almost 500 bee comparisons to be generated, and the overall number of possible texts would be orders of magnitude greater when contextual information (e.g., based on location and time of sighting) is included in the feedback.
the NLG (figure text: "Thank you for submitting this photo.")
contrasting
train_8295
The mix of positive and negative values for the first fifteen identifications show that neither of the groups are consistently more accurate than the other.
the last five identifications show continually positive values, indicating that the richer feedback received by Group B was beginning to take effect.
contrasting
train_8296
As users who received the richer Type 2 or Type 3 feedback submitted more records on average than users who received Type 1 feedback, it appears that increasing the richness of information provision through NLG feedback has a positive effect on return rates of participants to the website.
this analysis is preliminary and these figures are potentially skewed by the presence of a small number of dedicated volunteers in each feedback group.
contrasting
train_8297
In addition, recording for some of the children in the BRB corpus began as late as month 21 and for others as early as month 13, making the data available for some of the children hard to compare.
the Providence Corpus provides data for all of the children starting from month 16 at the latest and starting from month 11 at the earliest and thus constitutes a much more homogeneous data set.
contrasting
train_8298
The collocSyll model shows the most dramatic drop in performance, from around 75% at month 11 (973 utterances) to just above 60% for month 21.
both the colloc and the Bigram model start around 60% at month 11 but, for month 21, reach around 70%, so that no single model comes in third for all input sizes.
contrasting
train_8299
We use this resource for the corpus analysis because it allows easy categorisation of words according to their frequency and elegant presentation and interpretation of results.
in Section 5 this method is abandoned and relative frequencies are calculated based on occurrences of the given words in the training corpus, so as to ensure that words not found in the above-mentioned dictionary are also covered.
contrasting
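As a quick usage sketch under the same assumptions as above (records.txt is the same hypothetical file), the parsed records can be sanity-checked against the schema header, e.g. the number of label classes and the per-column sentence-length ranges:

```python
# Sanity-check the flat dump against the schema header above.
# "records.txt" is the same hypothetical file as in the earlier sketch.
from collections import Counter

with open("records.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

# Rebuild records: four consecutive lines per record.
records = [
    {"id": lines[i], "sentence1": lines[i + 1],
     "sentence2": lines[i + 2], "label": lines[i + 3]}
    for i in range(0, len(lines) - 3, 4)
]

# The header says "label: string, 4 classes"; count what actually occurs.
print(Counter(r["label"] for r in records))

# The header gives per-column length ranges; verify min/max.
for col in ("sentence1", "sentence2"):
    lengths = [len(r[col]) for r in records]
    print(col, "length range:", min(lengths), "to", max(lengths))
```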