id — string, lengths 7–12
sentence1 — string, lengths 6–1.27k
sentence2 — string, lengths 6–926
label — string, 4 classes
train_8000
The altered metric aims to capture the visual and semantic salience of radicals in kanji perception, and to also take into account some basic shape similarity.
to the previous approach, we can consider kanji as arbitrary symbols rendered in print or on screen, and then attempt to measure their similarity.
contrasting
train_8001
In Figure 5, the baseline parser incorrectly assigned 签订/NN (signing) as the head of 合作/NN (cooperation).
after using the case element 项目/NN (project) 合作/NN, the correct head of 合作/NN was found by the proposed parser.
contrasting
train_8002
Gold-standard word segmentation and POS tags are applied in the previous experiments.
parsing accuracy will be affected by incorrect word segmentation and POS tagging in real applications.
contrasting
train_8003
Relation extraction was treated as a sequence labeling problem and relational patterns were discovered to boost the performance.
this model extracts relations without considering dependencies between entities, and the best reported F-measure is 67.91, which is significantly (by 2.5%) lower than that of our MLN system when evaluated on the same training and testing sets.
contrasting
train_8004
Engström (2004) reports decreased accuracy in cross-domain classification since sentiment in different domains is often expressed in different ways.
it is impossible in practice to have annotated data for all possible domains of interest.
contrasting
train_8005
Unaligned words in either language (an empty row or column in the alignment matrix, not present in our example) will be attached as high as possible in our tree.
other ways of handling unaligned words are possible given the decomposition tree.
contrasting
train_8006
Many reordering approaches have been proposed for the statistical machine translation (SMT) system.
the information about the type of the source sentence is ignored in previous work.
contrasting
train_8007
(2002) proposed the log-linear model.
reordering is always a key issue in the decoding process.
contrasting
train_8008
A chunk-level reordering model was first proposed by .
all the existing models make no distinction between the different types of the source sentence.
contrasting
train_8009
Obviously, the Chinese question phrase "什么样 的 座位 (What kind of seats)" should be put at the beginning of its English translation.
many phrase-based systems fail to do this.
contrasting
train_8010
The fundamental assumption underlying much recent work on syntax-based modeling, which is considered to be one of the next technology breakthroughs in SMT, is that translational equivalence can be well modeled by structural transformation.
as discussed in prior art (Galley et al., 2004) and in this paper, linguistically-informed SCFG is an inadequate model for parallel corpora because by its nature it allows only child-node reorderings.
contrasting
train_8011
More recently, the hidden Markov support vector machines (HM-SVMs) (Altun et al., 2003) have been proposed which combine the flexibility of kernel methods with the idea of HMMs to predict a label sequence given an input sequence.
HM-SVMs require fully annotated corpora for training, which are difficult to obtain in practical applications.
contrasting
train_8012
However, HM-SVMs require fully annotated corpora for training, which are difficult to obtain in practical applications.
the HVS model can be easily trained from only lightly annotated corpora.
contrasting
train_8013
In the uncertainty sampling scheme, the most uncertain unlabeled example is considered the most informative case selected by the active learner at each learning cycle.
an uncertain example for one classifier may not be an uncertain example for other classifiers.
contrasting
train_8014
The motivation behind the overall-uncertainty method is similar to that of the max-confidence method.
the max-confidence method only considers the most informative example at each learning cycle.
contrasting
train_8015
(2008) proposed a minimum expected error strategy to learn a stopping criterion through estimation of the classifier's expected error on future unlabeled examples.
neither of the two studies gave an answer to the problem of how to define an appropriate threshold for the stopping criterion in a specific task.
contrasting
train_8016
Vlachos (2008) also studied a stopping criterion of active learning based on the estimate of the classifier's confidence, in which a separate and large dataset is prepared in advance to estimate the classifier's confidence.
there is a risk of being misled, because how many examples are required for this pre-given separate dataset is an open question in real-world applications, and it cannot guarantee that the classifier shows a rise-peak-drop confidence pattern during the active learning process.
contrasting
train_8017
In previous studies on active learning, the initial training set is generally generated by random sampling from the whole unlabeled corpus.
random sampling cannot guarantee selecting the most representative subset, because the size of the initial training set is generally too small (e.g.
contrasting
train_8018
We think selecting examples from each cluster can alleviate the redundancy problem.
this sampling scheme works poorly for WSD and TC on the three data sets, compared to traditional uncertainty sampling.
contrasting
train_8019
Actually, these proposed techniques can be easily applied to committee-based sampling for active learning.
to do so, we should adopt a new uncertainty measurement such as vote entropy to measure the uncertainty of each unlabeled example in the committee-based sampling scheme.
contrasting
train_8020
While Bleu is a reasonable choice for evaluating the quality of the overall composite set of translation sentences, it is not suitable for sentence-level decisions.
in line with Nomoto (2003)'s motivation for developing m-precision as an alternative to Bleu, we make the following observation.
contrasting
train_8021
This means it is a measure for how much the occurrence of word A makes the occurrence of word B more likely, which we term positive association, and how much the absence of word A makes the occurrence of word B more likely, which we term negative association.
our experiments show that only positive association is beneficial for aligning words cross-lingually.
contrasting
train_8022
We can see that a query and its translation share some pivots which are associated with statistical significance.
it also illustrates that the actual LLR value is less insightful and can hardly be compared across these two corpora.
contrasting
train_8023
In Table 6, we see that the first query "gear" is highly associated with "shift".
on the English side we see that gear is most highly associated with the pivot word gear.
contrasting
train_8024
In summary, we can see that in both cases the degrees of association are rather different, and cannot be compared without preprocessing.
it is also apparent that in both examples a simple L1 normalization of the degrees of association does not lead to more similarity, since the relative differences remain.
contrasting
train_8025
Section 1); and (ii) even a single sentence could be considered a case of plagiarism, as it transmits a complete idea.
a plagiarised sentence is usually not enough to automatically negate the validity of an entire document.
contrasting
train_8026
Our hypothesis is that whenever there is a lexical class motivated by a particular distributional behaviour, a learner can be trained to identify the members of this class.
there are two main problems in lexical classification: noise and silence, as we will see in section 4.
contrasting
train_8027
Brent's hypothesis, followed by most authors afterwards, is that noise can be eliminated by statistical methods because of its low frequency.
the fact is that in our test set significant information is as sparse as noise, and the DT cannot correctly handle this.
contrasting
train_8028
This was the main reason to introduce negative contexts as well as positive ones, as we mentioned in section 3.
these systematic sources of error can be taken as an advantage when assessing the usability of the resulting resources.
contrasting
train_8029
Sample similarity in the multi-way sentiment detection setting has previously been considered by using Support Vector Machines (SVMs) in conjunction with a metric labeling metaalgorithm (Pang and Lee, 2005); by taking a semisupervised graph-based learning approach (Goldberg and Zhu, 2006); and by using "optimal stacks" of SVMs (Koppel and Schler, 2006).
each of these methods has shortcomings (Section 2).
contrasting
train_8030
This problem may be addressed by considering SVM regression (SVM-R) (Smola and Schölkopf, 1998), where class labels are assumed to come from a discretisation of a continuous function that maps the feature space to a metric space.
SVM-R, like the SVM schemes described here, trains on the entire feature set for all the classes in the dataset.
contrasting
train_8031
We see that for high β, there are few n-grams with p(u|E) ≥ β; this is as expected.
even at a high threshold of β = 0.9 there are still on average three 4-grams per sentence with posterior probabilities that exceed β.
contrasting
train_8032
favour the shorter but disfluent hypothesis; normalising by length was not effective.
the stupid-backoff LM has better coverage and the backing-off behaviour is a clue to the presence of disfluency.
contrasting
train_8033
The only exceptions are articles without an infobox, which cannot be used for training.
this is not a real issue because the amount of remaining data is sufficient: 9000 articles can be used for this task.
contrasting
train_8034
The complexity of the parsing algorithm is usually considered the reason for long parsing times.
it is not the most time-consuming component, as proven by the above analysis.
contrasting
train_8035
This provides an advantage for small training corpora.
this is probably not the main reason for the high improvement, since for languages with only slightly larger training sets such as Chinese the improvement is much lower, and the gradient at the end of the curve is such that a huge amount of training data would be needed to make the curve reach zero.
contrasting
train_8036
The scores for Catalan, Chinese and Japanese are still lower than the top scores.
the parser would have ranked second for these languages.
contrasting
train_8037
The figures are not directly comparable since HALogen takes as input syntactic structures.
it gives us an idea of where our generator is situated.
contrasting
train_8038
In previous studies in IR, weights that decay with distance within a text window have been proposed.
the decaying functions are defined manually.
contrasting
train_8039
If we consider the question Q24 below as reference, question Q26 will be deemed more useful than Q25 when using cos or mcs because of the higher relative lexical and conceptual overlap with Q24.
this is contrary to the actual ordering Q25 ≻ Q26 | Q24, which reflects the fact that Q25, which expects the same answer type as Q24, should be deemed more useful than Q26, which has a different answer type.
contrasting
train_8040
The analysis above shows the importance of using the answer type when computing the similarity between two questions.
instead of relying exclusively on a predefined hierarchy of answer types, we have decided to identify the question focus of a question, defined as the set of maximal noun phrases in the question that corefer with the expected answer.
contrasting
train_8041
It does not take any information of other mentions into account.
it turned out that it is difficult to improve upon their results just by applying a more sophisticated learning method and without improving the features.
contrasting
train_8042
Since flat-K partitioning did not perform as well, we focus here on recursive 2-way partitioning.
to flat-K partitioning, this method does not need any information about the number of target sets.
contrasting
train_8043
The CEAF algorithm aligns entities in key and response by means of a similarity metric, which is motivated by B³'s shortcoming of using one entity multiple times (Luo, 2005).
although CEAF theoretically does not require the same number of mentions in key and response, the algorithm still cannot be directly applied to end-to-end coreference resolution systems, because the similarity metric is influenced by the number of mentions in key and response.
contrasting
train_8044
We implemented distance as weights on hyperedges which resulted in decent performance.
this is limited to pairwise relations and thus does not exploit the power of the high-degree relations available in COPA.
contrasting
train_8045
lexical or structural) of the target sentences.
in recognizing relations, humans are not thus constrained and rely on an abundance of implicit world knowledge or background information.
contrasting
train_8046
What qualifies as world or background knowledge is rarely explored in the RE literature, and we do not attempt to provide complete or precise definitions in this paper.
we show that by considering the relationship between our relations of interest, as well as how they relate to some existing knowledge resources, we improve the performance of RE.
contrasting
train_8047
(2008) took advantage of the hierarchical ontology of relations by proposing methods customized for the perceptron learning algorithm and support vector machines.
we propose a generic way of using the relation hierarchy which at the same time, gives globally coherent predictions and allows for easy injection of knowledge as constraints.
contrasting
train_8048
Predicate senses have been used more widely in semantic role labeling (Hajič et al., 2009; Surdeanu et al., 2008).
both of the pipeline methods ignore possible dependencies between the word senses and semantic roles, and can result in the error propagation problem.
contrasting
train_8049
Besides jointly learning semantic role assignment of different constituents for one task (semantic role labeling), their methods have been used to jointly learn for two tasks (semantic role labeling and syntactic parsing).
it is easy for the re-ranking model to lose the optimal result if it is not included in the top n results.
contrasting
train_8050
Semantic role labeling can help not only predicate sense disambiguation but also argument sense disambiguation (a little).
because of the limitation of the pipeline model, it is difficult to make semantic role labeling help predicate and argument sense disambiguation simultaneously.
contrasting
train_8051
First, most works identify the polarity of adjectives and adverbs because the syntactic constructs generally express sentimental semantics.
our method identifies the polarity of person names.
contrasting
train_8052
For instance, knowing the existence of an emotion is often insufficient to predict future events or decide on the best reaction.
if the emotion cause is known in addition to the type of emotion, prediction of future events or assessment of potential implications can be done more reliably.
contrasting
train_8053
It does not necessarily occur in the.
the hyperplane is used to separate positive and negative instances during the classification process without consideration of the margin.
contrasting
train_8054
Normally, parsing is done at sentence level.
in many cases a pronoun and its antecedent candidate do not occur in the same sentence.
contrasting
train_8055
It is the same as our strategy without negative instances from non-event anaphoric pronouns.
our study showed an improvement by adding negative instances from non-event anaphoric pronouns, as shown in Table 4.
contrasting
train_8056
Synthesis works better for subjects in the Person category, because the biographical structure provides specific and fairly unrelated content in each section, making the synthesis less redundancy-prone.
there is arbitrariness when organizing articles in the Event and Culture categories.
contrasting
train_8057
One of the aims of this grammar is to be precision-oriented: it tries to give detailed analyses of the German language, and reject ungrammatical sentences as much as possible.
this precision comes at the cost of lower coverage, as we will see later in this paper.
contrasting
train_8058
The intuition is that when the restricted part of the grammar can find a solution, that solution will indeed be found, and preferred by the statistical models.
when the sentence is extragrammatical, the robustness rules may be able to overcome the barriers.
contrasting
train_8059
The former is hard to establish, because of the missing lexical item.
the latter should be doable: the lexicon knows that 'yesterday' is an adverb that modifies verbs.
contrasting
train_8060
As expected, pivot translations yield lower quality scores than the corresponding direct translation hypotheses.
pivot hypotheses may contain better lexical predictions that the additional model helps transfer into the baseline system, yielding translations of higher quality, as shown in many cases by the +auxLM system results.
contrasting
train_8061
As can be seen, the (f r → de) system is still improved by using the additional language model.
the absolute value of the gain under the full condition (+0.61) is lower than that of the intersection data condition (+0.96).
contrasting
train_8062
This could be explained, in addition to the lower Bleu3 and Bleu4 precision, by the fact that the quality of the translation of grammatical words may have decreased.
Italian brings little improvement for content words save for nouns.
contrasting
train_8063
Both deal with multilingual information extracted from the Web.
the majority of CLIR studies pursue different targets.
contrasting
train_8064
As discussed in Section 3.1, we dismissed terms for which no translation was found in any of the available dictionaries, so each term in each of the obtained pairs has at least a single translation to the target language.
in many cases the available translations represent the wrong word sense, since both the source terms and their translations can be ambiguous.
contrasting
train_8065
Then we use the pointwise mutual information between a noun phrase and an opinion word to measure the association.
this PMI value cannot be encoded directly as a feature as it only captures the local information between antecedent candidates and opinion words.
contrasting
train_8066
Thus we did not use the following features: semantic class agreement features, the gender agreement feature, and the appositive feature.
we added some specific features, which are based on two extracted entities, i and j, where i is the potential antecedent and j is the potential anaphor: Is-between feature: Its possible values are true and false.
contrasting
train_8067
(2008) built a cross document coreference system using features from encyclopedic sources like Wikipedia.
successful coreference resolution is insufficient for correct entity linking, as the coreference chain must still be correctly mapped to the proper KB entry.
contrasting
train_8068
Most readers will immediately think of the 42nd US president.
the only two William Clintons in Wikipedia are "William de Clinton" the 1st Earl of Huntingdon, and "William Henry Clinton" the British general.
contrasting
train_8069
All of the above features are general for any KB.
since our evaluation used a KB derived from Wikipedia, we included a few Wikipedia specific features.
contrasting
train_8070
The performance of ranking decreases at a significance level of 0.05 when it is removed from the best feature combination.
other features do not show significant contribution.
contrasting
train_8071
All work mentioned above shares a common setting: an MBR decoder is built based on one and only one MAP decoder.
recent research has shown that substantial improvements can be achieved by utilizing consensus statistics over multiple SMT systems (Rosti et al., 2007;Li et al., 2009a;Li et al., 2009b;Liu et al., 2009).
contrasting
train_8072
Indeed, a link between a finite verb and an article does not correspond to any grammatical relation between the two.
the premise for our work is that redundancy should be sufficient to identify not only important words but also salient links between words.
contrasting
train_8073
This procedure is similar to the one used by Barzilay & Lee (2003) in that we also first identify "backbone nodes" (unambiguous alignments) and then add mappings for which several possibilities exist.
they build lattices, i.e., directed acyclic graphs, whereas our graphs may contain cycles.
contrasting
train_8074
For example, if some sentences speak of president Barack Obama or president of the US Barack Obama, and some sentences are about president Obama, we want to add some reward to the edge between president and Obama.
longer paths between words are weak signals of word association.
contrasting
train_8075
In particular, we use the Viterbi algorithm to find the sequence of words of a predefined length n which maximizes the bigram probability (MLE-based). Similar to the shortest path implementation, we specify the compression length and also set it here to eight tokens.
the compressions obtained with this method are often unrelated to the main theme.
contrasting
train_8076
An interesting difference in the performance for Spanish and English is that shortest path generates more grammatical sentences than the improved version of it.
the price for higher grammaticality scores is a huge drop in informativity: half of such summaries are not related to the main theme at all, whereas 40% of the summaries generated by the improved version got the highest rating.
contrasting
train_8077
One may notice that the summaries produced by the baseline are shorter than those generated by the shortest paths which might look like a reason for its comparatively poor performance.
the main source of errors for the baseline was its inability to keep track of the words already present in the summary, so it is unlikely that longer sequences would be of a much higher quality.
contrasting
train_8078
Perhaps the work of Barzilay & Lee (2003) who align comparable sentences to generate sentencelevel paraphrases seems closest to ours in that we both use word graphs for text generation.
this is a fairly general similarity, as both the goal and the implementation are different.
contrasting
train_8079
ant spellings found in the dictionary.
for LNK, we use all the remaining relations, namely hypernyms, domains, etc.
contrasting
train_8080
With polysemous words (PC, PA), expansion works more effectively, and helps to supply appropriate images for each sense.
with MC, both LNK and SYN have lower precision.
contrasting
train_8081
(2009) undertook expansion using hypernyms and this may be an appropriate way to obtain many more images for each sense.
because our aim is to employ several suitable images for each sense, high precision is preferable to high recall.
contrasting
train_8082
Now, we focus on LNK shared by Lexeed, and then we analyze the reasons for F (Table 10).
to Lexeed, no sense is classified as "difficult to portray the sense using images".
contrasting
train_8083
Thus, the positional information is not required nor is it maintained.
we maintain positional information at each node as this is critical for the selection of candidate paths.
contrasting
train_8084
These methods are proven to be effective in improving the quality of alignments.
the discriminative training in these methods is restricted to using the model components of generative models; in other words, incorporating new features is difficult.
contrasting
train_8085
For simpler models such as Model 1 and Model 2, it is possible to obtain sufficient statistics from all possible alignments in the E-step.
for fertility-based models such as Models 3, 4, and 5, enumerating all possible alignments is NP-complete.
contrasting
train_8086
On the one hand, two constituents are combined by the algorithm's inference rules only if they cover disjoint parts of the input semantics.
the semantic indices present in both the input formula and the lexically retrieved RTG trees are used to prevent the generation of intermediate structures that are not compatible with the input semantics.
contrasting
train_8087
An ideal metric would be the actual BLEU score that the system would obtain under this reordering rule on the development set.
since each rule affects word alignment, phrase extraction, optimal feature weights, and the actual translation, it would be necessary to retrain the entire phrase-based system for each possible rule, which is impractical.
contrasting
train_8088
While genetically related languages tend to have similar typological features as they could inherit the features from their common ancestor, they could also differ a lot due to language change over time.
languages with no common ancestor could share many features due to language contact and other factors.
contrasting
train_8089
Table 9 shows that MADA produces a high quality segmentation, and that the effect of cascading segmentation errors on parsing is only 1.92% F1.
MADA is language-specific and relies on manually constructed dictionaries.
contrasting
train_8090
It has been observed that words are similar if their contexts are similar (Freitag et al., 2005) and so synonymy detection has received a lot of attention during the last decades.
words used in the same context are not necessarily synonyms and can embody different semantic relationships such as hyponyms, meronyms or co-hyponyms (Heylen et al., 2008).
contrasting
train_8091
One obvious way to verify all the possible connections between words of the vocabulary employs an exhaustive search.
comparison based on word usage can only highlight those terms that are highly similar in meaning.
contrasting
train_8092
To obtain a paraphrase corpus, we compute all sentence pair similarities Sumo(Sa, Sb) and select only those pairs exceeding a given threshold, in our case threshold = 0.85, which is quite restrictive, ensuring the selection of strongly connected pairs.
to take into account the normalization of the corpus, small adjustments had to be made to the methodology proposed in (Cordeiro et al., 2007a).
contrasting
train_8093
Documents are generated based only on a BOW assumption.
word order information is very important for most text-related tasks, and simply discarding the order information is inappropriate.
contrasting
train_8094
grammatical relations) to form the virtual words.
we do not incorporate all the dependencies.
contrasting
train_8095
On the other hand, it is common for researchers to rephrase and republish their research, tailoring it for different academic journals and conference articles, to disseminate their research to the widest possible interested public.
these researchers must include in each publication a meaningful or important portion of new material (Wikipedia, 2010).
contrasting
train_8096
That is, the CA and CR are very close in their quality to the best methods.
the CA and the CR have a clear advantage over the other methods.
contrasting
train_8097
CA especially, and also CR, are among the best methods for identifying various levels of plagiarism.
to the best full and selective fingerprint methods, CA and CR check a rather small portion of the papers, and therefore their run time is much smaller.
contrasting
train_8098
As the previous discussion has shown, Wordnet-LMF provides a very useful basis for converting GermaNet into LMF.
a number of modifications to Wordnet-LMF are needed if this conversion is to preserve all information present in the original resource.
contrasting
train_8099
In WordNet both syntactic frames and examples are linked to synsets.
at least in the case of syntactic frames the linkage to synsets seems problematic since different members of the same synset may well have different valence frames.
contrasting