Columns: id (string, 7-12 chars); sentence1 (string, 6-1.27k chars); sentence2 (string, 6-926 chars); label (string, 4 classes)
train_17700
This is not surprising since the meta features aggregate posts from different authors depending on similarity and thus predicting authorship only on these features does not work as well as the standard approach of using first level features.
these meta features do outperform the bag of words baseline results (compare column MSMF in Table 6 with results shown in Table 4), underscoring the fact that the CHE data set is harder than the typical text classification task where the lexical features by themselves can solve the problem with very high accuracy.
contrasting
train_17701
The experimental evaluation presented here shows that a relatively inexpensive approach based on PCFGs can scale up to a larger number of authors, even if the documents are only a couple of sentences long.
this syntactically driven approach is outperformed by our proposed modality specific meta features framework.
contrasting
train_17702
Gerber and Chai (2010) proposed analysis of English nominal predicates with this similarity to take discourse context into account.
the similarity measure they used has drawbacks: it requires a co-reference resolver and a large number of documents.
contrasting
train_17703
Though the topic changes between two sentences, A and B cannot take this into account and output " (Marriages)", which is an argument of " (increase)", because the frequency-based feature is active.
c and D handle this because the similarity between the nominative case of " " and " " is low.
contrasting
train_17704
There is not much evidence to skew the distribution from uniform for rare words.
when more evidence is available, the distribution becomes smoothly skewed to reflect the different syntactic preferences of the individual words, and it can eventually become as spiky as in the other approaches given sufficient evidence.
contrasting
train_17705
Intuitively, creating more subcategories can increase parsing accuracy.
oversplitting can be a serious problem, as detailed in Klein and Manning (2003).
contrasting
train_17706
As with their work, we also use semantic knowledge for parsing.
our goal is to employ HowNet hierarchical semantic knowledge to generate fine-grained features for dependency parsing, rather than for PCFGs, requiring a substantially different model formulation.
contrasting
train_17707
Blazing depends on the fact that the ERG and the GTB both have a theoretical linguistic underpinning, and so we expect they would share many assumptions about phrase structure, particularly for phenomena such as PP-attachment and co-ordination.
there are also disparities, even between the unlabelled bracketing of the GTB and ERG trees.
contrasting
train_17708
This is partially to be expected, since some of the treebanking time will be taken up with unavoidable tasks that blazing cannot avoid, such as evaluating whether the final tree is acceptable.
the 8% reduction in mean annotation time for annotator 2 is still fairly modest.
contrasting
train_17709
6 Additional Experiments We believe that the entities in a document that cooccur with a target entity are important clues for disambiguating entities.
while we had ready access to named entity recognizers in English, we did not have NER capability in all of the languages of our collection.
contrasting
train_17710
(2008) used seed sets of entities and search engines to collect NER training data from the web.
constructing a high-quality seed list is also time-consuming work.
contrasting
train_17711
From Figure 4(a), it is seen that the perplexity values decrease dramatically when the number of iterations is below 200.
after a certain point, the perplexity values start to increase.
contrasting
train_17712
TRLM learns the word-to-word translation probabilities from a parallel corpus collected from question answer archives.
tMC models the word-category-topic distribution from the whole question answer content.
contrasting
train_17713
Previous studies (Krogel and Scheffer, 2004) show that if the independence assumption of co-training approach is violated, the co-training approach can yield negative results.
our results show that co-training yields improvement even though our classifiers are not independent.
contrasting
train_17714
In this paper, we aim at detecting event-relevant sentences in a news corpus given an event query.
to previous work that needs expensive annotation for query-answer pairs, we use unlabeled news summaries that are readily available in large quantities online and design an efficient semi-supervised learning strategy for event identification based on co-training with Bayesian network classifiers.
contrasting
train_17715
Recently, corpus-based approaches have been successfully adopted in the field of Japanese Linguistics.
the central part of the fields has been occupied by historical research that uses ancient material, on which fundamental annotations are often not yet available.
contrasting
train_17716
They are spelled with Kana and a voiced consonant mark ( ) to the upper right (see Figure 1).
confusingly, it was not ungrammatical to put down the character without the mark to represent a voiced syllable. (In historical linguistics, the phrase "modern Japanese" refers to the language from 1600 to the present in a broad sense.)
contrasting
train_17717
In the modern Japanese literary text, orthographic variations are not limited to the unmarked characters.
unmarked characters appear a lot in the text and can be annotated easily by hand.
contrasting
train_17718
This is another piece of evidence that the discriminative language model works quite well for this task.
both Character type n-gram and Markedness n-gram contribute to improvement of precision.
contrasting
train_17719
(2007) proposed a character-based joint method for word segmentation and POS tagging, in which they introduced an unsupervised method for unknown word learning.
they only learned the unknown words from the test set.
contrasting
train_17720
Hence we note the loss of dual inflection marking and nominative case marking.
cliticization is more complex in DA as follows: EGY mAHkytl-hAlhw$, , 'she did not recount it to him' is expressed in three words in MSA as lm tHkyhA lh .
contrasting
train_17721
In the next example, BEDM gives a lower recall than LEDM.
the combined metric does better than the individual edit distance metrics in this example too.
contrasting
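The BEDM and LEDM metrics contrasted in this record are edit-distance based; their exact definitions are not given in the excerpt, but the generic core such metrics build on is the standard Levenshtein distance, which can be sketched as follows (a textbook implementation, not the paper's):

```python
def levenshtein(s, t):
    """Standard Levenshtein (edit) distance via dynamic programming.
    BEDM/LEDM are variants not reproduced here; this is only the
    generic edit-distance core they presumably build on."""
    prev = list(range(len(t) + 1))  # distances from "" to prefixes of t
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cost = 0 if cs == ct else 1
            cur.append(min(prev[j] + 1,         # deletion
                           cur[j - 1] + 1,      # insertion
                           prev[j - 1] + cost)) # substitution
        prev = cur
    return prev[-1]
```

A combined metric of the kind the excerpt describes would then mix two such distances (e.g. a weighted sum), trading off their individual recall behaviors.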
train_17722
For example, for a sentence commenting on a digital camera like "40D handles noise very well", terms such as "noise" and "well" are significant feature terms for the node "noise".
the term "noise" becomes insignificant for its child nodes "noise +" and "noise -", since the hierarchical classification characteristic of the HL-SOT approach that a node only processes target texts which are labeled as true by its parent node ensures that each target text handled by the nodes "noise +" and "noise -" is already classified as related to "noise".
contrasting
train_17723
In a most recent research work (Wei and Gulla, 2010), Wei and Gulla proposed the HL-SOT approach that sufficiently utilizes the hierarchical relationships among a product's attributes and solves the sentiment analysis problem in a hierarchical classification process.
the HL-SOT approach proposed in (Wei and Gulla, 2010) uses a globally unified index term space to encode target texts for different nodes which is deemed to limit the performance of the HL-SOT approach.
contrasting
train_17724
Evaluating the correctness of discourse parsing is a hard task.
it is not of prime importance for our task that the segments are correct according to any discourse theory but that they do not include passages containing differing labels according to the gold standard.
contrasting
train_17725
We believe that it is because the opinion annotation on dependency relations is more difficult than on words, sentences, or documents.
because of this alignment, we are able to see the distribution of different dependency relations in opinion sentences and opinion segments (opinion trios).
contrasting
train_17726
NTCIR annotated opinions, polarities, sources, and targets for its multilingual opinion analysis task (MOAT, Seki et al., 2008).
none of them were annotated on materials with syntactic structures, which caused a lack of analysis of opinion syntactic structures.
contrasting
train_17727
The polarity of an adjectival modifier is often propagated to the polarity of the modified noun or noun phrases with no inherent polarity.
sometimes the polarity is not propagated to that of the enclosing clause or sentence at all.
contrasting
train_17728
Our work is similar to their work in that we followed the idea of the Principle of Compositionality.
our focus is on examining the characteristics of context surrounding a given adjectival modifier when its polarity is either propagated or not propagated and seeing how this propagation result affects the overall polarity of the clause.
contrasting
train_17729
We believe that this is because if the polarity of the top node word is explicitly 'positive' because of its inherent polarity the overall polarity of the clause is obviously 'positive' regardless of the result of the polarity propagation decision.
in the case of 'neutral' clause, the correct polarity propagation decision for 'UNSHIFT' is critical for detecting the overall polarity.
contrasting
train_17730
The detection results of the overall sentiment at the clause level are meaningfully enhanced as compared to those based on the previous polarity propagation rules regarding especially 'neutral' sentences.
despite the correct decision for 'UNSHIFT', we found that such polarity of the modifiers may also help to identify the implicit sentiment without further deeper linguistic analysis.
contrasting
train_17731
Therefore, when a user issues a query, recommending tweets of good quality has become extremely important to satisfy the user's information need: how can we retrieve trustworthy and informative posts for users?
we must note that Twitter is a social networking service that encourages various content such as news reports, personal updates, babbles, conversations, etc.
contrasting
train_17732
However, because of the diversity of the extracted relations and the domain independence, open relation extraction is probably not suitable for populating relational databases or knowledgebases.
the task of extracting relation descriptors as we have proposed still assumes a pre-defined general relation type, which ensures that the extracted tuples follow the same relation definition and thus can be used in applications such as populating relational databases.
contrasting
train_17733
In the early days of the speller, the dictionary was manually compiled by lexicographers.
it is time consuming to construct a broad coverage dictionary, and domain knowledge is required to achieve high quality.
contrasting
train_17734
Their reranking method had the advantageous ability to incorporate clickthrough logs to a translation model learned as a ranking-feature.
their methods are based on edit distance, and thus they did not deal with the task of synonym replacement and acronym expansion.
contrasting
train_17735
We consider that this issue can be solved at some level by generating a language model using the first term only, after splitting queries separated by a space in search query logs.
attribute words are not always separated by a space, and sometimes appear as the first term in the query.
contrasting
train_17736
A keyword may exist in multiple articles.
several keywords can uniquely identify a document if they are grouped together as a keyword set (Jiang et al., 2009).
contrasting
train_17737
(2010) proposed the use of crosslinguistic knowledge represented as a set of allowable head-dependent pairs.
this method still requires provision of language-specific rules to boost accuracy.
contrasting
train_17738
Each constituent type is denoted by C : w, where C is a syntactic category and w is the head word, such as s\np/np : eats.
a simple parametric syntactic prototype will give rise to parsing failures when faced with parametrically exceptional items, which occur in most if not all languages.
contrasting
train_17739
Each clue has been empirically proven to be effective for coordination disambiguation.
a unified approach that combines both clues has not been explored comprehensively.
contrasting
train_17740
Recent research efforts have led to the development of a state-of-the-art supervised coreference model that can address all of the aforementioned problems, namely the joint cluster-ranking (CR) model (Rahman and Ng, 2009).
other than its superior empirical performance to competing coreference models (such as the MP model), little is known about the joint CR model.
contrasting
train_17741
If NP_k is anaphoric, the rank of i(c_j, NP_k) is HIGH if NP_k belongs to c_j, and LOW otherwise.
if NP_k is non-anaphoric, the rank of i(c_j, NP_k) is LOW unless c_j corresponds to the NULL cluster, in which case its rank is HIGH.
contrasting
train_17742
Hence, if we were to train an SVM classifier, all we need to do is to design a kernel.
we are given a ranking problem, and it is not immediately clear how an SVM can learn a ranking model in the presence of tree-based features.
contrasting
train_17743
If both feature vectors contain only flat features, the subtraction is straightforward, since each flat feature is real-valued.
if one of the feature vectors has a tree-based feature (which happens when c_i or c_j is NULL), we handle the flat features and the tree-based feature separately.
contrasting
train_17744
If both instances contain only flat features, we simply employ a normalized linear kernel, which computes similarity as the cosine of their feature vectors.
if one or both of them has a tree-based feature, a linear kernel is not directly applicable.
contrasting
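The normalized linear kernel mentioned in this record computes similarity as the cosine of the two flat feature vectors. A minimal sketch (the coreference feature extraction itself is not shown; the vectors below are illustrative):

```python
import math

def normalized_linear_kernel(x, y):
    """Cosine similarity between two flat (real-valued) feature vectors:
    K(x, y) = <x, y> / (||x|| * ||y||)."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    if nx == 0 or ny == 0:
        return 0.0  # convention for all-zero vectors
    return dot / (nx * ny)
```

Because the kernel is just an inner product on normalized vectors, it handles only the flat features; the tree-based features the record mentions need a separate (tree) kernel.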
train_17745
Recently, Woodsend and Lapata (2011) proposed a framework to combine treebased simplification with ILP.
to sentence compression, sentence simplification generates multiple sentences from one input sentence and tries to preserve the meaning of the original sentence.
contrasting
train_17746
Our first attempt is to collect data automatically from original English and Simple English Wikipedia, based on the suggestions of Napoles and Dredze (2010).
we found that the collected corpus is unsuitable for our model.
contrasting
train_17747
The expected outcome is that with a larger stack size the decoder may have more chances to find better hypotheses.
a larger stack size will obviously cost more memory and slow down the run time.
contrasting
train_17748
The rich information offered by these systems provides additional clues for collaboratively summarizing online documents in a social context.
most existing methods generate a summary based only on the information within each document or its neighboring documents, while the social context is usually ignored.
contrasting
train_17749
Without doubt, effective recognition of them provides a good basis for theme-based summarization.
summaries generated in such a way are not guaranteed to cater to the user's information need and therefore may not always be in line with his/her expectations.
contrasting
train_17750
Some other works (Hirao et al., 2002;Iwasaki et al., 2005;Murray et al., 2005;Shen et al., 2007;Byrd et al., 2008;Fujii et al., 2008) can generate summaries satisfying such requirements.
these methods generally require manual work of creating rules or training data.
contrasting
train_17751
"contact reason", "response"), where the method or lead method is used.
the method requires training data to identify the sentence type of each sentence.
contrasting
train_17752
The lead method extracts sentences from the top in order under the assumption that the important points are described first.
important parts such as the customer's requirements and agent's responses can be located anywhere because the customer's requirements are identified through conversation interactions, which differ according to customers.
contrasting
train_17753
A pre-order walk of bullet tree on slides is actually a natural choice, since speakers of presentations often follow such an order to develop their talks, i.e., they discuss a parent bullet first and then each of its children in sequence.
although some remedies may be taken (Zhu et al., 2010), sequentializing the hierarchies before alignment, in principle, enforces a full linearity/monotonicity between transcripts and slide trees, which violates some basic properties of the problem that we will discuss.
contrasting
train_17754
Second, the better performance of HieCut and SeqCut shows that HieCut further benefits from avoiding sequentializing the bullet trees.
these two aspects of benefit do not come independently, since the former (performance of an alignment objective) can significantly affect the latter (whether a model can benefit from avoiding sequentializing bullets).
contrasting
train_17755
Obviously, such a test would take years to perform, and would need to be done again, each time the resources for a language are updated.
a statistical machine translation (SMT) system can attempt this acquisition automatically, and its mistakes highlight any shortcomings in the documentation while there is still time to collect more.
contrasting
train_17756
This is painstaking work, and transcription is usually a slow process given the issues with orthography just identified.
such transcriptions are an essential step to the creation of other language resources such as lexicons and grammars.
contrasting
train_17757
The algorithm as defined is only applicable to aligning two sentences at a time.
it has a simple extension to allow alignments of N transcriptions simultaneously.
contrasting
train_17758
Equation (52) is only correct if word boundaries are treated as states in the HMM, which they are not; state sequence lengths are pre-drawn from a gamma distribution.
the assumption becomes more accurate as the mean word length increases, and the average source block length decreases.
contrasting
train_17759
Note that this method selects sentences from Web search results using the keyword queries determined according to TF-IDF.
our main goal is to efficiently identify questions covering a wide range of topics while matching a certain style, often represented by colloquial textual fragments and therefore consisting of frequent words.
contrasting
train_17760
The peak performance was obtained when the corpus size reached 80 million words, consistently in both test sets.
the best performance was worse than when we trained on www (16.35% in g1 and 15.28% in g2).
contrasting
train_17761
If these are used as training corpus, they may be harmful.
as seeds they only derive trigram probabilities for sentence selection, the selected sentences being natural, real life sentences from the Web.
contrasting
train_17762
They present a method to calculate the minimum number of borrowings required to admit that tree.
the method does not actually construct a tree from the data, and it may be computationally intractable when the number of borrowings is large.
contrasting
train_17763
Then s and t are not present in any of the same languages, but there are two languages l_i, l_j ∈ L such that l_i has character s but not t, and language l_j has character t but not s. If s and t are only present within the language grouping, they are not informative when language family grouping is used.
if both s and t are present at an internal node ancestral to language grouping L, then this will make the data closer to admitting a CDP by decreasing the number of borrowings that we need to posit.
contrasting
train_17764
LangID as a computational task is usually attributed to Gold (1967), who sought to investigate language learnability from a language theory perspective.
its current form is much more recognizable in the work of Cavnar and Trenkle (1994), where the authors classified documents according to rank order statistics over byte n-grams between a document and a global language profile.
contrasting
train_17765
TextCat (CT ) selects features per language by term frequency, whereas langid.py uses LD bin (the focus of this work).
the learning algorithm used by TextCat is a nearest-prototype method using the token rank difference metric of Cavnar and Trenkle (1994), whereas langid.py uses multinomial naive Bayes.
contrasting
train_17766
For name disambiguation in entity linking, there has been much previous work which demonstrates modeling context is an important part of measuring document similarity.
the traditional approach for entity linking treats the context as a bag of words, n-grams, noun phrases or/and co-occurring named entities, and measures context similarity by the comparison of the weighted literal term vectors (Varma et al., 2009;Li et al., 2009;Zhang et al., 2010;Zheng et al., 2010;Dredze et al., 2010).
contrasting
train_17767
Each article in Wikipedia is assigned several categories by the contributors as requested.
from our observation some categories in Wikipedia may not be suitable to model the topics of a document.
contrasting
train_17768
This should be because isa_all includes more categories than isa_class and isa_instance, and thus can capture more semantic information.
although All and Alladmin include even more categories, they introduce many categories which are unsuitable to model the topics of a news article or blog text, such as the two categories mentioned in Section 3.3, "people by status" which is not in an is-a relation and "Wikipedia editing guidelines" which is used for encyclopedia management.
contrasting
train_17769
(2010), the system uses pseudorelevance feedback to expand queries.
the two systems do not take into account relations in a query.
contrasting
train_17770
Next, other nodes (web pages) of their network are activated.
the above Constrained-SA (CSA) models do not use relations in a given query to constrain spreading.
contrasting
train_17771
(2005), the authors use the relations in a query to expand the query.
the work only exploits spatial relations (e.g.
contrasting
train_17772
Theoretically, topic detection from microblog text is similar to that from news articles.
the microblog text is rather different from news articles. [Footnotes: 1 http://projects.ldc.upenn.edu/TDT/ 2 http://techcrunch.com/2011/04/06/twitter-q1-stats/]
contrasting
train_17773
The thread tree has three sub-trees, namely, there are three subtopics within thread .
seen from Figure 4, the right sub-tree is obviously not relevant to the dominating topic.
contrasting
train_17774
The notable contribution lies in that the serious sparse data problem in microblog processing is alleviated to great extent.
the reported work is still preliminary.
contrasting
train_17775
The unigram language model captures the familiarity of individual words.
we expect the perplexity computed using higher order models to distinguish between common word transitions in the domain, and those that are unexpected and evoke surprise.
contrasting
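The idea in this record, that a unigram language model captures the familiarity of individual words and perplexity measures how expected the text is under the model, can be sketched with a toy add-one-smoothed unigram model (a hypothetical illustration; the excerpt's actual models and data are unspecified):

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens):
    """Perplexity of test_tokens under an add-one-smoothed unigram
    model estimated from train_tokens. In-domain (familiar) words
    yield lower perplexity than unseen, surprising ones."""
    counts = Counter(train_tokens)
    vocab = len(counts) + 1  # +1 slot for unseen words
    total = len(train_tokens)
    log_sum = 0.0
    for w in test_tokens:
        p = (counts[w] + 1) / (total + vocab)  # add-one smoothing
        log_sum += math.log(p)
    return math.exp(-log_sum / len(test_tokens))
```

A higher-order (n-gram) version would condition each word on its predecessors, which is what lets it flag unexpected word transitions rather than just unfamiliar words.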
train_17776
We have drawn the conclusion in (Xia et al., 2011) that Sum rule is a low-cost yet effective approach for sentiment classification.
this conclusion may not hold in the cross-domain tasks.
contrasting
train_17777
This makes our approach suitable for document filtering purposes.
classification of out-of-domain data seems difficult because of the covariate shift (Shimodaira, 2000) in feature distribution between domains.
contrasting
train_17778
Similar problems refer to the analysis of speaker intentions in conversation (Kadoya et al., 2005) and to the area of textual entailment (Michael, 2009), which is about the information implied by text.
lFA is about the question why a text was written and, thus, refers to the authorial intention behind a text.
contrasting
train_17779
Additionally, Boese and Howe (2005) recognized that, in the web, genres may evolve over time to other genres.
language functions still represent one important aspect of genres.
contrasting
train_17780
Adding WS even led to a decrease of one percentage point.
this does not mean that the writing style features failed, as we see later on, but seems to be only noise from the optimization process of the SVM.
contrasting
train_17781
This indicates that language functions relate to the writing style of a text.
the correlation with sentiment appeared to be low.
contrasting
train_17782
According to (Ehling et al., 2007), for each source sentence with N different translations, we could select the final translation based on the following Minimum Bayes Risk principle: ê = argmin_{e} Σ_{e′} ( Pr(e′|f) · (1 − BLEU(e′, e)) ) (6). Here Pr(e′|f) denotes the posterior probability for translation e′ and BLEU(e′, e) represents the sentence-level BLEU score for e′ using e as reference.
since the translation hypotheses are generated under different groups of weights, the corresponding posterior probability is no longer comparable.
contrasting
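Equation (6) in this record selects, from the N candidate translations, the one with minimum expected 1 − BLEU risk under the posterior. A small sketch of that selection rule, using a toy add-one-smoothed sentence-BLEU in place of the authors' (unspecified) implementation; all function names and the smoothing scheme here are assumptions:

```python
import math
from collections import Counter

def sentence_bleu(hyp, ref, max_n=4):
    # Toy smoothed sentence-level BLEU: a stand-in for the
    # BLEU(e', e) term in Equation (6), not the authors' version.
    hyp_toks, ref_toks = hyp.split(), ref.split()
    if not hyp_toks:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h = Counter(tuple(hyp_toks[i:i + n])
                    for i in range(len(hyp_toks) - n + 1))
        r = Counter(tuple(ref_toks[i:i + n])
                    for i in range(len(ref_toks) - n + 1))
        match = sum(min(c, r[g]) for g, c in h.items())
        total = max(sum(h.values()), 1)
        log_prec += math.log((match + 1) / (total + 1))  # add-one smoothing
    bp = min(1.0, math.exp(1 - len(ref_toks) / len(hyp_toks)))  # brevity penalty
    return bp * math.exp(log_prec / max_n)

def mbr_select(hypotheses):
    # hypotheses: list of (translation, posterior) pairs.
    # Implements e_hat = argmin_e sum_{e'} Pr(e'|f) * (1 - BLEU(e', e)).
    z = sum(p for _, p in hypotheses)  # renormalize posteriors
    best, best_risk = None, float("inf")
    for e, _ in hypotheses:
        risk = sum((p / z) * (1 - sentence_bleu(e2, e))
                   for e2, p in hypotheses)
        if risk < best_risk:
            best, best_risk = e, risk
    return best
```

The renormalization step matters precisely because of the record's caveat: posteriors produced under different weight groups are not directly comparable.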
train_17783
It seems that we need at least 64 particles for a stable tuning in our setting.
when MERT-PSO employed sufficiently many particles, it outperformed Moses-MERT and provided stable convergence even though MERT-PSO is a stochastic optimization method.
contrasting
train_17784
Although our new method does produce a significant gain over the best single system, it does not perform as well as our confusion network decoding on any condition that we tried.
this new procedure was not designed to replace our existing method, but rather to complement it.
contrasting
train_17785
1 Confusion network based methods align the various input hypotheses against one another to form the confusion network, and then generate the most likely path through this network to produce a combined 1-best (Bangalore et al., 2001).
as mentioned previously, we already have a state-of-the-art confusion network system in place, which is based on (Rosti et al., 2007; Rosti et al., 2009).
contrasting
train_17786
When this feature was used by Snover, it combined the standard language model with a test-sentence-specific language model that was trained on several hundred documents.
in the case of system combination, we are adapting towards a much smaller amount of data, so P(q_j | q_{j-1}, ..., q_{j-n+1}) will not be well estimated.
contrasting
train_17787
Because we use a large number of discriminative features in our baseline MT system, there is a moderate-to-significant over-fitting effect when optimizing on any new set.
in the past we have found that even a large amount of overfitting (e.g., 3-4 BLEU points) on the tuning set does not have a negative effect on the test set results.
contrasting
train_17788
(3) pens ACC pick up feel TOP become not did "(I) did not get into start writing"
Because of our strict condition, the size and variation of the extracted labeled data are limited.
this method gave us longer and more natural reliable labeled data.
contrasting
train_17789
As shown in § 3, in Iwanami, there are 1,450 original example sentences, such as in a sentence (2), for the target words.
we could use only 362 example sentences to extract labeled instances, such as in a sentence (3).
contrasting
train_17790
Step-1 provided superior performance (80.2 %) to the state-of-the-art result (76.4 %), and the high effectiveness of this method is proved.
it may be difficult to achieve any further improvement because the extracted data may have an unnatural sense distribution and limited variations.
contrasting
train_17791
The widely used knowledge resource in such methods is WordNet (Fellbaum, 1998).
wordNet-based WSD methods usually achieved lower performance compared to supervised methods, mainly due to the fact that the lexical and semantic knowledge contained in WordNet is not sufficient for WSD.
contrasting
train_17792
Therefore, if extending WordNet with the large amounts of semantic relations contained in ConceptNet, it is desirable to improve the performances of WordNet-based WSD methods.
conceptNet cannot be directly used for WSD purposes due to the existence of polysemy and synonymy of the concepts in it.
contrasting
train_17793
It is obvious that the concepts related to airplane should have the same relation with plane.
it is not the case in ConceptNet.
contrasting
train_17794
Therefore some recent work (Mihalcea, 2007;Ponzetto&Navigli, 2010) exploits Wikipedia, a large collaborative Web encyclopedia, to extract the knowledge for WSD.
the type of semantic relations extracted from Wikipedia is uncertain.
contrasting
train_17795
Ideally, the NGD score of any term in the WSP of the correct sense is lower than that in the WSP of an incorrect sense; thus we can simply use the arithmetic mean of the scores to evaluate the relatedness of a WSP and the assertion.
it is inevitable that there are some noisy terms in WSPs, which will dramatically decrease the performance of disambiguating ConceptNet.
contrasting
train_17796
There is no problem for the occurrence of handball in WSP(handball_n^1).
because the computation of the NGD score does not consider different senses of the same term in different occurrences, different handballs in different WSPs actually have the same NGD scores though they correspond to different senses.
contrasting
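The NGD score in this record is the Normalized Google Distance, computed from page-hit counts. A minimal sketch of the standard formula (the counts passed in below are made-up, not real search-engine hits):

```python
import math

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance from hit counts:
    NGD(x, y) = (max(log fx, log fy) - log fxy)
                / (log N - min(log fx, log fy))
    fx, fy: hit counts for each term; fxy: joint hit count;
    n: total number of indexed pages."""
    lx, ly, lxy, ln = (math.log(v) for v in (fx, fy, fxy, n))
    return (max(lx, ly) - lxy) / (ln - min(lx, ly))
```

Note that the inputs are surface-term counts only, which is exactly the limitation the record points out: the same term gets the same NGD score in every WSP, regardless of which sense it realizes there.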
train_17797
On one hand, even a simple knowledge-based WSD algorithm using the enriched WordNet can perform as well as the highest-performing supervised ones.
more sophisticated approaches (Agirre&Soroa, 2009; Navigli&Lapata, 2010) may achieve even higher performance by using such enriched WordNet.
contrasting
train_17798
This method does not require a parallel corpus.
it requires a sense-marked corpus for one of the two languages.
contrasting
train_17799
This algorithm when tested on 60 polysemous words (using English as L 1 and Japanese as L 2 ) delivered high accuracies (coverage=88.5% and precision=77.7%).
when used in an all-words scenario on our dataset, this algorithm performed poorly (see section 6).
contrasting