Dataset schema:
  id         string (lengths 7–12)
  sentence1  string (lengths 6–1.27k)
  sentence2  string (lengths 6–926)
  label      string (4 classes)
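A minimal sketch of how records with this schema could be iterated in Python. The file name "train.jsonl" is a hypothetical local export of this split (one JSON object per line with the four fields above), not an official distribution path.

```python
import json

def load_records(path="train.jsonl"):
    # Yield (id, sentence1, sentence2, label) tuples from a JSONL export.
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            yield rec["id"], rec["sentence1"], rec["sentence2"], rec["label"]

if __name__ == "__main__":
    for rid, s1, s2, label in load_records():
        print(f"{rid}\t{label}\t{s1[:60]} | {s2[:60]}")
        break  # show just the first record
```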
train_93000
Given the observation that more recent training data seems to be more important than older data, we apply an exponential decay function w(∆t) = e^(−λ·∆t), where λ is the decay factor and ∆t is the discretized time distance (0 for the most recent part, 1 for the next one, etc.).
of course, we expect to achieve better results by finding the optimal weighting between recent and older data.
neutral
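A small sketch of the exponential decay weighting quoted in train_93000, assuming the standard form w(∆t) = e^(−λ∆t); the decay factor lam = 0.5 is an arbitrary illustration, not a value from the source.

```python
import math

def decay_weights(num_slices, lam=0.5):
    # w(dt) = exp(-lam * dt); dt = 0 is the most recent slice.
    return [math.exp(-lam * dt) for dt in range(num_slices)]

print(decay_weights(4))  # [1.0, 0.6065..., 0.3678..., 0.2231...]
```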
train_93001
We extend the traditional HMM to allow a broader range of phone mapping configurations.
we propose to estimate this probability by a combination function of similarity function and translation table.
neutral
train_93002
To test CLIR with multiple transliterations, we need a document collection with controlled multiple transliterations.
the null transition (Bahl et al., 1982) is used to represent skipping a state without consuming any observations.
neutral
train_93003
2003) on a development set of 500 sentences.
during testing, as the reference translations cannot be used, test sentences are converted into a lattice (Dyer et al., 2008) where two alternate paths are included, one with the redundant source word removed and another with the redundant source word as is.
neutral
train_93004
The following is an illustration of the kind of improvements clause-based translation brings: Input: America claims that Iran wants to continue its nuclear programme, and secretly builds atomic weapons.
simply translating non-finite clauses separately, with reordering constraints around them, will not lead to good translation, because the translation of these clauses is often dependent on the superordinate clause, and there is also reordering between these clauses and the superordinate clause.
neutral
train_93005
Apart from the BTEC corpus available through the IWSLT competition and Holy Bible datasets described in (Paul, 2008) and , respectively, there is a recent release of a six-language parallel corpus (including both Chinese and Spanish) from the United Nations (UN) for research purposes (Rafalovitch and Dale, 2009).
this popular system implements a log-linear model in which a source language sentence f_1^J = f_1, f_2, …
neutral
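A hedged sketch of the log-linear model mentioned in train_93005: the score of a candidate translation is a weighted sum of feature function values h_m, normalized into a probability distribution. The weights and feature values below are made-up illustrations, not parameters from the source.

```python
import math

def loglinear_score(weights, features):
    # Weighted sum of feature function values h_m(e, f).
    return sum(w * h for w, h in zip(weights, features))

def model_probs(weights, candidates):
    # Normalize candidate scores into a probability distribution (softmax).
    scores = [loglinear_score(weights, h) for h in candidates]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

# Two hypothetical candidate translations, each with two feature values.
print(model_probs([0.5, 1.0], [[1.0, 0.2], [0.3, 0.8]]))
```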
train_93006
We experiment with the parallel Chinese-Spanish corpus (United Nations) to explore alternative SMT strategies that consist of using a pivot language.
in this study we use the UN corpus, taking advantage of the fact that (to the best of our knowledge) it is the biggest freely available Chinese-Spanish parallel corpus, and that it contains the same sentences in six other languages; therefore we can experiment with different pivot languages.
neutral
train_93007
The near-synonyms were then removed from the test examples for FITB evaluation.
near-synonyms are not necessarily interchangeable in contexts due to their specific usage and syntactic constraints.
neutral
train_93008
This paper addresses the issue of cluster labeling and presents a method for assigning labels by using concepts in a machinereadable dictionary.
the data we used is the RWCP corpus, labeled with UDC codes, selected from the 1994 Mainichi newspaper (RWC, 1998).
neutral
train_93009
(4), hy(x) refers to the hypernym of a word x. min(dis(hy(w_i), w_i)) shows the minimum distance between hy(w_i) and w_i.
the suggested terms, even when related to each other, tend to represent different aspects of the topic underlying the cluster, and it is often the case that a good label does not occur directly in the document.
neutral
train_93010
And the probability of a token that has occurred c times in that context before is (c − 1/2)/n. This allocation strategy is called PPMD (Howard, 1993) and has shown great performance in text compression.
extended to adopt more elaborated patterns such as Kozareva et al.
neutral
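A minimal sketch of the PPMD allocation quoted in train_93010, assuming the usual formulation: a token seen c times in a context with n total observations gets probability (c − 1/2)/n, and the escape probability is d/(2n) for d distinct tokens. The toy counts are illustrative only.

```python
from collections import Counter

def ppmd_probs(context_counts):
    # Token seen c times out of n gets (c - 1/2) / n; escape mass is d / (2n).
    n = sum(context_counts.values())
    d = len(context_counts)
    probs = {tok: (c - 0.5) / n for tok, c in context_counts.items()}
    escape = d / (2 * n)
    return probs, escape

probs, esc = ppmd_probs(Counter({"the": 3, "a": 1}))
print(probs, esc)  # {'the': 0.625, 'a': 0.125} 0.25 (sums to 1 with escape)
```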
train_93011
Each prediction takes the form of a probability distribution that is provided to an encoder, which is usually an arithmetic coder.
such a task aims to extract instances belonging to a specific category such as acquiring Tom Hanks and Al Pacino into a list containing other actors.
neutral
train_93012
TinySVM version 0.09 1 was used for SVM training and classification.
it is not clear how we can effectively use the probabilities of different cases c; ρ_i(c) for some cases would be reliable, while for others it would be less reliable.
neutral
train_93013
The two vector-based WSI approaches use the k-means algorithm to cluster the context vectors of target words, and the maximum number of k-means iterations is set to 100.
for each cluster C_j, we generate instance pairs, in which is the total number of instances that belong to C_j.
neutral
train_93014
Our findings indicate that the lexicon-based method is highly competitive with the supervised, task-specific method.
an even more individualized vector classification model, in the form of individual k values for each pair, does not improve performance.
neutral
train_93015
We also removed pairs where one of the words was quite rare (fewer than 150 tokens in the entire corpus) or where, based on examples pulled from the corpus, there was a common confounding homonym-for instance the word prob, which is a common clipped form of both problem and probably.
the agreement is surprisingly low; even if "I'm not sure" responses are discounted, agreement with the writer's choice gold standard is just 71.7% for the remaining datapoints.
neutral
train_93016
But neither ESA nor WNP recognizes collocations: the former because of the bag-of-words principle underlying tf-idf, and the latter only in the case where the collocational pair is a concept on its own.
we found that this measure provides bad results in its lower range (since the path length between distant nodes strongly depends on the density of WordNet for each knowledge domain).
neutral
train_93017
Most of the product attribute extraction techniques in literature work on structured descriptions using several text analysis tools.
paid WAAAAY to much for it at the time th0.. it sellz now fer like a third the price I paid.. heheh.. oh well....the fact that I didn't wait a year er so to buy a bigger model for half the price.. most likely from a different store.. ..not namin any namez th0.. *cough*BBHOSEDMe*cough* The italicized terms are some product attributes discussed in this review.
neutral
train_93018
We call such words Wikipedia words, and if they cannot be mapped, we refer to them as Non-Wikipedia words in later sections of this paper.
for a Wikipedia word set {x_1, x_2, x_3, …, x_k}, the semantic relatedness of x_i to the context is given by . The applicability of the CR feature is justified in terms of high scalability and the ever-growing knowledge of Wikipedia.
neutral
train_93019
"d‚k:$| price is a little low" obtains 1080 hits while "d‚k:p| price is a little high" obtains 19400 hits.
we will just focus on CPs in this paper and leave WPs for future work.
neutral
train_93020
This kind of answer can be extracted by using a method based on machine learning techniques (Isogai et al., 2009).
because of this large number of questions, questioners should submit questions that give enough information to answerers.
neutral
train_93021
In another 90 cases, questioners resubmitted TYPE (Q2-1), (Q2-2), and (Q2-3) questions; in other words, they knew or could easily find out what the answerers indicated.
for example, in (Q 6), the questioner described that the solution received from the answerer of (A 5) was unhelpful for solving his/her problem.
neutral
train_93022
Then, there were 17 of these 107 cases where questioners resubmitted TYPE (Q2-4) questions; in other words, there were points in the indications received from answerers that they did not understand and needed to ask about.
there were 107 cases where questioners resubmitted TYPE (Q2-1), (Q2-2), (Q2-3), and (Q2-4) questions, in other words, they accepted indications from answerers and modified their questions.
neutral
train_93023
The reason is that since the predicted types of a question are already diversified by CogQTaxo, incorporating it into question re-ranking already enables us to diversify the infoNeeds in the results implicitly.
it reflects the real user infoNeeds distribution.
neutral
train_93024
In this data set, an argument is not always a sentence.
in this data set, an argument is not always a sentence.
neutral
train_93025
In this paper, we show that the accuracy of a discriminative sequential POS tagger can be substantially improved by exploring syntactic features.
this model can be regarded as the integrated model of both Perceptron-based and CRF-based models.
neutral
train_93026
Given an input sentence x = w_1 … w_n, we denote its POS tag sequence by t = t_1 … t_n, where t_i ∈ T, 1 ≤ i ≤ n, and T is the POS tag set.
pOS tagging is traditionally considered as a supporting task for dependency parsing.
neutral
train_93027
Recently, extensive research on Chinese POS tagging has been done.
the introduction of long-distance dependencies can largely reduce this difficulty.
neutral
train_93028
As the confidence values are low, major arcs in these sentences might be wrong.
we consider two types of raw corpus, one from the same domain as the training and testing data and the other from a different domain.
neutral
train_93029
All the above mentioned works are on phrase structure parsing of English.
these sentences had a negative impact on the parser performance.
neutral
train_93030
The usefulness of the corpus may be gauged in two ways.
this corpus is intended to lay the foundation for this direction of research.
neutral
train_93031
For any contrastive studies between Cantonese and Mandarin, a corpus balanced between formal and colloquial registers would be desirable.
the three differences for which no examples exist are the following.
neutral
train_93032
For example, Cantonese is spoken by more than 52 million people, mostly in southern China and overseas Chinese communities.
for some expressions of kinship, the marker is not simply omitted but replaced by a classifier, such as 我個仔 ngo-go-zai 'I CL son' 'my son'.
neutral
train_93033
The µ parameter was tested on the [100,1200] range for Res-PubliQA and Yahoo!
they use the same WordNet-based relatedness method in order to expand documents, following the BM25 probabilistic method for IR, obtaining some improvements, especially when parameters had not been optimized.
neutral
train_93034
All senses of the ambiguous word are provided as optional answers.
here, the two sentences are "根据这一方案 (according to the scheme)" and "这种状况带来了两个弊端 (this situation brings two drawbacks)" respectively.
neutral
train_93035
Although it is important to use powerful machine learning algorithms, latest studies have found that large-scale and high-quality corpora are more important for WSD (Agirre and Edmonds, 2006).
the two users are asked to input as many synonyms of the same ambiguous word in the sentence as possible within a limited period of game round time.
neutral
train_93036
In relation extraction, syntactic relatedness between the candidate entities of a relation is usually considered an important cue (Zhou et al., 2005;Mintz et al., 2009).
we also investigate whether healthiness correlates with sentiment.
neutral
train_93037
Note that this data set does not have any annotations.
we say that q 1 and q 2 are inconsistent in segmentation if there exist more than one common subsequence of tokens having different segment boundaries.
neutral
train_93038
a segmentation for query q is then defined as a sequence of non-overlapping segments.
it is worth studying other methods that can address such performance gap.
neutral
train_93039
At each node, we select children to which edge classifiers return positive scores (Lines 4-7).
it seems safe to conclude that local training caused overfitting.
neutral
train_93040
contextually bound, (EER: 1:100), see Example (6) from PCEDT.
the sentence members that appear only implicitly in the sentence (as the Addressee in this case) are not supposed to carry new, important information (because their presence in the surface part of the sentence is not necessary), and therefore they are automatically pre-annotated as contextually bound.
neutral
train_93041
ko is approximated to dative while se is approximated to instrumental case.
the extraction of distributional counts is simple and straightforward in Hindi.
neutral
train_93042
• Nominal Ambiguity: As a matter of fact, animacy is an inherent and non-varying property of nominal referents.
the ambiguity in these case markers, however, has a profound impact on our results, as discussed in Section 5.
neutral
train_93043
A clustering experiment is performed with FCM clustering algorithm on SET-1 and SET-2, with parameters c 'number of clusters' and m 'degree of fuzziness' set to 2.
our clustering of these nominals is justified.
neutral
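A minimal fuzzy c-means sketch matching the setup quoted in train_93043 (c = 2 clusters, fuzziness m = 2), using the standard FCM center and membership updates; the toy data, random initialization, and iteration count are assumptions, not details from the source.

```python
import numpy as np

def fcm(X, c=2, m=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))      # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
centers, U = fcm(X)
print(centers.round(2))
```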
train_93044
To extract mention properties, we have to compute the head.
to generate a training example we append "C-" to one NP node, keeping all the remaining nodes as-is.
neutral
train_93045
If a spurious mention is introduced, the CR system might still assign it to no coreference chain and thus discard it from the output partition.
any MD method for OntoNotes would rely on parsing.
neutral
train_93046
Another shortcoming of previous classification approaches is that they only focus on detecting the overall topical category of a document.
table 2 presents the word statistics of the prior lexicons generated using these two different methods.
neutral
train_93047
We propose a novel approach of deriving word prior knowledge using the measurement of relative entropy of words (RWE).
the task of classifying tweets as violence-related poses different challenges including: high topical diversity; irregular and ill-formed words; event-dependent vocabulary characterising violence-related content; and an evolving jargon emerging from violent events.
neutral
train_93048
(1) Link-based models rely solely on hyperlinks, without considering content-based features.
the first observation is that all of the content features are discriminative.
neutral
train_93049
Our experimental results show that the proposed method is significantly effective in reducing the feature space compared to well-known feature selection methods, and yet the overall effectiveness is similar to, or sometimes better than, a state-of-the-art approach depending on the PoS of the events.
it refers to the dependent words and their hypernyms in MODifier relation (amod, advmod, partmod, tmod and so on).
neutral
train_93050
(9) sa-mwu-sil ('office') -> sam-sil (10) key-im ('game') -> keym Sentiment analysis or opinion mining techniques that utilize retrieval tasks to obtain training sets or corpus data have to extract subjective chunks or morphemes from the real-world data.
as the number of the users using social networking services increases rapidly, sentiment analysis or opinion mining capable of automatically extracting the sentiment orientation from online posts has been gaining attention from NLP researchers (Hu and Liu, 2004;Kim and Hovy, 2004;Wiebe, 2000;Pak and Paroubek, 2010).
neutral
train_93051
Summarization of properties in Korean SMS text: Ling and Baron (2007) reported that lexical shortening is one of the most significant characteristics one can see in text messages.
in summary, we will argue that such a Romanization-based retrieval method has several advantages since it provides an easier way to preprocess the data with a variety of linguistic rules.
neutral
train_93052
A Romanization transliteration scheme is used in this study because it naturally represents the phonetic properties of Korean syllables while providing a more intuitive way to apply a set of defined rules to the sequence.
han (2006) applied the transliteration method to perform part-of-speech tagging for Korean texts using Xerox Finite State Tool.
neutral
train_93053
The dictionary-based features are fired if a string in a sentence is registered as a word in a dictionary, and they encode whether the string begins with or ends before the target character, or includes the target character.
although additional time is required to perform the arg max operation, it is practically negligible because the lattice generated in this framework is generally small.
neutral
train_93054
We call this phenomenon substitution with lowercases.
in order to approximate the practical coverage of our method, we classified unknown words that occur more than two times in the Kyoto University and NTT Blog (KNB) corpus into four types: words that are covered by the lexicon created by Murawaki and Kurohashi (2008) (Murawaki's Lexicon), words that are not covered by Murawaki's Lexicon but have entries in Wikipedia, words that are covered only by our method, and the others.
neutral
train_93055
Since long sound symbols and lowercases rarely appear in the lexicon, there are few likely candidates other than the correct analysis.
for example, if a sentence " ."
neutral
train_93056
Most previous work on this approach has aimed at developing a single general-purpose unknown word model.
in addition, as mentioned above, since we … (Figure 2: Example of a word lattice with new nodes " ," " ," and " .")
neutral
train_93057
The estimated number of negative changes for 100,000 sentences: N*_100kS.
there are few works that focus on certain types of unknown words.
neutral
train_93058
If the new nodes and their costs are plausible, the conventional process for finding the optimal path will select the path with added nodes.
this is not true for onomatopoeias.
neutral
train_93059
We call this phenomenon substitution with long sound symbols.
in this paper, we introduce derivation rules and onomatopoeia patterns to the unknown word processing in Japanese morphological analysis, and aim to resolve 1) unknown words derived from words in a pre-defined lexicon and 2) unknown onomatopoeias.
neutral
train_93060
Sun and Uszkoreit (2012) introduced a Bagging model to effectively combine the outputs of individual systems.
the large-scale unlabeled data we use in our experiments comes from the Chinese Gigaword (LDC2005T14), which is a comprehensive archive of newswire text data that has been acquired over several years by the Linguistic Data Consortium (LDC).
neutral
train_93061
For example, previous work has shown that sequence models alone cannot deal with syntactic ambiguities well (Clark and Curran, 2004;Tsuruoka et al., 2009).
the syntax-free hybrid system is more appealing for many NLP applications.
neutral
train_93062
Identifying the complex predicates has turned out to be rewarding.
the semantics of light verbs is, however, kept as such.
neutral
train_93063
In the first experiment, the first sense of each lexical item is selected while in the second, WSD is used to pick the contextually most appropriate sense.
to identify the temporal sense of a numeral used as a nominal, like 2013, is challenging.
neutral
train_93064
As we would expect, we obtain results for WEB TDs that are consistently worse than those in Table 2 (not shown), with one exception: a slight increase for |w| > 8 in email.
for two TDs, tf-idf yields the best result (grp on line 3, BIO on line 24), for four TDs tf (rev, blog, answer, email: line 14).
neutral
train_93065
The identification of non-contiguous LVCs proved to be more difficult for both methods than that of contiguous LVCs.
since different phenomena proved to be difficult for the two systems, a possible direction for future work may be to combine the two approaches in order to minimize prediction errors.
neutral
train_93066
The largest distance between the noun and the verb is 21 tokens and the average distance between the two non-adjacent components is 4.28 tokens.
it was difficult for both the dependency parser and the classifier to recognize rare LVCs or those that included an infrequent light verb.
neutral
train_93067
In several NLP applications like information retrieval or machine translation it is important to identify LVCs in context since they require special treatment, particularly because of their semantic features.
uralom alá jut (rule under get, 'to get under rule') or hatás alatt áll (effect under stand, 'to be under effect').
neutral
train_93068
Note the increase in FRecall up to rank @∞ for known ASR and unknown-OOV Text and ASR, which indicates that correct interpretations are returned at very high ranks when input words are not identified (NDCG increases only modestly, as it penalizes high ranks).
These situations, which affect the performance of an SLU system, are characterized along the following two dimensions: accuracy and knowledge.
neutral
train_93069
However, this mode of evaluation, which we call Generative, does not address whether a system's interpretations are plausible (even if they are wrong).
in the next section, we discuss related work, and in Section 3, we outline our system Scusi?.
neutral
train_93070
's "punitive" at-titude to attributes that do not match reality, such as a bookcase not being under any portrait, may need to be moderated.
notFound@K counts the number of representable utterances for which no correct interpretation was found within rank K. notFound@∞ considers all the interpretations returned by an SLU system.
neutral
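A small sketch of the notFound@K count described in train_93070: for each representable utterance, we check whether a correct interpretation appears within the top-K ranked interpretations the SLU system returned. The ranked lists and gold sets below are toy stand-ins.

```python
def not_found_at_k(ranked_lists, gold, k):
    """ranked_lists: one ranked interpretation list per utterance;
    gold: one set of correct interpretations per utterance."""
    return sum(1 for ranked, g in zip(ranked_lists, gold)
               if not any(interp in g for interp in ranked[:k]))

ranked = [["a", "b"], ["x", "y"]]
gold = [{"b"}, {"z"}]
print(not_found_at_k(ranked, gold, 1))  # 2 (neither top-1 is correct)
print(not_found_at_k(ranked, gold, 2))  # 1 ("b" is found at rank 2)
```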
train_93071
Our study had 26 participants, who generated a total of 432 spoken descriptions (average length was 10 words, median 8, and the longest description had 21 words).
descriptive prepositional phrases starting with "with" or "of" may be judiciously ignored, or the referent may be disambiguated by asking a clarification question.
neutral
train_93072
If they match, no replacement is performed.
the second stage applies Charniak's probabilistic parser (http://bllip.cs.brown.edu/ resources.shtml#software) to syntactically analyze the texts in order of their probability, yielding at most 50 different parse trees per text.
neutral
train_93073
In both cases, the probabilities assigned to the labels by the classifier are moderated by the classifier's accuracy.
both Scusi?+N+P+R and Scusi?+P+R outperform the original version of Average of Scusi?
neutral
train_93074
One obvious benefit of triangulation is to increase the coverage of the model on the input text.
a tuning procedure should assign higher weights to the models that produce higher quality translations and lower weights to weak models in order to control their noise propagation in the ensemble.
neutral
train_93075
For our experiments, we used the Europarl corpus (v7) (Koehn, 2005) for training sets and ACL/WMT 2005 data for dev/test sets (2k sentence pairs), following Cohn and Lapata (2007).
instead of using only the best translation, they took the n-best translations and translated them into the target language.
neutral
train_93076
The composition of this data is shown in Table 1.
the most effective settings were to use ExtWeight on a sentence-level context.
neutral
train_93077
Confidence estimation systems usually do not have gold standard data and are mostly a linear interpolation of a large group of scores.
these features consist of the average of the sentence-level features described above.
neutral
train_93078
L1 regularization method (Tsuruoka et al., 2009) model with the +id feature setting on both the development set and training data set, respectively, and their hyperparameters are tuned on the dev-test set.
experiments on IWSLT Chinese-to-English translation tasks show that, with the help of grouping these features, our method can overcome the above pitfalls and thus achieves significant improvements.
neutral
train_93079
Then, it begins to calculate W (initialized as 0) and G.
in the first step, we learn a group structure of atomic features in the large training data for better coverage.
neutral
train_93080
Our ASR system is a five-pass system based on the open-source CMU Sphinx toolkit 3 (version 3 and 4), similar to the LIUM'08 French ASR system described in (Deléglise et al., 2009).
this transcription is then split into phrases and translated by a baseline SMT system into language L2.
neutral
train_93081
Our multimodal comparable corpus consists of spoken talks in English (audio) and written texts in French.
we filter the selected sentences or phrases in each condition with different TER thresholds ranging from 0 to 100 by steps of 10.
neutral
train_93082
For the English data, NEs are limited to those beginning with English characters and consisting of only English characters and some specific symbols ( .− :, !#).
in this section, we design a collaborative learning mechanism, which contains inter-class and intra-class scoring criteria, to better control the quality of the patterns and NE instances bootstrapped in iterations.
neutral
train_93083
To improve the accuracy of wrappers, a lot of constraints such as part-of-speech tags (Etzioni et al., 2005) and trigger words (Talukdar et al., 2006) were introduced to tackle the tricky conditions.
likewise, if a pattern of class c i can also be generated by seeds from other categories, this pattern is obviously not a high-quality pattern for category c i .
neutral
train_93084
The samples with different annotations are then reviewed by both annotators to produce the final result.
precision is defined as the percentage of correct NEs of a given class from the automatically extracted ones.
neutral
train_93085
The experimental result of Twevent (precision 75.7%) is lower than that reported by Li et al.
(2012b) for a named entity recognition system on Twitter.
neutral
train_93086
(2012) tried to generate queries for a planned event to relax the limitation.
4249 event clusters in Gset were manually labeled into 804 news events and 3445 nonevents.
neutral
train_93087
DM: dictionary matching.
make a mistake), a literal verb + noun combination (e.g.
neutral
train_93088
The results published in Tu and Roth (2011) are good on the positive class with an F-score of 75.36 but the worst with an F-score of 56.41 on the negative class.
it coincided with a stem of a verb.
neutral
train_93089
Since the syntactic and the semantic head of the construction are not the same, they require special treatment when parsing.
in the second approach, the goal is to detect individual LVC token instances in a running text, taking contextual information into account (Diab and Bhutada, 2009; Tu and Roth, 2011).
neutral
train_93090
Most of the above criteria are consistent with the original setup of collecting the truthful reviews in the op spam v1.3 dataset.
baseline features: 3,009 unigrams, 10,538 bigrams, and 6,571 syntactic production rules, as described in Section 5.1.
neutral
train_93091
obtained their best performance using unigrams alone, together with deep syntax features.
s_{a_i} is computed as the cosine similarity between D_i and D_i.
neutral
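A plain cosine similarity sketch for the s_{a_i} score quoted in train_93091; the vectors here are hypothetical bag-of-words counts, since the source does not show the actual document representations.

```python
import math

def cosine(u, v):
    # cos(u, v) = (u · v) / (|u| |v|); 0.0 if either vector is all zeros.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

print(cosine([1, 2, 0], [2, 1, 1]))  # approx 0.730
```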
train_93092
With respect to the op spam v1.3 dataset used in our work, human judges were only able to achieve 60% accuracy, and the inter-annotator agreement was low: 0.11 as computed by Fleiss's kappa.
much previous work on detecting deceptive opinions usually relies on some meta-information, such as the IP address of the reviewer or the average rating of the product, rather than the actual content of the review (Liu, 2012).
neutral
train_93093
Intuitively, in order to define a new epoch, both a big social impact of a series of events and new issues, which arouse the social interest, must be observed.
we investigated the significant changes in the distribution of terms in the Google N-gram corpus and their relationships with emotion words.
neutral
train_93094
If the Least Squares value is positive, then the others are positive as well.
there is no epoch distinction or statistical support.
neutral
train_93095
As shown in Table 3, the average length for COMMENTS is only 10.5 words, on par with TWITTER-1/2 (but according to this evidence, more carefully constructed).
in Table 4 we show the results of parsing 4000 randomly selected English sentences from each corpus using the ERG with the parsing setup we have described.
neutral
train_93096
This means that it is possible that some of the sentences have been spuriously identified as grammatical, since the very general types for unknown words give the grammar great flexibility in fitting a parse tree to the sentence, even where it may not be appropriate.
in the longer sentences of FORUMS and BLOGS, there is more scope for the authors to introduce anomalies into the text, increasing the chances of the sentence being unparseable.
neutral
train_93097
The overall best performing features were WD, QD, MP and OIT, which is in line with the findings in the correlation study in Section 5.2.
many interactions happen outside the context of a pre-defined static power structure or hierarchy.
neutral
train_93098
They can be called scalar uncertainties since in both cases, a scale is involved in the interpretation of the uncertain term.
although they are similar, we suggest that peacocks and hedges be differentiated in our classification because peacocks are related to subjectivity while hedges are more neutral, hence they can be relevant for different NLP applications (e.g.
neutral
train_93099
Associated software is to be given to help reconstruct tweets using their IDs.
Only 2% of scopes are implicit.
neutral