Columns: id (string, length 7-12), sentence1 (string, length 6-1.27k), sentence2 (string, length 6-926), label (string, 4 classes)
train_17900
Table 2 contains the results of the strict evaluation for both domains.
since strict evaluation is more likely to suit the evaluation of scientific keyphrase extraction better, i.e.
contrasting
train_17901
For example, in "this camera uses a lot of battery power", "battery power" clearly indicates battery life, which is an aspect of the camera entity.
there are some important differences between resources and other types of aspects.
contrasting
train_17902
The process continues until no more resource terms or verbs can be found.
this simple strategy has some major problems.
contrasting
train_17903
Using SDRT as a formal framework, we have the following discourse relations: Contrast(a,b) in (1) marked by although, Continuation(a,b) in (2) marked by and, and Attribution(a,b) in (4). We observe in our corpus that segments related by a Contrast or Continuation relation often share the same subjective orientation (about 80%).
discourse connectors are not the only indicator for deciding whether a segment is opinionated or not.
contrasting
train_17904
Indeed, the longer a text is, the higher the difficulty for human subjects in detecting discourse context.
the study of this hypothesis falls out of the scope of this paper and is therefore left for future work.
contrasting
train_17905
In the Chinese dictionary, there are more than 800 words which begin with the same character "一" meaning "one", which indicates that the corresponding node will have more than 800 children.
in an English trie, the number of child nodes for any node doesn't exceed 52 for there are only 26 letters in the alphabet.
contrasting
train_17906
In Method 1, we make code mapping according to their original order in Unicode.
different characters have different frequencies in the dictionary, then what will happen if we map two characters with the same frequency to two target jump codes of which the numerical difference is as small as possible?
contrasting
train_17907
It spreads from 80 (hex) to 3730 (hex), for there are more than 14,000 Chinese characters used in the dictionary.
in an English double-array, the jump codes vary in a much smaller interval, which is from 41 (hex of letter "A") to 7A (hex of letter "z") originally.
contrasting
train_17908
achieve a CR larger than 95% and reduce the space usage by a percentage near or even larger than 40%, which is similar to Method 3.
method 6 is slightly better than method 5 from the aspect of space usage, just like method 2 is slightly better than method 1.
contrasting
train_17909
This fact seems to contradict our intuition that modal verbs are not of high diversity with respect to a complementary set containing nominal words only, which are generated according to determiners.
we know that, at least in English, words are of ambiguous categories; therefore, those words of both nominal and verbal senses, such as 'consider' and 'help', are classified to the complementary set by determiners according to their nominal sense, but their existence in the complementary set also enlarges the diversity of modal verbs due to their verbal sense.
contrasting
train_17910
With the rapid growth of the Internet data and the dramatic changes in the user demographics especially among the non-English speaking parts of the world, machine transliteration is important in many cross-lingual NLP, MT and CLIR applications as their performances have been shown to positively correlate with the correct conversion of names between the languages in several studies (Demner-Fushman and Oard, 2002;Mandl and Womser-Hacker, 2005;Hermjakob et al., 2008;Udupa et al., 2009).
the traditional source for name equivalence, the bilingual dictionaries, whether handcrafted or statistically built, offer only limited support because new names always emerge.
contrasting
train_17911
Nivre (2008); Huang and Sagae (2010)), mainly because it achieves state-of-the-art accuracy while retaining linear-time computational complexity, and is also considered to reflect how humans process natural language sentences (Frazier and Rayner, 1982).
although some of the Chinese POS tags require long-range syntactic information in order to be disambiguated, to the extent of our knowledge, none of the previous approaches have addressed the joint modeling of these two tasks in an incremental framework.
contrasting
train_17912
Given a segmented sentence, our model simultaneously considers possible POS tags and dependency relations within the given beam, and outputs the best parse along with POS tags.
the combined model raises two challenges: First, since the combined search space is huge, efficient decoding is difficult while the naïve use of beam is likely to degrade the search quality.
contrasting
train_17913
In practice, just remembering a minimal set of features called kernel features ˜f(j, S) suffices to evaluate the equivalence of states (Eq. 2). By merging equivalent states based on this condition, we only need to remember relevant information from the top d (d = 3 in our models) trees on the stack to evaluate the score of the next actions.
since the stack shrinks when a REDUCE-LEFT/RIGHT action is applied, you often need to recover the last element of the stack from the history.
contrasting
train_17914
the optimality of the deductive system) is still assured (proof omitted due to limited space) even with the delayed features incorporated.
because any number of REDUCE-LEFT/RIGHT actions can occur between two SHIFT actions, the delayed features might need to refer to unboundedly deep elements from stack trees; therefore, the boundedness (see Huang and Sagae (2010)) of the kernel features no longer holds and the worst-case polynomial complexity is not assured.
contrasting
train_17915
Thanks to these features, we added to LGLex 2,084 adverbial entries (+20%).
a certain number of paraphrases are part of construction features, and thus need to be extracted from them.
contrasting
train_17916
Because the head words for the internal nodes are not marked in the PS, there are several possibilities in choosing the head words for the internal nodes: the head word of the Y can be a or b, and the head word of X can be c or the head word of Y , resulting in four possible DSs, as shown in Figure 3.
no matter which head words we choose for the internal nodes in the PS, the resulting DSs will not be the ones in Figure 4.
contrasting
train_17917
According to the structuralist tradition, coordination is an endocentric construction, since it contains not only one but several heads that can replace the whole construction syntactically.
this raises the question of whether coordination can be analyzed in terms of binary relations holding between a head and a dependent.
contrasting
train_17918
On the other hand, rather than the approximated classifier-based approach to labeling, one could consider settling for an exact but partial labeling by only assigning those dependency labels which unambiguously arise from tuples of head and dependent CDG types.
the dependency labels obtainable with absolute certainty in this way are often of the less interesting kind, e.g.
contrasting
train_17919
Thus LPCFG becomes sensitive to lexical heads, and its performance is improved.
the information provided by lexical heads is limited.
contrasting
train_17920
The number of iterations needed is optimized on the development set.
introducing sequential dependency into this lexical model would cause a severe efficiency problem with the joint inference for parsing.
contrasting
train_17921
The lexical model I enables the parsing to utilize a large variety of features other than those encoded in the CFG in use.
the computation of forward/backward variables is expensive in both time and space.
contrasting
train_17922
The surrounding words provide most information for the disambiguation for tagging.
the weight parameter λ seems not as effective as expected for English.
contrasting
train_17923
This approach has the advantage of combining heterogeneous models, and solves the complex combinatorial optimization problem via Lagrangian relaxation.
it has an inevitable defect of inefficiency, since it requires parsing and tagging the input sentence repeatedly (usually 10 times for each sentence).
contrasting
train_17924
This has a two-fold benefit: i) it introduces a generalization level over surface forms; ii) it provides the parsing algorithm with only the essential information, since POS tags are directly related to syntactic constituents and are sufficient to induce the syntactic structure of a sentence.
in our system, the role played by POS tags in syntactic parsing is played by entity components, annotated by CRF. For named entity annotation, it is not true that components are sufficient to induce the entity tree.
contrasting
train_17925
For example, the 1 st stage tree for sentence 3 is shown in Figure 4 (a).
a normal single stage parser (our baseline parser) is trained on the full tree that looks like Figure 4 (b).
contrasting
train_17926
2-Soft, though giving a minimal improvement in the accuracies, is not statistically significant compared with the baseline.
on analyzing the output parses of all the three setups, we found clear and similar improvement patterns (listed below) in case of both 2-Hard and 2-Soft.
contrasting
train_17927
This seems to contradict the fact that the latter solves the problem optimally while the former only looks for a reasonable solution.
this is not a problem because our word-reordering model itself cannot accurately capture the quality of a sentence.
contrasting
train_17928
Most SMT systems, not only phrase-based models (Och and Ney, 2004;Koehn et al., 2003;Xiong et al., 2006), but also syntax-based models (Chiang, 2005;Galley et al., 2006;Huang et al., 2006;Shen et al., 2008), usually extract rules from word aligned corpora.
these systems suffer from a major drawback: they only extract rules from 1-best alignments, which adversely affects the quality of the rule sets due to alignment mistakes.
contrasting
train_17929
This is different for the conditional models, which are easier to handle but where most approaches are based on initializing from single-word based models (Brown et al., 1993;Vogel et al., 1996;Al-Onaizan et al., 1999).
the recent work of Mauser et al.
contrasting
train_17930
Superficially the Bi-HMM looks similar to (Deng and Byrne, 2005).
this latter is actually a Mono-word model.
contrasting
train_17931
In principle, each of the three arising distributions has its own parameter set.
the initial probability and the inter-alignment model share the parameters p 0 and p 1 .
contrasting
train_17932
The technique presented in this paper is related to these previous works as it concerns the weighting of corpora or sentences.
it does not [Figure 1: Overview of the weighting scheme; alignments obtained from time-stamped training parallel text, 1996-2010.]
contrasting
train_17933
(2010) to automatically optimize the weights of each time period.
this approach does not seem to scale very well when the number of individual corpora increases.
contrasting
train_17934
The best performance is obtained when using all the data (55M words, BLEU=30.48), but almost the same BLEU score is obtained by using only the most recent part of the data (24M words, part Recent 2).
if we use the same amount of data that is further away from the time period of the test data (25M words, part Ancient 2), we observe a significant loss in performance.
contrasting
train_17935
The research community has been using methods such as word error rate, EER, precision and recall and its many variants as metrics to evaluate systems.
due to homonyms and phone-set differences across multiple languages, word error rate is not always sufficient to distinguish transliteration accuracy.
contrasting
train_17936
This enables the use of task-specific loss functions (e.g. BLEU).
the definition of Bayes Risk depends critically on the posterior probability of hypotheses.
contrasting
train_17937
Other works based on salient terms are frequent phrases by (Osinski and Weiss, 2005), and integration of hierarchical information by (Muhr et al., 2010).
the suggested terms, even when related to each other, tend to represent different aspects of the topic underlying the cluster, and it is often the case that a good label does not occur directly in the document.
contrasting
train_17938
They showed the effectiveness of the method.
wikipedia is the free online encyclopedia, and everyone can access and edit the information.
contrasting
train_17939
Each prediction takes the form of a probability distribution that is provided to an encoder, which is usually an arithmetic coder.
the details of actual coding technique are of no relevance to this paper.
contrasting
train_17940
Probability ρ_i(c) is expected to be a strong clue for sense disambiguation.
it is not clear how we can effectively use the probabilities of different cases c; ρ_i(c) for some cases would be reliable, while some others less reliable.
contrasting
train_17941
Historically, using word senses usually involved the use of manually compiled resources in which word senses were represented as a fixed list of definitions.
there seem to be some disadvantages associated with such a fixed-list-of-senses paradigm.
contrasting
train_17942
These context vectors are clustered and the resulting clusters are taken to represent the induced senses.
when constructing context vectors, the approaches based on VSM assume that the words occurring in the contexts are independent and do not exploit semantic relevance between words.
contrasting
train_17943
Just like what has been shown in (Agirre and Soroa, 2007) and (Manandhar et al., 2010), the 1c1w baseline shows the best performance.
it only discovers one sense for each target word, which is the most frequent sense of the target word.
contrasting
train_17944
However, it only discovers one sense for each target word, which is the most frequent sense of the target word.
wSI_SR and wSI_SOV predict 2.1 and 2.4 senses on average per word respectively, which is closer to the actual number of senses (2.5).
contrasting
train_17945
For example, specific attributes like power consumption, pulsator, load, spin-dry effectiveness, noise, water usage, water leakage, etc for a product like washing machine cannot be correctly found in descriptions.
customers express their opinions in the form of reviews.
contrasting
train_17946
In general, we can see that the classifiers perform relatively well for the majority categories, such as problems and solutions.
for minority classes, e.g., "feedback" (that is only 9.8% among all the posts), the classifiers are not able to learn well.
contrasting
train_17947
Of course, it is possible to compile comprehension rate at the sentence level by preparing comprehension questions for each sentence, but this is just not realistic.
we have to compile read and listened data at the sentence level just like for spoken and written data.
contrasting
train_17948
In these cases, answerers did not indicate unclear points of questions.
we think these unhelpful solutions are one type of indication of unclear points of questions.
contrasting
train_17949
Normally, the retrieved questions are ranked according to the semantic similarity to the query question.
taylor (1962) argues that the user may fail to express his information needs fully in the question.
contrasting
train_17950
Non-standard spellings in text messages often convey extra pragmatic information not found in the standard word form.
text message normalization systems that transform non-standard text message spellings to standard form tend to ignore this information.
contrasting
train_17951
Sometimes it may be composed of a number of sentences.
to annotate in such a way is costly and time-consuming.
contrasting
train_17952
We can see that the POS tagger can effortlessly assign the right tags to both "development" and "develop" in the English side.
it is very difficult in the Chinese side since no word form inflection is available and the context features may be too sparse or uninformative.
contrasting
train_17953
However, it is very difficult in the Chinese side since no word form inflection is available and the context features may be too sparse or uninformative.
the introduction of long-distance dependencies can largely reduce this difficulty.
contrasting
train_17954
They use the same WordNet-based relatedness method in order to expand documents, following the BM25 probabilistic method for IR, obtaining some improvements, especially when parameters had not been optimized.
to their work, we investigate methods to apply relatedness to query expansion, and we compare the results with pseudo-relevance feedback.
contrasting
train_17955
Currently, corpora are created mainly through manual annotation by expert annotators.
the cost of annotating a necessary size of corpora is prohibitive.
contrasting
train_17956
For example, SemCor 1 corpus contains WSD annotation of about 250,000 words sampled from a subset of the Brown corpus.
for most of the ambiguous words, the number of examples is still too small to train a high performance all-words WSD model.
contrasting
train_17957
The method can take advantage of a larger range of people.
human computation methods are rarely used in natural language processing tasks.
contrasting
train_17958
(2012) proposed an approach for adapting an answer extractor trained on one domain to another, by separating out the lexical characteristics of an answer from its domain relevance.
learning the lexical characteristics still required a training set.
contrasting
train_17959
The intuition is that posts that bear more resemblance to other posts in the thread have higher chances of being answers.
in a lot of discussion forums, especially those related to troubleshooting and problem resolution, we found that this assumption usually does not hold.
contrasting
train_17960
One of the methods proposed in this paper that uses a parallel acknowledgment classification task, belongs to the family of Multi-Task Learning (MTL) (Caruana, 1997) since what is learned for each task is used to improve the other task.
to the best of our knowledge, this is the first work that proposes a MTL-type answer classifier for forums in a semi-supervised setting.
contrasting
train_17961
This does not come as a surprise as the supervised classifier learns from labeled training data while the rule-based classifier is unsupervised.
we also find that the precision of the rule-based classifier largely outperforms our best supervised classifier on HLTH.
contrasting
train_17962
The fact that the best overall F-score achieved is not higher may be ascribed to the heavy noise (spelling/grammar mistakes) contained in our web-data.
we believe that even with those data we can show the relative effectiveness of the different feature types which is the most relevant aspect in our proof-of-concept investigation.
contrasting
train_17963
Most previous work on QS can only use word-based MI as introduced above.
in some cases, the MI between tokens can not provide sufficient information for a segmentation decision.
contrasting
train_17964
investigated the use of syntactic dependency output by a dependency parser and reported a slight improvement over a baseline method that used only words.
the use of dependency parsers still introduces the problems stated in the previous section because of their handling of only syntactic dependencies.
contrasting
train_17965
At each node, we select children to which edge classifiers return positive scores (Lines 4-7).
if no children have positive scores, we select one child with the highest score (Lines 8-10).
contrasting
train_17966
English particles only, for example, also, especially, principally) mostly precede a focus element and in the theory of TFA, they are also considered contextually nonbound.
also contrastive contextually bound expressions can follow the rhematizers, typically at the beginning of the sentence (and in this case, also the rhematizers are contextually bound).
contrasting
train_17967
Lexical resources like wordnet usually feature animacy of nominals of a given language (Fellbaum, 2010;Narayan et al., 2002).
using wordnet, as a source for animacy, is not straightforward.
contrasting
train_17968
x is the normalized dimensions in a feature vector of a nominal x. k is the number of coordinates and x_i is the i-th coordinate of x. Animacy is an inherent and a non-varying property of entities that nominals refer to.
due to lexical ambiguity animacy of a nominal can vary as the context varies.
contrasting
train_17969
The cluster prototypes, returned by the fuzzy clustering, show animates are left skewed while inanimates are right skewed on the hierarchy of control.
in our clustering experiments the order of dative/accusative and instrumental case markers on the control hierarchy (Scale 1) has been swapped.
contrasting
train_17970
• Nominal Ambiguity: As a matter of fact, animacy is an inherent and a non-varying property of nominal referents.
due to lexical ambiguity (particularly metonymy), animacy of a word form may vary across contexts.
contrasting
train_17971
We have addressed this problem by capturing the mixed membership of such ambiguous nominals.
since animacy of a nominal is judged on the basis of its distribution, the animacy of an ambiguous nominal will be biased towards the sense with which it occurs in the corpus.
contrasting
train_17972
Such transformation is of course not possible in general.
we found that a greedy rewriting procedure suffices for that purpose on all of the high-school level math problems used in the experiment.
contrasting
train_17973
The problems containing only six variables may be hard for today's computer with the best algorithm known.
several positive results have been attained as the result of extensive search for practical algorithms during the last decades (see (Caviness and Johnson, 1998)).
contrasting
train_17974
This suggests that there is a sizable subjective aspect to these judgments and we should be somewhat skeptical of the judgment of any particular annotator.
we had forced our annotators to make a boolean choice for each style, which may be somewhat inappropriate for a somewhat non-discrete phenomenon like style.
contrasting
train_17975
Another shortcoming of previous classification approaches is that they only focus on detecting the overall topical category of a document.
they do not perform an in-depth analysis to discover the latent topics and the associated document category.
contrasting
train_17976
In the same way violence analysis maps violence polarity into violence words such as looting, revolution, war, drugs and non-violent polarity to background words such as today, happy, afternoon.
as opposed to sentiment and affect prior lexicon derivation, the generation of violence prior lexicons poses different challenges.
contrasting
train_17977
Our intuition is that low entropy words are indicative of semantically coherent topics and therefore more informative, while high entropy words indicates words whose usage is more topical diverse and therefore less informative.
the entropy of word w given the class label c is defined as follows: where C denotes the number of classes (in our case violent and non-violent) and p(w|sd_i^c) denotes the probability of word w given the document sd_i in class c. to the general E_SD, the class word entropy characterises the usage of a word in a particular document class.
contrasting
train_17978
For the case of non-violent topics, VDM revealed topics which appeared to be less semantically coherent than those of violent topics.
when reading the non-violent VDM T1, it gives an insight into the Super Bowl game related to the Jets.
contrasting
train_17979
Furthermore, when the spammer's intention is just advertising, we can easily identify signs of its activity: repeated phone numbers or URLs and then ignore them.
when the spammer's intention is to obtain higher reputation within the community, the spam content may lack obvious patterns.
contrasting
train_17980
Their method only works for definition sentences, where the assumption that the formal and informal equivalents cooccur nearby holds.
this assumption does not hold in general social network microtext, as people often directly use informal words without any explanations or definitions.
contrasting
train_17981
Li and Yarowsky (2008a) computed the Levenshtein distance (LD) on the Pinyin of the two words in the pair to reflect the phonetic similarity.
as a general string metric, LD does not capture the (dis-)similarity between two Pinyin pronunciations well as it is too coarse-grained.
contrasting
train_17982
As an initial step, we can recognize informal words and segment the Chinese words in the sentence by applying joint inference based on a Factorial Conditional Random Field (FCRF) methodology (Wang and Kan, 2013).
as our focus in this work is on the normalization task, we use the manually-annotated gold standard informal words (O) and their formal equivalents (T ) provided in our annotated dataset.
contrasting
train_17983
Note that this constraint makes our method more efficient over a brute-force approach, in exchange for loss in recall.
we feel that this trade-off is fair: by retaining the top 1000 candidates, we observed the loss rate of gold standard answers in each of the channels is 14%, 15%, and 17% for phonetic substitution, abbreviation and paraphrase, respectively.
contrasting
train_17984
Assuming the character-word mapping events are independent, we obtain: where o_i (t_i) refers to the i-th character of O (T).
this SCM model suffers serious data sparsity problems, when the annotated microtext corpus is small (as in our case).
contrasting
train_17985
In this case, Fj may be a better one because it happens to be the representative node for the path between Fi and L1.
there is a chance that the sub-tree of Fi may have important features (i.e., L3, L5) that end up elevating Fi's weight unfairly.
contrasting
train_17986
Transliteration methods have often been used for the task of keyword matching across different languages (Chen and Ku, 2002;Fujii and Ishikawa, 2001).
han (2006) applied the transliteration method to perform part-of-speech tagging for Korean texts using Xerox Finite State Tool.
contrasting
train_17987
Ling and Baron (2007) reported that lexical shortening is one of the most significant characteristics one can see in text messages.
'ignoring spacing' is the exception, since Korean suffixes can play as good predictors for the roles or the functions of the preceding stem.
contrasting
train_17988
Thus one would anticipate an increase in errors when the input text is not properly spaced, because it would increase the complexity of the analysis process.
unlike the predictions, the Romanization-based method recorded a lower precision than the morphological analysis-based approach.
contrasting
train_17989
This is because the shortened targets can also be found as sub-string of bigger words.
this shortcoming does not weaken the efficiency of the whole approach.
contrasting
train_17990
As mentioned earlier, past studies paid little attention to elaborating the lattice generation algorithm.
the results of our experiments reveal that the design of the lattice generation algorithm crucially affects the performance of the reranking system (including speed, accuracy, and lattice size).
contrasting
train_17991
Most previous work on this approach has aimed at developing a single general-purpose unknown word model.
there are several types of unknown words, some of which can be easily dealt with by introducing simple derivation rules and unknown word patterns.
contrasting
train_17992
There are a lot of studies on rendaku in the field of phonetics and linguistics, and several conditions that prevent rendaku are known, such as Lyman's Law (Lyman, 1894), which stated that rendaku does not occur when the second element of the compound contains a voiced obstruent.
few studies dealt with rendaku in morphological analysis.
contrasting
train_17993
The wide scope of influence is due to the fact that " " consists of hiragana characters like most Japanese function words.
in the case of (5), although the unknown word "除染" (decontamination) is divided into two parts by JUMAN, there is no influence on the adjacent analyses.
contrasting
train_17994
UniDic dictionary (Den et al., 2008) handles orthographic and phonological variations including rendaku and informal ones.
the number of possible variations is not restricted to a fixed number because we can insert any number of long sound symbols or lowercases into a word, and thus, all the variations cannot be covered by a dictionary.
contrasting
train_17995
is input into the baseline system, it builds the word lattice that is described with solid lines in Figure 2.
this lattice does not include such expressions as " " and " " since they are not included in the lexicon.
contrasting
train_17996
The number of potential entries of onomatopoeias with repetition is large, but the candidates of onomatopoeias with repetition can be quickly searched for by using a simple string matching strategy.
to search the candidates of onomatopoeias without repetition is a bit time-consuming com- [Input example: "…" (Approximately how much?)]
contrasting
train_17997
Since long sound symbols and lowercases rarely appear in the lexicon, there are few likely candidates other than the correct analysis.
voiced characters often appear in the lexicon and formal text, and thus, there are many likely candidates.
contrasting
train_17998
Our system recognized unknown onomatopoeias with repetition at a recall of 89%, which is not very high.
since there were several repetition expressions other than onomatopoeias, such as " / " (wow wow) as shown in Table 9, we cannot lessen the cost for onomatopoeias with repetition.
contrasting
train_17999
For example, previous work has shown that sequence models alone cannot deal with syntactic ambiguities well (Clark and Curran, 2004;Tsuruoka et al., 2009).
state-of-the-art systems usually utilize high-complexity models, such as lexicalized PCFG models for syntactic parsing, to achieve high accuracy.
contrasting