id: stringlengths 7-12
sentence1: stringlengths 6-1.27k
sentence2: stringlengths 6-926
label: stringclasses (4 values)
train_18000
Attempts have been made to utilize hand annotated semantic information for constituency parsing (Fujita et al., 2007;MacKinlay et al., 2012) as well as dependency parsing (Øvrelid and Nivre, 2007;Bharati et al., 2008;Ambati et al., 2009).
acquiring such information for new sentences remains a challenge.
contrasting
train_18001
It may look intuitive to use fully expanded concept lineage, as it contains more detailed description of the lexical unit.
opting for a highly fine-grained concept lineage leads to the problem of sparseness.
contrasting
train_18002
As depicted in Graph (Figure 4), there is an average increase of 0.38 (LAS) on all datasets using the first sense strategy from the baseline.
using WSD the accuracy decreased across all datasets.
contrasting
train_18003
This is evidence that our strategy of splitting experiments into basic and extended was sound for these TDs.
the situation for BIO is different.
contrasting
train_18004
For less frequent POS classes (those that dominate the macroaveraged measure, especially verbal POS), sequence information and "long-distance" context is probably more stable and can be exploited better than for NN, NNP and JJ.
there is still a dropoff from the baseline for BIO; we attribute this to the larger differences in the transition probabilities for BIO vs WEB (Table 4); the sequence classifier is at a disadvantage for BIO, even on a macroaveraged measure, because the transition probabilities change a lot.
contrasting
train_18005
Third, there are languages that abound in verb + noun constructions or multiword verbs (such as Estonian (Muischnek and Kaalep, 2010) or Persian (Mansoory and Bijankhan, 2008)): verbal concepts are mostly expressed by combining a noun with a light verb (Mansoory and Bijankhan, 2008).
there are views that the relationship between the verbal and the nominal component is not that of a normal argument.
contrasting
train_18006
As can be seen, we have the light verb construction döntést hoz decision-ACC bring "to make a decision" in the sentence.
it is parsed as a "normal" object of the verb in the first case (OBJ) and as a light verb object (OBJ-LVC) in the second case.
contrasting
train_18007
Thus, there is no LVC in the above example, but approaches that heavily build on MWE lexicons may falsely identify this verb-object pair as a light verb object-light verb pair since they hardly consider contextual information.
dependency parsers have access to information about other dependents of the verb hence they may learn that in such cases the presence of a dative dependent argues against the identification of the verb-object pair as an LVC.
contrasting
train_18008
Hence, the cognitive process behind labeling a participant to have a particular type of power is not a binary decision the annotator makes for each participant.
evaluating agreement on such a formulation is not straightforward.
contrasting
train_18009
does not understand (1) OOV words, e.g., "the opposite wall" * ; (2) more than one meaning of polysemous positional relations, e.g., "to the left of the table" * as "to the left and on the table" as well as "to the left and next to the table"; (3) positional relations that are complex, e.g., "in the left near corner of the table" * , or don't have a landmark, e.g., "the ball in the center" * ; and (4) descriptive prepositional phrases starting with "of" or "with", e.g., "the picture of the face" * and "the plant with the leaves" * .
contextual information sometimes enables the system to overcome OOV words.
contrasting
train_18010
The relevance measure fc (Equation 2), which is applied to both metrics, enables us to handle equiprobable interpretations.
rank-based evaluation metrics do not consider the absolute quality of an interpretation, i.e., the top-ranked interpretation might be quite bad.
contrasting
train_18011
Accuracy over segments was higher, at 96.26% for texts, and 87.28% for ASR outputs.
SSP's performance for the identification of Noise was rather poor, with an average accuracy of 54.75%.
contrasting
train_18012
The probability of the Semantic Model for this modified sentence is which is higher than that of the original Semantic Model, as is the probability of the new Text given the new Semantic Model: Pr(Text |SemModel ) = Pr("the thing"|O) Pr("in"|P ) Pr("the microwave"|L) .
this gain is offset by the penalties incurred by the modifications.
contrasting
train_18013
Note that, as the number of follow up questions presented increases, the scores will improve since it is more likely that the 'correct' choice is presented.
there is a trade-off here since the agent has to again peruse more questions, which increases time spent, and so we limit this value to 5 as well.
contrasting
train_18014
An example of such efforts is the automatic translation of English Wikipedia to languages with smaller collections.
MT quality is still far from ideal for many of the languages and text genres.
contrasting
train_18015
The selected summarized sentences are then translated to the target language using Google Translate.
we go a step further and design a hybrid approach in which we incorporate our MT quality classifier into the state-of-the-art summarization system.
contrasting
train_18016
Most of existing studies dealing with comparable corpora look for parallel data at the sentence level (Zhao and Vogel, 2002;Utiyama and Isahara, 2003;Munteanu and Marcu, 2005;Abdul-Rauf and Schwenk, 2011).
the degree of parallelism can vary considerably, from noisy parallel texts, to quasi parallel texts (Fung and Cheung, 2004).
contrasting
train_18017
The bootstrapping algorithm can effectively reduce manual intervention in building the system.
it is prone to noise brought in during iterations.
contrasting
train_18018
Compared with Table 2 above, we can find that the coverage of the English NEs is lower than that on Chinese.
the volume of the extracted NEs is almost 30 times larger.
contrasting
train_18019
Yom-Tov and Diaz (2011) report that Twitter can broadcast news faster than traditional media, which provides an opportunity for event detection in Twitter.
there are challenges in event detection from Twitter data: 1) tweets are too short and sometimes cannot carry enough information; 2) tweets contain many noisy words, which can be harmful for event detection and 3) the volume of Twitter data is very large, which makes event detection a big data problem.
contrasting
train_18020
Both similarity of edges and Wikipedia are useful in filtering out heterogeneous collections from news events, while Wikipedia can also separate some news and topics.
there are several problems with this approach: 1) newsworthiness cannot distinguish news from some topics, including horoscope topics ("sagittarius; approach; big trouble") and topics such as "hitler; fox; megan fox; rip; megan; selena gomez", which contain segments that can also frequently occur in Wikipedia; 2) as a single measure, newsworthiness is subject to a tradeoff between precision and recall, while a high precision can be obtained only with an extremely low recall (about 10%).
contrasting
train_18021
However, most of the above work focuses on relation extraction rather than event extraction.
to the well-studied problem of relation extraction, only a few works focused on event extraction.
contrasting
train_18022
Secondly, since only verbs with subject and object are extracted, non-predicate verbs and the verbs without subject/object will not be extracted as candidate triggers.
the coverage of possible triggers by our trigger extraction algorithm is reasonably good (more than 85%), because most of the trigger words appear repeatedly in the corpus, and their usages are varied.
contrasting
train_18023
Therefore, it takes domain specific event triggers as the input.
it is also a costly task to annotate triggers for new domains.
contrasting
train_18024
For training and evaluating the performance of the proposed method, we need a large number of abbreviation and corresponding standard form pairs.
manually labeling is a laborious and time consuming work.
contrasting
train_18025
We think that the global formulas contribute a lot for the entire accuracy.
since the constraint of simultaneously dropping or keeping characters does not consider context, it may also bring some false matches.
contrasting
train_18026
Comparing the performances of CRFs and MLNs, we can observe that CRFs achieve slightly better performance in classifying single characters.
MLNs achieve significantly better results of the entire accuracies.
contrasting
train_18027
used linguistically motivated statistical measures to distinguish subtypes of verb + noun combinations.
it is a challenging task to identify rare LVCs in corpus data with statistical-based approaches, since 87% of LVCs occur less than 3 times in the two full-coverage LVC annotated corpora used for evaluation (see Section 3).
contrasting
train_18028
In some other studies (Cook et al., 2007;Diab and Bhutada, 2009) the authors just distinguished between the literal and idiomatic uses of verb + noun combinations and LVCs were classified into these two categories as well.
to previous works, we seek to identify all LVCs in running texts and do not restrict ourselves to certain types of LVCs.
contrasting
train_18029
The resulting datasets were not balanced and the number of negative examples basically depended on the candidate extraction method applied.
some positive elements in the corpora were not covered in the candidate classification step, since the candidate extraction methods applied could not detect all LVCs in the corpus data.
contrasting
train_18030
In this dataset, only one positive or negative example was annotated in each sentence, and they examined just the verb-object pairs formed with the six verbs as a potential LVC.
the corpus probably contains other LVCs which were not annotated.
contrasting
train_18031
This can be explained if we recall that this dataset applies a restricted definition of LVCs, works with only verb-object pairs and, furthermore, it contains constructions with only six light verbs.
wiki50 and SZPFX contain all LVCs, they include verb + preposition + noun combinations as well, and they are not restricted to six verbs.
contrasting
train_18032
released a new version, the op spam v1.4 dataset, with gold-standard negative reviews included as well (Ott et al., 2013), which offers an opportunity to more extensively study the problem of detecting deceptive reviews.
the work described in this paper is focused on positive review spam, and based directly on the work of Feng et al.
contrasting
train_18033
N-GRAM features: As shown by Ott et al., the bag-of-words model is effective for identifying deceptive reviews.
the optimal choice of n is not consistent in Ott et al.
contrasting
train_18034
Intuitively, in order to define a new epoch, both a big social impact of a series of events and new issues, which arouse the social interest, must be observed.
it is hard to define what makes a feature "distinctive" or an event a "great change".
contrasting
train_18035
They show that it is possible to determine censorship and suppression by comparing the frequencies of proper names in bilingual Google books corpora.
the authors did not proceed to a systematic study of epochs.
contrasting
train_18036
We can see that the number of terms which are positive to statistical tests varies substantially.
it is not by chance that the changes occur.
contrasting
train_18037
Various claims have been made about social media text being "noisy" (Java, 2007;Becker et al., 2009;Yin et al., 2012;Preotiuc-Pietro et al., 2012;Eisenstein, 2013, inter alia).
there has been little effort to quantify the extent to which social media text is more noisy than conventional, edited text types.
contrasting
train_18038
Most research to date on social media text has used very shallow text processing (such as keyword-based time-series analysis), with natural language processing (NLP) tools such as part-of-speech taggers and parsers tending to be disfavoured because of the perceived intractability of applying them to social media text.
there has been little analysis quantifying just how hard it is to apply NLP to social media text, or how intractable the data is for NLP tools.
contrasting
train_18039
There has been work in the NLP community to detect arguments and interruptions in spoken as well as written interactions (Somasundaran et al., 2007).
the wellstructured nature of interactions that is expected in the debates allows us to use some simple heuristics to detect arguments and interruptions for the purposes of this study.
contrasting
train_18040
It is hard to infer strong conclusions based purely on the SVM feature weights.
SVM does pick up some interesting signals.
contrasting
train_18041
SSR These results indicate that there is no one readability index which correlates significantly with all of the linguistically motivated complexity features.
it seems that they complement each other well as each one of them is significantly correlated with a different subset of features.
contrasting
train_18042
it can be identified in texts with the help of semantic tools.
there are other types of uncertainty which cannot be described by just concentrating on semantics.
contrasting
train_18043
On the number of different cues, Table 1 tells us that the set of linguistic cues expressing weasels are the most limited, with almost 100 cues.
peacock cues vary the most with 540 cues.
contrasting
train_18044
This prevents us using those corpora in the evaluation.
ACE also produced a multilingual small corpus called REFLEX Entity Translation Training/DevTest (REFLEX for short), which consists of about 60K of tokens with two levels of classes.
contrasting
train_18045
The most notable findings, as shown in Table 11, are that the recall metric shows a sharp drop in all datasets.
the precision shows high scores, suggesting the Wikipedia corpus is strong in difference when compared with the news-wire domain.
contrasting
train_18046
Despite solid work on Arabic modality in theoretical linguistics (Mitchell and al-Hassan, 1994;Brustad, 2000;Moshref, 2012), there are no Arabic corpora annotated for modality, not even the widely used Penn Arabic Treebank.
there is a plethora of work and annotated corpora for modality in other languages, including English (Saurí et al, 2006;Baker et al., 2010;Rubinstein et al., 2013), Portuguese (Hendrickx et al., 2010;Avila and Mello, 2013), Japanese (Matsuyoshi et al.
contrasting
train_18047
However, the canonical word order in Tunisian verbal sentences is SVO (Subject-Verb-Object) (Baccouche, 2003).
the MSA word order can have the following three forms: SVO / VSO / VOS (2).
contrasting
train_18048
For example, if the word ‫>أكتب‬ />ktb /Write in MSA is preceded by a negative particle such as ‫/لم‬lam (Do not), the verb in the dialect will be: mAktibti$/‫=ماكتبتش‬ TUN-Neg-Particle(‫+)ما‬ Tun-verb ‫+)كتب(‬ Tun-Neg-enc ‫)ش(‬ Tools words or Syntactic tools exist in a large amount in the Treebank and all MSA-texts.
their transformation was not trivial and required for each tool a study of its different contexts.
contrasting
train_18049
Data driven Machine Translation approaches have gained significant attention as they do not require rich linguistic resources such as, parsers or manually built dictionaries.
their performance largely depends on the amount of training data available (Koehn, 2005).
contrasting
train_18050
In many PBSMT systems, once the phrase-pairs have been extracted, it is no longer required to store the training corpus from which the phrase-pairs were extracted.
while dealing with many morphologically rich languages, the morphological variants of the target phrase not only depend on their source phrase but also on the context in which the source phrase appeared.
contrasting
train_18051
POS taggers are now easily available for most Indian languages.
no other rich sources such as, parsers or morphological analyzers are used on the target language.
contrasting
train_18052
• Uniform probability is not informative: The heuristic extraction method tends to assign a uniform probability for groups of translations and this is evident in the flat segments of the baseline curve and is especially dominant in the low probability region.
the VB-grammar is more peaked (in Fig.
contrasting
train_18053
Semi-supervised methods, which make use of both labeled and unlabeled data, are ideal for sentiment classification, since the cost of labeling data is high whereas unlabeled data are often readily available or easily obtained (Ortigosa-Hernández et al., 2012).
there are some drawbacks to semi-supervised approaches: most of the work assumes that the positive and negative samples in both the labeled and unlabeled data sets are balanced; otherwise, models often bias towards the majority class (Chawla et al., 2002;Yen and Lee, 2009).
contrasting
train_18054
(2) The sentiment scores will be updated to incorporate diverse measurements leading to less odd scores.
broader coverage did not necessarily guarantee better performance since irrelevant words often matched to generate noise.
contrasting
train_18055
Among semi-automatic methods, rule-based methods are known for high accuracy if the patterns are carefully chosen according to morphological structure or special format of corpus (Nakayama et al., 2008), either manually or via automatic bootstrapping (Hearst, 1992).
the methods suffer from sparse coverage of patterns in a given corpus.
contrasting
train_18056
We believe that the best topical key concept about this topic is 文艺 'literature'.
'literature' cannot be created if it never appears in the tags of Douban.com.
contrasting
train_18057
While these are efficient at retrieving synonymous words, they fare less well at identifying antonyms as non-similar words, and routinely include them as semantically similar words.
despite the problems resulting from this, there have only been few approaches that explicitly tackle the problem of synonym/antonym distinction, rather than focussing on only synonyms (e.g.
contrasting
train_18058
Figure 1: The N-gram language model treats the English sentence "all things pass" as composed of (all, things) and (things, pass), for N = 2.
revealing the structure of a sentence is the task of parsing, which is based on linguistically oriented formulations and focuses on generating the likeliest structure for a given sentence.
contrasting
train_18059
There are also dependency-based approaches (Stolcke et al., 1997;Gao and Suzuki, 2003;Graham and van Genabith, 2010).
these approaches need a trained dependency parser because they construct a language model based on the decisive best structure produced by the parser.
contrasting
train_18060
To deal with this problem, the dependency model with valence (DMV) proposed by Klein and Manning (2004) introduces a special mark STOP.
it is necessary to distinguish two kinds of parameters, P_STOP and P_CHOOSE, in the bi-gram estimation, which makes it difficult to extend the approach to higher orders.
contrasting
train_18061
Comparing the test set perplexities in Tables 3 and 4, we can see the dependency bi-gram model achieves the same, or sometimes better performance of the original N-gram language models.
when we look at the tri-grams, the interpolated modified Kneser-Ney (KN) discounting method, which is state-of-the-art, shows its strength and our dependency model does not produce much improvement for the reasons we described above.
contrasting
train_18062
(2005) or Stochastic Gradient Descent.
learning the θ parameters in our case is not as straightforward, since the objective is non-differentiable at many points due to the ℓ2,1 norm.
contrasting
train_18063
van Noord (2009) trades parsing efficiency for parsing effectiveness by learning a heuristic filtering of useful parses.
we develop a selfsupervised online learning algorithm to achieve efficient extraction without reducing effectiveness.
contrasting
train_18064
In our experiments, missing keyphrases have not been removed.
we evaluate with stemmed forms of candidates and reference keyphrases to reduce mismatches.
contrasting
train_18065
The approaches are studied in (Bakliwal et al., 2011;Bespalv et al., 2011;Kennedy and Inkpen, 2006;Jin et al., 2009;Narayanan et al., 2009;Pang et al., 2002;Schuller and Knaup, 2011).
the lexicon-based approaches are mostly unsupervised, requiring a dictionary or a lexicon of pretagged words.
contrasting
train_18066
The reviewer seems happy with the camera size, structure, easy use, video modes, SDHC support etc.
the autowhite balance and high compression leading to sensor acuity seem to disappoint him.
contrasting
train_18067
Mining information from ConceptNet can be difficult as one-to-many relations, noisy data and redundancy undermine its performance for applications requiring higher accuracy (Smith et al., 2004).
we use ConceptNet for the following reasons: 1.
contrasting
train_18068
For example, for a pair of words (seikaku ga warui) "bad personality", neither "bad", nor "personality" on their own express harmful meaning.
when these words are used together in a dependency relation, they become harmful (negative depiction of a person's personality).
contrasting
train_18069
In previous research the mixing was set at the same rate (half of the entries included cyberbullying).
it cannot be assumed that harmful entries appear in real life with the same rate as normal ones.
contrasting
train_18070
Since the phrases included seed words as well, this most probably increased the polarity value of the harmful entry.
there were many nonharmful entries classified as harmful with harmfulness polarity score equally high or higher than the actual harmful entries.
contrasting
train_18071
For instance, PR is interpreted by probable and PS is interpreted by possible.
in Japanese, it is not so straightforward to distinguish between PR and PS due to a diverse variety of modality expressions.
contrasting
train_18072
Out of the 1,128 correct instances in the area analyzed by the corpus, 351 are correct by predicates in the dictionary.
to this, only 34 errors are due to insufficient coverage for predicates.
contrasting
train_18073
In Section 3, we described that our analyzer determines the event factuality based on three components: Predicates, Functional Expressions, and Propagated Factuality.
we find that it is crucial to determine the boundaries at which the analyzer should propagate the factuality.
contrasting
train_18074
This means that our approach based on lexical knowledge works well, especially for minor labels.
some errors still remain.
contrasting
train_18075
402 of them do not have CT+ as the propagated factuality and all of them should block the factuality propagation.
only 229 of 402 blocked the propagation.
contrasting
train_18076
We therefore performed an additional experiment with lexical knowledge for scope and discussed its helpfulness.
the issue regarding scope includes the issue by profound meaning and context.
contrasting
train_18077
The synsets are highly accurate in representing words' meanings.
the size of the thesaurus is limited, and not equally available in different languages.
contrasting
train_18078
(1a) shows that the entailment relation can be correctly predicted through recognizing "read into" → "interpreted" and "what he wanted" → "in his own way".
the alignment developed in MT does not solve the non-alignment samples well.
contrasting
train_18079
Since the CWS rules can be trained from context without linguistic information, the proposed CWS method might also work for Chinese texts from different ages.
there are some issues and problems that require further investigation.
contrasting
train_18080
Furthermore, the fuzzy V-measure that we developed in order to evaluate soft clusterings still seems to provide sub-optimal evidence of clustering quality: The magnitude of the score depends on the threshold, so it is difficult to decide which threshold performed best.
several of our analyses pointed towards similar numbers for an optimal k, and these optimal k values were reasonable, as they were close to the number of gold standard classes.
contrasting
train_18081
Given a full form F, its valid abbreviation A should have the character number constraints: 0 < |A| < |F|.
we can assume that a negative full form has an "invalid abbreviation" A with |A| = 0 or |A| = |F|.
contrasting
train_18082
Research in speech summarization -unlike its text-based counterpart -carries intrinsic difficulties, which draw their origins from the noisy nature of the data under consideration: imperfect ASR transcripts due to recognition errors, lack of proper segmentation, etc.
it also offers some advantages by making it possible to leverage extra-textual information such as emotion and other speaker states through an incorporation of prosodic knowledge into the summarization model.
contrasting
train_18083
Maskey and Hirschberg (2006) use an HMM to perform summarization by relying solely on prosodic features.
their model -unlike ours -is supervised.
contrasting
train_18084
In this scenario a metric such as ROUGE -which effectively measures n-gram overlap -would reward the "begin" summary for including key terms that appear several times in the gold standard summaries.
for longer summaries, where lexical variation is more pronounced, prosodic information provides a robust source of intelligence to select noteworthy utterances.
contrasting
train_18085
Their Chi-Square test gives weightage also to those documents in which the word is absent while calculating the score.
their idea of selecting hitting words considers multiple occurrences of a word in a single document as one.
contrasting
train_18086
Several studies linked the personal lexicon with health and the health-related behavior of the individual.
few text mining studies were performed to analyze opinions expressed in a large volume of user-written Web content.
contrasting
train_18087
Surgeries related to HL are the most common surgeries in North America; thus, they affect many patients and their families.
there are only a few health forums dedicated to Hearing Loss (HL).
contrasting
train_18088
Verbs are the next common in the lexicon and give good indication of subjectivity.
as verbs are used in many different forms and have many meanings, just relying on the verb polarity will misguide the prediction in cases where the verbs are used in some other senses, e.g., he uses a car is neutral, when he was used has a negative sense.
contrasting
train_18089
Takamura et al., (2007) used the Potts model for extracting semantic orientation of phrases (pair of adjective and a noun): positive, negative or neutral.
to that, we have used the Potts model for identifying the emotional class (es) of a word.
contrasting
train_18090
The same issue goes for the development set.
even if we figured out a way to split the corpus into sets of the appropriate numbers of songs while taking into consideration that the artists for each set must be unique, there are further factors that could skew the results.
contrasting
train_18091
Group B has members that describe movement from one place to another, as does Group C. Groups D and E in Table 5 faithfully replicate the Levin Classes avoid-52 and learn-14 from Table 1.
groups F and G seem to not form coherent semantic groups.
contrasting
train_18092
(Uszkoreit 2011) introduced a bootstrapping system for relation extraction rules, which achieved good performance under some circumstances.
most previous semi-supervised methods have large performance gaps from supervised systems, and their performance depends on the choice of seeds (Vyas et al., 2009;Kozareva and Hovy, 2010).
contrasting
train_18093
In this case, their settings favor instances with a positive prediction from the local classifier and a negative prediction from the global classifier, thus influencing the selection of queries.
in terms of diversity of queries, the global classifier is more capable of discovering unseen instances in the local feature space.
contrasting
train_18094
When the number of instances is large enough, the statistical model will effectively incorporate these entity type constraints as long as entity types are extracted as features.
in active learning, even with suitable training examples, we will select and present to the user some instances violating these constraints.
contrasting
train_18095
In this paper, we, similarly, present the state-of-the-art approach as a graph.
unlike Kok and Brockett (2010), we treat English phrases (instead of multilingual phrases) as nodes.
contrasting
train_18096
Although the weights of parameters carry the linguistic properties, the proposed factors could be considered separately for examining and comparing the individual effectiveness in our framework.
other factors could be taken in consideration.
contrasting
train_18097
Research has also focused on classifying polarities relative to contexts (Wilson et al., 2009).
only limited research has taken place on applying polarity classification techniques on complex domains such as the medical domain (Niu et al., 2005;Sarker et al., 2011).
contrasting
train_18098
The majority of previous work has primarily focused on clustering a static collection of short documents [Rangrej et al., 2011, Tsur et al., 2012], or on using surface features to compute pairwise similarity between microtext [Reuter et al., 2011, Li et al., 2012].
the challenge of how to effectively cluster microtext in dynamic data streams has not been well addressed.
contrasting
train_18099
Using this model, a tweet represents a data point in R^d, where d is the size of the word vocabulary and v_j is the TF-IDF weight of the j-th word in tweet m_i.
in a dynamic microtext stream, word vocabulary changes and the number of tweets increases over time, making it computationally expensive to recalibrate the inverse document frequency of TF-IDF.
contrasting