Columns:
  id: string (length 7 to 12)
  sentence1: string (length 6 to 1.27k)
  sentence2: string (length 6 to 926)
  label: string (4 classes)
train_100400
In order to avoid this, the two coding systems (MSD and KR) were harmonized and their basic principles were also made compatible.
of the morphological analysis, pairs of lemmas and morphological codes are provided for each word.
neutral
train_100401
which means that most of the suffixes exist in two different forms -one with a front vowel and another one with a back vowel -and it is the vowels within the stem that determine which form of the suffix is attached to the word.
the codes Nc-sd and Nc-sd---s3 will be reduced to the same form since there is no such Hungarian lemma that would have the same word form for a dative singular and a dative singular with a third person singular possessor (and thus, the POS-tagger would not have to choose between these possibilities).
neutral
train_100402
Concatenation In this classification, two or more words were connected to each other to form one token.
mADA has one exception to this rule.
neutral
train_100403
Then, we will retrain the Stanford tagger on Arabic tweets since its speed is ideal for the tweets domain and it is the only retrainable tagger.
we used Twitter Stream API to crawl Twitter by setting a query to retrieve tweets from the Arabian Peninsula and Egypt by using latitude and longitude coordinates of these regions since Arabic dialects in these regions share similar characteristics and they are the closest Arabic dialects to MSA.
neutral
train_100404
Also, accuracy calculated in this manner is not ideal and does not provide a reliable result since each calculation only factors in true positives and true negatives per class.
in this case, we only focused on accuracy, starting with the userbased evaluation as the authoritative score.
neutral
train_100405
In addition we also intend to improve the grammar rules to cover more of the exceptions and characteristics of the Persian language.
based on this comparison, we calculated two measures: 1.
neutral
train_100406
Due to the limitations discussed above, we performed a second evaluation.
in this second phase the JAPE rules follow this sequence: 1.
neutral
train_100407
It may also be caused by the absence of high granularity features of our dictionary entries.
on the other side, the representation of terminological data in a standard format allows the integration and merging of terminological data from multiple source systems, while improving terminological data quality and maintaining maximum interoperability between different applications.
neutral
train_100408
Now the essence of the EM algorithm (Algorithm 1) can be described as follows.
observing a small value of W/(M + N) is one of the indicators in favor of parallelism of two pages.
neutral
train_100409
Although the idea of receiving a direct response to an information need sounds very appealing, CQA websites also involve risk as the quality of the provided information is not guaranteed.
we test the influence of question tags, length of the question title and body, presence of a code snippet and the user reputation on question quality.
neutral
train_100410
The SENTIPOLC dataset is made of a collection of Tweet IDs, since the privacy policy of Twitter does not allow sharing the text of Tweets.
in order to compare our system with the best ones of SENTiPOLC, beside using the same dataset, we adopted the same experimental framework.
neutral
train_100411
The domain dependent group includes Word-Based and Synsets features described in Section 4.1 and 4.2 often used in text classifications and topic recognition tasks.
detecting if the Tweet is subjective before deciding if it is ironic, as irony implies subjectivity).
neutral
train_100412
In order to compare our system with the best ones of SENTIPOLC, beside using the same dataset, we adopted the same experimental framework.
in the SENTIPOLC task domain dependent features were relevant, and detecting the topic of a specific class was important.
neutral
train_100413
The corpus of study was collected from Arabic Wikipedia through Arabic kiwix 1 tool.
this system consists of a tokenizer, a morphological analyzer and a NE finder.
neutral
train_100414
The second dictionary contains the last names.
figure 7, the transducer treats exceptional cases in the corpus of study.
neutral
train_100415
They are distributed as in Table 1.
we find that the results for the proposed method are motivating.
neutral
train_100416
Finally, we also wished to accommodate systematic errors made by immigrants or foreign language learners in Denmark, in particular endings errors due to category confusions 5 (e.g.
we therefore modified DanGram's analysis module to recognize and mark this kind of error.
neutral
train_100417
This problem is often expressed as the ultimate objective, finding the author.
the corpus main characteristics are drawn in Table 2. LIB contains the same number of authors as EBG, but the number of texts bound to each author is higher (31.2 ± 4.2 texts per author in LIB, 15.8 ± 2.6 in EBG).
neutral
train_100418
Event Detection (ED), one aspect of Information Extraction, involves identifying instances of specified types of events in text.
even if this pattern did not appear in the training data, adding it during pattern expansion will do little to improve event classifier accuracy because there are many Die events in the training data whose trigger is the verb "die".
neutral
train_100419
Much of the research on ED has been based on the specifications of the 2005 ACE [Automatic Content Extraction] event task 1 , and the associated annotated corpus.
regarding the correctness criteria, following the previous work (Ji and Grishman, 2008;Liao and Grishman, 2010;Ji and Grishman, 2011;Li et al., 2013), a trigger candidate is counted as correct if its event subtype and offsets match those of a reference trigger.
neutral
train_100420
(2013) implements a joint model via structured prediction with crossevent features.
most words have multiple senses and so may be associated with multiple types of events.
neutral
train_100421
From a practical point of view in software application (real scenario) for the algorithms we also do not have the assurance that all documents given as examples of an author, have actually been written by the author in question.
one of the principal evaluation labs for the dissemination, experimentation and collaboration in the development of methods for the authorship analysis is found in the PAN 1 lab associated to CLEF.
neutral
train_100422
In Table 3 we report the precision, recall and F-score for the prediction task.
it assesses the level of readability accounting for the number of syllables per word (as an approximation of the difficulty of a word) and for the number of words per sentence (as estimation of the syntactic difficulty of a text).
neutral
train_100423
We extract randomly from our dataset 1,000 English original sentences and 1,000 sentences translated into English 3 .
we conclude that readability features cannot discriminate between original texts and translations significantly better for some of the source languages than for the others.
neutral
train_100424
Brown clustering (Brown et al., 1992) uses distributional information to group similar words.
for the social media data (SM), we observe unstable quality for large |T | (Table 3).
neutral
train_100425
Datasets We evaluate Brown tuning using two text types.
we introduce an <EOS> marker after each tweet to break bigrams.
neutral
train_100426
Our findings suggest that a number of factors other than content coverage are important to consider when it comes to summarizing opinions from social media.
part-of-speech based grammatical features have been widely used in text quality prediction (Feng et al., 2010;Dell'Orletta et al., 2014).
neutral
train_100427
the Food category includes Dishes (pudding), but also Animals, Vegetables, Insects, Fruits, etc.
proper nouns and multi-word expressions were filtered out, as the techniques presented in this paper target singleword common nouns.
neutral
train_100428
This work is the first to use semantic preferences from PDEV for ontology population from the web, therefore it is still work in progress.
for each ST and each verb unambiguously taking the ST as subject/object, all verb occurrences were extracted together with their direct syntactic dependents, as well as dependents indirectly connected to the verb via coordination with a direct dependent.
neutral
train_100429
For these 2,000 affectations, the expert confirms that 232 incorrect affectations and 140 missed ones are detected.
at the end of this step, for each processed Context the syntactic behaviour is identified.
neutral
train_100430
For instance, prior experiments on the WCL dataset showed results ranging from F=54.42 to F=75.16 (Boella et al., 2014).
in relation to this, we also performed experiments with a development set automatically constructed from the Web, but due to lack of preprocessing for noise filtering, results were unsatisfactory and therefore unreported in this paper.
neutral
train_100431
It has received notorious attention for its potential application to glossary generation (Muresan and Klavans, 2002;Park et al., 2002), terminological databases (Nakamura and Nagao, 1988), question answering systems (Saggion and Gaizauskas, 2004;Cui et al., 2005), for supporting terminological applications (Meyer, 2001;Sierra et al., 2006), e-learning (Westerhout and Monachesi, 2007), and more recently for multilingual paraphrase extraction (Yan et al., 2013), ontology learning (Velardi et al., 2013) or hypernym discovery (Flati et al., 2014).
we consider a context window of [-2,2].
neutral
train_100432
A feature targeting the context of a mention's head to model selectional preference.
the Berkeley System expands the feature space by feature conjunctions: If a pairwise feature f fires for current mention m c and antecedent mention m a , features f ∧ type(c) and f ∧ type(c) ∧ type(a) are also activated, where type(•) returns a mention type literal based on the head's POS.
neutral
train_100433
We use index k to range over multiple relations between the nodes of a parse tree for a pair of sentences.
this discourse structure needs to include anaphora, rhetoric relations, and interaction scenarios by means of communicative language (Galitsky and Kuznetsov, 2008).
neutral
train_100434
Since important phrases can be distributed through different sentences, one needs a sentence boundaryindependent way of extracting both syntactic and discourse features.
the one of the tree kernel based methods improves as the sources of linguistic properties are expanded.
neutral
train_100435
Intuitively, we would like the training algorithm to fit the vectors so that v_c(c) is a high number if we are likely to see c near w, and a low number otherwise.
when the algorithm encounters a word w, it first represents the full context window by building a sum v_c of the context vectors of the words appearing in the window.
neutral
train_100436
Although the lack of context to describe the actual usage of word makes them unsuitable for word sense evaluation, they have been used to evaluate sense-aware vector-space models (Reisinger and Mooney, 2010;Neelakantan et al., 2014), so we include a comparison for completeness.
, w_{t+R_t}} to create a combined context: C = C_t ∪ C_sg.
neutral
train_100437
In all 291 cases две (meaning 'two' for feminine and neuter gender) was annotated as Mc-pi.
the corrections made to the corpus are linguistically motivated and linguistically motivated corrections are needed for further progress (Manning, 2011).
neutral
train_100438
There have been several efforts on social media text POS tagging in recent years, but almost exclusively on Twitter and mostly for English (Darling et al., 2012;Owoputi et al., 2013;Derczynski et al., 2013) and German (Rehbein, 2013;Neunerdt et al., 2014).
when working with Facebook messages, we found several long posts, with a high number of code alternation points (6-8 alternation points are very common).
neutral
train_100439
In addition to preparing a dataset of annotated tweets, we further focus on creating sentiment polarity lexicons for Macedonian.
in both tasks, the winning systems benefited from building and using massive sentiment polarity lexicons (Mohammad et al., 2013).
neutral
train_100440
Turney's PMI-based approach further serves as the basis for two popular large-scale automatic lexicons for English sentiment analysis in Twitter, initially developed by NRC for their participation in SemEval-2013 (Mohammad et al., 2013).
we developed a corpus of tweets annotated with tweet-level sentiment polarity (positive, negative, and neutral), as well as with phrase-level sentiment, which we made freely available for research purposes.
neutral
train_100441
4 shows that the ATP assigned (correctly) keywords with negative tonality only to 57.
the semantic information, which is saved up in the metadata, enables the development of various searching strategies that rely significantly on automatic text processing, lexical hierarchies and information search techniques.
neutral
train_100442
At this point, we can only note that we used SVM as the basic underlying classifier in our classification and regression experiments, but we used logistic regression as the basis for our ordinal regression.
we can see in Table 2 that the best results are achieved for regression, where the mean squared error is within half a point away for the 11-way and the 5-way class inventories, and it is about four times lower for the 3-way one.
neutral
train_100443
As the small amount of available training data limits the development of robust systems to automatically detect comparisons, this paper investigates how to use semi-supervised strategies to expand a small set of labeled sentences.
as the unlabeled expansion data, we use a set of 280,000 camera review sentences from epinions.com.
neutral
train_100444
Benthem (1983) uses interval as something that is between boundaries.
recognition and classification of temporal expressions; available classes: DATE, TIME, DURATION and SET) and 1-class (boundaries recognition only, all classes cast to a single class called timex).
neutral
train_100445
We can see that adding new features improved F 1 for each model and for each match evaluation.
allen (1995) finds interval temporal expressions in Benthem's meaning denoted by the term duration.
neutral
train_100446
We examined to see if the existing corpora were useful with the corpus that we built using domain adaptation and showed that the filtering of the source data assisted in the domain adaptation work.
it is difficult to use these types of methods in our system because the anime-related NEs such as the titles of an anime often do not match the patterns.
neutral
train_100447
We consider our system to be a part of the Closed sub-task, which is the 11-way classification task using only the TOEFL11 data for training.
in combination with other families, it finally appeared that the best performance was achieved with the suffixes with the length of 4, which was found using the cross-validation on the training data set.
neutral
train_100448
It may be relevant to assign to words some information in the form of finite sets of values.
roughly speaking, in the context of sentiment analysis of a given text, combining polarizations of contained words means adding such normed vectors.
neutral
train_100449
The medical computer systems allow the archiving of much and varied information (for example medical records, results of medical analyses, X-rays and radiological reports…).
although this network is general, it contains many specialty data, including medicine/radiology, which we have added within the framework of IMaIOS project.
neutral
train_100450
(2009) seem to contradict this hypothesis showing that corpus-based approaches can be as good at identifying similarity (when the right model is based on enough data).
the distance in number of nodes) from s 1 to s 2 plus one.
neutral
train_100451
We are grateful to the anonymous reviewers for their valuable feedback on an earlier version of this paper.
the combinators contain the functional application, coordination, composition, type-raising and type-shifting.
neutral
train_100452
For GeoQuery, they include ZC07 (Zettlemoyer and Collins, 2007), λ-WASP (Wong and Mooney, 2007), UBL (Kwiatkowski et al., 2010) and FUBL (Kwiatkowski et al., 2011).
CCG is a linguistic formalism that tightly couples syntax and semantics (Steedman, 1996; Steedman, 2000).
neutral
train_100453
The Geo-Query database contains only a single geography domain, 7 relations, and 698 total instances.
this algorithm is online, repeatedly performing both lexical expansion (Step 1) and parameter update (Step 2) procedures for each training example.
neutral
train_100454
Non-standard linguistic features have been analysed both qualitatively and quantitatively (Eisenstein, 2013;Hu et al., 2013;Baldwin et al., 2013) and they have been taken into account in automatic text processing applications, which either strive to normalise non-standard features before submitting them to standard text processing tools (Han et al., 2012), adapt standard processing tools to work on non-standard data (Gimpel et al., 2011) or, in task-oriented applications, use a series of simple pre-processing steps to tackle the most frequent UGC-specific phenomena (Foster et al., 2011).
the learning of the technical and linguistic dimensions seem to be equally hard when there are up to 100 instances available for learning, the technical dimension taking off after that.
neutral
train_100455
The first source consists of a simple match among the lemmas in T 1 and T 2: if two lemmas are equal (case insensitive), then we count it as an alignment between T 1 and T 2.
the goal is to predict the behavior of an entailment algorithm given the characteristics of both the resource and the data-set.
neutral
train_100456
For example, if a sequence of tokens T in one sentence has labelling L, then there is high probability that the same sequence in an another sentence will have the same labelling.
one direction is to decompose the task into two stages: named entity boundary detection and classification .
neutral
train_100457
The advantage of CRFs over other statistical models (like Hidden Markov Models) is that they can utilize a large set of features describing a sequence of observations.
this feature checks the number, case and gender agreement between adjacent tokens.
neutral
train_100458
We then started from a more general set L1, but good trees were hard to obtain.
this type is the most common in plWordNet (almost 50% of multiword lexical unit instances -see Figure 3).
neutral
train_100459
From Soricut and Marcu (2003).
the most prominent theory in Computational Linguistics to structure the discourse of a text is the Rhetorical Structure Theory (RST) proposed by Mann and Thompson (1987).
neutral
train_100460
The resulting probabilistic model is sparse and many equal instances may indicate different relations (classes).
also, given that the number of features will increase, feature selection may be applied to select the most informative features in each iteration of the SSNEL.
neutral
train_100461
(3 -Avg comments per publication) shows that both paid and "mentioned" trolls post more comments per publication than non-trolls.
we also plan to analyze the comment threads as a whole.
neutral
train_100462
It is interesting to see how our classifier would perform if tested on trolls with different minimum number of comments (and the corresponding number of non-trolls).
this is much of a witch hunt and despite our good overall results, the training data is not 100% reliable.
neutral
train_100463
The drawback of this method is that phrases are built in an iterative process starting from single words and joining others to them until the expected size of phrases is reached.
• To take into account long distance dependency: the phrase-based language model can easily capture the long distance relationship between the different components of the sentence.
neutral
train_100464
5: Execute 2, 3 and 4 but switch the source and the target corpora.
word-based language model is a special case of phrase-based language model if only single word phrases are considered.
neutral
train_100465
In a related example, Figure 1 shows the projections onto a 2D space 3 of the representations for the two senses ofåsna: 'donkey' or 'slow-witted person', and those of their corresponding nearest neighbors.
formally: our training objective is to maximize the sum of the log-probabilities of context words c given a sense s of the target word w plus the log-probability of the sense s given the target word: We must now consider two distinct vocabularies: V containing all possible word forms (context and target words), and S containing all possible senses for the words in V, with sizes |V| and |S|, resp.
neutral
train_100466
In these cases we evaluate our result as incorrect.
in this paper we explain the methodology and results of the experiments for the enlargement of the Croatian Wordnet using the WN-Toolkit and the derivational database CroDeriV.
neutral
train_100467
We have used a state-of-the-art word sense algorithm (Freeling+UKB), so we don't expect to make any improvement in the tagger.
this method significantly contributed to the improvement of the CroWN's coverage of lemmas from various Croatian corpora.
neutral
train_100468
Negation and modality triggers are identified and their scope is determined (Rosenberg et al., 2012) in order to extract the context-aware sentiment association values 4 with PMI for unigrams, bigrams, and dependency triples (type-governor-dependent).
for a term to fall into the positive categories, it has to occur at least twice as often in positive tweets as in negative tweets, thus positive terms have association scores greater than 1.
neutral
train_100469
Tweets use more informal and non-standard language than other text forms posing additional challenges.
negation does not always reverse the effects of the sentiment carriers, as the case of judgements illustrates: This isn't awful.
neutral
train_100470
Explicit attributes are the attributes mentioned by the user in the NL query text.
(e.g., the word each may trigger a GROUP BY clause).
neutral
train_100471
We used Conditional Random Fields (Lafferty et al., 2001) for the machine learning task.
the attribute professor name is an implicit attribute.
neutral
train_100472
In this representation, each term is a product of p and typical tf-idf.
the list contains only those VNCs whose frequency was greater than 20 and that occurred at least in one of two idiom dictionaries (Cowie et al., 1983;Seaton and Macaulay, 2002).
neutral
train_100473
For our experiments we only use VNCs that are annotated as I or L. We only experimented with idioms that can have both literal and idiomatic interpretations.
this 200 dimensional vector space provides a basis for our experiments.
neutral
train_100474
A word can be represented by a vector of fixed dimensionality q that best predicts its surrounding words in a sentence or a document (Mikolov et al., 2013a;Mikolov et al., 2013b).
the words associated with a literal target will have larger projection onto a target σ_vn.
neutral
train_100475
For instance, linguists rejected such word combinations as koszt zakupu 'the cost of buying something' or kolor włosów 'hair coloring', while accepted płaca minimalna 'minimum wage' or ośrodek zdrowia 'health centre'.
results of experiments are presented in Table 1.
neutral
train_100476
POS bi-gram feature set is superior among the other n-grams' (i.e.
for CI, we solve the equation , where q* is the reduced-dimensionality matrix representation of the original training, reference or test matrix.
neutral
train_100477
This is because 'extended substitution' is, on the one side, the most common type of error (such that sufficient training material is available), and, on the other side, very variant (such that it is difficult to be captured by a rule-based procedure).
firstly, a certain number of collocations was affected by spelling and inflective errors.
neutral
train_100478
The lexical features consist of the lemma of the collocate and the bigram made up of the lemmas of the base and collocate.
if this is not the case, it checks whether the gender suffix is wrong, considering it as a 'gender Creation error', as in, e.g., *hacer regala instead of hacer regalo, lit.
neutral
train_100479
In such cases, we assume that these are orthographical or morphological mistakes, rather than collocational ones.
the prototypical context is represented by a centroid vector calculated using the lexical contexts of the correct uses of the collocation found in the RC.
neutral
train_100480
(2007) have shown, it is useful to take such adverbial intensification into account when predicting document-level sentiment scores.
(2014) on ordering adjectives, maximizers and approximators were grouped as one pole of attraction for adjectives, and boosters, moderators, and diminishers as another.
neutral
train_100481
Our solution to this problem involves increasing the language model training corpus from 22M to 47M sentences, while leaving the vocabulary size at 200K units.
implementing such a converter for Latvian is challenging, because all word endings must be matched.
neutral
train_100482
For example there are results for English (Fosler-Lussier & Morgan, 1999) and Japanese (Shinozaki & Furui, 2001) that show that infrequent words are more likely to be misrecognized, which is most likely to be true also for other languages.
in order to improve ASR performance, it is important to understand which factors are most problematic for recognition, identify the types of errors, their main causes and how critical these errors are.
neutral
train_100483
Section 5 describes and evaluates the algorithm to build the hierarchy of topically focused fragments.
burst analysis is relevant in the context of hierarchical topic segmentation, but an appropriate way to exploit it has to be proposed; we address this open issue in the following section.
neutral
train_100484
We estimated "prolificness" of the authors as the ratio of the total number of an author's posts to the total number of posts of the most prolific author in the studied topics (Patil et al., 2013).
our current work studies relations between authors' activity levels and expressed sentiments in an online IVF forum.
neutral
train_100485
paraphrase, summarisation) which cannot be modelled by standard PB-SMT systems.
we focused on detailed analysis of the generated sentences in all three languages, seeking to discover what are the possibilities and limitations of our simplification models.
neutral
train_100486
2 We created Czech dictionaries and a cascaded grammar for analysis of crisis events, as well as boolean combination of keywords for recognition of event types, which was then used in the multilingual event extraction system -NEXUS (Tanev et al., 2008).
the dictionaries used by NEXUS are developed following a semi-automatic procedure described in (Tanev et al., 2009).
neutral
train_100487
There has been an increased interest in discovering the semantics of Noun Compounds (NCs 1 ).
the experiments on RELATION Nastase (2003) (Beamer et al., 2008; Turney, 2006b).
neutral
train_100488
The model employs a Best-Selection strategy which does nothing more than selecting the more suitable model for classification for any given test instance.
this experiment investigates the relevance of verb+prep paraphrases (e.g.
neutral
train_100489
One problem that we have not tackled yet is determining the language of origin for each name.
the second step was to apply the first models on the data to further clean and tidy it up (details are given in section 5.1), so that a second, better series of models is obtained.
neutral
train_100490
The discrepancies between the two human transliterations demonstrate the complexity of the task.
there are often multiple transliterations of the same name.
neutral
train_100491
This actually happens because the English word "or" is always related to the Arabic word ‫.
these words must be recombined into their surface forms.
neutral
train_100492
Hence, having assumed that paraphrased pairs would share the same content and similar syntactic structures, we decide to choose the Microsoft Research Paraphrasing Corpus (Dolan et al., 2005) which contains 5,800 sentence pairs extracted from news sources on the web, along with human annotations indicating whether each pair captures a paraphrase/semantic equivalence relationship.
compared to the baseline, the contribution of syntactic structure is not significant to the overall performance.
neutral
train_100493
One limitation is that the metaphor index needs to be derived manually and that manual annotation of metaphors can be an onerous and unreliable process.
index (6), the only index whose evaluation requires human input, estimates the difficulty posed by texts to autistic readers due to their difficulties in understanding metaphor and figurative language.
neutral
train_100494
They utilise the LCD algorithm that tests variables for dependence, independence, and conditional independence to restrict the possible causal relations (Cooper, 1997).
we search for causal relations between events without considering the object, between events given the object, and between events and states.
neutral
train_100495
We have adopted a rich annotation model in which sentiment polarity description is combined with emotion categories.
automatic annotation of lexical material is not a viable option.
neutral
train_100496
This aspect creates semantic ambiguity in which one word could imply different meanings.
this same set of tweets was used to evaluate the proposed approach and the result is presented in Section 5.
neutral
train_100497
Parsing linguistic expressions (e.g., noun phrases (NPs)) is a fundamental component in many natural language processing (NLP) tasks like machine translation (MT) or information retrieval (IR) and indispensable for understanding the meaning of complex units.
to FAST, SAST produces a more fine-grained scoring by exploiting more data.
neutral
train_100498
The task, as well as approaches proposed for determining tweet level sentiment, are nicely summarized in the survey paper of Kharde et al.
overall we see that the performances of the classifiers are all highly satisfactory.
neutral
train_100499
For example in Twitter, tweets may report about news related to recent events such as natural or man-made disasters, discoveries made, local or global election outcomes, health reports, financial updates, etc.
while defining news types, topicality plays an important role.
neutral