id: stringlengths 7–12
sentence1: stringlengths 6–1.27k
sentence2: stringlengths 6–926
label: stringclasses, 4 values
train_19400
For all three datasets, the verbs that were not marked as events were mostly the verbs be, have and do, which we assume to be those occurrences where they act as auxiliaries (91%, 65% and 68% of non-annotated verbs in PB/NB, FB and TE3 respectively).
we also found some surprising cases.
contrasting
train_19401
The coverage of adjectives, prepositions and adverbs is quite similar on the three datasets, as shown in Table 3.
we know that for PB/NB these are all part of phrasal verbs (cut loose, dig up), which also accounts for the high coverage of particles in PB/NB.
contrasting
train_19402
The authors of Elephant (Evang et al., 2013) made their system available at http://gmb.let.rug.nl/elephant, and deserve our gratitude for making the effort to provide clear documentation, including how to reproduce their experiments.
although a training script is provided, this script only allows training the CRF model.
contrasting
train_19403
a CRF model is trained, which might include features from an RNN language model previously trained with the same training data.
instead of using only one specific template for the CRF model, the cross-validation stage described in §4.2.
contrasting
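The record above describes training a CRF tokenizer whose feature template is chosen by cross-validation rather than fixed in advance. Below is a minimal sketch of that idea, not the actual Elephant training script; the character-window featurization, the candidate template space, and the `texts`/`labels` variables are all hypothetical.

```python
# Hedged sketch: selecting a CRF feature template by cross-validation,
# in the spirit of the record above (not the Elephant codebase itself).
from statistics import mean

import sklearn_crfsuite
from sklearn.model_selection import KFold
from sklearn_crfsuite.metrics import flat_f1_score

def featurize(chars, window):
    """One feature dict per character, using `window` characters of context."""
    feats = []
    for i, c in enumerate(chars):
        f = {"char": c}
        for off in range(1, window + 1):
            f[f"char-{off}"] = chars[i - off] if i >= off else "<s>"
            f[f"char+{off}"] = chars[i + off] if i + off < len(chars) else "</s>"
        feats.append(f)
    return feats

def cv_score(texts, labels, window, n_folds=5):
    """Mean micro-F1 over folds for one candidate template (window size)."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_folds).split(texts):
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
        crf.fit([featurize(texts[i], window) for i in train_idx],
                [labels[i] for i in train_idx])
        pred = crf.predict([featurize(texts[i], window) for i in test_idx])
        scores.append(flat_f1_score([labels[i] for i in test_idx], pred,
                                    average="micro"))
    return mean(scores)

# texts: list of character sequences; labels: per-character IOB tags.
# best_window = max([1, 2, 3], key=lambda w: cv_score(texts, labels, w))
```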
train_19404
If E and J were in perfect harmony then we would be able to pair e 1 with j 1 , e 2 with j 2 and so on.
matched documents are rarely in such close correspondence.
contrasting
train_19405
Middle Dutch shares a number of interesting phenomena with Early Modern Dutch (e.g., case marking, clitics, pronoun compounding) which are not found in Modern Dutch, therefore this tagger provided a useful starting point for manual annotation.
major differences between the two historical language varieties exist as well, which necessitates a full check on all generated tags.
contrasting
train_19406
To extend this approach, agreement measures are provided in Figure 1 for lemmas, full POS tags, main tags only, and agreement on single features instead of feature sets. (Originally, the tagging task was assigned to a pool of nine annotators, each of whom was assigned an individual set of documents as well as a selection of documents from the set of the previous annotator, for the measurement of inter-annotator agreement.)
one of the annotators (annotator e) left the pool, which means that the agreement for the pairs d-e and e-f could not be measured, resulting in a total of seven pairs in Figure 1.
contrasting
train_19407
Therefore, the established categories were selected as a framework for sociolinguistic tagging of the current corpus.
given the fact that agreement on letter goal and topic is low, the framework may need to be revised.
contrasting
train_19408
In the past two decades, a number of scholars have focused on Chinese word extraction for solving CWS or CCWS by means of various association metrics or hybrid models of several metrics (Chang and Su, 1997; Chen and Ma, 2002; Luo and Sun, 2003; Ma and Chen, 2003; Feng et al., 2004; Tang et al., 2009; Zhang et al., 2009; Zhang et al., 2010; Duan, Han and Song, 2012; Zhang et al., 2012; Mei et al., 2015; Shen, Kawahara and Kurohashi, 2016).
one big problem for collocation-based methods is that system performance depends heavily on the threshold settings: the thresholds for such association metrics are typically set heuristically or empirically to achieve high performance, which means they must be re-tuned, at extra effort, whenever the application or domain changes.
contrasting
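To make the "association metrics" and "thresholds" in the record above concrete, pointwise mutual information (PMI) is one representative metric used for this kind of word extraction; the threshold θ below is exactly the heuristically set quantity the record criticizes. PMI is an illustrative choice, not a metric singled out by the record.

```latex
% PMI of a candidate character pair (x, y); the pair is extracted as a
% word only if its score exceeds a heuristically chosen threshold \theta.
\mathrm{PMI}(x, y) = \log \frac{p(x, y)}{p(x)\,p(y)},
\qquad \text{extract } (x, y) \iff \mathrm{PMI}(x, y) > \theta
```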
train_19409
The word is a Chinese idiom whose definition is included in Baidu Baike.
with the statistical information alone, this 4-gram word is not extracted from the corpus.
contrasting
train_19410
A lot of the work in essay grading today makes use of the ASAP AEG dataset.
most of the essays only have an overall score, not attribute-specific scores.
contrasting
train_19411
Most of the essays were annotated by a single annotator.
about a sixth of them were annotated by a second annotator.
contrasting
train_19412
Exercise-related advice is relatively rare (n=40), as exercise is often recommended for most chronic diseases.
it is found from the annotation that exercise in certain contexts (e.g., in hot or humid weather, or within a certain time range of drug administration) can negatively affect well-being.
contrasting
train_19413
Persuaders who express positive emotions increase the ratio of success at persuasion more than persuaders who do not express their emotions because expressing positive emotions gives a cooperative impression to a partner (Carnevale and Isen, 1986;Forgas, 1998).
expressing negative emotions such as "anger" may wrest a concession from a partner even if the proposal from the persuader is not attractive to the partner, especially when the partner does not have any other options to choose (Sinaceur and Tiedens, 2006).
contrasting
train_19414
Emotional expressions tend to be observed in communications between people who have close relationships.
it is hard to record dialogues in such closed situations.
contrasting
train_19415
Automatically detecting emotions has also gained considerable attention over recent years, especially from text (Mohammad, 2012b; Mohammad, 2012a; Zhu et al., 2014; Kiritchenko et al., 2014; Yang et al., 2007; Bollen et al., 2009; Wang et al., 2016; Mohammad and Bravo-Marquez, 2017), but also from images (Fasel and Luettin, 2003; De Silva et al., 1997; Zheng et al., 2010).
image annotations for emotions have largely been limited to small datasets of facial expressions (Lucey et al., 2010;Susskind et al., 2007).
contrasting
train_19416
Which Emotions Apply Frequently to Art: Humans are capable of recognizing hundreds of emotions and it is likely that all of them can be evoked from paintings.
some emotions are more frequent than others and come more easily to mind.
contrasting
train_19417
On the CrowdFlower task settings, we specified that we needed annotations from ten people for each instance.
because of the way the gold instances are set up, they are annotated by more than ten people.
contrasting
train_19418
Fleiss' κ calculates the extent to which the observed agreement exceeds the one that would be expected by chance (Fleiss, 1971).
note that correcting for chance remains controversial.
contrasting
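For reference, the quantity described in the record above is the standard Fleiss' kappa, with P̄ the observed agreement and P̄ₑ the agreement expected by chance:

```latex
% Fleiss' kappa (Fleiss, 1971): chance-corrected inter-annotator agreement.
\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}
```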
train_19419
Established in 2013, this series of health-related challenges led to the preparation of several corpora, mostly for the English language, but also for other European languages.
these corpora are typically very small and available only for usage directly related to the respective task, i.e., they can neither be used later on nor are they available for the research community independent of the specific CLEF task.
contrasting
train_19420
Recent neural domain adaptation approaches also work through cross-domain embeddings to improve the cross-domain performance (Cai and Zhao, 2016;Zhang et al., 2014).
a critical examination of the underlying assumption, and their assessment in the light of naturally occurring linguistic data, reveal its inherent contradictions (Taylor, 2012).
contrasting
train_19421
In its features, it mainly follows HunToken, which is a rule-based tokenizer and sentence boundary detector for Hungarian (and English) texts.
emToken differs in several properties, e.g.
contrasting
train_19422
Previous morphological analyzers for Hungarian used various ad hoc tagsets.
the tagset used by emMorph and emLem contains tags suggested in the Leipzig Glossing Rules (Comrie et al., 2008), which are widely used by linguists.
contrasting
train_19423
The GUI-based framework of the LCM enables end-users to conduct standardized text mining workflows without any programming skills.
individual and innovative research designs often demand more flexibility which is hard to achieve with generic pre-defined workflows accessed by point-and-click GUIs.
contrasting
train_19424
The LCM is optimized for the generic processing of large amounts of text data.
for experienced researchers, it is easy to identify needs for analyses that go beyond the generic usage of text mining.
contrasting
train_19425
Word embedding models can be used to satisfy recurrent tasks in NLP such as lexical and semantic generalisation in machine learning tasks, finding similar or related words and computing semantic relatedness of terms.
building and consuming specific word embedding models require setting a large number of configurations, such as corpus-dependent parameters, distance measures, and compositional models.
contrasting
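The "large set of configurations" mentioned in the record above can be illustrated with a small gensim sketch; the parameter values are arbitrary examples of corpus-dependent settings, not recommendations from the record.

```python
# Illustrative word-embedding configuration (gensim); the values below are
# placeholders for the corpus-dependent parameters the record mentions.
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat"], ["the", "dog", "barked"]]  # toy corpus
model = Word2Vec(
    sentences,
    vector_size=300,  # embedding dimensionality
    window=5,         # context window size
    min_count=1,      # frequency cut-off (raised on real corpora)
    negative=5,       # negative-sampling rate
    sg=1,             # skip-gram (sg=1) vs. CBOW (sg=0)
)
print(model.wv.most_similar("cat", topn=2))  # nearest neighbours by cosine
```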
train_19426
(b) No query options: To extract information from UIMA documents, these documents must first be completely imported into the cache.
it is not possible to select only those UIMA documents that are required for a given application.
contrasting
train_19427
For all six systems, morphosyntactic homographs like read, live, and lives are more challenging than lexical homographs like bass.
the server model performs significantly better on morphosyntactic homographs than the embedded model, presumably due to the presence of POS tag features.
contrasting
train_19428
These issues could be caught by carefully inspecting the intermediate data between every normalization step for every new model trained.
we avoid this approach for three reasons: privacy considerations; the sheer number of language models across languages (each one with many data sets that undergo many steps of normalization); and wanting staff to be able to develop improvements to models even when they do not speak the language involved.
contrasting
train_19429
() built a dataset of 868 German noun-noun compounds, where one of the annotations quantifies the compositionality of the compound on a scale of 1 (semantically opaque) to 6 (semantically transparent).
to Kruszewski and Baroni (2014) and Schulte im Walde et al.
contrasting
train_19430
The two texts in 1a and 1b are textual paraphrases.
they include more than one 'atomic paraphrase': "magistrate" and "judge" are an instance of "same polarity substitution", while "A federal magistrate ... ordered" and "Zuccarini was ordered by a federal judge..." are an instance of "diathesis alternation".
contrasting
train_19431
In a similar way, in 11b, "boats" has the meaning "all boats".
in 11a, "boat" can have the meaning "one particular boat", thus the inflectional change "boat – boats" is not sense-preserving.
contrasting
train_19432
Most verbs are also far less frequent than common negation words, making individual verbal shifters seem less important.
overall, verbal shifter lemmas occur 2.6 times as often as negation words (see §4.).
contrasting
train_19433
Further, the notion of shifting is most prototypically used for situations where a discrete polarity switch occurs between the classes positive, negative and neutral.
for other authors, including Polanyi and Zaenen (2006), intensification (e.g.
contrasting
train_19434
The most complex negation lexicon for sentiment analysis (Wilson et al., 2005) includes a mere 12 verbal shifters.
our resource covers over 1200 verbal shifter lemmas.
contrasting
train_19435
A verbal shifter usually only affects the parts of a sentence that are syntactically governed by the verb through its valency.
not every argument of a verbal shifter is subject to polarity shifting.
contrasting
train_19436
A common approach to understanding complex controversial issues is to hire experts and conduct surveys.
such an approach has inherent limitations: the survey creators inadvertently bring in biases, these surveys often fail to cover all relevant aspects, and the process is time-intensive and expensive.
contrasting
train_19437
These new natural language processing tasks are a way to summarize information about the controversial issues without necessarily having to do the crowdsourcing described above.
the crowdsourced data will serve as a source of reference (gold) labels for the evaluation of these NLP algorithms.
contrasting
train_19438
To calculate these scores, we reuse the formula shown in equation 2.
the percentages are now calculated only on the set of persons that have agreed to (support score) or disagreed with (oppose score) the assertion.
contrasting
train_19439
For the issues Climate Change, Gender Equality, Media Bias, Mandatory Vaccination, and Obama Care the scores are even below 0.5, which means that on average there is more consensus than dissent in judging the assertions on these issues.
as shown by the issues Same-sex Marriage (0.66), Marijuana (0.69) and Vegetarianism & Veganism (0.73), our data also contains more polarizing issues.
contrasting
train_19440
We believe that, when a newspaper has a clear political bias (left-wing or right-wing), most of its readers will share the same ideologies and they will express similar emotions in their replies.
when a newspaper has an intermediate position (centrist), the opinions expressed by its readers will be diverse as well as their emotions.
contrasting
train_19441
As such, it has become imperative that such behaviours be recognised and dealt with using automatic or semi-automatic means.
as much as we want to deal with this automatically, it is not easy to recognise these behaviours automatically, especially using traditional dictionary look-up or similar methods.
contrasting
train_19442
Additionally, all the categories were defined more rigorously, thereby reducing the scope for different interpretations by the annotators.
given the fact that aggression is a pragmatic phenomenon, the guidelines still gave annotators the flexibility of making judgments based on their interpretation, instead of fixing the structures, lexical items, etc. for each effect.
contrasting
train_19443
Intrinsic evaluation tries to directly quantify how well various kinds of linguistic regularities can be detected with the model independent of its downstream applications (Baroni et al., 2014;Schnabel et al., 2015).
the quality of a word vector may be assessed by extrinsic evaluation, i.e., by its performance in downstream tasks, measuring changes in task-specific performance metrics.
contrasting
train_19444
For example, different relation categories benefit from different context window sizes in different ways: a model with larger context windows tends to capture the antonymy relation, while with smaller windows it learns the synonymy relation of words.
negative sampling and frequency cut-off parameters have different impacts in the three relation categories.
contrasting
train_19445
It shows a large improvement in all evaluations when the dimensionality is increased.
the improvement peaks at 400 for the synonymy and antonymy predictions and 500 for alternative form.
contrasting
train_19446
For example, the statistical methods extracted related terms such as m`lh 4 (degree) and clsiws (Celsius) for the target term TmprTwrh ((physics) temperature).
these related terms do not appear in Hebrew WordNet and thus would be judged irrelevant.
contrasting
train_19447
Due to the problematic gold-standard (Hebrew Wordnet), we do not have a decisive conclusion on the best configuration for term representation.
we did demonstrate the importance of the methodological scheme by showing that the default configuration is definitely not the optimal one.
contrasting
train_19448
(2009), linguistic features were not taken into account for the selection of the samples.
to the corpus of the Baker study that consisted of speech based on a pre-written dialogue, the corpora for this project contained mostly free speech.
contrasting
train_19449
As a consequence of not controlling the linguistic features within the single samples, the difficulty of locating a specific sample might also have differed depending on the number of salient features of a dialect that appeared in a sample, especially in the case of lexical or certain phonological cues.
as all the samples are longer than 10 seconds, it is very unlikely that no salient features would be included in any of the samples.
contrasting
train_19450
Conversational communication between bilingual speakers represents the dynamic nature of code-mixing nearly in its entirety.
there are large sections of print media now that employ recurrent patterns of code-mixing, if not switching.
contrasting
train_19451
A pivot language, which is usually English, can bridge the source and target languages and make translation possible.
the domains of these two are often different, which results in low performance and even ambiguities.
contrasting
train_19452
As shown in Figure 2, we develop an end-to-end system to automatically build our parallel corpus from bilingual websites.
there are still some challenges: 1) how to identify parallel/comparable news articles (bilingual document alignment tasks); 2) bilingual news articles are not direct translations of each other but are written separately by Chinese authors and Portuguese authors covering the same story.
contrasting
train_19453
As shown in Table 1, the vocabulary size is very large for both corpora.
nMT models typically operate with a fixed vocabulary, which results in the OOV problem.
contrasting
train_19454
Standard image search engines are limited to pictures that already exist in their databases, biasing them toward retrieving images of mundane and real-world scenarios.
a scene generation system like WordsEye can illustrate a much wider range of images, allowing users to visualize unusual and fantastical scenes.
contrasting
train_19455
The annotation consists of enriching the documents with summaries, keywords or participant names in order to satisfy the complex queries elaborated by INA customers or researchers within media databases.
due to the increasing number of documents and the limited number of annotators, many documents remain undocumented or only partly documented.
contrasting
train_19456
The sign test indicates that there is a statistically significant difference between these models (p = 5.7e-202).
the performance of the TE label prediction model trained and tested on the SICK corpus is close to the performance of the baseline model (56.7%).
contrasting
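A sign test of the kind reported in the record above can be run as follows; this is a generic sketch with toy per-item outcomes, not the authors' evaluation code.

```python
# Generic sign test: does one model answer items correctly significantly
# more often than another? Toy data, not the experiment from the record.
from scipy.stats import binomtest

model_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = item predicted correctly
model_b = [0, 1, 0, 0, 1, 0, 0, 1]

wins_a = sum(a > b for a, b in zip(model_a, model_b))
wins_b = sum(b > a for a, b in zip(model_a, model_b))
n = wins_a + wins_b  # ties are discarded by the sign test

print(binomtest(wins_a, n, p=0.5).pvalue)  # two-sided by default
```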
train_19457
In (Zuanović et al., 2014), the authors translated a small portion of the English analogy dataset to Croatian in order to evaluate their neural-based model.
this translation of the syntactic analogy reasoning dataset covered only a total of 350 questions, based on the positive-comparative form relationship in adjectives.
contrasting
train_19458
Some approaches started to enrich the text representation by exploiting its semantic meaning using Latent Semantic Analysis (LSA) (Choi et al., 2001).
these approaches require a very large corpus, and consequently the pre-processing effort required is significant.
contrasting
train_19459
This is because at each stage in the algorithm the proximity of the newly merged object to all other available segments is computed.
in C-HTS, we apply hierarchical agglomerative clustering at the text level.
contrasting
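The merge-and-recompute behaviour described in the record above is the defining step of hierarchical agglomerative clustering; a generic run over placeholder segment vectors looks as follows (this is not the C-HTS implementation).

```python
# Generic hierarchical agglomerative clustering over text-segment vectors;
# each merge recomputes the proximity of the new cluster to the remaining
# segments. Random vectors stand in for real segment representations.
import numpy as np
from scipy.cluster.hierarchy import linkage

segments = np.random.rand(8, 50)  # 8 hypothetical segments, 50-dim each
tree = linkage(segments, method="average", metric="cosine")
print(tree)  # each row: (cluster_a, cluster_b, distance, new_cluster_size)
```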
train_19460
We argue that this is because in the Moonstone dataset the boundary for each level, in each document, was placed by a number of different annotators, hence, there can be mixed agreement between those annotators on the correct placement of the level boundary.
in the Wikipedia dataset, the original article hierarchy (from which the levels are obtained) was created and updated with the agreement of the Wikipedia article contributors.
contrasting
train_19461
In HAPS, the desired number of levels needs to be passed as a parameter to the algorithm.
in C-HTS, the number of levels in the output structure does not need to be known in advance, because the structure produced by C-HTS depends on the coherence between the atomic units of the text.
contrasting
train_19462
Automatically scoring metaphor novelty is an unexplored topic in natural language processing, and research in this area could benefit a wide range of NLP tasks.
no publicly available metaphor novelty datasets currently exist, making it difficult to perform research on this topic.
contrasting
train_19463
Systems capable of distinguishing between different grades of metaphor novelty could thus learn ways to assess cognitive health based on a user's perceived comprehension of different metaphors.
data scarcity currently acts as a barrier to research activity in these promising application areas.
contrasting
train_19464
The idea to use distributional semantics to find hypernyms seems natural and has been widely used.
the existing methods used distributional, yet sense-unaware and local features.
contrasting
train_19465
words "apple" and "mango" have distinct "fruit" senses, represented by a list of related senses.
sense clusters represent a global and not a local clustering of senses, i.e.
contrasting
train_19466
Similarly to the induced word senses, the semantic classes are labeled with hypernyms.
to the induced word senses, which represent a local clustering of word senses (related to a given word), semantic classes represent a global sense clustering of word senses.
contrasting
train_19467
Distributional representations of rare words, such as "mangosteen" can be less precise than those of frequent words.
co-occurrence of a hyponym and a hypernym in a single sentence is not required in our approach, while it is the case for the path-based hypernymy extraction methods.
contrasting
train_19468
Secondly, for the unpruned model (t = 0), edge weights based on counts worked better than logarithmic weights.
when pruned (t > 0), logarithmic edge weighting shows better results.
contrasting
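The two weighting schemes contrasted in the record above can be written down compactly; the function below is a hypothetical illustration, with t as the pruning threshold on edge counts.

```python
# Hypothetical illustration of the contrast in the record: raw-count vs.
# logarithmic edge weights, combined with a pruning threshold t.
import math

def edge_weight(count, t=0, logarithmic=False):
    if count < t:
        return None  # pruned: edge dropped from the graph
    return math.log1p(count) if logarithmic else count
```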
train_19469
The improvements in recall are due to the fact that, to label a cluster of co-hyponyms, it is sufficient to look up hypernyms for only a fraction of the words in the cluster.
binary relations will be generated between all cluster hypernyms and the cluster words, potentially generating hypernyms missing in the input database.
contrasting
train_19470
(2008) recognize 23% of relation mentions in a biomedical dataset as inter-sentence relation instances.
a major bottleneck for investigating inter-sentence relation extraction is the absence of a significantly large dataset with inter-sentence relation mentions.
contrasting
train_19471
(2017) have investigated inter-sentence relation extraction on a large dataset.
the study is focused on a specialised domain such as drug-gene interaction.
contrasting
train_19472
For instance, as seen in Listing 3, the sample sentence pair for the relation business/company/industry does not provide an explicitly visible relationship between the entities for the said relation.
the seed instances used identify "Google" as belonging to the "Search" industry, resulting in this sentence pair being obtained as a suitable candidate for inter-sentence relation extraction.
contrasting
train_19473
These results show that these models are able to easily learn from the available features (words) for these relations.
as seen in Table 4, there are a number of relations, where the models achieve a significantly lower F-score.
contrasting
train_19474
(2015b) report an F-score of 0.82 by training an LSTM model using word embeddings.
instead of using all the words between the entities in the sentence, Xu et al.
contrasting
train_19475
Naturally, users type without explicit accents and rely on auto-completion systems.
these systems are usually simple, unigrambased, and based on the word form ambiguity for a given language (cf.
contrasting
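A "simple, unigram-based" system of the kind the record above describes can be sketched as a lookup from accent-stripped forms to their most frequent accented variants; the corpus and words below are toy examples.

```python
# Sketch of unigram accent restoration: map each accent-stripped form to
# its most frequent accented variant observed in a corpus. Toy data only.
import unicodedata
from collections import Counter, defaultdict

def strip_accents(word):
    return "".join(c for c in unicodedata.normalize("NFD", word)
                   if unicodedata.category(c) != "Mn")

corpus = ["élève", "eleve", "élevé", "élève"]  # toy frequency counts
table = defaultdict(Counter)
for w in corpus:
    table[strip_accents(w)][w] += 1

def restore(word):
    candidates = table.get(strip_accents(word))
    return candidates.most_common(1)[0][0] if candidates else word

print(restore("eleve"))  # -> "élève", the most frequent variant
```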
train_19476
Also, first-order syntactic features (extracted directly from the dependency relations themselves) and secondorder syntactic features (the partial analysis of a word, when already linked in the partial tree) complete the linguistic information usually exploited for automatic parsing.
additional features, whenever available, are easy to integrate into data-driven systems, and our initial intuition was that semantic features, like wordnet relations, sub-categorization frames and semantic classes, can increase parsing performance.
contrasting
train_19477
This classifier beats the majority baseline overall and performs relatively well in most areas.
it has a very low recall when identifying None type sluices.
contrasting
train_19478
Among all the multi-modal resources for emotion detection, textual datasets are those containing the least information beyond semantics, and hence are adopted widely for testing the developed systems.
most textual emotion datasets consist of emotion labels for only individual words, sentences or documents, which makes it challenging to study the contextual flow of emotions.
contrasting
train_19479
The IEMOCAP database (Busso et al., 2008), to the best of our knowledge, is the only dataset that provides emotion labels for each utterance.
iEMOCAP was created by actors performing emotions, and hence carries the risk of overacting.
contrasting
train_19480
In academic research we are often focused on a highly specific problem, and we can make extensive assumptions about aspects that are not in the immediate center of attention.
industrial endeavors require a more holistic view; they need to work with given practical settings and address specific requirements of live systems.
contrasting
train_19481
Relatively recent orthographic reforms have made the modern Javanese and Sundanese orthographies highly regular and phonemically transparent.
in conventional usage several ambiguities arise.
contrasting
train_19482
(1987), warrant is indistinguishable from data.
argument schemes capture specific patterns of argument that are in use; each argument scheme specifies specific premises for the given conclusion, as well as critical questions that can be used to examine the strength of the given argument (Walton, 1996;Blair, 2001).
contrasting
train_19483
The annotator that has chosen DEFINITIONAL has most likely interpreted the premise as a categorization.
the categorization process is not at the basis of the inference that allows to support the conclusion.
contrasting
train_19484
A sad highlight: Losing 60:87 to the weakish Mannheimers in a sold-out stadium.").
due to the scarcity of data, it is not possible to statistically evaluate the significance of this preferential association.
contrasting
train_19485
Discourse connective prediction is related to the NLI problem since entailment and contradiction can be explicitly indicated by certain connectives (for instance, therefore and by contrast, respectively).
the larger number of classes makes connective prediction more challenging.
contrasting
train_19486
In a study of whether phonological inventories have become more or less complex over time, Marsico (1999) shows that languages dating back as far as 10,000 years are equally-complex in terms of their number of segments, consonant/vowel ratio, average number of consonants and vowels, and frequency hierarchy of the segments.
marsico (1999) also notes that modern languages tend to have slightly more consonants today than their ancestors did in the past.
contrasting
train_19487
For example, a vowel space can be expanded straightforwardly by the contrastive features length and nasalization.
length and labialization, palatalization, and velarization, can expand consonant inventories.
contrasting
train_19488
In addition, especially for low-resource languages, names can be an invaluable source of information for learning morphemes and their semantics.
finding their optimal translation is a challenging task for various reasons, including low occurrence counts and the fact that certain names have high translation entropy or are translated into their localized proper names.
contrasting
train_19489
For example, the English name David is best aligned to the lemma form Depito in the Ankave language.
david is also aligned to words of the form depito + some affixes.
contrasting
train_19490
The digitisation efforts have made the Sanskrit manuscripts easily available in the public domain.
the accessibility of such digitised manuscripts is still limited.
contrasting
train_19491
when a document about a hurricane says that "housing across the island was destroyed", annotators should label a shelter need).
inference is a slippery slope, and too much use of inference can lead annotators to create frames for all possible needs typically associated with an incident, even when they are not implied by the document (e.g.
contrasting
train_19492
The high F-scores indicate that the meaning representations are often syntactically very similar, if not identical.
there is a considerable subset of meaning representations which are different from the English ones, indicating that there is at least a slight discrepancy in meaning for those translations.
contrasting
train_19493
A common trend across evaluations of WordNets extracted from lexical resources is that while the synsets themselves are reasonably precise, recall is often very low; that is, although extracted synsets are accurate when compared to reference WordNets such as PWN, not enough synsets are actually being extracted using automatic methods.
these results do not necessarily paint the full picture: there are few agreed principles or common guidelines for evaluating extracted synsets, and it is often difficult to decide what constitutes a correctly or incorrectly extracted synset.
contrasting
train_19494
More recently there have been several purely data-driven end-to-end approaches to sign recognition from continuous signing based on Recurrent Neural Net (RNN) architectures (Cui, Liu, and Zhang, 2017;Koller, Zargaran, and Ney, 2017).
the performance of these image-based approaches is held back by limitations in the datasets and the fact that they do not integrate linguistic knowledge or perform 3D analysis.
contrasting
train_19495
The handshape CNN returns top-1 accuracy of 70.1%; top-5 accuracy reaches 92.3%.
we use the entire set of handshape probabilities from the output of the neural net as features for sign recognition.
contrasting
train_19496
For example, learning performance was better for the Genbase dataset (LC: 1.252, LD: 0.046) as compared to the Medical dataset (LC: 1.245, LD: 0.028), where they had similar cardinalities but the Medical dataset was less dense.
performance was better for the Emotions dataset (LC: 1.869, LD: 0.311) as compared to the Yeast dataset (LC: 4.237, LD: 0.303), where they had similar density but cardinality of the Yeast dataset was higher.
contrasting
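For reference, the label cardinality (LC) and label density (LD) statistics quoted in the record above follow the standard multi-label definitions, with Y_i the label set of example i and L the full label set; as an arithmetic check, Genbase's LC/LD = 1.252/0.046 implies roughly 27 labels.

```latex
% Standard multi-label statistics: mean labels per example, and the same
% quantity normalised by the size of the label set.
\mathrm{LC}(D) = \frac{1}{N} \sum_{i=1}^{N} \lvert Y_i \rvert,
\qquad
\mathrm{LD}(D) = \frac{\mathrm{LC}(D)}{\lvert L \rvert}
```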
train_19497
Movie and TV subtitles are a highly valuable resource for the compilation of parallel corpora thanks to their availability in large numbers and across many languages.
the quality of the resulting sentence alignments is often lower than for other parallel corpora.
contrasting
train_19498
Web 2.0 has brought with it a wealth of user-produced data revealing people's thoughts, experiences, and knowledge, which is a great source for many tasks, such as information extraction and knowledge base construction.
the colloquial nature of the texts poses new challenges for current natural language processing techniques, which are better adapted to the formal form of the language.
contrasting
train_19499
It is common belief that a "他" ("him" or "he") is dropped from the sentence.
if restored, the resulting sentence "我请他他吃饭。"(I invite him he eat meal, i.e., "I invite him to eat a meal.")
contrasting