Columns: id (string, 7-12 chars), sentence1 (string, 6-1.27k chars), sentence2 (string, 6-926 chars), label (string, 4 classes)
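For readers who want to work with these records programmatically, here is a minimal sketch. The field names follow the schema above; the record literal is the first entry of this dump, and the helper name `group_by_label` is an illustrative choice, not part of the dataset's own tooling. Parsing the raw dump into dicts is assumed to have happened already.

```python
# One record of the dataset, using the schema fields id/sentence1/sentence2/label.
records = [
    {
        "id": "train_19200",
        "sentence1": "We implemented this by a map-reduce counting operation.",
        "sentence2": "this did not account for another problem which arose "
                     "during knowledge acquisition: knowledge sparsity.",
        "label": "contrasting",
    },
]

def group_by_label(rows):
    """Bucket record ids by their discourse label (e.g. 'contrasting')."""
    buckets = {}
    for row in rows:
        buckets.setdefault(row["label"], []).append(row["id"])
    return buckets

print(group_by_label(records))
```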
train_19200
We implemented this by a map-reduce counting operation.
this did not account for another problem which arose during knowledge acquisition: knowledge sparsity.
contrasting
train_19201
Based on the assumption that related or similar class labels tend to co-occur in documents more often than unrelated labels , we expect that vector representations for similar labels will tend to be closer to each other due to the associations via documents for which such labels co-occur.
relationships of this sort could conceivably also be discoverable simply by counting co-occurrences of class labels.
contrasting
train_19202
Indeed, our experimental results in Section 5 show that this turns out to already be a strong baseline.
this idea only works for labels that have been observed together.
contrasting
train_19203
We conclude the paper in Section 9. Our resource is publicly released at: github.com/CorentinDumont/QA_Minecraft. Minecraft (Figure 1) is a sandbox video game, which means that the player is free to choose the actions he wants to execute, and the order of these actions.
as in all video games, the number of possible actions is limited.
contrasting
train_19204
Tireless efforts of data collection of speech data and their transcriptions from the early stage of NLP and SLP researches drastically improved accuracies of a variety of NLP and SLP tasks.
we still do not have enough resources for some minor languages, and it causes less accuracies in minor languages (Kominek and Black, 2006).
contrasting
train_19205
Manual annotation and transcription of speech followed the guidelines specified for the GOS corpus (Verdonik et al., 2013).
they were reviewed in order to best meet the needs of ASR.
contrasting
train_19206
In this paper we compared the average speaking rate of each corpus by calculating total speech time.
in the case of the Japanese language, although duration of one mora is almost constant, the duration of speech changes slightly depending on the position of the phoneme (Ota, 2003).
contrasting
train_19207
Qualitative and reader-task assessments require human readers, professional judgment and experience.
quantitative assessment can be automated, thereby giving the opportunity to explore linguistic features and analyse how they reflect the complexity of the text.
contrasting
train_19208
In general, the performance of the algorithms decreased when a new class was added and the best result was obtained by the FilteredClassifier algorithm (0.73 F-measure).
the OneR algorithm got a high F-measure (close to that of the FilteredClassifier algorithm).
contrasting
train_19209
Recent works in spoken language translation (SLT) have attempted to build end-to-end speech-to-text translation without using source language transcription during learning or decoding.
while large quantities of parallel texts (such as Europarl, OpenSubtitles) are available for training machine translation systems, there are no large (>100h) and open source parallel corpora that include speech in a source language aligned to text in a target language.
contrasting
train_19210
For example, Fisher and Callhome Spanish-English corpora provide 38 hours of speech transcriptions of telephonic conversations aligned with their translations (Post et al., 2013).
these corpora are only medium size and contain low-bandwidth recordings.
contrasting
train_19211
In the ELEV domain, NMT had the lead again, as shown in Table 6.
in this case SMT means were closer to NMT's means, which left GT as the worst performing system.
contrasting
train_19212
The connection between MT output and "post-edits" is hence weaker than in (Junczys-Dowmunt and Grundkiewicz, 2016) due to the fact that our "post-edits" are actually independent reference translations of the source sentences.
the possible noise introduced by translation errors can only affect one element of our triplets.
contrasting
train_19213
Also, we expect the blacklist method to achieve a high precision, because the definition of "blacklist" is actually closely related to literal translation errors.
the drawback of this method is that the method is restricted to only one error type, literal translation errors, and will not detect any other type of errors such as deletions or repetitions of the idiom.
contrasting
train_19214
In this example shown in Table 6, the idiom meaning "full of energy" or "actively" is incorrectly translated into "have to".
as this is not a literal translation error, our blacklist method is unable to catch it.
contrasting
train_19215
Neural word embedding models trained on sizable corpora have proved to be a very efficient means of representing meaning.
the abstract vectors representing words and phrases in these models are not interpretable for humans by themselves.
contrasting
train_19216
One of the most popular semantic resources for English is WordNet (Fellbaum, 1998;Miller, 1995).
WordNet has been criticized for its too high granularity at the bottom level and its generality at the top level (Brown, 2008).
contrasting
train_19217
Similarly, the preferences for the fonts that are found to be incongruent by our method was much lower than the expected value, with an average of only 20.13%.
a detailed look at the values for individual emotion attributes reveal that the performance differs between them.
contrasting
train_19218
The code of the lexical corrector is available at the above-mentioned URL.
to standard DL we assign weights to error types: missing or superfluous diacritics only add 0.3 to the distance.
contrasting
train_19219
The LSTM-CRF models the prediction task as a classification problem, using a fixed number of non-ordinal class labels.
the LSTM-SIG model provides a continuous prediction, using a sigmoid nonlinearity to bound the prediction scores between 0 and 1.
contrasting
train_19220
In these experiments subjects accepted robots with conformed gender and personality to the respective role stereotype more.
to these studies, we choose a more content-related manipulation of personality traits and stereotypes to the robot with verbal cues.
contrasting
train_19221
The moderator had the role of leading the discussion.
this does not change the fact that this is an example of dynamic multiparty situated interactions that (Bohus and Horvitz, 2009) define as an open-world dialogue.
contrasting
train_19222
The ability to model and automatically detect dialogue act is an important step toward understanding spontaneous speech and Instant Messages.
it has been difficult to infer a dialogue act from a surface utterance because it highly depends on the context of the utterance and speaker linguistic knowledge; especially in Arabic dialects.
contrasting
train_19223
For instance, the speaker utters a sentence, which most well expresses his/her intention (act) so that the hearer can easily understand what the speaker's dialogue act is.
the speaker type Operator or Customer of the current utterance can help to determine the act of utterance.
contrasting
train_19224
As semantic equivalence is a transitive relation, it suffices that each concept, in a particular language/WordNet, is indexed with an equivalent concept, in any other language/WordNet.
in practice concepts from all languages other than English have been connected to concepts of only one other language, namely English.
contrasting
train_19225
With regard to authorization, both approaches use a rolebased access management based on Access-Control-Lists (ACLs).
there are differences in the use of ACLs, and where they are stored.
contrasting
train_19226
Game will ask players to listen to short audio clips and identify the language spoken.
our version will improve tracking, language choices, educational potential and, the ability to collect new judgements.
contrasting
train_19227
If a dimension were found not to be relevant to any paper in natural language processing, that would constitute evidence that it is not a valid dimension, or at least not a very useful one.
if an analysis of publications in natural language processing resulted in very disparate aspects of papers being lumped into the same dimension, that would be consistent with the hypothesis that the dimension in question needed to be split into finer-grained categories.
contrasting
train_19228
More generally, they lie at the very heart of evaluation in natural language processing, where the most common trope is to compare the performance of one system as measured by some figure of merit to that of another (Resnik and Lin, 2010).
with a conclusion, a finding is a repeatable discovery, whereas a conclusion is not-it is instead a broader statement inferred (justifiably or not) from one or more findings.
contrasting
train_19229
Findings The finding that the frequencies of explicit phrasal negation in the scientific journal articles were normally distributed was not reproduced.
the finding that the mean of the frequencies of explicit phrasal negation in the scientific journal articles was statistically significantly lower than the mean in the clinical documents was reproduced.
contrasting
train_19230
Further, different words can convey affect to various degrees (intensities).
existing manually created lexicons for basic emotions (such as anger and fear) indicate only coarse categories of affect association (for example, associated with anger or not associated with anger).
contrasting
train_19231
We find that anger, fear, and sadness words, on average, have very similar VAD scores.
sadness words tend to have slightly lower dominance scores than fear and anger words.
contrasting
train_19232
For example, dejected and wistful denotate some amount of sadness (and are thus associated with sadness).
some words are associated with affect even though they do not denotate affect.
contrasting
train_19233
Best-Worst Scaling (BWS) was developed by Louviere (1991), building on some ground-breaking research in the 1960's in mathematical psychology and psychophysics by Anthony A. J. Marley and Duncan Luce.
it is not well known outside the areas of choice modeling and marketing research.
contrasting
train_19234
The last dimension, Dominance, is sometimes omitted, leading to the VA model.
to NLP where many different formats are being used lexical resources in psychology almost exclusively subscribe to VA(D) or Basic Emotions (typically omitting Surprise; the BE5 format).
contrasting
train_19235
Since, different from Section 4.2, we now have a fixed test set, we use a one-tailed z-test (p < .05) based on z-transformed correlation values (Cohen, 1995).
to Valence, the performance for Arousal and Dominance may suffer quite substantially in the crosslingual approach, depending on the combination of training and testing languages.
contrasting
train_19236
Rather it seems to depend on subtle semantic differences between the translational equivalents of the affective dimensions/categories, cultural differences, or variations in the annotation guidelines, suggesting that the above assumption of language independence (not so surprisingly) may not fully hold.
to these partly inconclusive results, the outcome for mapping VAD2BE5 is much more favorable for EMOMAP and easy to describe.
contrasting
train_19237
As evident from the last section, the performance of our mapping approach may vary depending on source and target language.
different from the last experiment, when constructing new emotion lexicons in a crosslingual fashion, there is no need to restrict the training set to only one language.
contrasting
train_19238
We choose Twitter as the source of the textual data we annotate because tweets are selfcontained, widely used, public posts, and tend to be rich in emotions.
other choices such as weblogs, forum posts, and comments on newspaper articles are also suitable avenues for future work.
contrasting
train_19239
Similar to the work by Mohammad and Bravo-Marquez (2017b), we create four subsets annotated for intensity of fear, joy, sadness, and anger, respectively.
unlike the earlier work, here a common dataset of tweets is annotated for all three negative emotions: fear, anger, and sadness.
contrasting
train_19240
The EI-oc datasets include the same tweets as in EI-reg, that is, the Anger EI-oc dataset has the same tweets as in the Anger EI-reg dataset, the Fear EI-oc dataset has the same tweets as in the Fear EI-reg dataset, and so on.
the labels for EI-oc tweets are ordinal classes instead of real-valued intensity scores.
contrasting
train_19241
On the CrowdFlower task settings, we specified that we needed annotations from seven people for each tweet.
because of the way the gold tweets were setup, they were annotated by more than seven people.
contrasting
train_19242
As an added benefit, the fastText classifier is dramatically faster than the HypeNET model, with a reduction of training time from around 75 minutes to less than a minute.
as the results of the MaxEnt model show, the features alone are not enough.
contrasting
train_19243
NP-MSSG reports the best performance in SCWS where sentential information is available, which shows an advantage of cluster-based models of capturing the senses.
the proposed method significantly outperforms (Fisher transformation at p < 0.05) NP-MSSG and MSSG in RW, MEN and SimLex.
contrasting
train_19244
Second, shortened strings, such as abbreviations, co-occur in the same context as their full form but differ strongly in their surface form.
the computational cost of training word embeddings increases proportionally with a rising number of languages considered.
contrasting
train_19245
For this reason we chose Jaccard as the baseline for measuring the performance of the proposed embedding-based approach.
linguistic condensation strategies, such as compounding or abbreviations, and the lack of semantic context can pose a serious challenge to string-based ontology alignment methods.
contrasting
train_19246
From this point of view, both apple and book have the same role, as both are carrying the [PHYSICAL OBJECT] feature.
this similarity is restricted to the chain clarifying relationship for the verb see.
contrasting
train_19247
For limited data, human experts can verify manually the validity of some of them.
our approach, PP, produces hundreds of different paraphrases for each verbal phrase.
contrasting
train_19248
Generally in German, punctuation segments a text into sentences, and spaces are used to segment a sentence into words.
this rule of thumb was not always applicable (a sample of our corpus will be available at http://ansichtskartenprojekt.de). A1 Word form: real word forms; A2 Normalized word form: all lower case and without ü; A3 Character type of unit: word form is categorised into the following classes: (1) all special characters, (2) all numbers, (3) capitalized, (4) all alphabets without capitalization, (5) mix of all possible characters without capitalization; A4-7 Suffix: the last 4, 3, 2, 1 characters of the word, respectively.
contrasting
train_19249
The morphosyntactic analysis using the existing POS taggers showed a lower performance, and the semantic features (B) did not achieve high accuracy.
the combination of these three types of features outperformed the word/lemma features.
contrasting
train_19250
The main finding was that the window size did not affect the accuracy as much as expected.
the wider context window size slightly improved the accuracy of the test set of TüBa.
contrasting
train_19251
These discourse types are typical in postcards but are rarely included in a newspaper corpus.
difference-based entropy is a measurement of differences in entropy scores based on a language model trained on both out-of-domain and in-domain sentences.
contrasting
train_19252
This information is frequently needed for research, quality improvement, surveillance, and other important functions.
the manual abstraction of this information can be incredibly timeconsuming and expensive, often making it infeasible for both early-stage research and clinical quality improvement projects.
contrasting
train_19253
This implies that a larger volume of entities and relations could be found by processing the same number of words from abstracts than from full papers.
full papers (where available) provide much more information about the entities they contain and so are important for information extraction tasks where important information may not be reported in the abstract.
contrasting
train_19254
Therefore, specific training and test datasets are also necessary to precisely translate biomedical document across languages.
despite its importance for the general population and researchers, there are very few parallel and comparable corpora specific for this domain.
contrasting
train_19255
The Website site of the Brazilian Clinical Trials Registry 9 provides ways to easily download the trials in XML format, which was further parsed.
given the various elements (sub-sections) in a trial, e.g., inclusion criteria, exclusion criteria, and given that some of these appear multiple times in the document, the automatic alignment of parallel documents is not straightforward.
contrasting
train_19256
Sentence segmentation: we relied on tools which are non-specific for the biomedical domains, such as Stanford CoreNLP, OpenNLP and SAP HANA.
we did observe issues.
contrasting
train_19257
Note that this paradigm will only work for MWEs which can be translated into single lexical nodes in the target language; MWEs which are translated by other multiwords will result in translation failures (i.e., insertion or deletion errors).
we expect that such failures will happen relatively infrequently.
contrasting
train_19258
They have shown that their translation-based approach performs better than using linguistic approaches.
they did not combine these two kind of approaches.
contrasting
train_19259
They compared the translational capabilities of their model with regard to Google Translate.
there was no research specific to the domain of idioms in this work.
contrasting
train_19260
(2013) claimed that most of the time user queries need to be addressed with recent information.
many situations demand the past or future related information.
contrasting
train_19261
When these two sentences are subjected as input to the SUTime tagger, we observe that, for both sentences, the word 'present' is tagged as a temporal expression.
it should be temporal only for the first sentence.
contrasting
train_19262
Here, both rule-based and machine learning-based methods incorrectly classify the sentence as present.
this is actually an instance of future.
contrasting
train_19263
(1) Businesses are emerging on the Internet so quickly that no one, including government regulators, can keep track of them.
the temporal information of expressions other than "event" also can be a clue to understand text.
contrasting
train_19264
In particular, all classifiers we developed share with previous systems basic morpho-syntactic features, such as token, lemma, POS, and dependency relations.
we have added lexical semantic information by using not only WordNet synsets, but also VerbNet classes and FrameNet frames, obtained from the alignments in the Predicate Matrix (Lacalle et al., 2014).
contrasting
train_19265
The techniques could be integrated in for instance a reading aid, an activity mainly focused on receptive understanding.
all experts expressed doubt whether language learners, especially high school students, would put in the extra effort to solve the linguistic puzzle laid out before them simply to understand a word, while easier alternatives, such as linked dictionary could be made available.
contrasting
train_19266
But most of the web texts retrieved by search engines require a high language proficiency, even for native speakers (Vajjala and Meurers, 2013).
the use of an off-line corpus ensures content quality, while hindering the search for different text topics.
contrasting
train_19267
One remarkable extension in version 4.0 was the inclusion of a Coreference Resolution module, based on Relax-Cor, the second-ranked system in the CoNLL-2011 shared task (Sapena et al., 2011) (footnotes: http://nlp.lsi.upc.edu/freeling, http://www.gnu.org/copyleft/agpl.html).
academic shared tasks have a very specific scenario, which does not necessarily match the real-world settings in which a system like FreeLing is required to operate.
contrasting
train_19268
In cases where they do not refer to the same referent, but a related entity, they can in principle be considered bridging.
in this case, Tehran is not anaphoric, which leads us on to the following important distinction.
contrasting
train_19269
in a case like http://www.ims.uni-stuttgart.de/institut/mitarbeiter/roesigia/guidelines-bridging-en.pdf where some people might consider this a bridging case, as the foreign secretary Mottaki is probably not interpretable alone for a typical WSJ reader without the mentioning of Iran first.
others might argue that his discourse referent might already be identified by his name.
contrasting
train_19270
The first improvement featured larger word embedding vectors (of the size 300 instead of 50), which gave 6647 features for each training example.
despite much richer embeddings, we did not observe any significant improvements in the evaluation metrics.
contrasting
train_19271
A small dataset with manual coreference annotation was earlier published for Hungarian (Miháltz, 2012).
here we present our large corpus, SzegedKoref, which has been manually annotated for coreference data.
contrasting
train_19272
The coreference relation is shared across all languages.
languages differ considerably in the range of linguistic means triggering this relation (Kunz and Steiner, 2012;Kunz and Lapshinova-Koltunski, 2015;Novák and Nedoluzhko, 2015).
contrasting
train_19273
adverbs are not considered in most coreference annotation schemes.
they constitute around 8% of all referring expressions in the German language 1 and are especially frequent in spoken and spoken-like language.
contrasting
train_19274
We suppose that the reason for the greater disagreement for German texts is the complexity of the linguistic structures triggering coreference in this language.
a more detailed analysis of the agreement results is needed to understand the reasons.
contrasting
train_19275
4 million parameters on as little as around 100 hours of speech.
the de facto standard in the industry is to use sizeable training corpora, which total durations are measured in hundreds or even thousands of hours (Amodei and others, December 2015; Xiong and others, January 2017).
contrasting
train_19276
The present discussion has focused on the collection of a large quantity of continuous speech recordings.
our first attempts at recorded speech collection were focused on a much simpler case, i.e.
contrasting
train_19277
For example, the database at National Institute for Japanese Language and Linguistics (National Institute for Japanese Language and Linguistics, 2016) includes text and speech data spoken by native dialect speakers.
the recording setting is not suitable for spoken language processing.
contrasting
train_19278
In addition, because their corpus includes the parallel text of the common language and dialects, it is also useful for natural language processing research.
dialects that can be collected in such a perfect environment are very limited and collecting many dialects is unrealistic because of geographical and expense issues.
contrasting
train_19279
Vocabulary knowledge prediction is an important task in lexical text simplification for foreign language learners (L2 learners).
previously studied methods that use hand-crafted rules based on one or two word features have had limited success.
contrasting
train_19280
Another one, Page Analysis and Ground-truth Elements (Pletschacher and Antonacopoulos, 2010) (Page XML), allows storing image features, layout structure and page content.
the diversity and complexity of data contained in a daily page of Italian comedy require types and tags to be more specific and hold-back all levels of annotation.
contrasting
train_19281
In both cases, the textual data directly describes image content, thus enabling the above-mentioned lines of research.
these datasets are often restricted to English language text and typically of relatively broad domain; The FLICKR caption datasets for instance contain images including landscapes, animals, and everyday scenes while the COCO dataset is similarly broad but contains more items per image.
contrasting
train_19282
On the one hand, we will include other types of fashion items besides dresses, such as shoes and shirts.
we aim to repeat crowdsourced data gathering efforts for languages other than German, such as English, French, and Dutch.
contrasting
train_19283
In crowdsourcing, since how much meaning is preserved differs for each annotator, cases in which BLEU becomes low can occur.
it is considered that the quality of the simplification corpus is high because about 70% of the inter-annotator agreement values exceed 0.4.
contrasting
train_19284
The aligned files (parallel corpus) share the same first part of the file name.
not all files with the same name contain the same amount of text.
contrasting
train_19285
Also, many annotators chose to mark a minimum of entities instead of marking all options.
this manual inspection showed us that the topics that were marked, were correctly marked, linked and also correctly linked to the topic in the target language (in this small sampling 90-100% correct).
contrasting
train_19286
An approach based on ontology label translation (Arcan and Buitelaar, 2013) was developed to provide a knowledge-based extension to a statistical machine translation system.
automatically translated multilingual terms often suffer from quality issues.
contrasting
train_19287
On the one hand, it exhibited a high frequency of domain specific terms and expressions, named entities, scientific formulas, and words unknown to crowdworkers, as well as to any system for posterior processing.
the subtitle genre contained spontaneous speech properties, truncated sentences, elliptical formations, disfluencies, repetitions, interjections and fillers.
contrasting
train_19288
This method of collecting vocabulary-test results may be beneficial for classroom teaching because the environment under which a dataset is taken is a classroom and the applications to which the dataset are used are also for classrooms.
this is not the case for developing educational software, in which participants are more diverse than in typical classrooms.
contrasting
train_19289
This feature has been used particularly with the ACE corpus.
modern corpus is no longer annotated with this feature, which is why we do not include it.
contrasting
train_19290
Generally speaking, only the factual information with high credibility has the value for use.
most expressions in social media are published with the hypothesis and episteme which cannot be decided whether it is true or false at that moment.
contrasting
train_19291
Similar work for Chinese has been reported by Ji (2010), who constructed the Chinese corpus from newspapers for Chinese uncertainty identification.
their data was annotated exactly based on the cue-phrases and size of corpus was a bit small for some uncertainty detection systems.
contrasting
train_19292
Question is the uncertainty category with the most microblogs, followed by Possible.
only 117 microblogs are labeled as External.
contrasting
train_19293
As meta-data for describing event information, event types and schemas are very important information in many applications such as event extraction.
there is no large amounts of annotated data for event typing or event schema extraction.
contrasting
train_19294
In some sense, this data can be seen as a small knowledge base of air crashes.
our EventWiki contains much more major events of a variety of types and rich knowledge about events, which is the first complete event-centric knowledge base as far as we know.
contrasting
train_19295
Strictly speaking, spin is not necessarily a form of deception, as the intention is difficult to establish most of the time, e.g., spin in abstracts may be conditioned by limited space; by author's wish to report the results that he/she perceives to be most important; by unclear/absent reporting guidelines; by lack of training etc.
spin is similar to deception for what concerns its impact and the method required to detect it from textual content only (Mihalcea et al.
contrasting
train_19296
The goal of those previous location estimation researches is to estimate the locations vaguely with best-effort approaches.
our goal is to provide the positions of infected people as accurately as possible.
contrasting
train_19297
This has been showcased in cross-genre experiments in Section 4, which also show that our model, trained on tweets, is robust and achieves results comparable to the models built specifically for the crossgenre task.
specific components and the entire workflow can be reused for building new models for AP as well as for other text classification tasks.
contrasting
train_19298
Fast prototyping is important for the users who wish to develop new models quickly.
there are users who just wish to use an already developed and trained model.
contrasting
train_19299
While thus in the computational linguistics literature, word pair relations have been extensively researched, a focus or look to the categories function vs. content word is rather rare.
(psycho-) linguistic literature often applied these categories, but a focus on word pairs is rather rare.
contrasting