id: string (lengths 7–12)
sentence1: string (lengths 6–1.27k)
sentence2: string (lengths 6–926)
label: string (4 classes)
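For readers who want to work with a split of this shape programmatically, the snippet below is a minimal sketch using the Hugging Face `datasets` library. The dataset identifier `your-org/scientific-nli` is a placeholder, not the published path of this corpus.

```python
from datasets import load_dataset

# Hypothetical identifier; substitute the actual path of this corpus.
ds = load_dataset("your-org/scientific-nli", split="train")

# Each record mirrors the schema above: id, sentence1, sentence2, label.
example = ds[0]
print(example["id"], example["label"])
print(example["sentence1"])
print(example["sentence2"])

# label is a plain string column with 4 distinct classes; list them.
print(sorted(set(ds["label"])))
```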
train_96100
This system could be improved in many ways.
to most of these previous works, multilingual support is at the core of HEDWIG.
neutral
train_96101
Then, we vote argument types of NL patterns with NL triples in the similar way as above.
finally, Section 6 concludes this work.
neutral
train_96102
Misclassifications were fixed and segments missing a role were assigned the appropriate one.
furthermore, by using these models, systems can increase their own interpretability, benefiting from the structured data for performing traceable reasoning and generating explanations, features which are becoming even more valuable given the growing importance of Explainable AI (Gunning, 2017).
neutral
train_96103
Following the same syntactic rules adopted for pre-annotation, missing supertypes were identified and the roles around them had their limits adjusted, while the remaining classification was kept unchanged.
in order to make the most of those resources, it is necessary to capture the semantic shape of natural language definitions and structure them in a way that favors both the information extraction process and the subsequent information retrieval, allowing the effective construction of semantic models from these data sources while keeping the resulting model easily searchable and interpretable.
neutral
train_96104
Here, the information of a source sentence is complemented or eliminated in a translation.
this allows 90 different MT systems to be constructed.
neutral
train_96105
(2011) constructed a parallel corpus of biomedical article titles from PubMed in six languages.
images, tables, references), we would be infringing such copyright rules.
neutral
train_96106
As we removed some parts from the articles (e.g.
europarl corpus is a transcription of speeches, thus inducing a greater linguistic variability.
neutral
train_96107
If the negated event is only in the matrix clause, subordinates are usually excluded from the scope of negation.
he is CL not like show-off DE person 'He is a person who does not like to show off.'
neutral
train_96108
Although keeping them annotated, we do not consider them any further.
the current annotations closely follow the same CoNLL format as ConanDoyle-neg, marking each negation instance in a sentence as a set of three columns, for cue, event and scope respectively.
neutral
train_96109
(4) can be in fact paraphrased as 'It is not the case that (Marx knew nothing about his customer).'
since they are not annotated consistently in the English side, we decide *not* to mark these verbs as cues in Chinese.
neutral
train_96110
Hindi-English Linked Wordnet contains bilingual dictionary entries created from the linked Hindi and English wordnets.
the HindiEn component of the corpus has also been used for the WMT 2014 shared task.
neutral
train_96111
For English, we used true-cased representation for our experiments.
the translations included in the corpus were determined to be of good quality by annotators.
neutral
train_96112
Therefore, the larger the parallel corpora are, the better the performance of the SMT system is.
to compute the similarity of two sentences, Lucene's original source code was modified so that queries could be read from a text file and, for each query, the IR system could return the most relevant sentences from another file.
neutral
train_96113
The sentences extracted by both methods have been made available online.
the Penalty variable, which is the difference in word count between the two sentences, makes the similarity score smaller when the difference is too large.
neutral
train_96114
To build the translation systems, the default settings for Giza++ and SRILM toolkit were used.
there is a lack of such data available for everyone.
neutral
train_96115
Before splitting the data into train, dev and test sets, we theorized that having too similar documents in the train and the test sets could lead to a skewed (too optimistic) evaluation of any supervised summarization methods.
the following documents were dropped: • with empty headline; • with abstract shorter than 10 words; • with full text shorter than 100 words; • with text-to-abstract ratio less than 4.
neutral
train_96116
This indicates that Twain and James used [Footnote 1: Distinctiveness Ratio, a measure of variability defined by the rate of occurrence of a word in one text divided by its rate of occurrence in another.]
figure 1 shows two salient features for James and Twain: the adjectives broad and usual, where both decrease in usage over time.
neutral
train_96117
After describing the resource, we show how the corpus can be used to detect changing features in literary style.
without also having examined combined models of or with other authors, it is not clear how close James and Twain are in terms of stylistic change regarding the features examined here.
neutral
train_96118
The corpus which we consider in Section 4. comprises argumentation in the political domain.
four annotators were extensively trained in the use of Inference Anchoring Theory (IAT) (Budzynska and Reed, 2011) to analyse the television debates and Reddit discussions constituting the US2016 corpus that we take as a case in point in the current paper.
neutral
train_96119
First, we generated questions for 10,000 nouns randomly sampled from the nouns that most frequently appear in the -de position in the TSUB-AKI corpus (Shinzato et al., 2008) of 600 million web pages.
we used 300-dimensional word embedding vectors pre-trained on Wikipedia articles using Skip-gram with a negative-sampling algorithm (Mikolov et al., 2013).
neutral
train_96120
In this paper we have demonstrated that it is feasible to automatically induce training data using parallel data without manual intervention.
the back-off strategy is fairly simple.
neutral
train_96121
The aim of our work is to project Named Entity (NE) annotations from several source languages into a target language for which there is no training data.
we project the automatic tagged Named Entities from three source languages to a fourth target language, performing all 4 permutations.
neutral
train_96122
These two event mentions have conflicting place argument (Canada vs. Iowa), but they are interpreted as coreferential, because both mentions refer to the Life.Die event of "John Smith" (also mentioned as "the man").
ERE builds on the approach to labeling entities, relations, events and their attributes under a pre-defined taxonomy, following the approach used in Automatic Content Extraction (ACE) (LDC, 2005; Walker et al., 2006; Song et al., 2015; Mott et al., 2016).
neutral
train_96123
Annotators should not place the aggregate event in the same hopper as any of its subevents, and likewise should not place the subevents in a hopper with each other.
the corpus includes both positive examples of corpus-wide event hopper coreference, both cross-document and cross-lingual, and also negative coreference judgements of many more potential event pairs.
neutral
train_96124
The goal is that all encoders share the same sentence representation, i.e.
to previous works on cross-lingual document classification with RCV2, we explore training the classifier on all languages and transferring it to all others, i.e.
neutral
train_96125
We investigate citation networks in terms of content distance and study how the uncovered patterns can be used for identifying highly influential publications.
at the same time, the difference between the most similar and most dissimilar reference is higher for literature reviews (F15).
neutral
train_96126
Although multiple applications can benefit from analyzing questions based on this criterion, the majority of datasets and taxonomies were designed for question answering systems.
coal, Solar, Wind, Oil, Gas, Nuclear."
neutral
train_96127
Process/Procedure - the question requests the process by which something happens (e.g., a natural/involuntary process of change) or the procedure for accomplishing a task.
with previous datasets with questions primarily used for question answering systems, we propose the first dataset that can facilitate the analysis of student responses in the educational environment.
neutral
train_96128
The increased demand for structured knowledge has created considerable interest in relation extraction (RE) from large collections of documents.
the basic idea of our zero subject (entity) prediction is to perform tasks by finding the central entity being described within a paragraph without parsing.
neutral
train_96129
We found that 29% of all questions are yes/no questions.
as an upper bound for model performance, we assess how well humans can solve our task.
neutral
train_96130
But humans will successfully use external information from other modalities for disambiguation if it becomes available.
the same holds true for the German examples of A2), in which RBG assigns alternative, improbable heads to the relative clauses.
neutral
train_96131
Since we investigate the effect of external knowledge like visual scenes on language processing, information is not derived from the images automatically.
the relative clause is supposed to be attached to bed.
neutral
train_96132
We describe and evaluate an element-wise, an item-based, and a hybrid approach, combining the two, to automatically calculate the CO2-footprints of recipes.
so far the footprint of a recipe was calculated with a manual process (O'Connor et al., 2018) which is time-consuming and therefore too costly to be applied to a wide range of cooking recipes.
neutral
train_96133
Our first approach, ingredient matching, calculates the CO2-footprint based on the ingredient descriptions that are matched to food products in a language resource and therefore suffers from the long tail problem.
we also store the amounts of each term in each recipe, so that we can quickly retrieve the amount of a term in a given recipe when comparing recipes.
neutral
train_96134
For example, the first sentence (Stc.
the learning using the linear kernel took only 6.54 seconds.
neutral
train_96135
In Table 3 we report the performances, for the Twitter data (filtered by our native volunteers to what they judged dialectal as the Gold Standard) and PCJD, in the form of confusion matrices.
we used the default dictionary that comes with the tool with nine features (Japan Information-Technology Promotion Agency, 1995).
neutral
train_96136
This is likely due to the fact that our pivot model is based on newspaper data.
japanese poses an additional challenge since there is no word segmentation in its orthography.
neutral
train_96137
To show how further refinement could help increase the accuracy of a classifier, we trained a model using the same set of reduced features, but this time only considering instances with at least 20 words.
each list contains function words/tags related to that category (Garcia-Barrero et al., 2013; Ryding, 2014).
neutral
train_96138
The authors acknowledge the support of the DGE (Ministry of Economy) and DGA (Ministry of Defense): RAPID Project "DRIRS", referenced by number 172906108.
the verb law∼aj will have the same pattern.
neutral
train_96139
Scikit-learn (Pedregosa et al., 2011) is an open-source Python library that is a simple and efficient tool for data mining, data analysis and machine learning.
we carried out an experiment to determine the data size that will give us the highest performance.
neutral
train_96140
MSA is the lingua franca amongst Arabic native speakers.
we choose Dialect Identification as the task to evaluate SDC and compare it with two other corpora.
neutral
train_96141
"Listen, [female] dear, the problem lies both in you and them.
we now turn to describing our dataset.
neutral
train_96142
• Filtering out short tweets: To avoid ambiguity in very short tweets and hence difficulty and confusion in annotations, we also eliminated tweets that have fewer than three words.
to have a better-quality potential dataset for labeling, we cleaned the dataset as follows: • Filtering out non-Arabic tweets: Many Arab users post tweets written in multiple languages.
neutral
train_96143
The results further suggest that common-off-the-shelf algorithms can reduce the amount of work required to retrieve relevant literature.
[Table residue: per-dataset scores for the yearbook and Cohen collections.] (a) Intra-topic results averaged over 10 runs (5 × 2 cross-validation) for different dataset compositions.
neutral
train_96144
This is to be expected, since M represent those references the human annotators required the full text to judge, and it would be unreasonable to expect the ranker to be able to judge these based only on title and abstract.
unless otherwise stated, we use the default settings for all parameters.
neutral
train_96145
The use of loss instead of other metrics such as accuracy or F-measure is supported by the fact that the loss itself models the actual behavior of the model.
nonetheless, the results of their experiments did not support any hypothesis of the method usefulness.
neutral
train_96146
• Text length: distribution of the text lengths in the training dataset.
the test performance is expected to be slightly worse in comparison to the validation performance, since the model was chosen to achieve the best validation performance regardless of the test evaluation.
neutral
train_96147
BoW: Identity of the term headword.
in this sense, terms like "distribution" or "measure" can be hypothesized to constitute motors of scientific innovation.
neutral
train_96148
Such units are, indeed, characteristic of academic language and should not be ignored.
we attribute the permeation between these two classes to some of the features that account for polysemy, as both Linguistics and Interdisciplinary contain more words with numerous possible senses, in addition to being the classes with the highest word variety in terms of type-token ratio.
neutral
train_96149
Importing, compressing and representing Wiki instances is an integral part of the system: A new Wiki is imported by providing a path to a compressed XML dump file (either current revisions only, or the full history version) to the XMLDumpParser.
we start by importing the history dump of the Simple English Wikipedia into a Neo4J database as backend using non-transactional mode 7.
neutral
train_96150
Generally speaking, researchers need efficient as well as manageable means to access wiki data.
no additional setup is required.
neutral
train_96151
In order to conduct the annotation, two undergraduate students were trained in the use of the MITI 4.0.
interestingly, the results show that the combination of n-grams and lexicons offers performance similar to the MITI behavior features.
neutral
train_96152
People are interacting with cellphones, smart TVs, and computers on a daily basis using voice-based interfaces.
the problem of identifying interpersonal relationships is cast as a classification task.
neutral
train_96153
One reason for the occurrence of the misunderstandings is that these systems rely on automated speech recognition (ASR) systems, which, despite showing strong improvements in performance, are far from perfect.
it should provide an intuition for further exploration.
neutral
train_96154
Although these numbers may suggest that PALAVRAS achieved a better score than UDPipe, these numbers taken globally do not reveal the effective quantity of mistakes that were corrected.
the primary motivation to mine the DHBB came from the need to query the material looking for information that requires almost total reading of the whole body of texts.
neutral
train_96155
In fact, even if "Machado Coelho" or "Castro Abreu" were in the lexicon and the phrase were "José Machado Coelho de Castro Abreu nasceu em 1931", only "José Machado de Castro" would be recognized, provided "José Machado Coelho de Castro Abreu" is not in the lexicon.
missingAppos: an appositive relation was not detected (Figure 2d).
neutral
train_96156
Another outcome was the difficulty to identify microblog languages in this corpus without using specialized lexical resources (Hamon et al., 2017).
specific and owner-based collections are developed to answer or evaluate a very specific task or are based upon data with specific ownership.
neutral
train_96157
The results show that the GRU multi-way model outperforms the one-way models for all language pairs on all datasets.
all other parameters for the models were identical: we clipped the gradient norm to 1.0 (Pascanu et al., 2013), used a dropout of 0.2 and trained the models with Adadelta (Zeiler, 2012).
neutral
train_96158
Multi-way NMT systems in both directions improved translation quality (by 3.09–5.28 BLEU points for Russian→Estonian and 2.16–4.31 BLEU points for Estonian→Russian) for all three model architectures (deep GRU, convolutional, and transformer), for which we performed multi-way experiments.
the results showed that the most stable architecture for multi-way model training was the deep GRU model architecture.
neutral
train_96159
TermEx is very similar to GlossEx, with an extra extension of the entropy-related Domain Consensus (DC) metric.
the term candidate classification is framed as an N-gram classification task rather than the conventional sequence labelling methods that are commonly seen in previous work (Zhou and Su, 2004; Finkel et al., 2004).
neutral
train_96160
The lack of a standard spelling, which introduces intra-dialect and intra-speaker variability.
it contains High German words with their phonetic transcriptions.
neutral
train_96161
If w is a known GSW word then replace w with w and proceed to (4), if not, go to (3).
linguistically, the variety of High German written and spoken in Switzerland is referred to as Swiss Standard German (see Russ (1994), Chapter 4, p. 76-99) and is almost entirely intelligible to German or Austrian speakers.
neutral
train_96162
Linguistically, the variety of High German written and spoken in Switzerland is referred to as Swiss Standard German (see Russ (1994), Chapter 4, p. 76-99) and is almost entirely intelligible to German or Austrian speakers.
training an NMT is not feasible for GSW/DE, as the size of our resources is several orders of magnitude below NMT requirements.
neutral
train_96163
(/sikkalaana nilaiai uruwahiyulathu/) means 'has created a problematic situation', where the word "சிக்கலான" (/sikkalaana/) is meant as 'problematic', though it can also take the meaning 'complex', 'issue', or 'conflict'.
and finally, the sixth chapter presents the conclusion of the research along with future work.
neutral
train_96164
Results show that the use of pseudo in-domain data gave positive results in TM and a less significant improvement for LM.
this out-domain data was collected from some freely available sources (Ramasamy et al., 2012; Goldhahn et al., 2012) as well as by web crawling.
neutral
train_96165
title, description, keywords, publisher, author, license etc.).
a list of terms that describe the domain.
neutral
train_96166
The main reason for the relatively low document-level recall is that many document pairs consisting of very short documents were not identified.
we evaluated these methods in the task of reconstructing the English-Greek parallel collection, i.e.
neutral
train_96167
This decision was made under the assumption that code-switching occurs most frequently in spontaneous speech.
the lack of existing speech corpora for code-switched Egyptian Arabic-English is a bottleneck in the creation of ASR systems for conversational Egyptian Arabic.
neutral
train_96168
The overlap was found to be 44.5% in unigrams, 19.2% in bigrams and 5.3% in trigrams.
(Lyu et al., 2015) collected the SEAME corpus, where Mandarin-English audio recordings were collected from interviews and conversational speech and were manually transcribed.
neutral
train_96169
The sentence-level scores were standardized according to each individual assessor's overall mean and standard deviation score.
according to MLT accuracy, for the teams that submitted both constrained and unconstrained models (those using additional external data for training), unconstrained models show improvement over their constrained counterparts in most cases (see Tables 1 and 5).
neutral
train_96170
For teams that submitted both multimodal and text-only systems, the role of multimodality is not evident as far as MLT Accuracy is concerned: sometimes multimodal systems perform better and sometimes text-only systems perform better.
the assessors gave a sentence-level score between 0 and 100, where 0 indicates that the meaning of the source sentence is not preserved in the system output, and 100 means that the meaning is 'perfectly' preserved.
neutral
train_96171
The overall Meteor score of a system is the mean of the sentence-level scores over the test set.
in the training set lehnen occurs 137 times while the rest of the lexical translations combined occur only 16 times.
neutral
train_96172
It assembles user activity data obtained from translation tasks into several languages using a common set of six short English source texts.
other studies have focused on the time course of the translation process (cf.
neutral
train_96173
At first sight this might seem strange.
this is consistent with previous findings that post-editing is less cognitively effortful than translation from scratch (e.g., Green et al., 2013).
neutral
train_96174
For the BML12 from-scratch translation data, there is a consistent pattern where the PWR values for a segment show a steady downward trend as the pauses become longer.
in order to isolate the effects of short monitoring pauses in the translation process and to understand better their influence on cognitive effort, we examine here segment level pause-word ratios in the BML12 and ENJA15 studies for pauses whose lengths fall into different time ranges, specifically 300-500ms, 500-1000ms, 1000-2000ms, 2000-5000ms, and at least 5000ms.
neutral
train_96175
This was also in line with our desideratum not to include meta-information on the sentences, such as being found in two interlinked articles.
we randomly split the documents of the corpora before parallel sentence pair insertion.
neutral
train_96176
MADARi is a web-based interface that supports joint morphological annotation (tokenization, POS tagging, lemmatization) and spelling correction at any point of the annotation process, which minimizes error propagation.
the total number of words to be annotated is about 212,000 words.
neutral
train_96177
These discrepancies are due to differences between the morphological theories adopted by the UD treebanks developers and those employed by the developers of the morphological analyzers.
for Arabic, we adapted the morphological analyzer used in MADAMIRA (Pasha et al., 2014), which is built on top of the databases of SAMA (Maamouri et al., 2010) to output morphology that adheres to the UD Arabic treebank (Taji et al., 2017).
neutral
train_96178
The other project is the so called Baroque corpus (Kieraś et al., 2017).
for example, the word komisja ('commission') appears in f19-1M in the following spellings: komisja, kommisja, komissja, kommissja, komisya, kommisya, komissya, kommissya, komisyja.
neutral
train_96179
Procedurally, by presenting one symbol at a time on the input layer, a TSOM is prompted to complete the current input string by anticipating the upcoming BMU to be activated.
infinitive, gerund/present participle and past participle forms were added for English, German, Italian and Spanish, whereas 3 singular forms of the simple future were included for Modern Greek.
neutral
train_96180
This reduces processing uncertainty, by constraining the range of possible continuations at the stem-suffix boundary of irregularly inflected forms.
in this paper, we investigated inflectional complexity by controlling a number of interacting factors through language-specific training regimes, on which we ran a psycho-linguistically plausible computer model of inflection learning.
neutral
train_96181
We would like to see how well our methods work on compound words they have not seen before.
word formation via compounding is a very widely observed yet quite diverse phenomenon across the world's languages, but the compositional semantics of a compound are often productively correlated between even distant languages.
neutral
train_96182
Furthermore, the existing corpora usually do not distinguish sentence pairs which present full matches (both sentences contain the same information), and those that present only partial matches (the two sentences share the meaning only partially), thus not allowing for building customized automated TS systems which would separately model different simplification transformations.
it sometimes increases the number of partial matches which model deletion (see Tables 3 and 4).
neutral
train_96183
Basically these methods optimize some cross-lingual constraints so that the semantic similarity between words corresponds to the closeness of these representations in a common vector space.
TED is a much smaller multilingual dataset compared to Europarl and contains languages other than European ones.
neutral
train_96184
The contribution of this work is the introduction of a method and its product corpus, KIT-Multi 2, consisting of multilingual word embeddings for English-German-French.
if they need cross-lingual embeddings for a new language pair, they must apply their inducing method on that new bilingual data.
neutral
train_96185
One way of annotating the English sentences would be through human annotators.
since these corpora are publicly available we also release our annotations for the public.
neutral
train_96186
(2017) detail the specific issues of tokenisation for Picard, as well as the choices made.
then, a tagset has to be created or adapted to the language, which requires linguistic expertise.
neutral
train_96187
Transformation of Alsatian spellings for closed class words into their German equivalent in the texts using a custom correspondence dictionary (e.g., Alsatian nìt corresponds to Standard German nicht).
endowing these languages with electronic resources and tools is a major concern for their dissemination, protection and teaching (including for new speakers).
neutral
train_96188
This paper aims at developing a POS tagger for one of the most widely used dialects, namely Gulf Arabic (GA).
table 5 summarises the results of the best systems among all experiments.
neutral
train_96189
The best performance of Bi-LSTM is 91.2% using CC2W+W representation and meta-types and template features.
this suggests that GA is somewhat close to MSA; a similar conclusion was reached in (Samih et al., 2017).
neutral
train_96190
We will address this issue, among others, in an inter-annotator agreement experiment described in the following section.
this is often not possible.
neutral
train_96191
When annotating a historical language, the annotators lack the intuition of a native speaker.
[Table 3: Gender of lîf_1 and strît_1; Neut/Masc counts omitted.] one could suppose that such a detailed annotation as above could be more difficult for the annotator than only using the asterisk and thus could lead to more disagreement between annotators.
neutral
train_96192
Section 2 discusses graph based semi supervised learning techniques and previous attempts on Tamil POS tagging.
fastText treats each word as composed of character n-grams, and the word vector is the sum of these n-gram vectors.
neutral
train_96193
The results concerning the directness, which are depicted in Figure 9, show that there is a significant difference between men and women for Spanish: the Spanish female participants selected the direct options significantly more often than the Spanish male participants did.
by adapting the system's behaviour to the user, the conversational agent may appear more familiar and trustworthy.
neutral
train_96194
Moreover, Burleson (2003) presents a study of culture and gender differences in close relationships, emotion and interpersonal communication.
we explore not only the influence of the user's culture but also of the gender, the frequency of use of speech based assistants as well as the system's role.
neutral
train_96195
For each of these acts, the template file randomly picks an utterance from a pre-defined set.
in the next section we describe how we made use of the proposed architecture by implementing a framework and components that can capture sensor data in a use case scenario performing multi-party multi-modal spoken interaction between humans and a robot.
neutral
train_96196
First, it proposed a solution to handle data-stream synchronisation.
the modular version presented in (Dias et al., 2014).
neutral
train_96197
Continuous Bag-of-Words (CBOW) Each statement was embedded, and each vector was averaged.
a sequence of one-hot word vectors was used for CBOW, LSTM, and BLSTM.
neutral
train_96198
Our assumption is that more important attributes should be presented earlier in the dialogue, and that a user-system dialogue simulation system design (Shah et al., 2018;Liu et al., 2017;Gašić et al., 2017) would require such information to be available.
we count the number of hotels for which each attribute applies.
neutral
train_96199
For example, we observe that attributes describing the "feel", such as "feels chic" or "feels upscale", are mentioned around 700 times, and that for 80% of those times they appear in the first half of the conversation as opposed to the second half, showing that they are often used as general hotel descriptors before diving into detailed attributes.
in order to provide more information on the importance of particular attributes, we analyze where in the conversation (i.e.
neutral
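Since every record in this preview follows the same four-field pattern (id, sentence1, sentence2, label), a flat dump like the one above can be rebuilt into structured rows. The sketch below assumes the dump has been saved to a file named `preview.txt` with one field per line and the schema header removed; both the filename and that layout are assumptions, not part of the released corpus.

```python
from itertools import islice

def parse_records(lines):
    """Group a flat id / sentence1 / sentence2 / label dump into dicts."""
    it = iter(line.strip() for line in lines if line.strip())
    while True:
        # Consume the next four non-empty lines as one record.
        chunk = list(islice(it, 4))
        if len(chunk) < 4:
            return
        rec_id, s1, s2, label = chunk
        yield {"id": rec_id, "sentence1": s1, "sentence2": s2, "label": label}

with open("preview.txt", encoding="utf-8") as f:
    records = list(parse_records(f))

print(len(records))                   # number of parsed pairs
print({r["label"] for r in records})  # label values seen in this preview
```

Note that all rows shown in this excerpt carry the label `neutral`; the other three label classes appear elsewhere in the full split.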