Dataset preview schema:
- id: string (lengths 7 to 12)
- sentence1: string (lengths 6 to 1.27k)
- sentence2: string (lengths 6 to 926)
- label: string (4 classes)
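Each record in the preview below occupies four lines: id, sentence1, sentence2, label. A minimal sketch of how such a flattened preview could be grouped back into records (the `Pair` class and `parse_rows` helper are illustrative names chosen here, not part of any official loader):

```python
from dataclasses import dataclass

@dataclass
class Pair:
    id: str         # e.g. "train_19800"
    sentence1: str  # first sentence of the pair
    sentence2: str  # second sentence, with its discourse marker stripped
    label: str      # one of 4 classes, e.g. "contrasting"

def parse_rows(lines):
    """Group a flattened preview (four lines per record) into Pair objects."""
    usable = len(lines) - len(lines) % 4  # drop any trailing partial record
    return [Pair(*lines[i:i + 4]) for i in range(0, usable, 4)]

# One record taken verbatim from the preview:
rows = parse_rows([
    "train_19805",
    "The missing diacritics are not a major challenge to literate native adults.",
    "their absence is the main source of ambiguity in Arabic NLP.",
    "contrasting",
])
print(rows[0].label)  # -> contrasting
```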
train_19800
This behavior can be observed for several languages, for example English or German.
most natural language processing systems, parsers for example, rely only on linguistic information without taking further knowledge into account.
contrasting
train_19801
However, the terms juice and puree should not be weighted heavier than apple, since a match to one of the three food products containing apple is already a much better fit than a match to, for example, orange juice.
the term frequency is needed since some ingredient descriptions contain the same stemmed term multiple times and thus we assume that it is indeed more important than others.
contrasting
train_19802
An average recipe in the hobby collection consists of 12.7 ingredients and an average recipe in the catering collection has 20.5 ingredients.
not only the number of ingredients is different but also the ingredients themselves.
contrasting
train_19803
Originally, in analyzing puns, as with many other linguistic phenomena, it is necessary to analyze them reflecting the frequency of occurrence in the real world and conditions for occurrence (such as topics with high possibility of occurrence of conversation including puns).
with regard to the use of puns in everyday conversation, an actual corpus and its statistical analysis are not yet available, and it is still difficult to find concrete and practical examples.
contrasting
train_19804
Hence, it can be said that the performance per unit of computational resource becomes maximal in the case of the linear kernel.
there still remains a strong need for a faster learning algorithm as accurate as an SVM with RBF Kernel.
contrasting
train_19805
The missing diacritics are not a major challenge to literate native adults.
their absence is the main source of ambiguity in Arabic NLP.
contrasting
train_19806
• Eastern zone (Mashriq): with dialects from Egypt, Syria and the other Middle East countries like Iraq, the Gulf states, Yemen, Oman, Jordan, etc.
this classification was refined, giving a new typology which was accepted by many researchers, such as (Versteegh and Versteegh, 2001;Habash, 2010).
contrasting
train_19807
• Inaccessible Tweets: We used the Twitter widget to display the tweets on the task interface using the tweet ID.
when tweets are deleted or the author makes their profile private, the annotators are no longer able to label them.
contrasting
train_19808
The first is a raw latitude longitude geo-tag which can give precise indication to region.
this is an optional feature most users do not use: only about 2% of tweets are geo-tagged for location (Huck et al., 2012).
contrasting
train_19809
Furthermore, a potential confound arises from the fact that a particular author's features may be easily identified across testing and training sets (Rangel et al., 2017).
the data has been completely anonymized and therefore further analysis regarding the influence of authorship is unavailable.
contrasting
train_19810
In the systematic reviews, the M are those articles that were excluded after reading the full text, and so are in reality negative examples.
our results suggest that these can still be used as positive examples for training.
contrasting
train_19811
Since the validation and test datasets were constructed by randomly choosing examples, the test performance of the model is conditionally independent of the validation performance given the model parameters.
the test performance is expected to be slightly worse in comparison to the validation performance since the model was chosen to achieve the best validation performance regardless of the test evaluation.
contrasting
train_19812
The use of loss instead of other metrics such as accuracy or F-measure is supported by the fact that the loss itself models the actual behavior of the model.
in a classification problem, the accuracy of the model does not express its behavior in sufficient detail.
contrasting
train_19813
On the contrary, the employment of dropout provides only approximately 10% accuracy gain.
with the previous experiment, the employment of the adversarial examples damaged the model performance in the case of bAbI #1.
contrasting
train_19814
(2015)), which corresponds to the most severe cases of trolling (Bishop, 2013).
we believe that it is important not only to identify trolling attempts, but also comments that could have a negative psychological impact on their recipients.
contrasting
train_19815
As an example, consider the situation where a commenter posts a comment with the goal of amusing others.
it is conceivable that not everybody would be aware of these playful intentions, and these people may disagree or dislike the mocking comments and take them as inappropriate, prompting a negative reaction or psychological impact on themselves.
contrasting
train_19816
Science changes continually: While certain research topics may be in a state of stagnation or decline, other research fronts move forward rapidly.
even "dormant" (Menard, 1971) science can regain importance if new data is produced or methods are developed to tackle unresolved research problems.
contrasting
train_19817
Such units are, indeed, characteristic of academic language and should not be ignored.
with the heavy skew observed, the ACL2 data set seems hardly usable for automatic prediction.
contrasting
train_19818
We have used ten-fold cross-validation during development to determine which classifier setup was more robust.
the classifier we used to actually tag the corpus is trained on the full dataset.
contrasting
train_19819
It is not straightforward to compare our classification result to the outcome of other annotation efforts, not only because there are only few such efforts, but also because annotation evaluation scenarios vary considerably.
qasemiZadeh and Schumann (2016) provide a detailed analysis of their manual annotation of the same data set.
contrasting
train_19820
For the 2000-2006 period, many of the lexical units highlighted by the rank shifts analysis in Technologies are related to recent work on the use of machine learning in computational linguistics, e. g. terms such as "classifier" or "feature".
ontologies have also gained importance and the 2000-2006 slice of Technologies includes novel lexical units such as "ontology learning", "ontology acquisition", or "ontology induction".
contrasting
train_19821
Linear CRF was used in (Schneider, 2006) in order to extract seven attributes about conferences from CFPs with the use of layout features.
in this approach only plain text of CFPs was used and layout features were based on lines of text, indicating, e.g., first token in line or first line in the text.
contrasting
train_19822
They used rule-based methods to extract information about conferences from conference services, like WikiCFP, and combined them in one system in order to facilitate the process of finding conferences that are of interest for a user.
to aforementioned works (Xin et al., 2008) extracted information about conferences from web pages with Constrained Hierarchical Conditional Random Fields.
contrasting
train_19823
In contrast to aforementioned works (Xin et al., 2008) extracted information about conferences from web pages with Constrained Hierarchical Conditional Random Fields.
the set of homepages used in experiments has not been published.
contrasting
train_19824
The most effective compression is achieved by only storing differences between subsequent page revisions.
other compression techniques like BZip2, LZMA2 or none at all are also available and may be used when disc space is less of an issue.
contrasting
train_19825
These experiments were conducted using Neo4J as backend.
wikiDragon is by design not limited to a specific backend but can be adapted to other database systems by implementing specific interfaces.
contrasting
train_19826
Regardless of the counseling method, counselors follow general principles, such as supporting autonomy, expressing empathy, centering on the patient and engaging patients using specific skills such as reflective listening (Charles et al., 1997;Harting et al., 2004).
using a more directing style, characterized by counselors providing instruction and advice, and patients obeying, adhering and complying (Miller and Rollnick, 2013), is usually avoided.
contrasting
train_19827
From the table we can see that providing a full conversation, both acoustic and textual representations are indicative of the speaker relationships.
the text based representation seems to have more information for this task, yielding 60.7% accuracy when using the Naive Bayes algorithm.
contrasting
train_19828
Not surprisingly, the most common types of semantic relationship that the appositive constructions reveal are those of role and ident.
personal relations such as friendship or fellowship are almost never made explicit. The PALAVRAS system recognized 797 cases of appositives and UDPipe 954 cases.
contrasting
train_19829
Companies use social data to gather insights on customer satisfaction, but can also relate this data to forecast product or services revenues (Rui and Whinston, 2011) or measure and optimize their marketing.
there are several levers that make social media so popular.
contrasting
train_19830
On one hand this massive social activity can transform a local cultural event into an international buzz feed.
major festivals that do not follow the social mainstream could fail in attracting and renewing the public.
contrasting
train_19831
One of the major advantages of neural machine translation (NMT) is that unlike statistical machine translation (SMT), which was the previous industry standard (and is still actively used in commercial applications), NMT is trained and used jointly as a single end-to-end system without the need to optimize multiple independent models and relations between the models.
training NMT systems for individual language pairs has shown to take significantly more time (e.g., two to three weeks or up to a week with newer platforms, such as Marian (Junczys-Dowmunt et al., 2016) or Google's Tensor2Tensor toolkit 1 ) than training of SMT systems (e.g., less than a day or up to several days for large systems).
contrasting
train_19832
The results show that the GRU multi-way model outperforms the one-way models for all language pairs on all datasets.
the convolutional and transformer models increase quality only for the low-resource language pairs.
contrasting
train_19833
For the low-resource language pairs, the best results were achieved by the multiway model.
for the high-resource language pairs, the best results were achieved by the respective one-way models.
contrasting
train_19834
A practical solution to this limitation is to make use of comparable corpora (Rocheteau and Daille, 2011;Xu et al., 2015;Hakami and Bollegala, 2017) that are available in large quantities.
term extraction along this line is often limited to noun phrases (< 5 words) from monolingual comparable corpora.
contrasting
train_19835
Within the large group of Germanic languages, the dialects of Switzerland belong to the Alemannic group.
while a majority of dialects are High Alemannic (yellow area on map in Figure 1), those spoken in the city of Basel and in the Canton of Valais belong respectively to the Low Alemannic and the Highest Alemannic groups.
contrasting
train_19836
The Bible has been translated into several GSW dialects, but the only electronic version available to us consisted of online excerpts in Bernese.
this is not translated from High German but from a Greek text, hence the alignment with any of the German Bibles is problematic.
contrasting
train_19837
The Workshops on Statistical MT have proposed translation tasks for low-resourced languages to/from English, such as Hindi in 2014 (Bojar et al., 2014), Finnish in 2015, or Latvian in 2017.
these languages are clearly not as low-resourced as Swiss German, and possess at least a normalized version with a unified spelling.
contrasting
train_19838
However, training an NMT is not feasible for GSW/DE, as the size of our resources is several orders of magnitude below NMT requirements.
several recent approaches have explored a new strategy: the translation system is trained at the character level (Ling et al., 2015;Costa-jussà and Fonollosa, 2016;Chung et al., 2016;Bradbury et al., 2017;Lee et al., 2017), or at least character-level techniques such as byte-pair encoding are used to translate OOV words (Sennrich et al., 2016).
contrasting
train_19839
When pseudo in-domain LM is added, this word got translated to the correct term "இறுதி ஆண்டு" (/iruthi aandu/), where the meaning is 'final year'.
the integration of out-domain data reduced the BLEU scores in both filtered and unfiltered cases in either direction.
contrasting
train_19840
Table 3 shows that overall, NMT scores better on accuracy than previous systems.
upon closer inspection, it becomes evident that PBMT handles DNT issues better than NMT.
contrasting
train_19841
When scanning the error statistics in Table 5, we can see that also in our data set, NMT makes fewer omission errors than PBMT.
the ratio of omitted words per omission error is much higher in NMT than in PBMT and RBMT.
contrasting
train_19842
In standard Word Sense Disambiguation, words are disambiguated based on their textual context.
in a multimodal setting we could also disambiguate words using visual context.
contrasting
train_19843
For teams that submitted both multimodal and text-only systems, the role of multimodality is not evident as far as MLT Accuracy is concerned: sometimes multimodal systems perform better and sometimes text-only systems perform better.
human scores show that overall multimodal systems tend to be better than the text-only counterparts.
contrasting
train_19844
For English>Japanese, there is strong and significant correlation between |CrossS| and monitoring effort.
there is only a moderate, non-significant correlation for English>Spanish.
contrasting
train_19845
At first sight this might seem strange.
mT quality from English to Spanish is good, so the machine will successfully resolve most of the difficult semantic and structural problems.
contrasting
train_19846
These resolutions, which will likely be highly salient to the post-editor and which can usually be accepted quickly, will likely reduce the need to expend much monitoring effort on segments that would have required much more effort to translate from scratch.
machine translation solutions for segments where there are few semantic or structural problems to resolve may be much less salient to the post-editor.
contrasting
train_19847
Large parallel corpora have been obtained from international bodies or collected from the Web.
they only cover a small subset of the variety of language pairs, domains and genres that are found in language.
contrasting
train_19848
This reduces the comparability of our datasets.
the two sides of each dataset still share several dimensions along which they are comparable: • They belong to the same genre distribution, mainly 'encyclopedic article' (see Table 1).
contrasting
train_19849
As it can be seen from Table 4 the manual pipeline extracts more parallel sentences (especially for adilet data) and is more accurate than an unsupervised automatic tool.
on average, Bitextor produces cleaner bitexts with lower short/parallel ratios and a surprisingly low amount of junk.
contrasting
train_19850
There has been an increasing number of natural language processing (NLP) efforts focusing on dialectal Arabic, especially with the increasing amounts of written material on the web.
resources for dialectal Arabic NLP tasks such as part-of-speech (POS) tagging, morphological analysis and disambiguation are still lacking compared to those for Modern Standard Arabic (MSA).
contrasting
train_19851
There are lattice shapes which require independent mechanisms for indexing source and tree tokens (or, rather, states in a tree token lattice).
addressing all these cases would require a format that could not be directly compatible with the (current version of the) UD format/model.
contrasting
train_19852
On the one hand, specialised inter-node connectivity makes BMUs less confusable and more salient, as they receive stronger support through temporal connections than any other node.
less specialised and more blended BMUs are densely and less strongly connected with many others, to meet the input of more words.
contrasting
train_19853
morphological breakdowns and glosses), specialized parsers for root-and-pattern morphology and reduplication, and combinators that allow parsing either from the left (for prefixes) and the right (for suffixes).
there are some drawbacks compared to finite-state systems. Efficiency: after compilation, a finite-state transducer executes in linear time, while parser combinators result in a recursive descent parser with potentially exponential time complexity.
contrasting
train_19854
Tightening up morpho-phonemic rules for better handling of allomorphs and treatment of templatic verbal morphology became the main goals for the second checkpoint.
documentation on numerous templates was incomplete at best.
contrasting
train_19855
Figure 3 shows one Oromo phonological rewrite rule implemented with parser-combinators.
since these rules are themselves complex parsers, they increase the complexity of the grammar, and including many of them can affect the runtime performance of the parser.
contrasting
train_19856
akz: murderer, he: pancreas, sh: give birth, and several others).
these errors are quite reasonable (e.g.
contrasting
train_19857
using n-gram counts (Sornlertlamvanich and Tanaka, 1996), supervised methods (Clouet and Daille, 2014), and monolingual and bilingual corpora (Koehn and Knight, 2003;Macherey et al., 2011) and could be productively employed in extensions of our work.
to several of these other works, the approach and analysis in our paper is simple yet effective in that it only requires the usually very readily available on-line dictionaries in multiple languages (e.g.
contrasting
train_19858
The original drawbacks of the German part of the database were an outdated format and use of obsolete orthographical conventions.
these problems were tackled by Steiner (2016), so that the refurbished database yields a foundation for further exploitation.
contrasting
train_19859
In both cases, the sentences were automatically aligned from comparable English Wikipedia and Simple English Wikipedia articles.
the use of EW-SEW dataset for modeling TS has been disputed (Amancio and Specia, 2014;Štajner et al., 2015;Xu et al., 2015) for several reasons: (1) the simplified articles are not necessarily direct simplifications of the original articles; (2) the quality of simplifications is not checked; (3) the dataset does not cover sentence splitting which is one of the most common operations in text simplification.
contrasting
train_19860
We noticed that the use of hypothesis H1 reduces the percentage of full matches regardless of language, level pairs, and similarity measure.
it sometimes increases the number of partial matches which model deletion (see Tables 3 and 4).
contrasting
train_19861
In the case of aligning Level 0 with Level 4, we also have a higher percentage of full matches on the English dataset than on the Spanish dataset, in addition to a higher percentage of partial matches.
the differences in the percentage of full matches and partial matches between the two languages might not reflect the performances of the system on those languages but rather the nature of simplifications performed on the Newsela articles in those two languages, i.e.
contrasting
train_19862
(2017) detail the specific issues of tokenisation for Picard, as well as the choices made.
to Alsatian and Occitan, the Picard corpus was not pre-annotated.
contrasting
train_19863
This suggests that GA is somewhat close to MSA; a similar conclusion was reached in (Samih et al., 2017).
it is still far behind the performance on MSA, which is over 96%.
contrasting
train_19864
The best performance of Bi-LSTM is 91.2% using CC2W+W representation and meta-types and template features.
the best performance of SVM is 85.96% by setting the clitic feature value to TFIDF and using meta-types features.
contrasting
train_19865
Both systems achieved their highest accuracy when trained on the Gulf++ dataset.
bi-LSTM outperforms SVM in most of its settings.
contrasting
train_19866
These might have different reasons and can only be resolved using an aforementioned statistical analysis in certain cases.
even in these cases, such an analysis might best be left till after the corpus is annotated.
contrasting
train_19867
This shows how encoding all possibilities in the case of gender ambiguity as portmanteau tags can help us to understand the gender system of GML.
(Table 3: Gender of lîf and strît; counts omitted.) one could suppose that such a detailed annotation as above could be more difficult for the annotator than only using the asterisk and thus could lead to more disagreement between annotators.
contrasting
train_19868
As the quantity of annotated language data and the quality of machine learning algorithms have increased over time, statistical part-of-speech (POS) taggers trained over large datasets have become as robust or better than their rule-based counterparts.
for lesser-resourced languages such as Welsh there is simply not enough accurately annotated data to train a statistical POS tagger.
contrasting
train_19869
Rule-based POS tagging -whereby pre-defined rules concerning which syntactic categories can be co-located together are used to determine the correct POS tags to assign to word tokens in context -is the traditional alternative to the probabilistic approach.
this introduces an entirely different bottleneck: considerable time and extensive knowledge are required in order to craft and refine the rules in the first place.
contrasting
train_19870
The widely-used Brill Tagger attempts to address this bottleneck by automatically acquiring and inferring rules for POS tagging from a running text, and its accuracy has been comparable to that of probabilistic taggers (Brill, 1992).
more pre-annotated data than is typically available for lesser-resourced languages is still required for this approach, and so again it boils down to a decision (costly either way) between crafting enough rules or annotating sufficient training data by hand.
contrasting
train_19871
We have had reasonable success in assuming that capitalised words that were unknown prior to running CG were proper nouns: 80.56% of these assumptions turned out to be correct, although the accuracy for these tokens using the enriched tagset is rather lower (71.23%) due to the fact that in many of these cases it was simply not possible to discern the gender of the proper noun.
the accuracy of our tagging of ambiguous tokens based on their presence in the coverage dictionary (see Section 3.2.3.)
contrasting
train_19872
This approach neither includes any language-specific linguistic information nor requires a large corpus.
they collect all possible words occurring in the same context from the untagged data into a list called a context-based list, thus preventing it from scaling to a large monolingual corpus.
contrasting
train_19873
The key difference between Word2Vec and FastText is that Word2Vec treats each word in corpus as an atomic entity and generates a vector for each word.
fastText treats each word as composed of n-grams, and the word vector is the sum of these n-gram vectors.
contrasting
train_19874
This leads us to the conclusion that there is no difference between the investigated European cultures concerning the directness of the system's output.
there are indeed significant differences on the user's preference of the system's elaborateness.
contrasting
train_19875
This leads us to the conclusion that the gender does not influence the user's preference concerning the directness of a system utterance.
the gender seems to influence the preference concerning the elaborateness.
contrasting
train_19876
These results support the conclusion drawn from the results depicted in Figure 6 that the gender may influence the user's preference concerning the elaborateness of the system utterances.
there are no significant differences between men and women for English, Russian and Spanish, which leads us to the conclusion that it depends on the culture whether there are gender differences concerning the elaborateness.
contrasting
train_19877
The game is suitable for studying deceptive behaviour.
due to its engaging, multi-party nature, it is also suitable for studying multi-party turn-taking phenomena, and therefore provides the possibility to investigate other research directions as well.
contrasting
train_19878
This framework can be used to facilitate the decisions of a wizard, which was the case in the Werewolf scenario.
the framework also provides the grounds necessary to represent a fully autonomous agent instead of the wizard.
contrasting
train_19879
This provides strong support that personal information is a vital component of all chat-oriented dialogue, regardless of culture and participant similarity.
similar to the work by Mitsuda et al.
contrasting
train_19880
Most of these studies dealt with attribution in English.
to the best of our knowledge, there are no empirical studies on annotating attribution in Arabic to generate a gold standard corpus.
contrasting
train_19881
The factors mentioned above provide some preliminary evidence that the content of the ADELE corpus is social and casual, and similar to conversational speech.
the tight central tendency for utterances per conversation is not a feature of casual talk, which tends to be open-ended and thus variable in length.
contrasting
train_19882
Theoretical studies on the Information Structure-prosody interface argue that the content packaged in terms of theme and rheme correlates with the intonation of the corresponding sentence.
there are few empirical studies that support this argument and even fewer resources that promote reproducibility and scalability of experiments.
contrasting
train_19883
The so-called Information Structure-prosody interface stands out as a solid ground for starting to build up such a communicative model in the computational field.
empirical approaches to the Information Structure-prosody interface are scarce, studies on a corpus of more than two speakers are uncommon and the availability of corpora is, so to say, exceptional.
contrasting
train_19884
In excerpt 1, the SC used a similar distal demonstrative aaiu (that sort of), which is generally used as an adjective to strongly project the immediate occurrence of the noun it modifies, as mentioned in 5.1.
are is a demonstrative pronoun, so it can be produced without mentioning the object it refers to when spatial-deictically used.
contrasting
train_19885
(2014) constructed a Japanese corpus with discourse annotations through crowdsourcing.
they did not evaluate the quality of the annotation.
contrasting
train_19886
Let us consider the following examples: In example (4), the Old Annotation disagrees with the Expert Annotation, but the New Annotation agrees with the Expert Annotation presumably because this pair failed the language tests (we can insert "in order to" 2 between (ii) and (iii)).
the New Annotation disagrees with the Expert Annotation in example (5).
contrasting
train_19887
As in Persian there are two pronouns for second person, singular and plural, the type is classified into four groups.
for direct referent, there are six major groups: identity, inferred, quantifier, cross-speech, event and person/number suffix on verb.
contrasting
train_19888
Indeed, some annotators tend to consider more spans of text as argument components than others.
there is a high agreement on spans identified as argumentative by annotators that consider fewer spans of text as argumentative.
contrasting
train_19889
Looking at the confusion matrices of annotations of pairs of annotators, in Figure 2, we find that there are important disagreements between all of the categories.
the category of major claim seems to be the most conflictive: in one of the pairs, annotators did not have any overlap, in the other, they had more proportion of disagreement than of agreement.
contrasting
train_19890
In general, there is some confusion between premises interpreted as facts or as case-law, and also between premises considered case-law or law principles.
these confusions can be easily addressed by a formal delimitation of case-law using shallow textual cues, also refining annotation guidelines.
contrasting
train_19891
In addition, it has to deal with noisy user input containing spelling and grammar errors.
most approaches to SAS consider automatic scoring as a classification task, relying on supervised machine learning (ML) techniques which require manually labeled training data (Basu et al.).
contrasting
train_19892
As the total vocabulary size continues to increase, the vocabulary size index also increases.
during the beginning of children's vocabulary production, the vocabulary commonality index falls once and then rises again.
contrasting
train_19893
These findings may play important roles in further studies of child language development.
the results of this study are limited to 2,688 words.
contrasting
train_19894
The ITA data are compatible with standard Italian as testified by a Google search: nello stesso tempo has 1.5 million occurrences while allo stesso tempo appears 13.3 million times.
the same search on the CORIS corpus (Rossini Favretti 2000) finds 1771 occurrences for nello stesso tempo and just 1908 for allo stesso tempo, so it is necessary to be careful in drawing conclusions from rough data.
contrasting
train_19895
It is expected that use will continue.
approaches with a stronger computational side are also envisaged.
contrasting
train_19896
Revita lies at the intersection of two established areas of research: intelligent tutoring systems (ITS) and computer-assisted language learning (CALL)the project seeks intelligent solutions for language learning.
revita has the potential for enriching the language teaching process as well, because the platform can be used for collecting, mining and analyzing educational data (the system is online at revita.cs.helsinki.fi).
contrasting
train_19897
This ratio is a key factor that has suppressed wider emergence of CALL/ITS beyond the beginner levels.
offering a fixed, limited set of exercises is in conflict with the principles of adaptability of the learning process to the profile of the particular user.
contrasting
train_19898
This is in-line with linguistic expectations given Inuktitut's regular agglutinative morphology; there are few combinations of bigrams delimited by white space.
compare Indonesian, which has a higher key parameter (the number of connections a node has).
contrasting
train_19899
Regarding small world characteristics, it is not difficult to imagine how their char-acteristic properties, including efficient information transfer and properties of regional specialization, could account for universal properties like fast retrieval from the mental lexicon.
more substantive work is needed to show for example that small world properties constrain memory models to facilitate retrieval, e.g.
contrasting