| Column    | Type   | Length / classes      |
|-----------|--------|-----------------------|
| id        | string | 7 to 12 characters    |
| sentence1 | string | 6 to 1.27k characters |
| sentence2 | string | 6 to 926 characters   |
| label     | string | 4 values              |
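Each record in the preview that follows spans four consecutive lines, in the column order given above: id, then sentence1, then sentence2, then the label (every pair shown in this excerpt carries the label "contrasting"). The sketch below shows one way such a flat dump could be parsed back into structured records. It is a minimal, dependency-free Python sketch; the file name `preview.txt`, the function name `parse_records`, and the assumption that the schema table has been stripped from the input are hypothetical rather than part of the dataset's tooling.

```python
from collections import Counter
from typing import Dict, List


def parse_records(lines: List[str]) -> List[Dict[str, str]]:
    """Group a flat preview dump into records of id / sentence1 / sentence2 / label.

    Assumes every record spans exactly four consecutive non-empty lines,
    in the column order given in the schema table above.
    """
    fields = ("id", "sentence1", "sentence2", "label")
    cleaned = [line.strip() for line in lines if line.strip()]
    if len(cleaned) % len(fields) != 0:
        raise ValueError("line count is not a multiple of 4; the dump may be truncated")
    return [
        dict(zip(fields, cleaned[i:i + len(fields)]))
        for i in range(0, len(cleaned), len(fields))
    ]


if __name__ == "__main__":
    # "preview.txt" is a hypothetical file holding the record lines shown below,
    # with the schema table removed.
    with open("preview.txt", encoding="utf-8") as handle:
        records = parse_records(handle.readlines())

    print(len(records), "records parsed")
    print(records[0]["id"], "->", records[0]["label"])  # e.g. train_18900 -> contrasting

    # Quick sanity checks against the schema above: id length and label classes.
    assert all(7 <= len(r["id"]) <= 12 for r in records)
    print(Counter(r["label"] for r in records))  # only "contrasting" appears in this slice
```

Note that while the schema lists four label values, every pair in this preview slice is labeled "contrasting", so the label counter above would report a single class for this excerpt.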
train_18900
Originally every game, each one of which contains between 100 and 900 dialogue turns, was to be split into negotiation dialogues (up to several dozen) in which one person is in charge of the bargaining after a roll of the dice.
we noticed that, often, even if the person in charge of bargaining changed, a conversation that had begun during a previous bargaining episode, continued into the next episode.
contrasting
train_18901
Our approach belongs to the category of example-based dialogue modelling which aims at using a database of semantically indexed dialogue examples to manage dialogue (Lee et al., 2009).
our purpose is a complete automation of this process from the creation of the database to the conversational management process to avoid the need of a costly and time-consuming human intervention.
contrasting
train_18902
As of the repetition of the system (see figure 3), participants reported that the system has been "2: repeating itself" (mode=2; median=3), where 1 is "repeating itself a lot" and 5 is "not repeating itself at all".
these results should be mitigated.
contrasting
train_18903
We have not found significative links between reported enjoyment of the interaction and dialogue features or system capabilities (such as dialogue length and coherence).
we have found a significant correlation with the reported mental state "enthusiasm".
contrasting
train_18904
Dialogue characterization can be achieved by modifying the utterances created by generic dialogue systems using a characterization module.
a trade-off occurs between the significance of characters in dialogue systems and the quality of modified utterances.
contrasting
train_18905
The resource of movie scripts, such as IMSDb, is good enough to generate conversational discourse for dialogue processing.
monolingual movie scripts are not enough for MT which needs a large-scale bilingual dialogue corpus to train and tune translation models.
contrasting
train_18906
The reason is that neither of the reference-based evaluation systems explicitly addresses this aspect of translation quality.
bLEU and Meteor are outperformed by UPF-Cobalt in terms of the correlation with fluency judgments.
contrasting
train_18907
Phrase-based statistical machine translation (PBSMT) is considered as state-of-the-art MT approach whenever sufficiently large parallel (or comparable) datasets for training are available.
for many language pairs and translation directions (English to Portuguese among them) large training datasets only exists for few domains, such as parliamentary discussions (Europarl (Koehn, 2005)) or legal documents (JRC-Acquis corpus (Steinberger et al., 2006)).
contrasting
train_18908
In this regard, our own previous work demonstrated a potential step in the right direction by suggesting that incorporating the output from running WSD as contextual features in a maxent-based transfer model results in a slight improvement in the quality of machine translation (Neale et al., 2015).
to two other approaches we had experimented with -'projecting' word senses into source language input sentences prior to translation either by completely replacing source language lemmas with synset identifiers or appending those synset identifiers onto the source language lemmas -we showed that the reported gains were possible without having to reformulate the word senses themselves nor the algorithms used to retrieve them, as most of the successful marriages of WSD and machine translation reported in the literature had resorted to.
contrasting
train_18909
The user ran a series of scripts on that data.
superCAT is used via a single Java .jar file.
contrasting
train_18910
One would thus expect that proper names fall into the major class neither count nor mass, which is correct for the majority of proper names identified in BECL.
bECL also contains proper names that are members of the other classes, as illustrated in (7).
contrasting
train_18911
As mentioned above, kind-of -readings cannot be in all cases easily distinguished from other shifting types.
roughly 25% of the final examples showing mass-to-count shifting clearly allow this interpretation.
contrasting
train_18912
Most of them use a bag of words model to compute sentence similarity.
the bag of words model is sometimes inadequate for capturing the syntactic and semantic similarities among the sentences, which may affect the qualities of the summaries.
contrasting
train_18913
To model the importance of a word, we include the tf-idf (term frequency -inverse document frequency) values of the dependent and head words to the formula and form the TF-IDF Based Approximate Bigram Kernel (TABK) as defined below: where N (A) is the normalizer function: and sim t is defined as: TABK does not encourage consecutive bigram matches that form a subtree in the dependency trees.
a common subtree in the dependency trees of two sentences means that these sentences contain similar substructures.
contrasting
train_18914
Similar performance with the state-of-the-art submodular functions approach is achieved.
the improvement in the performance of LexRank by our typed dependency tree kernels is not found to be statistically significant.
contrasting
train_18915
Specifically, we look at the verbparticle (V-Prt) split in English and German, such as in the examples below: (1) take V the shoes off P rt (2) macht V schon wieder blau P rt From a syntactic perspective, these two languages behave very differently: in German, the particle position is clausefinal and a particle can be separated from its verb by an embedded clause, while in English a particle can be separated from the verb only by its direct object and a very long split is impossible.
from a MT perspective, these constructions are rather similar as they involve the same type of lexical items (verbs and particles) which must be translated in connection to each other for a correct output.
contrasting
train_18916
Approximately two-thirds of all verbs are aligned well.
looking closely only at the 59 finite (split) V-Prt phrases present in the test suite, we can note that only 22 are properly aligned with their English translation.
contrasting
train_18917
We evaluated the English test suite as translated by a phrase-based system.
since there were no readily available syntax-based systems for the translation between English and French, we instead show a comparison with Google Translate 2 .
contrasting
train_18918
In cases such as 'take (the shoes) off', mentioned above or 'make (these things) up', the literal translation does not make sense because the verb phrase meaning is non-compositional.
in cases such as 'bring in' the meaning of the simple verb 'bring' is very similar to the meaning of the bigger phrase and the literal verb-only translation is acceptable.
contrasting
train_18919
When an author uses the weak word long in a context like long time period, readers differ about what long in this context actually means.
not every use of a weak word lemma is necessarily imprecise or ambiguous.
contrasting
train_18920
The sentences in C-WEP only contain a small number of typos, most errors are "grammatical errors."
both checkers mark a high number of "spelling issues," which are listed in table 1.
contrasting
train_18921
The versions could be ranked by edit distance measured in words; ranking could also take into account the position of editing operationse.g., edits later in the sentence would result in a higher rank.
an even better way would be to prevent errors in the first place by supporting the author during editing.
contrasting
train_18922
As for identifying the error type, there are too many words including the duplicate nicht, and the error involves both the POS verb and preposition.
depending on the target hypothesis, the error is only in the verb or only in the preposition.
contrasting
train_18923
When the cooccurrence-based metric is used we observe that the model lacks robustness, and performance drops especially for large number of seeds.
when RR is used performance is robust and also the correlation curve is quite close to the one observed when context-based similarities are used.
contrasting
train_18924
The so called sentence-level sentiment classification determines whether each sentence expresses a positive or negative opinion, or it is neutral from the polarity point of view.
we should note that many sentences can imply more than one opinion on more than one target.
contrasting
train_18925
Also, we supposed at the beginning of the annotation that words that express negation can mostly be covered by negation words like nem or sem 'not', the negative forms of the copula like nincs or sincs 'is not' and some postpositions like nélkül 'without'.
it turned out that negation is expressed by a wider variety of words and phrases than expected, for instance: elillan 'disappear' nélkülöz 'miss' bizarr lenne aztállítani 'it would be strange to say' helyett 'instead' semmi köze sincs 'it has nothing to do with' nulla 'zero' lespórol 'spare' Altogether, there are 3516 negation words in the corpus, including 2587 adverbs, 468 verbs, 145 pronouns and 93 conjunctions.
contrasting
train_18926
On the basis of the results we concluded that the two types of intensifiers together occur with the same frequency in positive (6693:2706) and negative (8053:3347) fragments, for instance, nagyon jó 'very good' and nagyon rossz 'very bad'.
frequency of intensifiers with decreasing semantic content is not the same in the two types of sentiment fragments: they occur much more often in negative polarity fragments (8053:779) than in positive ones (6693:301).
contrasting
train_18927
So far, sentiment analysis has mainly focused on the detection of explicit opinions.
recently the relevance of implicit opinions has received broader attention within the field.
contrasting
train_18928
There may be contexts in which the theme's creation is more salient, for example, when burning your initials into a tree.
the correct placing of the theme in the target location may be more prominent in situations when considering whether bedrock is suitable for driving a tunnel into.
contrasting
train_18929
At this time, we limit ourselves to annotating GermaNet synsets relative to what is provided in the syntactic valence frames that come with the example sentences.
we still want to distinguish between predicates that are strictly stative and therefore do not involve a causally responsible agent and predicates that do allow a causal agent but which may not be realized in the provided valence frames.
contrasting
train_18930
We could thus in theory collapse all functors sharing the same structure into a single more abstract functor.
we refrain from doing so because the functor labels convey additional information that is useful for other purposes.
contrasting
train_18931
The second most common response was neutral evaluation, which may seem somewhat surprising if causal forces are usually evaluated in the same way as the resulting state.
recall that what functors provide are defeasible implicatures rather than firm entailments.
contrasting
train_18932
That is, the main clause (predicate) is non-factual as well as the subclause (predicate).
in The minister forces the president to cheat both are factual.
contrasting
train_18933
How Description Logics can be used in order to identify so-called polarity conflicts is described in Klenner (2015).
the relations between the referents and again, the factuality of situations are not part of this model.
contrasting
train_18934
The impression is that, on the one hand, the participation on Twitter has been more critical, free from the constraints of the debate and, especially, individual.
the web platform provided for the online consultation contributions collected from groups of individuals, which collectively took the responsibility to synthesize and publish the contents.
contrasting
train_18935
Indeed, among 5,573 tweets in agreement by topic, the 80% were labeled with the generic topic tag.
the distribution of topics in the two corpora, TW-BS and WEB-BS is different, as can be observed in Figure 4.
contrasting
train_18936
And many useful and high-quality semantic resources have been built, such as FrameNet (Baker et al., 1998), PropBank (Palmer et al., 2005), Academia Sinica Treebank (Huang et al., 2000), NomBank (Meyers et al., 2004) , VerbNet (Schuler, 2005), HowNet (Dong and Dong, 2003), etc.
each of them only serves its own purpose.
contrasting
train_18937
However it has not been widely used in SMT tasks.
propBank has been widely used in both SRL and SMT since CoNLL-2005 (Carreras and Màrquez, 2005).
contrasting
train_18938
Intuitively, because the NN features give a measure of the bilingual similarity of a sentence pair, they could be helpful for this task.
this assumption has not been verified previously.
contrasting
train_18939
Most of the increased classif ied well sentence pairs (4733→4869) belong to this type, which significantly improved the recall.
low NN feature values also could be harmful.
contrasting
train_18940
(Dabre et al., 2015) used the NN features for a pivot-based SMT system for dictionary construction.
we score the sentence pairs of with a neural translation model, and use the scores as NN features for parallel sentence extraction from comparable corpora.
contrasting
train_18941
PubMed is the largest database for scientific publications in biomedicine, and has been extensively used in many biomedical natural language processing applications.
only titles are available in more than one language in PubMed (Wu et al., 2011).
contrasting
train_18942
This is certainly a good feature of our corpus, given that previous parallel corpora of biomedical publication were restricted to MEDLINE titles (Kors et al., 2015;Jimeno Yepes et al., 2013).
the monolingual datasets are mainly composed of titles, due to the same reason stated above, i.e., the existence of many articles whose titles were been translated to other languages.
contrasting
train_18943
experiments provide a positive evaluation of the corpus quality as a whole.
in our manual review of sentence alignment, we found some examples where the language quality of the corpus was lacking.
contrasting
train_18944
Many sources of bitexts have been identified; some examples are: • texts from multilingual institutions, such as the Hansards corpus (Roukos et al., 1995) or the Europarl corpus (Koehn, 2005); • translations of software interfaces and documentation, such as KDE4 and OpenOffice (Tiedemann, 2009); or • news translated into different languages, such as the SETimes corpus (Ljubešić, 2009), or the News Commentaries corpus (Bojar et al., 2013).
one of the most obvious sources for collecting parallel data is the Internet.
contrasting
train_18945
In the harder translation direction, English→Croatian, the newly built SMT systems outperform two of the reference systems, Bing and Yandex, while we do not observe a substantial decrease in the quality of the MT system built solely on the hrenWaC parallel corpus compared to the system built on all the training corpora.
in the opposite direction, Croatian→English, there is a significant difference in the performance achieved by both newly built MT systems: the system using all the training data achieves a 2.22 BLEU points increase and a 2.12 TER points decrease when compared to that trained only on the hrenWaC corpus.
contrasting
train_18946
As can be seen in these results, the SMT systems obtained are not able to outperform all the third-party MT systems used for evaluation.
it is worth mentioning that, given that the data used for building these models was obtained in a fully automatic fashion by crawling TLDs, the results are quite positive, since they show that it is possible to obtain an MT system comparable to some of the most used online MT systems by only running an automatic crawling process for a few days, with the only explicit input being a TLD to be crawled and a small bilingual English-Croatian lexicon.
contrasting
train_18947
For example, people seldom engage with surveys that require them to report their height and weight.
such data is crucial for training automated public health tools, such as algorithms that detect risk for (preventable) type 2 diabetes mellitus (T2DM, henceforth diabetes).
contrasting
train_18948
In all, we generated 33 questions that cover all decision nodes in the random forest classifier.
when taking the quiz, each individual participant answered between 12 and 24 questions, depending on their answers and the corresponding traversal of the decision trees.
contrasting
train_18949
Overall we find a strong correlation of 0.82; the correlation remains strong, 0.79, if tweets are counted a half hour before broadcast time.
the two most popular TV shows account for most of the positive effect; if we leave out the single and second most popular TV shows, the correlation drops to being moderate to weak.
contrasting
train_18950
These results can be interpreted as implying that estimated TV ratings could already be publicized at the start of the show.
the high correlation drops to medium or low correlation when the single or two most watched shows are left out.
contrasting
train_18951
A popular approach used by recommendation systems is collaborative filtering for identifying users similar to a target user based on their purchase history, and then recommending products that similar users have already bought but the target user has not (Schafer et al., 1999;Sarwar et al., 2000).
in this approach, it is required that a target user has bought something before.
contrasting
train_18952
(2009) implemented a content-based movie recommender system capable of "cold start" by using preference tags that customers labeled movies within in a movie review service.
to these works, our goal is to infer the future purchase behavior of a customer who is interested in a product.
contrasting
train_18953
In Table 5 we observe that only a small percentage of users who indicate that they want a product by tweeting one of the phrases in Table 1 actually bought a product.
we note that many of the users expressing a buy phrase did buy a product.
contrasting
train_18954
(2015) reported their results using cross-validation on the Test-Stanford data set.
we used the Test-Stanford data set as a test set only and tuned our parameters using our manually created development set (Dev-Stanford).
contrasting
train_18955
This variation is one of the driving factors behind language change.
investigating language variation is a complex undertaking: the more factors we want to consider, the more data we need.
contrasting
train_18956
Traditional, qualitative methods are often not designed to handle more than a handful of data points (albeit in depth).
computational, quantitative methods offer the possibility to explore language variation at an unprecedented scale.
contrasting
train_18957
The most wellknown one is certainly the Google Ngram corpus (Michel et al., 2011), which enables lexical search over enormous amounts of text.
it does not include demographic factors, only time of publication, and has recently been criticized for inherent biases (Pechenick et al., 2015).
contrasting
train_18958
Tweet analysis has led to a large number of studies in many domains such as ideology prediction in Information Sciences (Djemili et al., 2014), spam detection in Security (Yamasaki, 2011), dialog analysis in Linguistics (Boyd et al., 2010), and natural disaster anticipation in Emergency (Gelernter and Mushegian, 2011;Sakaki et al., 2013).
complementary efforts have been made in Social Sciences and Digital Humanities to develop tweet classifications (Dann, 2010;Riemer and Richter, 2010;Shiri and Rathi, 2013;Stvilia and Gibradze, 2014 few studies aim at classifying tweets according to communication classes.
contrasting
train_18959
In the former, words occurrences are assigned sense labels from a predefined sense inventory.
sense discrimination (Schütze, 1998) addresses a simpler task of differentiating among the different uses of a word, without a reference to a sense inventory.
contrasting
train_18960
Context clustering exploits the distributional hypothesis (Harris, 1954) to group together the similar usages of a word.
word clustering groups together different but semantically similar words that pertain to a specific sense (e.g., money, loan, and finance for the financial sense of bank).
contrasting
train_18961
The shape of a co-occurrence graph crucially depends on the size of the corpus: if the corpus is too small, many relations will not be present, and rare senses will not be captured.
a large corpus may yield a noisy co-occurrence graph.
contrasting
train_18962
1 The subgraph consists of all first-and second-degree neighbors of the target word.
to keep the size of the subgraph manageable, we increment the edge weight threshold as we move away from the target word.
contrasting
train_18963
We described an intrinsic evaluation setup, in which Chinese Whispers algorithm outperformed Markov Clustering.
in word sense disambiguation (WSD) evaluation, Markov Clustering emerged as the winner, with a rather good accuracy of about 75%.
contrasting
train_18964
(2012), and Henrich and Hinrichs (2014).
none of these provide semantic role annotation.
contrasting
train_18965
The VerbNet role hierarchy follows two main principles: lower-level roles are more specific, and restricted by semantic properties and constraints; consequently, roles in a parent-child relation tend to not co-occur.
the hierarchy contains multiple inheritance links and is therefore difficult to conceptualize (cf.
contrasting
train_18966
The changes we propose for the original VerbNet role inventory in Figure 1 are kept as small as possible.
our aim is to strengthen the semantic principles that underlie the role hierarchy in a more systematic way.
contrasting
train_18967
The VerbNet hierarchy was broken down to a flat list of roles, following the model of SemAF-SR. Further, we merged some of the roles to 'multi-roles' on the assumption that the reduced and coarser role inventory makes it easier to distinguish roles.
this assumption was not confirmed, as we obtained low IAA for RI-I.
contrasting
train_18968
The frequent role sets shown in Table 6 typically occur with many predicate types.
we also observe that alternating role sets can be assigned to a specific predicate sense, as illustrated in Table 7.
contrasting
train_18969
It was seen that most of the time the literal senses of a word were placed in ranks above the metaphorical or figurative uses.
at times the ranking order did not adhere to the above mentioned criterion.
contrasting
train_18970
Automatic subtitling was born in response to a high subtitling demand, as a more productive alternative that enabled subtitling in challenging situations, such as live broadcasts, where traditional subtitling was not directly applicable.
at present, automatic subtitling is yet not capable of creating subtitles that equal human quality and, thus, its focus is on facilitating the generation or post-editing of automatic subtitles by professional subtitlers, both in live and pre-recorded settings.
contrasting
train_18971
Its precision reaches 97-98%, the generated delay is low and the severity of errors medium.
learning this technique requires a long time (around three years) and the cost is high (Romero-Fresco 2011).
contrasting
train_18972
The latter tech-Listing 1: JSON Metadata Format nique is relevant for the exploration of photo archives based on image caption text.
in order to develop and deploy these techniques, a large dataset is needed.
contrasting
train_18973
The image datasets provide textual descriptions written by multiple human annotators per image, and are often used during evaluation as gold-standard reference descriptions against a system generated candidate description.
little work has been done to analyse or evaluate the gold-standard descriptions against themselves, i.e.
contrasting
train_18974
As mentioned, noisy, largescale datasets with user-generated captions exist for news images (Berg et al., 2004;Feng and Lapata, 2008) and Flickr (Ordonez et al., 2011;Chen et al., 2015;Thomee et al., 2015).
in this paper, we are mainly interested in literal descriptions of what is depicted in the image, rather than non-literal or non-visual descriptions that require significant inference from additional knowledge about the image context.
contrasting
train_18975
ROUGE-W1.2's absolute scores are lower than ROUGE-L as the measure penalises non-contiguous common subsequences.
to BLEU, ROUGE is not sensitive to number of descriptions per image, as it performs averaging over all reference descriptions.
contrasting
train_18976
In ASL, hand, arm, upper body, and head movement conveys important linguistic information of various kinds, as do facial expressions.
since moving body parts are more relevant for recognition than stationary ones, we extract relevant features solely from image regions with significant motion, without restricting attention to specific body parts.
contrasting
train_18977
The other most influential taxonomy is proposed by Spiegel-Rösing (1977), with 13 categories.
80% of the citation purposes could be classified in one category: Cited source substantiates a statement of assumption, or points to further information.
contrasting
train_18978
The lead-based baseline outperforms all the methods from the literature that we included.
our framework outperforms this baseline.
contrasting
train_18979
Based on the manual evaluation results (Tables 3, 4 and 5), our framework outperforms the commercial system for the Popular and Random Partner datasets.
the same trend was not reflected for the Random Other dataset.
contrasting
train_18980
the issue of which was the best film of 2015?.
issues are rarely explicitly articulated in reader comments, (or the news article, see e.g.
contrasting
train_18981
If the model is not consistent, it will be penalized.
the penalty must be applied in such a way as to give it maximal benefit of the doubt.
contrasting
train_18982
Prefixed words cannot be called compounds in the strict sense of the term because prefixes are not independent lexical units.
some prefixes are very close to the neoclassical roots, compare prefix biwith neoclassical root uniaccording to (Béchade, 1992).
contrasting
train_18983
The UniMorph Schema includes the features necessary to distinguish all these categories, which are marked by surface contrasts in each language and are not decomposed further in any natural language.
5 were a language to be discovered that distinguished a blended dual-trial (2/3) from singular, paucal, greater paucal, and plural, the UniMorph Schema would combine the minimal dual (exactly 2) and trial (exactly 3) features together additively to annotate the blended category (as DU+TRI).
contrasting
train_18984
The average number of inflected forms collected per paradigm differs across systems, as Liebeck and Conrad (2015) only considered forms which fit certain Wiktionary templates and Durrett and DeNero (2013) extracted only paradigms for which they could obtain a fixed set of forms (their software requires all training paradigms to be equal in size).
our system extracts all paradigms, regardless of completeness.
contrasting
train_18985
The intention sometimes can be fulfilled in one single domain (i.e., an app).
it is possible to span multiple domains and requires information coordination among these domains.
contrasting
train_18986
In other word, user can mentally create his own virtual app on top of existing ones.
although intelligent agents can be configured by developers to passively support (limited) types of cross-domain interactions, they are not capable of actively managing apps to satisfy a user's potentially complex intentions, because they do not consider the repeated execution of activities in pursuit of user intentions.
contrasting
train_18987
By contrast, spoken language can effectively convey the user's high-level and complex intentions to a device (e.g., Apple Siri, Amazon Alexa, Google Now and Microsoft Cortana).
speech presents the challenges about 1) understanding both at the level of individual apps and at the level of tasks that span apps; and 2) communicating a task-level functionality between user and agent.
contrasting
train_18988
We were informed by participants that they made use of this feature.
we did not solicit further information about frequency of use or categories of events.
contrasting
train_18989
As shown in Figure 3, SETTINGS would deal with U 1 to setup bluetooth connection and MUSIC would take care of U 2 and U 3 .
sometimes users produce utterances which may involve several apps, e.g., "Boost my phone so I can play [game] spiderman" requires CLEANMASTER to clear the RAM and the game SPIDERMAN.
contrasting
train_18990
For example, the annotation of the following utterance could depend on the context of the dialogue: OK, let's move to pension fund If this utterance is a reply to some Offer, then it would most likely be annotated as Accept, referring to the previous Offer, along with Query(Offer=Pension Fund).
if vaskonov/negochat_guidelines/blob/master/ guidelines.pdf 8 Two labels are considered to be in agreement only if all of their corresponding components are identical.
contrasting
train_18991
As we discussed in the metrics section, it is difficult to interpret the performance of these detectors from the numeric values.
it is interesting to note that the best performing team in the classification-related metrics did not perform as well in these metrics.
contrasting
train_18992
For example, pression artérielleélevée ('high blood pressure', input) is mapped to hypertension (clinical record) thanks to their common CUI (C0020538).
not all terms are recorded in the UMLS.
contrasting
train_18993
In developing our guidelines, we took the earlier MSRP alignment guidelines into account where they were consistent with the Edinburgh ones.
contrary to the MSRP guidelines-where annotators were not told whether two sentences were supposed to be in an entailment relationship-we considered it essential for the alignment decisions to be consistent with the overall decision as to whether the two questions were taken to be paraphrases in the dialogue context.
contrasting
train_18994
On the one hand, given the objective of the study, a fine grained study of the form -function relationship requires a fine grained functional analysis, too.
a part of the literature clearly gave up annotating levels of communication in feedback, as for example Bunt et al.
contrasting
train_18995
In the graph we can see that there were a number of occurrences of pastel red (correctly written short vowel markation) so it seems most children have mastered this skill in the post test.
we can also see that about three kids deserver further study.
contrasting
train_18996
be.1sg in.the beach 'I am in the beach'.
more omissions with adjectival predicates than with nominal predicates were observable, as opposed to what happened at the elementary level: (9) A praia Ø fatástica, ondas Ø boas e pessoas Ø alegres e simpáticas.
contrasting
train_18997
Chinese students are clearly the ones producing more errors.
when compared to the English students, we see that the increase in errors is not proportional to the increase in number of words in the corpus.
contrasting
train_18998
Performing the chi-square test on the null hypothesis that the number of spelling errors with a Damerau-Levenshtein distance up to 2 and over two, and the clinical status of a subject are independent, we receive a p-value of 0.8783, because of which we can not reject the null hypothesis.
there is a visible difference between the percentage of spelling errors of distance 1 and 2.
contrasting
train_18999
One has, of course, bear in mind that among subjects with language disorders, these errors will still occur five times more frequently than among healthy participants, making their texts after spelling correction still less accurate.
there is statistically significant difference in the number of spelling errors of distance 1 and of distance 2 between the two groups.
contrasting