Schema (one record per entry):
  id: string (length 7 to 12)
  sentence1: string (length 6 to 1.27k)
  sentence2: string (length 6 to 926)
  label: string (4 classes)
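The schema above describes a flat four-field record layout (id, sentence1, sentence2, label), with one field per line in the dump. As a minimal, hypothetical sketch of how such a dump could be read, the snippet below groups every four non-empty lines into one record; the sample text is an illustrative stand-in, not actual dataset content.

```python
# Hypothetical sketch: parse the 4-line record format used in this dump
# (id, sentence1, sentence2, label per record) into dictionaries.
# The SAMPLE text is an illustrative stand-in, not real dataset content.

SAMPLE = """\
train_00000
The model performs well on seen data.
it generalizes poorly to new domains.
contrasting
train_00001
The corpus is small.
it is carefully annotated.
contrasting
"""

def parse_records(text):
    # Drop blank lines, then group every 4 consecutive lines into a record.
    lines = [ln for ln in text.splitlines() if ln.strip()]
    fields = ("id", "sentence1", "sentence2", "label")
    return [dict(zip(fields, lines[i:i + 4])) for i in range(0, len(lines), 4)]

records = parse_records(SAMPLE)
print(len(records), records[0]["label"])  # → 2 contrasting
```

Note that the second sentence of each pair conventionally starts lowercase, because the contrastive connective (e.g., "However,") was removed when the pair was extracted.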
train_19600
These are only two examples and there are others with ambiguity that is more debatable.
we think this highlights the advantage of having positive, negative, and neutral annotations, so that we may gain more insight into the ambiguity, e.g., mixed sentiment with equal percentages of positive and negative, or weak sentiment with a low percentage of positive (resp.
contrasting
train_19601
The best model for relatedness scores is the skip gram lemma based model with negative sampling, while for the similarity, the best model is the lemma based CBOW model with negative sampling and the size of window equal to 1.
both models are also nearly the best in the other task.
contrasting
train_19602
Many online news outlets apply the same journalistic principles that have been in use for newspapers for decades, especially fact-checking.
there are also websites whose primary mission is not to provide high-quality journalistic reporting but, basically, any type of content as long as it produces as many clicks as possible, generating revenue through online advertisements.
contrasting
train_19603
Similarly, custom solutions exist that convert the results of a specific NLP pipeline into a pre-defined knowledge base.
so far there existed no generic solution for populating a knowledge base with the results from a text mining pipeline.
contrasting
train_19604
Generally, exposing implementation details in URIs is discouraged (Wood et al., 2014).
re-running NLP pipelines (and therefore re-generating triples) is an extremely common occurrence in language engineering, for example after improving a component or a machine learning model, or after experimenting with different parameters.
contrasting
train_19605
NLP2RDF 15 is another approach for converting NLP tool output to RDF (Hellmann et al., 2013).
the goals of NLP2RDF are largely orthogonal to the ones from LOD-eXporter: NLP2RDF focuses on the representation of NLP data (such as, words, sentences, strings), with the primary goal of providing an exchange format between different NLP tools.
contrasting
train_19606
datasets for geographical and language data.
we did not reuse these because it requires a large amount of manual mapping effort to retrieve the ca.
contrasting
train_19607
Further, the bibliographic references that correspond to a concept can be selected as another view within the concept data box while the term data above stays visible.
a detailed search interface for the bibliographic resources has not been implemented.
contrasting
train_19608
19 Ur III administrative data are comparably easy in terms of morphosyntax, since morphology is often not expressed in writing (though the information may be inferred from the context).
this also means that morphology is somewhat uninformative, so that effective querying and searching of these data requires structural analysis.
contrasting
train_19609
Their model formalizes three different collections in a coherent model: the CDLI catalogue, the ObjMythArcheo database (a corpus of archaeological objects related to mythological iconography), and Bib- (from the CRM documentation, the superproperty crm:P128 carries would suit too; see http://www.cidoc-crm.org/html/5.0.4/cidoc-crm.html#P128)
hence, it must be a physical thing, in our case an E84 Information Carrier; if no ?composite is found, then a separate crm:E34 Inscription must be created.
contrasting
train_19610
For the time being, KorAP does not offer facilities for querying the speech data, so we continue to use our own interface.
once it becomes available, all CoRoLa data (text and speech) will be searchable in a single unified environment (KorAP).
contrasting
train_19611
Some results will be transferred to the industry for further development and industrialization and their distribution will be restricted.
most outcomes of the mentioned projects will be freely accessible to the interested public.
contrasting
train_19612
A general rule restricts relative markers to nonabsolutive non-finite forms.
a corpus search reveals that the situation regarding absolutives is more complicated: In Abkhaz, absolutives are used in serial verb constructions, where the absolutive and the verb following it are part of the same event.
contrasting
train_19613
Note that, some idioms have multiple idiomatic equivalents in other languages while others have none.
some idioms have definitions with almost equivalent syntactic structures while the semantics of the definitions are very different.
contrasting
train_19614
As factuality tags, we use factuality values proposed in FactBank (Sauri, 2008).
the value Uu is the sole exception.
contrasting
train_19615
To be specific, the noun visit takes two semantic arguments, the visitor and the visited entity, as in 'the visit of the Queen to the Prime Minister'.
when used in to pay a visit, the semantic argument, visitor, is realized as the subject of to pay (The Queen paid a visit to the Prime Minister), and cannot be realized at the same time within the NP headed by visit (*The Queen paid a visit of the Lady to the Prime Minister). Paul transmitted the advice to his sister + Peter's advice → Paul transmitted Peter's advice to his sister.
contrasting
train_19616
(2014) constructs an MWE-annotated corpus based on English Web Treebank (Bies et al., 2012).
the number of VMWE occurrences (1,444) and types (1,155) in their corpus is relatively small-scale.
contrasting
train_19617
Konbitzul was originally created as an NLP-applicable resource.
with its user-friendly interface, it can simply be used as a phraseological dictionary as well.
contrasting
train_19618
wind turbine) are very common in English specialized texts (Nakov, 2013).
all too frequently they show similar external forms but encode different semantic relations because of noun packing.
contrasting
train_19619
Once again, this figure was 100% when the PROCESS category was further refined to LOSS PROCESS and ADDITION PROCESS (see Table 13).
there were some exceptions, such as volcanic eruption, which did not activate the relation has_patient, since volcano cannot be considered the patient of eruption but its agent.
contrasting
train_19620
As can be observed in Table 8, the values for some of the features, such as the coordination of passivization ratios are rather similar in both corpora.
each corpus also shows its own morpho-syntactic profile.
contrasting
train_19621
Figure 6 describes the deep convolutional encoder architecture first used by a Facebook machine translation system (Gehring et al., 2017).
to the RNN, convolutions do not provide an explicit way to encode the position of a word in the input sequence.
contrasting
train_19622
As a result, the words associated with a literal target will have larger projection onto a target σvn.
the projections of words associated with an idiomatic target VNC onto σvn should have a smaller value.
contrasting
train_19623
The resulting model in (op Vollenbroek et al., 2016) was the overall winner of the PAN-2016 crossgenre author profiling task (Rosso et al., 2016), obtaining 0.53 joint accuracy (i.e., age and gender prediction put together) when combining results from English, Spanish and Dutch texts.
the authors pointed out that their present (cross-genre) results were considerably lower than those obtained in previous (single-genre) author profiling tasks at the PAN-CLEF series.
contrasting
train_19624
(Hasanaj, 2012) also attempts an evaluation of the two tagsets using a maximum entropy tagger and a perceptron tagger.
his gold standard corpora consist only of 263 tokens for the basic tagset and 641 tokens for the large tagset.
contrasting
train_19625
Using the full tagset, the best tagger achieves 91.00% accuracy -a good six points less than the state of the art for languages like English, French or German.
we have to take into consideration that our corpus is orders of magnitude smaller than for example the Wall Street Journal part of the Penn Treebank and that it is at the same time much more heterogeneous.
contrasting
train_19626
Over the past 25 years, the Linguistic Data Consortium (LDC) has been the main provider of annotated data and annotation standards.
there are two main concerns with the LDC data (Marcus et al., 1995; Maamouri et al., 2003; Lander, 2005) that pose some questions about its reliability and relevance to contemporary language.
contrasting
train_19627
UniMorph resources are rarely original resources, but rather extracted from existing material, such as Wiktionary (Kirov et al., 2016, first-generation UniMorph inventories) and other dictionaries, bootstrapped from morpheme inventories or corpora (as described here), or generated by rule-based morphologies.
this conversion-based approach means that the segmentation and annotation principles of the underlying resource tend to be preserved.
contrasting
train_19628
mWda-lli mountain-ABL 'from the mountain'.
it can also attach to three other cases.
contrasting
train_19629
If that were true, multiple cases could always be resolved unambiguously, in that any non-locative case is by definition the inherent case.
the examples given above involve several non-locative cases or even a reversal of this pattern, so that an explicit mechanism to express multiple case marking is required.
contrasting
train_19630
While the text data from micro-blog platforms like Twitter are very convenient and easy to collect, the fact that they are limited in the number of characters (140 for a tweet) distinguishes them from daily conversational text and therefore limits their use in real-life settings.
the multiclass scheme has its own limitations.
contrasting
train_19631
If one emotion exists in one word of the input then the corresponding tendency feature will be set to 1 and 0 otherwise.
solely relying on text would cause problems for an emotion detection system.
contrasting
train_19632
As we can observe, our simple system performs worse than human annotators by a considerable margin.
in comparison with the inter-annotator agreement F1-scores of other corpora, such as the Twitter Emotion Corpus (where the score is 43.7), we see the potential of the corpus: it is reliable, and the performance of an emotion analysis system on it is good.
contrasting
train_19633
For example, in "rien à voir avec le seigneur des anneaux, carrément passionnant" [(this film has) nothing to do with the Lord of the Rings, (that is) downright fascinating], (1) the phrase "downright fascinating" indicates a very positive opinion, but it is applied to another movie: "the Lord of the Rings".
the full phrase indicates a negative feeling about the movie.
contrasting
train_19634
The use of lexicons has sometimes been straightforward, where the mere presence of a sentiment word determines a given polarity.
negation and intensification can alter the valence or polar-ity of that word.
contrasting
train_19635
After many years of research in sentiment analysis, the types of documents that have been dealt with in the field are varied.
two types stand out among the rest: user reviews of products or services, and microblogging short texts, particularly Twitter data.
contrasting
train_19636
Such unstructured textual information contains valuable information for marketing analysis.
customer loyalty has long been a topic of high interest in both academia and industry (Griffin, 2002;Asuncion et al., 2004;Scriosteanu and Popescu, 2010), and is critical for business profitability.
contrasting
train_19637
Tweets are usually multilingual in nature and of varying quality.
machine translation (MT) of Twitter data is a challenging task, especially due to the following two reasons: (i) tweets are informal in nature (i.e., they violate linguistic norms), and (ii) parallel resources for Twitter data are scarcely available on the Internet.
contrasting
train_19638
Social media has evolved from being cyber world geek buzz to a massive platform for businesses, entrepreneurs, professionals and organizations that seek greater recognition and identification at a very economical price.
business information sharing is not the only aspect of web services, e.g.
contrasting
train_19639
Recently, Twitter has gained massive popularity and the number of Twitter users has increased significantly during the last few years.
twitter users are often encouraged to write informal texts due to the 140-character limitation.
contrasting
train_19640
It can be observed that when only the Twitter data is used, better BLEU, METEOR and TER scores are obtained without using the sentiment classification ("Sent_Clas") approach ('Twitter (Baseline)').
the sentiment preservation score ("Sent_Pres") is higher when using the sentiment classification (72.66%) method whereas switching it off causes the score to be reduced to 66.66%.
contrasting
train_19641
Initially we restricted the sentiment classes to only negative, neutral and positive.
in the future, it can easily be extended with other sentiment classes such as strong negative, strong positive, etc.
contrasting
train_19642
Given a sarcastic text, sarcasm target identification is the task of extracting the subset of words that indicate the target of ridicule.
in some cases, the target of ridicule may not be present among the words.
contrasting
train_19643
In this sentence, the words 'chefs', 'cooktop' and 'phone' are candidate phrases.
only the 'phone' is being ridiculed in this sentence.
contrasting
train_19644
In this work, we have focused on annotating a corpus of general student feedback, which contains more noise and complex data compared to feedback collected through reflective prompts.
unlike traditional reflective prompts, general feedback contains more useful information.
contrasting
train_19645
If certain statistical properties are ensured about the appearance of elements in the tuples, then the number of times an element is chosen as most positive minus the number of times it is chosen as most negative can be used as a sentiment score.
we experienced that if the BWS annotation is performed by directly sampling from the lexicon (or from a general corpus, for that matter), most 4-tuples would not contain any items with a clear non-neutral polarity, let alone one most positive and one most negative item.
contrasting
train_19646
On the one hand, during the training phase (phone-level alignment, creation of acoustic models, henceforth AMs), the phonetic transcription provides information about where to expect recurring patterns in the acoustic signal, which will be abstracted away and generalized in the acoustic models for the individual (tri)phones.
during decoding, it mediates between (to borrow a neurolinguistic analogy) the top-down (predictive) processes based on the linguistic knowledge (the language models, henceforth LMs), and the bottom-up (recognitive) processes based on spotting acoustic patterns in the incoming sound data.
contrasting
train_19647
On one hand, if a user's question does not exist in the forum, a new post is created so that other users can contribute and provide answers or comments.
if similar or related questions already exist in the forum, the system should be able to detect them and redirect the user towards the corresponding threads.
contrasting
train_19648
While this hypothesis can naturally be discussed and criticized, we did an extensive manual verification and found a strong correlation between users' score votes and correct answers.
the voting score can be adjusted to a certain threshold that guarantees reliable answers.
contrasting
train_19649
This might be expected, since answers do not share only lexical similarities with their corresponding questions.
using a mapping matrix to learn embedding regularities has shown interesting results (MappSent approach).
contrasting
train_19650
might need only take the form of explaining that one of bark's primary functions is to provide protection for the tree.
for a domain novice, such as an elementary science student, this explanation: "Bark is a protective covering around the trunk and branches of a tree."
contrasting
train_19651
If an inference method requires 2 shared rows, the corpus requirements would increase to approximately 2,500 questions to answer 80% of questions.
if an inference method requires 3 or more rows, this likely would not be possible without a corpus of at least 20,000 questions and explanations -a substantial undertaking.
contrasting
train_19652
Since the FrameNet frames are designed for general purpose, the annotated frame are not necessarily useful for the database search task.
we annotated the information that is relevant for constructing database queries, regardless of whether it corresponds to a database field or not.
contrasting
train_19653
supervised data, to achieve optimal performance.
because IQA is a challenging task that handles complex input and output information, the cost of naive manual annotation can be prohibitive.
contrasting
train_19654
Roughly speaking, the first three approaches aim to obtain better representations or decision boundaries using unlabeled data.
it is difficult to directly apply these methods to multi-input tasks because multi-modal data often has complex structures, making it unclear how to improve representations or decision boundaries efficiently.
contrasting
train_19655
From a theoretical viewpoint, it is not very surprising that our method outperforms the baselines since our method uses additional information (captions) to generate questions.
considering that we can now easily obtain image-caption pairs from the web, we believe that our method is a practically promising approach to improve an IQA model with less human effort.
contrasting
train_19656
However, such existing training resources for these tasks mostly support only English.
we study semi-automated creation of the Korean Question Answering Dataset (K-QuAD), by using automatically translated SQuAD and a QA system bootstrapped on a small QA pair set.
contrasting
train_19657
As the advent of deep learning techniques makes algorithms data-hungry, building resources of a significant size is a critical task for question answering (QA) systems, as demonstrated not only by SQuAD, but also by ImageNet for object recognition (Deng et al., ).
existing datasets for these tasks are mostly built on English.
contrasting
train_19658
Most of the existing works focus on retrieving answers in the same language in which the questions are posed.
with the rapid growth of multilingual contents on the web, it is necessary to build an automated system that retrieves information from the documents written in multiple languages.
contrasting
train_19659
The direct comparison of our dataset with the Cross-Language Evaluation Forum (CLEF) datasets (Pamela et al., 2010) is not possible because we have created question-answer pairs in both languages (MQA), whereas the CLEF datasets have the question and answer pairs in different languages.
we have shown the comparison in various terms as shown in Table 6.
contrasting
train_19660
(2016) give an important contribution to the understanding of HS in relation to other social phenomena: by collecting geo-tagged data from Twitter, the authors create a Hate Map that locates the breeding grounds of five different types of hateful content (Homophobic, Racist, Sexist, Anti-semitic and Against disability).
contrary to the approach we have followed in this work, the keywords used for filtering the data consisted of swear words frequently used against the five HS targets.
contrasting
train_19661
Needless to say, this kind of approach has several drawbacks, strong disagreement being one of them.
in this work we rather look at the latter point as a signal (in Aroyo and Welty's (2015) words) of the inherent complexity of the task, given in particular by the potential ambiguity of the data at hand, as well as of the possible solving strategies that can be put forward.
contrasting
train_19662
Such findings are quite consistent with our interpretation and definition of implicit incitement as typically based on, and promoting, prejudices, discrimination and hatred against a given target group (see Section 4.1.).
stereotype largely co-occurs also with higher degrees, which suggests that it might be considered a fundamental factor in the definition of HS.
contrasting
train_19663
The choice of such a rich and fine-grained scheme is not flawless, nor without drawbacks, all highlighted and discussed in this paper.
namely due to its greater complexity, the corpus lends itself to more detailed and systematic analyses of the possible linguistic patterns associated not only with HS itself, but also with all the other categories included in our scheme.
contrasting
train_19664
Using textual features extracted from users' communications, profiles or lists seems a natural way for modeling their interests.
this information source has several drawbacks when applied to large data streams, such as the set of Twitter users.
contrasting
train_19665
Popular platforms in this area are Flixter, themoviedb.org and iCheckMovies.
many of these platforms use the IMDb database, owned by Amazon, which handles information about movies, actors, directors, TV shows, and video games.
contrasting
train_19666
Further note, as we explained in Section 3., that the process of extracting interests from messages is almost free of errors (96% precision), while inducing interests from topical friends and subsequent mapping to Wikipedia has an estimated 10% error rate.
as mentioned in Section 1., semantic techniques can be applied to reliably identify the main categories of interest for each user, an enhancement that we leave to future work.
contrasting
train_19667
Large datasets are particularly of use in this context due to the complex nature of differential responses to gender.
previous computational work on language and gender has focused mainly on language about or portraying persons of a particular gender (Wagner et al., 2015;Flekova et al., 2016;Agarwal et al., 2015).
contrasting
train_19668
(2014) note that commenters are more emotional when the presenter is a woman.
existing sources of this data, such as https://www.ted.com/talks and https://www.idiap.ch/dataset/ted, are not labeled for presenter gender.
contrasting
train_19669
Interestingly, sentiment in responses to women was higher across the board; whether this represents "benevolent sexism" or genuine positive sentiment is an interesting and complex topic for future research.
the contexts represented by each dataset acted very differently.
contrasting
train_19670
A detailed review is provided in (Fernández Gallardo, 2016).
the newly acquired NSC corpus presents: • Clean speech recordings made in the acoustically-isolated Nautilus room (which gives its name to this database) with the high-quality AKG C 414B-XLS microphone.
contrasting
train_19671
These findings indicate the plausibility to classify perceived speaker traits based on speech features related to their voice descriptions.
the voice descriptions of more speakers need to be analyzed in order to better determine the statistical effects between the SC and VD dimensions.
contrasting
train_19672
The formant tracker PRAAT showed systematic errors with male speakers, while SNACK and ASSP performed better on male than on female speakers.
when comparing the average RMSE error value for female speakers in PRAAT (194Hz) with male speakers in SNACK/ASSP (234/177Hz) we can see that the performance was in the same range (underlined values in Table 2).
contrasting
train_19673
Moreover, the training environment lacks the noise that is massively present in the real-world VHF radio communication; this might lead to a drastic decrease of the unprepared controller's ability to understand the communication once he is put into service.
the human pseudopilot usually handles several virtual airplanes.
contrasting
train_19674
The difference between predicted and true chunk boundary is half the duration of the pause, which may be several seconds.
the error is less severe than its magnitude suggests, since all words except one end up in the correct chunk.
contrasting
train_19675
For the Latvian and Lithuanian languages the situation is better; for example, there is a 100h Lithuanian speech corpus created for the LIEPA project, an 84h corpus from BMMG (Alumäe and Tilk, 2016), and a 100h Latvian speech corpus (Pinnis et al., 2014).
even this is not much, as neural network acoustic models can make use of many thousands of hours of data and achieve significant improvement in recognition accuracy.
contrasting
train_19676
This content can be automatically collected and processed for the purpose of training statistical models.
in many cases, this content is not structured or organised in an accurate and machine-readable form.
contrasting
train_19677
This mapping then can be used to help ASR to recognise this part correctly and produce a better alignment.
before that, there is an optional intermediate second alignment step that is needed to improve the alignment and get more anchor points.
contrasting
train_19678
This means that our existing ASR already recognises such segments correctly.
after adding these segments to the training, we see an improvement in WER.
contrasting
train_19679
Making an ASR system work well for Indian English would involve collecting data for all representative accents in Indian English and then adapting Acoustic Models for each of those accents.
given the number of languages that exist in India and the lack of a prior work in literature about how many Indian English accents exist, it is difficult to come up with a set of canonical accents that could sufficiently capture the variations observed in Indian English.
contrasting
train_19680
Recently, it has been shown that using native language (L1) data could help adapt Acoustic Models to an accent influenced by that L1 (Aditya Siddhant, 2017).
in the case of IE, we face the issue of not knowing which native language(s) should be chosen for adaptation.
contrasting
train_19681
Retrieved results are displayed inside the rings, and users can filter and browse them in real time.
a large-scale database system called "SHACHI" was developed to collect metadata, such as tag sets, formats, and recorded contents, from language resources worldwide (Tohyama, 2008).
contrasting
train_19682
Results revealed that the best performance is obtained when English is used for both training and testing.
in the cases which the users are only capable of speaking their mother-tongue, it is best to use that language for both training and testing.
contrasting
train_19683
Relying on form-filling strategies, with predefined dialog states sequences, falls short of providing a flexible and adaptable system that supports natural conversation, where the interaction is not constrained by a fixed plot with one precise goal.
treating utterances or messages independently from each other fails to recognise the interconnected dialog structure and phenomena that are crucial to correctly understand and generate dialog interactions.
contrasting
train_19684
In recent years, the importance of dialogue understanding systems has been increasing.
it is difficult for computers to deeply understand our daily conversations because we frequently use emotional expressions in conversations.
contrasting
train_19685
It is desirable to construct a corpus through gathering naturally occurring dialogues from somewhere and annotating them with feature change information.
to follow this ideal construction procedure, at least the following problems arise: (1) it is difficult to obtain large-scale dialogues; (2) there is no guarantee that utterances gathered from natural dialogues always have simple structures.
contrasting
train_19686
the recorded data together with the survey materials (personality tests and experience assessment questionnaires) will be made open for reuse and repurposing.
since the core information is located in the audio and video signals, complete data anonymization was not an option.
contrasting
train_19687
The size of the corpus remains small.
our objective is not to learn a model (from a machine learning point of view) but to extract information automatically and manually to model the virtual patient's behavior (Porhet et al., 2017), which will then be validated through perceptive studies.
contrasting
train_19688
Furthermore, different approaches, which involve the characteristics of sounds and prosody based on speaking styles and the expressions of emotional speech classification in anime films, have also been proposed (Hara and Itou, 2010).
these studies only focused on non-verbal information for recognizing/classifying types of emotions without addressing verbal content.
contrasting
train_19689
Among the deep learning frameworks, TensorFlow is arguably the most widely used, as the Github statistics in table 1. demonstrate.
as far as we know (see section 2. for a more detailed description of the existing tools), there does not exist a toolkit that allows one to quickly design, train and test their own 'baseline' LMs with TensorFlow.
contrasting
train_19690
Character-based models can be generalized in the sense that they do not require a pre-trained morphological analyzer, and they make it possible to calculate vector representations even for out-of-vocabulary words.
vector representations of characters present the same problem as those of words, due to a trade-off between vocabulary size and token frequency.
contrasting
train_19691
All words for people, animals, and trees are animate, while most other forms are inanimate.
there are many words such as asikan 'sock' or kôna 'snow' 1 that are semantically inanimate but behave as animate nouns in Plains Cree.
contrasting
train_19692
(1) Algonquian person hierarchy (adapted from Wolvengrey, 2011, p. 57) 2 > 1 >> 3 > 3′ >> 0 Both nouns and demonstrative pronouns may stand alone as the arguments of verbs, though nouns and demonstrative pronouns may also co-occur.
due to extensive verbal morphology (discussed below), it is more common that no overt arguments are present.
contrasting
train_19693
Additionally, Vandall and Douquette (1987) speak mostly about history and relate others' stories, so there are many third person verbs, which can potentially occur with multiple overt arguments.
texts like Bear et al.
contrasting
train_19694
FastText performance shows the lowest variance, i.e., it robustly yields good results across many different hyperparameter settings.
bPEmb and character-based models show higher variance, i.e., they require more careful hyperparameter tuning to achieve good results.
contrasting
train_19695
Personality-dependent lexical choice does seem to make more accurate decisions for most input properties, including even those with a relatively small number of instances.
a post hoc analysis suggested that the use of personality information is particularly helpful in the lexicalisation of affective information (e.g., properties conveying attributes such as smile, emotion etc.
contrasting
train_19696
Similarly to how the syntactic structure of idioms is described in the syntactic module, the LF rules of the semantic module are templates with placeholders for deep lexical units and names of syntactic relations.
these templates are a lot more intricate and diverse.
contrasting
train_19697
This is desirable of course, to avoid putting, say, a verb where a noun is needed.
the downside is that there is no guarantee that such a lexicalization exists for a given node that satisfies the constraints, so we might end up with a dead-end structure.
contrasting
train_19698
(2016) show that it has better potential than the EW-SEW dataset for the TS task.
up until recently, the Newsela corpus was only provided with alignments at the document level.
contrasting
train_19699
The second is not surprising, given that the Wikipedia dataset (Hwang et al., 2015) does not allow for one-to-many sentence alignments and therefore does not contain sentence splitting examples.
when trained on a dataset that contains examples of sentence splittings (the Newsela dataset), our NTS models were able to learn this simplification operation and successfully apply it on a test set from either the same domain (examples 2a-2c, Table 8) or from another domain (examples 5a-5b, Table 8).
contrasting