id | sentence1 | sentence2 | label |
---|---|---|---|
train_18700 | This offers in one part copyright protection over the design of the database schema. | as the structure of translation memories and the process of segmentation are widely understood and even subject to standardisation, e.g. | contrasting |
train_18701 | Revised Translations provided by a human post-editor could be subject to copyright protection as for translated content in general. | if the machine translated output is of a high quality, such that very little post-editing is required, then the claim of the post-editor to providing creative input to the generation of translation may be challenged, weakening the claim to copyright protection over the revised translation. | contrasting |
train_18702 | This is advantageous since the legal aspects of assuring the correct configuration of ODRL files may make their management and quality assurance an expensive task. | this means it is complex to express differing right for different attributes in a CSV resource recorded in different columns. | contrasting |
train_18703 | for a revised translation CSV file en-t-fr-m0-postedit.csv (using the schema from figure 2) the translation data can be referenced as en-t-fr-m0-postedit.csv#col=posteditedText and the post-editing timing information can be referenced as en-t-fr-m0-postedit.csv#col=timeToPostedit. | as the reference from the odrl:target attribute is a URL, one such declaration needs to be added for each CSV table which references that ODRL file, thereby complicating the management of the ODRL file. | contrasting |
train_18704 | using term extraction, named entity recognition, entity linking, part of speech tagging and word sense disambiguation techniques. | if the assertion and assignment of rights over these term-in-context annotations is not easily captured, then this inhibits attribution or compensation for use of this data between parties and disincentivises its capture as reusable datasets. | contrasting |
train_18705 | 4 https://ec.europa.eu/jrc/en/language-technologies 5 http://rma.nwu.ac.za Figure 2: ISLRN submission process -During the edition process of the metadata, multiselection fields were presented in drop-down lists. | some of those lists, like the selection of languages were too painful to fill in as such. | contrasting |
train_18706 | Challenges derive from the highly international nature of an early career applicant pool. | a number of successes and even a few testimonials justify the expense of the program. | contrasting |
train_18707 | On the one hand, we approached the task of the annotation of figurative devices in order to evaluate in a real data set the suitability of a task known as very hard (Reyes and Rosso, 2014;Filatova, 2012;Reyes et al., 2013;Maynard and Greenwood, 2014). | we propose a semantic-oriented annotation assuming that it can give more precise hints about the conversational context, considering that often the meaning of a text varies according to the topic and, in the case of political debates, also according to the specific aspects discussed or to the author (e.g. | contrasting |
train_18708 | This suggests that INTROVERT-EXTRAVERT as well as THINKING-FEELING are predictable from linguistic input alone, while this is much less the case for the other two dimensions. | for languages for which we have fewer than 500 authors, namely Italian and German, the model usually does not outperform the majority baseline. | contrasting |
train_18709 | Creating large corpora for training supervised machinelearning models is hard because it requires time and money that may not be available. | since our dataset was used for disaster relief efforts, volunteers were willing to annotate it; this work can now be leveraged to improve text classification and language processing tasks. | contrasting |
train_18710 | Previous linguistic studies have looked at the structural and functional aspects of spoken and hence, small scale code-switched data. | with the huge amount of text available on social-media there is now an opportunity to study different aspects of this phenomenon on a large scale. | contrasting |
train_18711 | This is because, in other language, the former functions as a determiner. | the Japanese language does not have articles and the traditional Japanese grammar does not have the determiner word class. | contrasting |
train_18712 | Syntactic dependency types in Japanese are defined in order to be as in conformance with the principles of UD as possible. | the definition of Japanese syntax under UD involves several issues that should be discussed. | contrasting |
train_18713 | Coordination We take the first conjunct as the head in the coordinating construction in the fashion of the UD scheme. | because Japanese is a head final language, the last conjunct tends to be the head. | contrasting |
train_18714 | One of the most famous examples is zou wa hana ga nagai "For elephants, trunks are long." | 7 this type is also used for fronted or postposed elements that do not fulfill the usual core grammatical relations; for example, the relation between "office" and "me" in the sentence "This is our office, me and Sam." | contrasting |
train_18715 | ADV PUNCT DET NOUN VERB DET NOUN ADP DET NOUN Figure 2: UD annotation for a French sentence. | (Translation: girls love chocolate desserts.) | contrasting |
train_18716 | Moreover, it should be possible to refine the analysis by adding language-specific subtypes of universal categories. | figure 2 uses the french sentence Toutefois, les filles adorent les desserts au chocolat (the girls love chocolate desserts) to exemplify the different UD annotation layers, which are described in more detail in the following sections. | contrasting |
train_18717 | If we try to directly convert word-based dependency to MWE-aware dependency, we need to combine nodes in an MWE into a single node. | this naive approach often leads to the following problem: A node derived from an MWE could have multiple heads and the whole dependency structure including MWE might be cyclic. | contrasting |
train_18718 | When the Taishō Revised Edition was produced in the 19th century, only 10,000 characters were available to the publishers and thus many substitutions of similar characters had to be made. | the digital version of the Tripiṭaka Koreana reproduced every glyph found in the blocks, making it more accurate for our purposes. | contrasting |
train_18719 | In this last annotation round, annotators were instructed to re-evaluate their sense labels. | they were asked to do so only for the instances on which they disagreed with the other annotator assigned to the same instance. | contrasting |
train_18720 | The average number of NOTA labels per word is 25, which seems reasonable taking into account the dataset size. | there are a few outliers. | contrasting |
train_18721 | Even though we expected a high agreement on this word, due to its few and very well-distinguishable senses, we still find the perfect agreement rather surprising. | word pronaći (to find) was the most difficult one to annotate (IAA of 0.638). | contrasting |
train_18722 | Another disambiguation task focused on WordNet glosses was presented as part of the SensEval-3 workshop (Litkowski, 2004). | the best reported system obtained precision and recall figures below 70%, which arguably is not enough to provide high-quality senseannotated data for current state-of-the-art NLP systems. | contrasting |
train_18723 | We propose an unsupervised system for a variant of cross-lingual lexical substitution (CLLS) to be used in a reading scenario in computer-assisted language learning (CALL), in which single-word translations provided by a dictionary are ranked according to their appropriateness in context. | to most alternative systems, ours does not rely on either parallel corpora or machine translation systems, making it suitable for low-resource languages as the language to be learned. | contrasting |
train_18724 | As the similarity measure for the edge weights, we use simple co-occurrences based on the Spanish part of the annotated Wikicorpus (Reese et al., 2010), of less than 120 million tokens in size. | to the Nynorsk dataset, the Sem-Eval dataset consists of only one context sentence per test instance, which means we cannot make use of a large context window. | contrasting |
train_18725 | In a post-hoc analysis, we constructed our own ranked dictionary baseline, but cannot even come close to it in the best measure. | we see an improvement of 14.3% compared to this dictionary for mode 3, which, however, we cannot directly compare to the Sem-Eval systems. | contrasting |
train_18726 | The task consists in recommending relevant papers to be cited at a specific point in a draft scientific paper, and is universally framed as an information retrieval scenario. | in order to make these suggestions as useful and relevant as possible, we argue here that we need to apply a measure of understanding to the text of the draft paper. | contrasting |
train_18727 | Categorising the information status of noun phrases (NPs) or resolving anaphoric relations automatically requires annotated data for training and testing. | for scientific text, annotated data are scarce: to the best of our knowledge, there is no full-text scientific corpus annotated with both information status and anaphoric relations. | contrasting |
train_18728 | Since the possible set of connectives has been generated using a fixed set of known discourse connectives, it might seem redundant to include connective string as a feature. | including the connective string would allow the model to better understand the distribution of features with respect to each connective. | contrasting |
train_18729 | We argue that in case where no referential ambiguity is present in the context of an information seeking query sessions, the progression of discourse topic can be identified (and also annotated) with a set of simple heuristic rules. | in the case of referential ambiguity, which may be introduced by anaphora in followup queries, disambiguation can be achieved through automated anaphora resolution. | contrasting |
train_18730 | Our annotation layers cover syntactic cues, semantic relations, discourse entities and discourse topic development. | in this paper we only present the method to annotate discourse entities and topic development, and leave out syntactic and semantic categories, which have been commonly discussed in the literature. | contrasting |
train_18731 | Consequently, Topic can be characterized as given, whereas Focus as new discourse information. | what we identify as Topic or Focus changes whether we consider referential givenness/newness (e.g. | contrasting |
train_18732 | Due to the non-entailed character of modal auxiliaries, questions that include these elements do not primarily ask about when and how an activity took place, but whether it took/can take place at all. | from the point of view of the issuer's intention, which is of our primary interest here, the Topic of Q 1 shifts from the identity of the manufacturer to the activity of buying in Q 2 . | contrasting |
train_18733 | This is because the knowledge of the annotators increased and the guidelines for annotators were improved. | in the present study there are some tags where even expert annotators have struggled to detect them compared to others, such as the CLARIFYING (CLA) tag for both disciplines and, the ARGUING (ARG) tag for the economics discipline. | contrasting |
train_18734 | DAMSL is a de facto standard in dialogue analysis, due to its theoretical foundation (acts are annotated as context update operations), its genericity (high level classes allow for the annotation of a wide range of conversations types) and multidimensionality (each utterance can be annotated with several labels). | its dimensions are not discussed and lack conceptual significance. | contrasting |
train_18735 | "here's my question") is rare, unlike in forums and especially emails. | the EVAL-UATION ("OK let me see"), ATTENTION-PERCEPTION-INTERPRETATION ("I understand") and PSYCHOLOGICAL STATE ("I'm feeling good") dimensions are much more prevalent there, which shows that grounding as well as informational and emotional synchronization between participants is more important in synchronous conversations than in asynchronous ones. | contrasting |
train_18736 | As mentioned earlier, our definition does not distinguish between irony and sarcasm. | the annotation scheme allows to signal variants of verbal irony that are particularly harsh (i.e., carrying a mocking or ridiculing tone with the intention to hurt someone), as shown in Figure 4. | contrasting |
train_18737 | is the only group containing three languages. | we also noted that classifiers showed very high degree of confusion when discriminating between Bosnian and Croatian texts. | contrasting |
train_18738 | Compilation (and later annotation) of the ACC is an ongoing project. | version 1.0 of the corpus, with some 1,877,615 word tokens, and drawn exclusively from the Internet, already attempts to classify a wider range of children's genres than any of the above datasets. | contrasting |
train_18739 | Based on the tests conducted in WebBootCat, and the problems outlined above, we found both tool options for automatic corpus collection unsuitable for our purposes. | the 'Create Corpus' tool in SketchEngine (Kilgarriff et al 2014) enables researchers to gather texts in a range of different formats and then upload them into SketchEngine format. | contrasting |
train_18740 | Version 1.0 of our corpus classifies texts via two overarching categories: Fiction or Non-Fiction, and further specifies genres according to the following types (Figure 2). | figure 2: Genre classification scheme used in the ACC classifying texts in terms of their primary genre does not preclude further sub-categorisation. | contrasting |
train_18741 | for the RCV1 corpus contains valuable background information about the editorial processes at Reuters at the time the corpus was created, and most of their findings appear to be applicable to RCV2 as well. | while the RCV1 documentation is helpful, there are still many undocumented features in the corpus. | contrasting |
train_18742 | For example, one coder might well decide that the direction of the sales change is no longer sufficiently clear, as is the impact of the weather on the sales change, and, as a result, code this statement as neutral, that is as NEU, EXT, NEU under Sent_Tone, Attr, Attr_Tone. | another coder might well decide to make the ambiguity inherent in the statement explicit by marking this sentence as UNSURE, EXT, UNSURE. | contrasting |
train_18743 | Assessing OSS software quality has traditionally focused on analysing the source code behind the software to calculate quality indicators and metrics. | complimentary information about OSS quality can be extracted by analysing messages posted to communication channels (newsgroups, forums, mailing lists), and issue trackers supporting OSS projects. | contrasting |
train_18744 | Since the set of types is very relevant to the topic of this paper, it was taken into account when developing our novel hierarchy. | our novel hierarchy of content types is more detailed and contains content types that specifically describe the content of online messages related to OSS. | contrasting |
train_18745 | Assessing the quality of discussion threads has also been attempted (Kim and Beal, 2006). | very simple metrics, such as the number of messages, the length of messages and the number of responses, have been employed. | contrasting |
train_18746 | P, R and F can be computed as follows: For non-final types, we could average the F-measure scores associated with its descendants. | this would not take into account the actual number of annotations of each descendant class. | contrasting |
train_18747 | The best performing unigram frequency threshold is 1 or 2, achieving 70% accuracy. | using other thresholds can achieve similar accuracy. | contrasting |
train_18748 | Many supervised learning methods have been employed to solve this problem. | supervised methods require a large amount of labelled training data. | contrasting |
train_18749 | The classifier trained on selected hashtag dataset achieves comparable result with the manually annotated NLP&CC2013 dataset. | the recall the selected lexicon is lower. | contrasting |
train_18750 | Debole and Sebastiani (2005) analysed the complexity of the different subsets of the Reuters-21578 corpus in terms of the relative hardness of learning classifiers on the subcorpora, a strategy which does not assume monolinguality in the corpora. | they were only interested in the relative difficulty and give no measure of the complexity as such. | contrasting |
train_18751 | There are many ways to combine two (or several) information sources, in particular if they are independent; see, e.g., Genest and McConway (1990) for an overview. | p partially depends on max{t Li }, 4 which, for example, rules out the common logarithmic opinion poll: Instead we will use the linear opinion poll: Combining f m (x) and f p (x) gives a revised utterance level measure for N (x) > 0: where w m and w p are weights (w m + w p = 1). | contrasting |
train_18752 | In the summer term, an active usage of the LT in all seven lectures could be observed. | in three lectures this was only the case at the day of the presentation of the system and for a short amount of time (about 10 minutes). | contrasting |
train_18753 | We can conclude that the lists obtained with WER and ATENE are the most consistent with respect to the different systems, having a mean correlation above 0.8. | the metric In is the one that seems to be less consistent with a mean correlation lower than 0.7. | contrasting |
train_18754 | In order to avoid any misunderstanding, we position our work as an extrinsic detection, the aim of which is to find near-matches between texts, as opposed to intrinsic detection whose aim is to show that different parts of a presumably single-author text could not have been written by the same author [Stamatatos et al 2011a], [Stein et al 2011], [Bensalem et al 2014]. | our main objective is to deal with the entry level of the detection. | contrasting |
train_18755 | In Table 1, we observe that simply using all terms of a story as a query to retrieve a ranked list of images does not produce satisfactory results, as can be seen from the low MAP and P @k values. | even a very simple approach of weighting the terms in the text of the story by their tfidf weights can produce a significant improvement in the results. | contrasting |
train_18756 | These methods are useful in order to arrive at an overview of the content of large corpora. | these techniques do not identify which predicates relate co-occurring elements with each other. | contrasting |
train_18757 | In the open RE task there is a great diversity of relations, which makes their classification more difficult. | for the extraction of pre-defined relations (closed RE), the CRF classifier only needs to learn specific types of relations. | contrasting |
train_18758 | Each output keyphrase is considered correct if it matches one of the reference keyphrases. | the choice of the appropriate textual unit (keyphrase) for a topic is sometimes subjective and evaluating by exact matching underestimates the performance. | contrasting |
train_18759 | High-quality corpora are extremely important for conducting humanities research in areas such as history, cultural studies, literary studies or linguistics. | to build such corpora, usually high manual effort is involved. | contrasting |
train_18760 | Previous work on event schema induction was evaluated on the MUC-4 corpus (Grishman and Sundheim, 1996). | this corpus raises two main issues: • It was annotated with templates describing all events with the same set of slots. | contrasting |
train_18761 | 1 We also annotated entity coreference chains in documents. | only the entities appearing at least once as an event argument were annotated with coreference chains. | contrasting |
train_18762 | The additional experiments showed that in less than 2 % of cases a speaker does not pronounce the assimilated consonant instead of the original one, or vice versa pronounces it in the context where there is no assimilation process. | one would expect the increase of such mispronunciations in speech of non-native Russian speakers or in conversational speech where the mismatch rate between orthoepic transcription and real pronunciation is greater. | contrasting |
train_18763 | ), long pauses, or partner's speech. | due to very frequent clipping in some speakers' recordings, defective fragments shorter than 100 ms were allowed. | contrasting |
train_18764 | Fortunately, the last several years have seen the release of phonetic/phonological databases that are much richer and have better coverage, such as Phoible 1 (Moran et al., 2014) and the dataset accompanying a recent comparison of phonological and genetics patterns of diversity (Creanza et al., 2015) curated by Merritt Ruhlen 2 . | such segment-level databases have a major drawback intrinsic to their design in that they cannot be directly used for analyses that require generalizations over classes of segments that share theoretically interesting features, such as "front rounded vowels", "retroflex stops" or "clicks". | contrasting |
train_18765 | Actually, we can not find similar artists information for artists with low degree on the last.fm's website. | our method compensates this shortage. | contrasting |
train_18766 | which is an iterative approach. | a straightforward alternative would have been to type the correct transcription for each segment from-scratch. | contrasting |
train_18767 | Taking a closer look at the random fit for each transcriber ( Figure 5) reveals that the poor performance of FS − was largely attributed to the non-expert group CROWD, who were not able to beat the ASR output with this method. | note that even for several top transcribers, iterative interface design increased quality. | contrasting |
train_18768 | Given that from-scratch transcription required typing of all the words, its slower speed is not surprising. | in Section 2. we hypothesized an inherent overhead for post-editing due to verification and navigation between errors that need correction. | contrasting |
train_18769 | Moreover, FS + slightly but statistically significantly improves speed over FS − , possibly because displaying the ASR hypothesis helps the transcriber recall the words uttered in the audio faster. | bear in mind that most segments will have an edit rate much lower than 67%, meaning that the reduced typing effort of post-editing will usually outweigh its inherent overhead disadvantage. | contrasting |
train_18770 | (2013) show that respeaking by non-experts can be an option when ASR transcripts are to be improved, but not made perfect. | respeaking requires recording equipment, a quiet environment, and a clear speaker. | contrasting |
train_18771 | Even though only ER data can be released at the time of writing, for most of the users, the process to collect ID data for M03 and F02 is ongoing. | m02 ID data alone allows us to demonstrate the quality and the amount of data that are going to be recorded with the homeService system. | contrasting |
train_18772 | In these papers, spontaneous speech was found to be less intelligible and contained more disfluencies than read speech. | patients suffering from ALS present similar and even lower anomaly rates on spontaneous speech compared to reading (32% and 36% respectively). | contrasting |
train_18773 | At the end of the validation process, a total of 2,290 personalities constitute our speaker dictionary (Table 1 shows the detail). | it has to be kept in mind that, despite our efforts to reduce it, such dictionaries are by nature greatly imbalanced. | contrasting |
train_18774 | For instance by generating new queries: "find media where A speaks with B" or "get me contents where C talks about D", etc. | a boost of performances could be obtained by using multimodality to confirm, correct or invalidate the identity of detected speakers. | contrasting |
train_18775 | In chats, we observe no difference between the number of words by B-messages and the number of words by I-messages. | i-messages are significantly longer in the phone corpus. | contrasting |
train_18776 | Bavaria is one of 16 federal states in Germany. | it is important to distinguish the state of Bavaria ("Freistaat Bayern") and the Bavarian dialect, which is not per se identical: Not all inhabitants of Bavaria speak Bavarian dialect, and there are also speakers of Bavarian dialect outside of Bavaria, e.g. | contrasting |
train_18777 | These results suggest that DN is an easier task and as W1 has more true DN markables it could be expected that the W1 corpus would be annotated to a higher quality. | this is not the case due to the poor performance of interpretations of DO markables in the W1 corpus. | contrasting |
train_18778 | 16 The nature of multiple coreference anaphora resolution that involves reflexive/reciprocal pronouns implies the use of syntactic parsing for finding the multiple antecedents, given that antecedents and pronoun should depend on the same head. | as the parser performance is not optimal in patents (e.g., it does not always detect correctly coordinations or the scope of the clause) (Burga et al., 2013), instead of relying on the resulting dependency tree, we use other criteria to establish the clause and, therefore, restrict the search: the system only evaluates cases in which there is at most one content verb between the pronoun and the antecedent candidate (with the candidates having the same structure as NP 1 and NP 2 in (12) and (13) above), and where there are no determined punctuation marks (":" and ";") between them. | contrasting |
train_18779 | Again, restricting the annotation scope allows for reducing the manual effort per document and thus for increasing the corpus size. | a dataset with all the nominal markables annotated provides material for training mention detection systems. | contrasting |
train_18780 | (2010b; are notable in this regard, proposing an approach using probabilistic finite automata and promising completely unsupervised extraction. | extraction is an exercise in semantic interpretation, which imposes a penalty in the form of domain-specificity. | contrasting |
train_18781 | We find that a SVM learned on the pairwise features on one type for all 6 data sets gives better results than the cosine similarity and different types of features classification. | feature addition, as used by Hagiwara, seems not always to be the best choice and only gives best results when the feature vectors are normalized before addition. | contrasting |
train_18782 | In 5 the arrival event is annotated as POS as it may still occur (conditioned on the accomplishment of the if clause). | in 6 it is annotated as IND because the event has eventually happened (or not happened), but that information is not deductible from the text. | contrasting |
train_18783 | This result evidences that the users in the target group are less fluent in their turns. | there are differences between fundamental frequency but their standard deviation is high. | contrasting |
train_18784 | 1 We assume, that titles as found in the Wikipedia are often definition labels, which would help explain such a bias. | the term definition must be used with caution since Wikipedia articles are not all necessarily definitions in the strict sense. | contrasting |
train_18785 | 12 This data is an approximation as other word classes are being dismissed partly due to set incompatibilities in the word class labels. | they present the main classes of content words, which would make them most numerous as opposed to function words. | contrasting |
train_18786 | Exemplarily, looking at both ends of the continuum, see Table 3, within the unknown words present as Wikipedia titles (UW) of the unknown nouns therefrom (N), the Stanford Tagger mistagged only few nouns (SMT). | uW contained some non-nouns (NN), the majority of which led to redirects, disambiguation pages or deleted pages leaving some errors through the method (WE) but excluding relatively many assignments. | contrasting |
train_18787 | We were not able to confirm that the participants did not simply choose color associations in a random fashion. | because the annotation results fit our hypothesis (see also Tables 2 and 3 in the next section), we think they are somewhat reliable. | contrasting |
train_18788 | He reported that the order of the most frequently associated colors was identical to the Berlin and Kay order (i.e., white, black, red, ...). | from the second row in Table 1, it is apparent that the order is different in Japanese (i.e., white, red, blue, ...). | contrasting |
train_18789 | To try to attract more annotators we increased the rewarding but this did not help in rising the number of contributors. | we decided not to decrease the threshold of accuracy in the TQs filter in order not to affect quality of the data. | contrasting |
train_18790 | Concerns have been expressed as to the quality of crowdsourced data, which some assess as part of a trade-off for speed and economy (Snow et al., 2008;Madnani et al., 2011;Ball, 2014), with others describing methods to filter out errors (Gadiraju et al., 2015;Schmidt et al., 2015), or indeed encouraging researchers to 'embrace' error (Jamison and Gurevych, 2015;Krishna et al., 2016). | as the old adage goes, you only get what you pay for. | contrasting |
train_18791 | We address this issue by assessing a sample of crowdsourced soundfiles in section 4. | aSR needs to, and does already deal with speech data captured in less-than-ideal recording environments (e.g. | contrasting |
train_18792 | Furthermore, we used the interface to calibrate our task for changing parameters such as the amount of time required to complete a rating task, and the desired accuracy level to derive the payment. | crowdsourcing is prone to spammers trying to get paid without performing the task. | contrasting |
train_18793 | The caveat has to be made that this is a pilot study, with a limited number of samples per class; the results will be reviewed and verified with larger databases and crowdsourced ratings collected in the future. | the results lend further weight to the assumption that crowdsourcing can be applied as a reliable annotation source for computational paralinguistics given a sufficient number of raters and suited measurements of their reliability. | contrasting |
train_18794 | Harada (1997) supported this theory and reported that the three different communication devices, i.e., video conference, telephone and text message (e-mail and chat), changed the speaker's subjective evaluation on online communication using these devices. | dennis and Kinney (1998) concluded that the new media (i.e. | contrasting |
train_18795 | A positive correlation implies that the listener accurately understood the speaker's emotions throughout the dialog. | a negative correlation implies that the listener understood the speaker's emotions to be the opposite of what they actually were. | contrasting |
train_18796 | This implies that they did not experience difficulty in delivering their message in either method of communication. | when the binomial test was performed using the data grouped by chat type, the test revealed a marginally significant difference between TX and FaceToFace (p < 0.08, the right panel in Fig. | contrasting |
train_18797 | Those emotional words and emoticons can be helpful for the speaker's understanding of the partner's pleasantness. | arousal and dominance were considered to be difficult to express with words and emoticons. | contrasting |
train_18798 | Laughter can be conceived of as a non-linguistic or paralinguistic event. | with the amount of interest for the conversational speech data, laughter appears as an essential component of human spoken interaction (Glenn, 2003). | contrasting |
train_18799 | Laughter can appear in earnest context and function as nervous social or polite cue among others. | we can suppose that laughter related to humorous context will be frequent in our corpus. | contrasting |