id (stringlengths, 7–12) | sentence1 (stringlengths, 6–1.27k) | sentence2 (stringlengths, 6–926) | label (stringclasses, 4 values) |
---|---|---|---|
train_18800 | These instructions have been specially selected to elicit humor in the dialogs. | they were not exhaustive and participants often spoke very freely about other topics, in a conversational speaking style. | contrasting |
train_18801 | For disgust, the mean RMS energy values seem to be higher for higher levels. | this is less obvious in the case of startle and surprise. | contrasting |
train_18802 | With the monologue, we gather prominent, simple, and isolated emotional speech, useful for preliminary multimodal emotion research. | the dialogues are to provide less stereotypical, more subtle, and contextualized emotion, counteracting the shortcomings of typical emotion portrayal. | contrasting |
train_18803 | the feeling of joy is indicated by positive valence while fear is negative. | arousal measures the activity of emotion; e.g. | contrasting |
train_18804 | ja: Then it was reported in India, Kenya, Tanzania, Malawi, Uganda, and Current lexicon-based algorithms also require the input to be segmented into words (Ma, 2006). | like sentence segmentation, word segmentation is not a trivial issue especially for Asian languages like Chinese or Japanese with no explicit word boundaries (spaces), and matching rate between the input words and lexicon entries decreases if different segmentation rules are used. | contrasting |
train_18805 | Many SMT based systems are evaluated in terms of the information gained from the word alignment results. | there is not a lot of parallel data available for these languages making it necessary for specialized techniques that improve alignment quality has been felt (Sanchis and Sánchez, 2008;Lee et al., 2006;Koehn et al., 2007). | contrasting |
train_18806 | Their results show that this method improved precision without loss of recall in English to German alignments. | if the same unit is aligned to two different target units, this method is unlikely to make a selection. | contrasting |
train_18807 | The line representing the Cartesian product approach clearly shows the degradation of MT output for English -Hindi. | the sentential approach shown minor improvements for a varied number of topic models. | contrasting |
train_18808 | This is due to the copyright issue. | since there was no pre-existing resource for Japanese-Chinese, ASPEC-JC was constructed by manually translating the Japanese documents into Chinese from scratch. | contrasting |
train_18809 | The basic idea is that indomain training data can be exploited to adapt all components of an already developed system. | previous work showed small performance gains after adaptation using limited in-domain bilingual data (Bertoldi and Federico (2009), Daumé III and Jagarlamudi (2011)). | contrasting |
train_18810 | This way, articles are grouped by categories and the category hierarchy forms a graph. | many articles are not associated to the categories they should belong to. | contrasting |
train_18811 | We also checked whether among the correctly classified pairs there were similar low frequent words, and indeed it was not the case. | for some other errors the explanation is less obvious. | contrasting |
train_18812 | (1991) and Gale and Church (1993) (the former measures sentence length in terms of tokens and the latter in terms of characters). | for texts available in less-friendly formats, such as PDF, from which we cannot avoid extracting some noise intermixed with the text (such as figure and table captions, page headers and footers, etc) we need more robust aligners that take into account the actual text within sentences and not only their lengths. | contrasting |
train_18813 | As the speaker utters the exact reference sentence, if there were no ASR mistakes, the best strategy would be to consider the ASR output as the final translation (thus scoring 100% BLEU), simply discarding the MT output. | aSR is far from perfect, and we therefore experimented with several levels of aSR accuracy to study the combination of aSR and MT. | contrasting |
train_18814 | This approach yields translations for the majority of sentences. | for some of them (around 20% sentences for the considered dataset), the references still cannot be reached. | contrasting |
train_18815 | As we switch to "pessimistic" scheme, the number of "BAD" labels in the data increases which results in more partial matches. | the strict score does not follow this pattern. | contrasting |
train_18816 | Our experience in Quality Estimation led us to look at a novel approach based on sequences of adjacent words, socalled phrase, as a natural balance between the too fine grained word-and too coarse sentence-levels. | an intrinsic challenge comes along with this new level: how to find phrases which correspond to actual machine translation errors. | contrasting |
train_18817 | Empirically, it is shown that all words have real IDF scores that deviate from the expected value under a Poisson distribution. | keywords tend to have larger deviations than non-keywords. | contrasting |
train_18818 | In addition, GENIA consists of very short abstracts and as a result, many legitimate terms may be removed due to the frequency threshold and lexical pruning. | this can be easily rectified by relaxing the pre-filters. | contrasting |
train_18819 | For example, on the ACL RD-TEC dataset, phrases containing 'treebank' are very highly ranked. | many of them are not valid terms. | contrasting |
train_18820 | This is clearly an invaluable resource for many computational applications. | for the specific purpose of our study, i.e. | contrasting |
train_18821 | the "window method" described in Seretan, 2011 for collocation extraction). | an initial extraction attempt showed that in a sentence window the two perceptual lexemes were seldom synaesthetically connected, as in (3): 3) Staffed by bright [SIGHT/Source] , young things who live and breathe music [HEARING/Target] , they tend to represent clients because they are passionate about their music. | contrasting |
train_18822 | The sentences that are extracted are therefore of the type in (4) and 5 Those in (4) and 5are good examples of synaesthesia, correctly identified in the corpus. | a rather heavy manual inspection of the extracted data was needed. | contrasting |
train_18823 | As for the types of sensory associations that have been found, almost every possible combination of senses has been attested. | in terms of frequency the directionality generalisation is confirmed: most transfers go from the lower to the higher modalities in both English (62%) and Italian (74%) (see Strik Lievers, 2015a for a discussion on directionality). | contrasting |
train_18824 | A quite heavy component of manual inspection of the extracted data is still needed. | if we take into account the extreme rarity of synaesthesia, then utility of the described methodology clearly emerges. | contrasting |
train_18825 | Most of the time, we would be tempted to simplify the model and treat all of them as multiword tokens or words-with-spaces (Sag et al., 2002). | accidental co-occurrence, like in example 2, creates ambiguities that are hard to solve at tokenisation time, specially given the simplicity of most automatic tokenisation approaches in French. | contrasting |
train_18826 | In previous experiments, we demonstrated that this approach is superior to treating all units systematically as words with spaces (Nasr et al., 2015). | this was only demonstrated for a small set of 8 CCONJs and 4 determiners in French. | contrasting |
train_18827 | 2 There are already quite a few catalog entries of LRs that include MWEs in international infrastructures such as the CLARIN and META-SHARE repositories. | they are not always easy to find since the information about MWEs in these repositories is often scarce, non-uniform, or non-explicit. | contrasting |
train_18828 | (2007)), aims at estimating how familiar a term is. | the CHV only covers the English language, and limited attempts have been made to cover other languages such as French and Portuguese. | contrasting |
train_18829 | This is because, on the one hand, of the scarcity of annotated data with specialization degrees for French. | assessing specialization degrees manually is difficult to carry out and is a time-consuming task. | contrasting |
train_18830 | Collocation dictionaries, such as the Oxford Collocations Dictionary or the MacMillan Collocations Dictionary group collocations in terms of semantic categories to facilitate that language learners can easily retrieve the collocate that expresses the meaning they want to express. | this categorization (or classification) is not always homogeneous. | contrasting |
train_18831 | For instance, in the MacMillan Dictionary, the entries for admiration and affinity contain the categories 'have' and 'show', each with their own collocates, while for other headwords, such as, e.g., ability, collocates with the meaning 'have' and 'show' are grouped under the same category; in the entry for alarm, cause or express are not assigned to any category, while for other keywords the categories 'cause' and 'show' are used (see e.g., problem for 'cause' or admiration for 'show'); and so on. | in the case of some headwords, the categories are very fine-grained (cf., e.g., amount, which includes glosses like 'very large', 'too large', 'rather large', 'at the limit', etc. | contrasting |
train_18832 | Furthermore, thanks to the use of a precise annotation guide, it becomes possible to extract automatically information and apply machine learning techniques to study the distribution of different phenomena. | the type of information classically encoded into a treebank remains at a high level of generality, that moreover very often remain implicit. | contrasting |
train_18833 | It remains difficult to explore or compare the syntactic characteristics of various languages using the complete grammar (independently of the formalism, constituents or dependencies). | it is possible to compare some specific properties, in accordance with established practices in typology. | contrasting |
train_18834 | Weigthing the properties A first weight w 0 (equation 1) can be directly obtained by calculating the ratio of occurrences of the validating rules to the sum of both subsets -the properties satisfied in all cases corresponding then to w 0 = 1. | if w 0 allows a first filtering of the properties, it does not provide any information about their actual weight. | contrasting |
train_18835 | Most often, the creation of these out-of-domain treebanks follows the existing annotation guidelines created by the creators of the newspaper treebank, which would -at least in principle -make it possible to evaluate the generalization performance both of purelysupervised and domain-adapted parsers on these new datasets. | some caution is in order, as in many cases, annotation guidelines or even informal practices guiding the annotation deviate.In the case of Dredze et al. | contrasting |
train_18836 | We implemented different variations on the basic idea of keyness: from a simple ratio of the smoothed fre-quencies to Pearson's X 2 statistic, to an interval estimate of the odds (Johnson, 2001). | we found that a more intuitive way than comparing all rules or all rule bigrams found in the treebank was to compare just the rule bigrams for one particular category -Tiger has less than 25 categories of nonterminals, so it is definitely feasible to look at these individually -and to display these visually. | contrasting |
train_18837 | Of these, a small number (around 2-3% of the whole corpus) do not agree with existing annotations, and cursory inspection yielded a mixture of clear errors (a prepositional phrase with the NP node label), likely errors (an adverb that post-modifies a PP), and cases where only experts in the Tiger annotation scheme could make a firm prediction. | the modeling-based approach would find parses where an extraposed constituent was attached to a VP according to both parsers but to the S node in the treebank. | contrasting |
train_18838 | Many shallow natural language understanding tasks use dependency trees to extract relations between content words. | strict surface-structure dependency trees tend to follow the linguistic structure of sentences too closely and frequently fail to provide direct relations between content words. | contrasting |
train_18839 | Most systems that require a syntactic representation use basic SD trees which are guaranteed to be a strict surface syntax tree. | most systems that are concerned with the relations between content words use the collapsed or CCprocessed SD representations. | contrasting |
train_18840 | In Urdu, the case clitic ne is often indicative of an agent, and as 'ARG0' is mostly associated with agentivity, this can sometimes provide a clue about the identity of Arg0. | note that an Arg0 argument does not necessarily get 'ne-marking'. | contrasting |
train_18841 | However, having in mind the theoretical concepts of Czech functionally-oriented linguistics and general characteristics of the individual styles we tend to classify legal texts as texts belonging to the administrative-legal style (according to (Jelínek, 1996)) which is now earmarked as a unique functional style, standing next to other styles, such as professional, journalistic, literary or scientific. | due to their specific function legal texts in many ways overlap with the professional style. | contrasting |
train_18842 | For example, impersonal style of legal texts understandably excludes the use of question marks and exclamation marks. | we observe an extremely high usage of semicolon for purposes like enumeration, itemization and various types of listings. | contrasting |
train_18843 | Most published concept lists, however, only contain a concept label. | certain concept lists have been further expanded by adding structure, such as rankings, divisions, or relations. | contrasting |
train_18844 | This degree of diversity in concept labelling is important to check how well we have succeeded in linking the data, since wrongly assigned links will also yield diverse concept labels. | it reflects scholars' problems to denote concepts when compiling their concept lists. | contrasting |
train_18845 | For example, Thesaurus Rex categorizes the concept dog as both small and large depending on the context. | if our system chooses both attributes, we would have contradictory comparisons in the riddle. | contrasting |
train_18846 | Following the described process to generate the riddles, the subsequent evaluation points out that the word associations obtained by our system are useful for generating these riddles. | the evaluation also shows that a manual selection of comparisons is useful because confusing comparisons may be generated when the target of the riddles is a polysemic concept or presents some contradictory attributes. | contrasting |
train_18847 | At the beginning the sizes of plWN and PWN adjective domains were comparable 1 . | the process of mapping has been carried out parallel to the process of the extension of adjective category in plWN, and at the final stage of 1 This is based on the data from plWN 2.1 version, downloadable from http://nlp.pwr.wroc.pl/plwordnet/download/?lang=pl mapping the number of adjective synsets in plWN outgrew that of PWN twice 2 . | contrasting |
train_18848 | FrameNet (Baker et al., 1998) is such a resource and provides fine-grained semantic relations of predicates and their arguments. | frameNet does not provide an explicit link to real-world fact types. | contrasting |
train_18849 | The syntactic restriction ontology. | to VN, which relies on a rich repertoire of more than 40 binary features to describe syntactic restrictions, MMO's descriptions of English frames make use only of 4 attributes: clausetype with 6 possible values, tense with 3 possible values, and the binary poss(essive) and num(ber The semantic restriction ontology. | contrasting |
train_18850 | As seen in previous section, the name variations found in infobox are in the structured form, making it easier to manually extract the correct name variations for any given topic. | it is well known that the goal of DBpedia (Mendes et al., 2012) is to extract structured information from Wikipedia and make it available on the web for querying. | contrasting |
train_18851 | The other candidate RDF schemas was Lemon (McCrae et al., 2012), the purpose of which is to enable people to be able to share lexicons on the Semantic Web. | to model a dictionary in Lemon, it is necessary to identify name variations by morphology, spelling variants, etc. | contrasting |
train_18852 | This is in part due to the lack of information about the licensing of the resources and ongoing discussions within the group about the use of non-commercial licenses. | we expect to reach a consensus within the next few months. | contrasting |
train_18853 | The life cycle typically ends with the distribution and publication of a language resource; this includes making a resource available by download or, for example, as Linked Open Data through an API. | the life cycle can continue if maintenance checks or user feedback result in minor updates or when a new version of an already existing resource needs to be prepared. | contrasting |
train_18854 | Allied to the growth in the amount of food data such as recipes available on the Internet is an increase in the number of studies on these data, such as recipe analysis and recipe search. | there are few publicly available resources for food research; those that do exist do not include a wide range of food data or any meal data (that is, likely combinations of recipes). | contrasting |
train_18855 | Others, like gensim (Řehůřek and Sojka, 2010), a topic modelling framework in Python, and Apache Lucene (Cutting et al., 2004), a Java library for document indexing and search, are designed for specific tasks. | projects GATE (Cunningham et al., 2011) and Apache UIMA (Apache, 2010), represent a comprehensive family of tools for text analytics. | contrasting |
train_18856 | For each operation in the pipeline, ESTNLTK comes with a sane default implementation. | a user can provide an alternative implementation through the constructor of the class Text. | contrasting |
train_18857 | In simple cases, it is sufficient to provide replacement components as keyword arguments to the Text constructor. | miscellaneous use cases may benefit from subclassing the Text class and develop the custom behaviour directly into it. | contrasting |
train_18858 | This can significantly speed up certain search operations, as linear scan over all documents can be replaced with simple index lookup. | the right structure of the index object depends on a particular task and is wasteful for online processing of documents. | contrasting |
train_18859 | This combined with the power of the Drupal-based Islandora (Islandora Community, 2016) for the online user interface provides a good starting point for FLAT. | to meet the very CLARIN specific requirements (i.e., support for CMDI and persistent identifiers) additional development work needs to done. | contrasting |
train_18860 | Some of this functionality could be provided out of the box by Islandora, e.g., file type checking using the FITS (OpenScholar, 2016) solution packs. | islandora ingests new objects directly into the Fedora Commons repository, while in FLAT a temporary workspace is required where further ingest is halted until collection managers have reviewed the contents. | contrasting |
train_18861 | It will therefore enable the investigation of existing research questions in new ways, create opportunities for investigating research questions that could not be addressed before, and for formulating and investigating completely new research questions. | this digital turn is not going to be easy! | contrasting |
train_18862 | On the one hand, it implies vast efforts invested in persuading IPR holders to contribute to a cultural action in a way that does not hinder their marketing plans. | it means reaching agreement on what texts and how much of them to include in the corpus. | contrasting |
train_18863 | In the training data, however, only a limited number of lemmas appear frequently enough for a reliable language modelling, and many words in new texts are out-ofvocabulary. | there are many categories of words (such as numerals or several groups of proper names) with identical syntactic behaviour. | contrasting |
train_18864 | Besides, in both cases, a web interface is offered for easier access to corresponding tools, and, moreover, for using tool chains for solving specific text analysis tasks. | it must be noted, that, while CLARIN-D already has a mature linguistic chaining tool WebLicht 7 (Hinrichs, Hinrichs and Zastrow 2010) with the web interface access, the web interface and tool chaining service for LKSSAIS is less flexible and still in its development phase, lacking convenient configuration management and visualization options. | contrasting |
train_18865 | For this reason presently LKSSAIS provides only GUI and API based services for developers and CLARIN ERIC community. | it is expected that the Lithuanian legal framework will change in the near future. | contrasting |
train_18866 | The fact that the harmonised vowel is always front and unrounded is presumably related to a pronounced-but unwritten-epenthetic vowel that occurs between ‹б› and ‹ль› in the bare stem forms. | since no vowel is inserted in forms with a following vowel (e.g., ансамбли, рубли), this phenomenon provides an interesting case of phonological opacity-an analysis of which is beyond the scope of the present paper. | contrasting |
train_18867 | zāle: [zãle] (level tone) 'hall, large room' vs. [zâle] (broken tone) 'grass, herb'. | two specific graphemes -'e' pronounced as 'e' or 'ae', and 'o' pronounced as 'uo < ' (as in doma 'thought'), 'O' or 'O:' -require an informed choice to pronounce the word correctly, and the pronunciation may vary across inflectional forms, even with the same spelling. | contrasting |
train_18868 | Estimated number of native speakers are approximately 59 million in these two countries (Khubchandani, 2003). | it is also spoken by people in various other countries. | contrasting |
train_18869 | In the first paradigm, called avenir we can see that x 1 always ends in the letter v, and that x 2 is always the string n. 2 In the second paradigm (negar), there is no clear pattern regarding the shape of x 1 . | x 2 in all 14 inflection tables that produced the paradigm, is always the string eg. | contrasting |
train_18870 | For example, only one lexeme is considered since both its meanings ('vapour' and 'pair') have exactly the same inflectional forms. | sGJP has three lexemes with the lemma -, since its 3 meanings ('swimmer', 'great diving beetle', and 'float') result in paradigms differing in the accusative. | contrasting |
train_18871 | This situation is strictly related to the still quite limited amount of linguistically annotated textual data for Latin, which can help the building of new lexical resources by supporting them with empirical evidence. | projects for creating new language resources for Latin have been launched over the last decade to fill this gap. | contrasting |
train_18872 | This situation is strictly related to the still quite limited amount of linguistically annotated textual data for Latin, which can help the building of new lexical resources by supporting them with empirical evidence. | projects for creating dependency treebanks for Latin have been launched over the last decade, as well as for creating fundamental lexical resources, like the (still very small) Latin WordNet (Minozzi, 2010). | contrasting |
train_18873 | Like these well-known projects, our primary research goal is to establish a computational method to discover crosslingual correspondences in a word-sense/lexical-concept level, given LSRs in different languages. | we further explore a methodology for classifying discovered cross-lingual correspondences with a broader range of semantic relation types, not limited to synonymy. | contrasting |
train_18874 | That is, the two-tiered classifiers configuration could be more feasible for classifying cross-lingual correspondence candidates be- tween different LSRs. | further experimentations with a larger data set would be necessary to make this more concrete. | contrasting |
train_18875 | For instance, an Instrument for PISAĆ 'write' could be a pen, a ballpen, a pencil etc. | in PLWORDNET their direct hypernym is artykuł papierniczy-1 'writing materials' which is evidently too wide (as it includes, e.g., 'notebook'). | contrasting |
train_18876 | That is, they lexicalize the receiver of the action (the Buyer) as the complement of a PP headed by {a}. | (9) Ho venduto il libro a Giulia ( I sold the book to Julia) COMPRARE verbs reflect the Buyer's perspective, which in this case is the Agent that acquires some Goods from a Seller. | contrasting |
train_18877 | Verb classes have been defined as sets of semantically related verbs that share the same patterns and constructions. | the differences between the two systems notwithstanding, the resulting classification remains compatible with VerbNet/Levin taxonomy. | contrasting |
train_18878 | (2012) performs offline clustering of word contexts and thus is computationally expensive for large corpora. | adaGram can be considered as an online clustering of contexts, which therefore can scale to large corpora keeping a reasonable memory footprint. | contrasting |
train_18879 | The aim of this project is to link together different predicate resources via manual mappings. | the Predicate Matrix is built by automatic methods. | contrasting |
train_18880 | Then, for each lexical unit, SemLink also supplies a mapping between the semantic roles of PropBank and VerbNet, as well as the roles of VerbNet and FrameNet. | semLink has some limitations. | contrasting |
train_18881 | For example, the ili ili-30-00007739-v represents the English synset eng-30-00007739-v blink 1 wink 3 nictitate 1 nictate 1 and the Spanish synset spa-00007739-v pestañear 1. | in the new multilingual Predicate Matrix this Spanish synset also have associated the Spanish word senses parpadear and guiñar. | contrasting |
train_18882 | One of the few projects working on the integration of the predicate information is SemLink (Palmer, 2009). | the mappings of this resource has been developed by manual means and only cover verbal predicates. | contrasting |
train_18883 | Princeton WordNet identifies synonymy, antonymy, troponymy, hypernymy, entailment, and cause. | verbOcean identifies similarity, strength, antonymy, enabled, and happens-before. | contrasting |
train_18884 | At the same time, it is a rather broad relation, hence it can be acquired with a good precision (0.56). | intensity occurs rarely, and is also a more strict relation. | contrasting |
train_18885 | Overall, VCO-Pre has a macro-precision of 0.40 and a rather low macro-recall of 0.10. | vCO-Cov has well-balanced macroprecision and macro-recall of around 0.37. | contrasting |
train_18886 | (2008) present an approach which reduces sample selection bias by first exposing it in the dataset structure through clustering, and then rebalancing the dataset. | this method requires the availability of additional unlabeled data. | contrasting |
train_18887 | For polarity classification, COP leads to a significant better result, which reflects its sentiment-oriented nature. | combining other features with COP still leads to significant improvement, indicating that adding semantic information helps for polarity classification. | contrasting |
train_18888 | Among the other features, WV is still the most informative feature. | it does not dominate SSI, indicating the possibility for fine tuning the word embedding with prior knowledge (SSI in our case) as in (Faruqui et al., 2014). | contrasting |
train_18889 | Some of the recent systems that have emerged are (Toh and Wang, 2014;Chernyshevich, 2014;Wagner et al., 2014;Castellucci et al., 2014;Gupta et al., 2015). | almost all these research are related to some specific languages, especially the English. | contrasting |
train_18890 | Several benchmark datasets for sentiment analysis for resource-rich languages like English exist and these have been made freely available for research, e.g., SemEval 2014 datasets (Pontiki et al., 2014). | indian languages are still far behind in terms of such resources. | contrasting |
train_18891 | The first review contains only one aspect term and its polarity is positive. | the second review does not have any aspect term. | contrasting |
train_18892 | This is what is known as Standard Modern Arabic (MSA) which may be seen as a simplified version of Classical Arabic 2 (CA). | when it comes to daily life, MSA is rarely, if at all, used as many people find it ridiculous to use MSA with their friends or families, instead they use their own dialects. | contrasting |
train_18893 | Words which can be found in an MSA dictionary: this means that these words do exist in MSA. | many of them have a conflicting part of speech (PoS) or totally a different meaning, i.e. | contrasting |
train_18894 | In example 3, the word العافية [wellness] is used in Gulf Arabic in the same meaning as in MSA which is positive. | in other dialects, namely in North African, العافية means fire which is negative. | contrasting |
train_18895 | For instance, if a document contains more than one sentiment of the same polarity (positive/negative), the document, whatever its length, is counted as one (positive/negative) document. | if it contains any number of different opinions given that there is at least one positive and one negative, the document is classified as mixed. | contrasting |
train_18896 | Of course, there is a better alternative to the non-match, namely apply a preprocessing step (normalization and spelling correction) to the corpus. | this is not very helpful in our case because of the different usage of words between MSA and Gulf Arabic and more important diacritics are ignored. | contrasting |
train_18897 | These approaches can capture some aspects of the shifters effectively. | they depend upon the availability of an annotated corpus in which shifter words and their scopes are tagged. | contrasting |
train_18898 | There are several theories of discourse structure for texts: RST (Mann and Thompson, 1987), LDM (Polanyi et al., 2004), the graphbank model (Wolf and Gibson, 2005), DLTAG (Forbes et al., 2003), PDTB (Prasad et al., 2008), and SDRT (Asher and Lascarides, 2003). | data from our corpus rule out DLTAG, LDM, and RST as candidate theories because they posit tree-based discourse structures. | contrasting |
train_18899 | Other frequent relations are: ELABORATION, EX-PLANATION, CONTINUATION, PARALLEL, CONTRAST, ALTERNATION, and CONDITIONAL, variants of which are also used in many models for the discourse annotation of single-authored text besides those based on SDRT. | other relations, in particular temporal relations like NARRATION, TEMPORAL-LOCATION or BACK-GROUND, are not at all frequent in the Settlers corpus. | contrasting |
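The rows above follow the format given in the table header: `id | sentence1 | sentence2 | label |`. A minimal sketch of turning such lines into records is below; the helper name `parse_rows` is hypothetical, and it assumes no `|` characters occur inside the sentences themselves.

```python
def parse_rows(lines):
    """Split each 'id | sentence1 | sentence2 | label |' line into a dict."""
    records = []
    for line in lines:
        # Drop the trailing pipe, then split on the column separator.
        parts = [p.strip() for p in line.rstrip().rstrip("|").split("|")]
        if len(parts) != 4:
            continue  # skip header/separator or malformed lines
        records.append(dict(zip(["id", "sentence1", "sentence2", "label"], parts)))
    return records

# Example using the first data row of the table.
rows = [
    "train_18800 | These instructions have been specially selected to elicit "
    "humor in the dialogs. | they were not exhaustive and participants often "
    "spoke very freely about other topics, in a conversational speaking style. "
    "| contrasting |",
]
parsed = parse_rows(rows)
print(parsed[0]["id"], parsed[0]["label"])  # → train_18800 contrasting
```

Filtering on `len(parts) != 4` lets the same loop be run over the whole dump, header and separator lines included, without special-casing them.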