id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_94800 | For instance, in "The consumer shall return to the supplier any sums and/or property he has received from the supplier without any undue delay and no later than within 30 calendar days", the consumer is clearly bound to an action, and the duty is therefore taken into account. | in addition, "sums" has also been erroneously annotated as DutyCounterPart. | neutral |
train_94801 | In this way we try to identify relations within Hohfeldian Duty constructs which involve any defined term or term definition elements. | these are not suitable to the Semantic Web, which requires models that are computationally tractable; (Francesconi, 2015) provides just such a rich and tractable model. | neutral |
train_94802 | Gamification allows us to obtain resources without paying users by carefully designing a game suitable for information extraction. | (2014) constructed video games with the purpose of validating and extending knowledge bases. | neutral |
train_94803 | We extracted English spelling errors using a word-typing game as in Rodrigues and Ritting (2012). | in the former approach, it is not clear which word corresponds to the spelling error, and the latter approach requires an annotation cost for the crowdsourcing. | neutral |
train_94804 | From big data, in form of huge text corpora, we automatically extract sentences that may contain definitions and present them to a user (lexicographer/specialist) working on a particular concept; then the user has a possibility to simply re-use a particular definition, or adjust one of them according to others. | the Czech web corpus results are much worse than in English -we do not have a good explanation of this interesting observation, perhaps the Czech internet contains less educative texts. | neutral |
train_94805 | Wikipedia is considered an excellent source of texts for IE systems due to its broad variety of topics and advantageous characteristics such as the quality of the texts and their internal structure. | • How does information obtained from dictionaries compare to information obtained from encyclopedias? | neutral |
train_94806 | We did not, however, evaluate results obtained from the English Wiktionary. | these resources could also be combined since the information contained in each one complements the others. | neutral |
train_94807 | Both kind of approaches were studied for the case of English-Basque pair on (Saralegi and Lopez de Lacalle, 2010). | the results correspond very well to the improvement in translation quality reported in the previous section. | neutral |
train_94808 | The third strategy, which uses the translation of the initial query as context tr("PS 2 joku")="PS 2 game", provides a correct translation tr(qr)="PS 2 game new". | only one team achieved a significant statistical improvement. | neutral |
train_94809 | Since most Vietnamese words are composed of more than one syllable where each syllable is separated by blanks (Dinh et al., 2008), using common tokenizers such as replacing blanks with word boundaries does not work for Vietnamese. | the CRFbased tool of the group (Tu et al., 2006), the PVnSeg tool (unknown source) and the hybrid method of (Hong Phuong et al., 2008) are compared using a test corpus containing 1,264 articles from Politics-Society section of a Vietnamese online newspaper "Tuoi Tre", where words have been manually segmented by linguists. | neutral |
train_94810 | Then, in Section 3, we provide an overview of the Saffron system which we use for generating the dataset. | the related work in this area can be roughly categorized in two main directions. | neutral |
train_94811 | Figure 1 shows a future forecast for some example keywords based on individual polynomial models. | solutions based on citation analysis are less appropriate for contemporary analysis of emergent trends, as this type of data is less robust for recent documents and can not be applied as soon as documents become available. | neutral |
train_94812 | There have been efforts to create extractive summaries (Abu-Jbara and Radev, 2011;Qazvinian et al., 2013) and flows of scientific ideas (Shahaf et al., 2012). | from survey responses, we create the structured representation deterministically. | neutral |
train_94813 | Ideally, annotators would highlight relevant information in the text and link it to the structured representation. | ideally, annotators would highlight relevant information in the text and link it to the structured representation. | neutral |
train_94814 | We have described the creation of a corpus of clinical text, Asthma Timelines, annotated for asthma status and other supporting information. | the final S 3 -step is shown to not improve results on any metric, suggesting that the finite number of errors in human annotation may have been largely corrected. | neutral |
train_94815 | This is due to the fact that the S-step is where new rules in the system will posit more (and perhaps inaccurate) matches. | this context includes the characteristics of the underlying patient population that was sampled from the EMR, as well as what data was available from the implementation of the EMR. | neutral |
train_94816 | As English is OUP's largest dataset and it is used as a pivot language for translations, querying response times tend to be greater, so the source filter was particularly useful for performance optimisation and content separation. | due to the lack of maturity of the triplestore technology, scaling up has proved to be a challenging experience; this has occasionally moved us towards rethinking some of the modelling decisions. | neutral |
train_94817 | For instance, let's consider that the API endpoint that provides the data for a dictionary headword is invoked to retrieve the headword Book in English. | for this we need a lexical ontology. | neutral |
train_94818 | For example, assuming English to isiZulu and an English to Northern Sotho datasets exist, we are able to extract isiZulu to Northern Sotho translations. | for example, as the number of ingested datasets was growing, a need was discovered for a filtering mechanism both for speeding up search, and for distinguishing between these datasets. | neutral |
train_94819 | The above principles should be established once and for all, particularly for LRs created on the basis of the reuse of public data and/or funded by public money. | language corpora, both parallel and monolingual are considered most important for instance for MT, not only SMT but also hybrid MT. | neutral |
train_94820 | In addition to access to LAPPS Grid tools and data, we have developed and contributed the following capabilities of the LAPPS Grid for use in Galaxy in order to support NLP research and development within that platform, including: 1. exploitation of our web service metadata to allow for automatic detection of input/output formats and requirements for modules in a workflow and subsequent automatic invocation of converters to make interoperability seamless and invisible to the user; 2. incorporation of authentication procedures for protected data using the open standard OAuth 9 , which specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials; and 3. addition of a visualization plugin that recognizes the kind of input (coreference, phrase structure) and then uses appropriate off-the-shelf components like BRAT and Graphviz to generate a visualization. | an additional, and potentially hugely significant, outcome of the LaPPS/Galaxy collaboration is that it enables the use of LaPPS Grid NLP services to extract information from repositories of biomedical publications such as PubMed 14 and pass it on to biomedical analysis and visualization tools available in Galaxy. | neutral |
train_94821 | However, the field of NLP research and development has been plagued by a chronic lack of potential for replicability of results, as discussed in several recent publications (Pedersen, 2008;Fokkens et al., 2013), blogs 10 , and workshops 11 . | docker 13 allows users to package an application with all of its dependencies into a standardized unit into a docker image, which is an easily distributable full-fledged installation that can be used for testing, teaching, and presenting new tools and features. | neutral |
train_94822 | In order to facilitate the discovery of the data needed for the project, the criteria should be as specific as possible and should also cover all aspects of the data use. | each corpus comprises about 15,000 tweets and 10% of these data are also considered for double annotation. | neutral |
train_94823 | Mirroring body behaviours, and especially facial expressions, are common in social interactions and are important social cognitive mechanisms since they enable the observer to understand not only the goal of an observed motor act, but also the intention behind it (Rizzolatti, 2005;Rizzolatti and Fabbri-Destro, 2008). | there seems to be a correlation between co-occurring Dominance values (p = 0.0059). | neutral |
train_94824 | We will next explore supervised and ensemble methods. | solid lines represent high cosine similarity between a pair of captions. | neutral |
train_94825 | The next section presents related work in the field of eye-tracking during reading tasks. | and summarised in (Rayner, 1975;Rayner, 1998;Rayner et al., 2012). | neutral |
train_94826 | Data inaccuracies usually result from poor calibration or system imprecisions typical of all eye trackers (Duchowski, 2009). | the corpus presented in this paper suffers from several limitations, which need to be taken into account when designing experiments involving this data. | neutral |
train_94827 | The selected 9 control participants were also divided into two significantly different groups of "skillful" and "less skillful" readers (U = 5724, N1 = 5, N2 = 4, p < 0.001, two-tailed) ( Table 2). | the main issue with the language assistance tools mentioned above is that they need to rely on robust research into the reading difficulties people with autism face and the specific linguistic components which need to be simplified. | neutral |
train_94828 | Mean age (m) for the ASD group was m = 30.75, with standard deviation SD = 8.23, while for the control group it was m = 30.81, SD = 4.8. | we had Group A readers including 4 autistic and 4 non-autistic "skillful" readers, which did not differ significantly in terms of answering scores (U = 5512, N1 = N2 = 4, p = 0.129, two-tailed) and a Group B readers consisting of 5 autistic and 5 non-autistic "less skilful" readers with no statistically significant difference in their answers (U = 8244, N1 = N2 = 5, p = 0.193, two-tailed), as shown in Table 2. | neutral |
train_94829 | One of the biggest challenges in the design of this study was to decide on such a number of texts to be assessed by each participant that would not cause fatigue in them and thus be in bridge with ethical requirements. | until now, eye tracking has not been used to investigate reading in autism, possibly due to the number of procedural difficulties related to this kind of research with autistic participants (Section 3), and thus there is no reliable information about the particular types of phrases which need simplification for readers with autism. | neutral |
train_94830 | It can be seen that the MI.log_f and MI scores ranked as the best AMs for predicting the right collocates of the Arabic keyword list. | it can be seen that the Mi.log_f and Mi measures achieved the highest MAP scores with a MAP score of over 0.85, while the t-score and Mi3 were the least useful scores in terms of identifying FSs among the high frequency lexical items, with MAP scores below 0.50. | neutral |
train_94831 | The MWT obloga trake 'belt coating' has the structure N2X (a noun followed by a noun in the genitive), and its components are nouns obloga 'coating' and traka 'belt'. | this research was supported by the Serbian Ministry of Education and Science under the grant #47003 and #178003. | neutral |
train_94832 | (2) Vojáci dostali od velitele rozkaz střílet. | the number of CPs in the PDT is limited to 2,778 instances of CPs in 2,558 sentences. | neutral |
train_94833 | Combining the output from annotators is required since some NEs have been discovered by both annotation engines, while others come from only one annotator. | one can also observe that only a little more than 50% of all NEs are associated with a KB node. | neutral |
train_94834 | The semantic types of db:General_Motors and db:United_States_dollar correspond to the semantic types suggested by the hybrid module based on the textual context of these mentions, which are Organization and Miscellaneous, respectively. | this knowledge enables an entity linking system to model and perform reasoning over the semantic context of the real-world entities it links to. | neutral |
train_94835 | We propose an adaptive entity linking approach consisting of two steps ( Figure 1): a general-purpose hybrid module and a domain adaptation module. | an intelligent system that anchors entities from text to existing entities in a knowledge base, namely an entity linking system, could benefit from the structured semantic knowledge describing those entities in the knowledge base. | neutral |
train_94836 | At the first stage, we attained distributed representations of words by employing a fast unsupervised learning method on a large unlabeled corpus. | most of our NER models are trained on annotated Turkish news data by Tür et al. | neutral |
train_94837 | With this motivation, we explored both local and non-local features but observed that we achieve better results without non-local features. | all-capitalized, is-capitalized, all-digits, contains-apostrophe, and is-alphanumeric. | neutral |
train_94838 | Entity Linking (EL), such as the EL track at NIST Text Analysis Conference Knowledge Base Population (TAC-KBP), aims to link a given named entity mention from a source document to an existing Knowledge Base (KB) (Ji et al., 2014). | every vertex (m, c) has an initial similarity score iSim(m, c) between m and c. We split m and c into sets of tokens T m and T c and recognize two cases: 1) if T m and T c have any tokens in common then their similarity is 1.0; 2) otherwise it is a reciprocal of the edit distance between m and c: the pairwise initial similarity for "Buenos Aires" vs "Buenos Aires Wildlife Refugee" and for "Buenos Aires" vs "University of Buenos Aires" equals to 1.0. | neutral |
train_94839 | As we can see the number of Company and Product entities in the Synthesio corpus is almost four times more than in the Ritter Corpus. | we present some results of domain adaptation on this corpus using a labelled Twitter corpus (Ritter et al., 2011). | neutral |
train_94840 | Irish is a VSO language on the Celtic branch of the Indo-European language family tree. | mETEOR is based on the harmonic mean of precision and recall, whereby recall is weighted higher than precision. | neutral |
train_94841 | Although the evaluation showed improvement as more data is added to IRIS, the manual evaluators annotated the translation quality rather low. | within the main dialects however, the language is even more diverse, with sub-dialects spoken by individual language communities. | neutral |
train_94842 | To support this, a language model is created where the phrasal patterns (sequences of tokens) are recorded, together with the frequency-of-occurrence of each pattern. | the algorithmic steps are then described that form the process for augmenting the language model. | neutral |
train_94843 | Improvement has been reported when translated from French (+1.6 BLEU), German (+1.95 BLEU) or Hungarian (+1 BLEU) into English. | obtained three translations for each chunk are then evaluated and the best translation for the chunk is selected. | neutral |
train_94844 | The corpus contains 1.4 million unique legal domain sentences. | seven different translations for each source sentence were obtained. | neutral |
train_94845 | Nutch follows the standard IR model of Lucene 6 with document parsing, document Indexing, TF-IDF calculation, query parsing and finally searching/document retrieval and document ranking. | the collected corpus is noisy and contains some non-German as well as non-English words 7 http://www.statmt.org/wmt15/translation-task.html and sentences. | neutral |
train_94846 | When the translator selects the colour-coded TM alternative (c.f., Figure 3), the given input sentence is also colour-coded to reflect the matching and unmatching parts in the input sentence. | the substitution log can serve as a valuable resource for training a statistical automatic post-editing system. | neutral |
train_94847 | Since the data collection is task based, we thought the subjects would be focused on providing and receiving clear instructions. | 3 The ILMT-s2s System is activated by a "Push to talk" button that the subject will click-and-hold for the duration of the utterance, and release once the subject has finished the utterance. | neutral |
train_94848 | This test set contains English transcriptions of 12 TED conference talks (and their French translations), selected in such a way that the texts include a reasonable number of instances of some less frequent pronoun types. | previous approaches to pronoun translation evaluation include the automatic precision/recall-based measure of Hardmeier and Federico (2010), the (manual) pronoun selection task used in the DiscoMT 2015 shared task evaluation (Hardmeier et al., 2015) and methods based on manual counting (Le Nagard and Koehn, 2010;Guillou, 2012;Novák et al., 2013). | neutral |
train_94849 | The systems generally performed well on the translation of addressee reference "you", as compared with the baseline. | the manual annotation of the DiscoMt2015.test dataset was funded by the European Association for Machine translation (EAMt). | neutral |
train_94850 | Out-of-vocabulary (OOV) word is a crucial problem in statistical machine translation (SMT) with low resources. | the BtEC corpus is a multilingual speech corpus containing tourism-related sentences. | neutral |
train_94851 | @5) """) """) There are many publicly available word embedding toolkits. | we used the skipgram, which inputs each current word to a log-linear classifier with a continuous projection layer, and predicts its context words within a certain window. | neutral |
train_94852 | This deterministic handling of Twitter-specific syntax is applied to all further experiments in Table 3. | our experiments use a standard trigram HMM tagger 3 (Brants, 2000) and the openNLP maximum entropy tagger. | neutral |
train_94853 | All experiments are performed on the development part of our dataset. | we are interested in how much parse quality can be gained by text normalization. | neutral |
train_94854 | We believe there are two reason for this phenomenon: the first is that the QUESTION samples in the training set are not sufficient; the second, which we consider as the more crucial one, is that the sample we apply, a 5-word sequence, does not cover the sentence beginning, where the typical question pattern locates, such as "what do you ..." or "how can I ...". | obviously, our proposed models failed to predict even one question mark in the test dataset. | neutral |
train_94855 | Mini-bached AdaGrad (Duchi et al., 2011) and dropout (Srivastava et al., 2014) are used for optimization. | to our knowledge, this is the best speed result in the literature for constituent parsing. | neutral |
train_94856 | D1: The distance between a dependent word w i and its candidate head w j . | with these models, we can figure out which annotation is effective for raw text parsing. | neutral |
train_94857 | In the latter, mere numbers are used as a notation for the position of the token with respect to the next one in the input, numbered 0. | the annotation style of the It-tB resembles that used for the so-called analytical layer of annotation of the Prague Dependency treebank for Czech. | neutral |
train_94858 | The proposed approach applies the Lexicon-Grammar (LG) framework and its language formalization methodologies, developed by Maurice Gross during the '60s. | the previous automaton may process a query as the following one: (1) Tutti gli archeologi che sono stati anche scrittori nati nel '900 (All the archaeologists who have been also writers and were born in 19th century) 3 . | neutral |
train_94859 | The AMs in this procedure use (stacked) MFCC features and were trained using the Kaldi toolkit (Povey et al., 2011). | hence, more anchor segment data can be extracted as training material for the next AM which, in turn, will hopefully lead to an AM recognizing a larger amount of speech correctly. | neutral |
train_94860 | The resulting diagnoses of the ILSE participants are shown in Table 3. | speaking style and Crosstalk: The speaking styles of interviewers and participants differ substantially: While the semi-standardized questions of the interviewers are usually short and well planned, participants answer in detail, great length, and in a very spontaneous fashion. | neutral |
train_94861 | the position and orientation of the microphone placed on the table. | as expected, the outof-vocabulary (OOV) rate of the training word types for the interviewers' speech is considerably lower than for the participants' speech. | neutral |
train_94862 | These sequences are then split at speaker turns and used to train a new AM for the next iteration. | we can make use of the transcriptions which originally could not successfully be aligned with the audio. | neutral |
train_94863 | What names are displayed? | if inter-speaker variability is an important factor, intra-speaker variability is not less important (Kahn et al., 2010). | neutral |
train_94864 | The talent was also instructed to be consistent in pausing in case the corpus is to be used for prosody modelling. | to choose criteria for iteratively choosing sentences, a simple count was adopted where each sentence was scored by the following formula: Where ( , ) is the "Sentence Score" of the sentence relative to corpus , ( ) is the "Sentence Unit Frequency" which is the number of times a specific unit indexed by appears in the sentence and ( ) is the "Corpus Unit Frequency" which is the number of times a specific unit indexed by appears in the corpus . | neutral |
train_94865 | Knowing that the speakers had a constant gait velocity, we resampled the resulting skeleton tracks by considering linear interpolation, which yielded data points with equally spaced time-intervals. | then we determined the point on the trajectories where the squared error of a detection exhibited the global minimum, and mapped the detection to this point of the trajectory (see Fig. | neutral |
train_94866 | The XML document contains the same speech fragment that is depicted in Figure 1. xml version="1.0" encoding="UTF-8"?> <files> <file name="[Recording Name]"> <fragment place="2" recorder="2" speaker="8" type="2"> <part length="0.52" audio_file="[File Path]" >(0.52)</part> <part length="1.611" audio_file="[File Path]" >čau <c type="exclamation_mark"> izsaukuma zīme </c></part> <part length="0.528" audio_file="[File Path]" >(.h)</part> <part length="1.482" audio_file="[File Path]" >nopērc lūdzu pienu</part> <part length="1.119" audio_file="[File Path]" >(. | this allowed us to identify issues in the annotations that would possibly corrupt the data in our speech recognition system training workflows (e.g., incorrect number of words in the orthographic annotation that a spelling correction is linked to, unclosed tags, overlapping tags, etc.). | neutral |
train_94867 | This means that only the SAT was repeated. | the acoustic models get more adapted to the 100 hours of non-dictation speech data. | neutral |
train_94868 | There are some children's speech databases for EP, such as Speecon with rich sentences (Speecon Consortium, 2005); ChildCAST (Lopes, 2012;Lopes et al., 2012) with picture naming; the Contents for Next Generation (CNG) Corpus targeting interactive games (Hämäläinen et al., 2013) and (Santos, 2014;Santos et al., 2014) with childadult interaction. | <mãe> [mˈɐj] 2 and <bem> [bˈɐj]); the presence of consonant clusters (e.g. | neutral |
train_94869 | According to the outcomes of the efficacy tests presented in (Beijer et al., 2014), the user satisfaction appears to be quite high. | the first dysarthric speech data collection of Dutchspeaking patients aimed to use the data for a pilot study to investigate the performance of speech-to-text systems on deviant speech (Sanders et al., 2002). | neutral |
train_94870 | These corpora consist of single and 1 According to Ethnologue: www.ethnologue.com/ language/urdLast visited: 04-03-2016 2 DUC: www-nlpir.nist.gov/projects/duc/ Last visited: 04-03-2016 3 TAC: http://www.nist.gov/tac/ Last visited: 04-03-2016 multi-document summaries written by humans. | they contain real text as written by the native speakers. | neutral |
train_94871 | We further apply normalization, part-of-speech tagging, morphological analysis, lemmatization, and stemming for the articles and their summaries in both versions. | urdu Morphological Analyzer is built in Haskell (Marlow, 2010) (using Functional Morphology Toolkit (Forsberg and Ranta, 2004)), but it is not updated from a long time. | neutral |
train_94872 | Our three freelancers (2 men and 1 woman) are university student and native speaker of Bahasa Indonesia. | as discussed in previous section, we utilize Whatsapp, one of the most well-known online instant messaging application, to construct the summarization corpora. | neutral |
train_94873 | Most of these logs contain conversations discussing daily activity such as: soccer group, running hobby, faculty organization, family group, and software team development as shown in Table 1. | we construct the first ever Indonesian corpora for chat summarization by employing three native speakers to manually build the summary. | neutral |
train_94874 | Our method is based on the well established linguistic premise that semantically related words occur in similar contexts (Turney et al., 2010). | table 2 also shows the results of other SERA variants including discounting and query reformulation methods. | neutral |
train_94875 | It then uses the KL divergence between the document and the summary content models for selecting sentences for the summary. | the content quality of a given candidate summary is evaluated with respect to this pyramid. | neutral |
train_94876 | Since its introduction, ROUGE has been one of the most widely reported metrics in the summarization literature, and its high adoption has been due to its high correlation with human assessment scores in DUC datasets (Lin, 2004). | these lists of results are based on a rank cut-off point n that is a parameter of the system. | neutral |
train_94877 | Our aim is to analyze the effectiveness of the evaluation metrics, not the summarization approaches. | we asked two human annotators to review the gold summaries and extract content units in these summaries. | neutral |
train_94878 | For instance, comment sentences linked to the same article sentence can be seen as forming a "cluster" of sentences on a specific point or topic. | i daresay we can 'adapt' to a certain extent but there are limits. | neutral |
train_94879 | discusses online forums and the OnForumS corpus creation, Section §3. | the crowdsourcing Human Intelligence Task (HIT) was designed as a validation task (as opposed to annotation), where each system-proposed link and labels are presented to a human contributor for their validation with both article sentence and comment sentence placed within context (see Fig. | neutral |
train_94880 | WordNet represents a cornerstone in the Computational Linguistics field, linking words to meanings (or senses) through a taxonomical representation of synsets, i.e., clusters of words with an equivalent meaning in a specific context often described by few definitions (or glosses) and examples. | the former is a top-down and often human-generated representation of a domain whereas the latter comes from free tags associated to objects in different contexts. | neutral |
train_94881 | For this reason, the algorithm Then, for each synset S i we compute the set of all candidate semantic ConceptNet triples P conceptnet (S i ) as the union of the triples that contain at least one of the terms in T i . | it aimed at enriching WordNet with semantics containing direct relations-and words overlapping, preventing associations of semantic knowledge on the unique basis of similarity scores (which may be also dependent on algorithms, similarity measures, and training corpora). | neutral |
train_94882 | We have ensured a fully random selection of verbs to annotate. | on the whole, there are so few alternative readings in the data set that they could hardly harm the interannotator agreement, so we have not processed them in any sophisticated way we had been considering before obtaining the results. | neutral |
train_94883 | Before drawing any conclusions from the data, we measured the interannotator agreement. | we had speculated that WSsim/USim data might have been slightly easier to annotate, given the lemmas processed: while VPS-GradeUp contains only Figure 2: The annotation form using Google Forms verb lemmas (29), WSsim contains 11 lemmas of various parts of speech (4 verbs: add, ask, order, and win, 5 nouns: argument, function, interest, investigator, and paper, and 2 adjectives: important and different). | neutral |
train_94884 | (GermaNet provides twelve such relation types, which are all (sub)classes of meronymy/holonymy, entailment, causation, and association.) | we next consider the semantic relations that link the successfully annotated target senses to their lexical substitutes. | neutral |
train_94885 | Finally, we note that the majority of substitutes cannot be reached by following semantic relations of a single type. | the latter basically instructed the adjudicator, for each instance on which the annotators disagreed, to accept one or the other set of annotations, or the union of the two. | neutral |
train_94886 | This metric, borrowed from information retrieval, measures the accuracy of each cluster with respect to its best matching gold class: where Ω = {ω 1 , ω 2 , . | olympionike ("olympian") suggests that one of the annotators has exploited his or her real-world knowledge of the context's subject (in this case, Hollywood actor and competitive swimmer Johnny Weissmuller). | neutral |
train_94887 | A typical example is Sentence 9. | our classifiers are able to distinguish between literal and idiomatic uses of infinitive-verb compounds with quite a high degree of precision. | neutral |
train_94888 | Spelling became a political issue in Germany through the German orthography reform, which was decreed in 1996 and caused a fierce controversy carried out in public as well as uncertainty even among professional writers about the correct spelling. | to see the influence of individual groups of features, we performed an ablation test (see Figure 1): Disappointingly, selectional preference information has a slightly negative effect. | neutral |
train_94889 | The better the annotators agree on a particular pattern -concordance match, the smaller the range of their judgments is. | the WSD decisions associated with a given sentence ID are copied to all relevant rows. | neutral |
train_94890 | We hope that this research will help with a more nuanced evaluation of the classification and entry-building tasks. | 2 Median and Range translate as "Goodness of match between a pattern and a concordance" and "Agreement on this goodness of match", respectively. | neutral |
train_94891 | We also define class pmonb:Tag to capture (via property pmonb:tag) some specific annotations of markables (e.g., PRD, REF, SUPPORT) in the examples. | preMOn is freely available and accessible online in different ways, including through a dedicated SpARQL endpoint. | neutral |
train_94892 | NomLex-PT offers a list with some 4,240 pairs of related verb/noun forms in Portuguese. | since we have no Portuguese pair that directly corresponds to "apportion" and "apportionment", morphosemantic links generated from this pair, if any, are considered as connecting alocar-alocação in Portuguese. | neutral |
train_94893 | To construct NomLex-PT, we semi-automatically translated the original English NomLex (Macleod et al., 1998), the French Nomage (Balvet et al., 2011), the Spanish AnCora-Nom (Peris and Taulé, 2011) and manually verified the pairs acquired. | the work we describe here consists in adding to the pairs of translated morphosemantic links, that is to pairs of senses of verbs/nouns in Portuguese, a label from Princeton's table and making such a triple, a link of the OpenWordnet-PT. | neutral |
train_94894 | In order to bridge the gaps that currently separate the various existing morphological data resources and models described above, we developed the MMoOn Core model 23 . | the examination of the related works in the domain of morphological data revealed five types of language resources. | neutral |
train_94895 | We plan to deliver the chapter of the letter bā' by the time of LREC 2016. | a similar treatment might be envisaged for schemes. | neutral |
train_94896 | golpe de * que le, 'stroke of that his', third element), where * indicates the location of the target word in the context. | 48), and it corresponds with the frequency of the complex inflected word (fortuna, 'fortune', second element) in its context (e.g. | neutral |
train_94897 | Algorithm First, we modified the Spanish OpenThesaurus and created our List of Senses. | openThesaurus does not have information of what synonyms are simpler or more complex, something crucial to perform lexical simplification. | neutral |
train_94898 | In this method, a lexicon is used to spell-check OCR-recognized words and correct them if they are not present in the dictionary. | furthermore, when many consecutive corrupted words are encountered in a sentence, it is difficult to choose good candidate words. | neutral |
train_94899 | For this purpose, Optical Character Recognition (OCR) systems have been developed to transform scanned digital text into editable computer text. | this is called the language model. | neutral |
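
The rows above follow a simple four-column schema: an `id` string, two sentences, and a `label` drawn from four classes (e.g. "neutral"). Below is a minimal sketch of how such a split could be loaded and inspected with the Hugging Face `datasets` library; the repository identifier `user/sentence-pair-nli` is a placeholder, not the actual dataset path, so substitute the real name or a local data file when using it.

```python
from collections import Counter
from datasets import load_dataset

# Placeholder identifier -- replace with the real dataset repository or a local file.
ds = load_dataset("user/sentence-pair-nli", split="train")

# Expected columns: ['id', 'sentence1', 'sentence2', 'label']
print(ds.column_names)

# Inspect a single row, e.g. an id like "train_94800" with its label.
example = ds[0]
print(example["id"], example["label"])
print(example["sentence1"])
print(example["sentence2"])

# The label column is categorical with 4 classes; count their distribution.
print(Counter(ds["label"]))
```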