id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses: 4 values) |
---|---|---|---|
train_19100 | in the US over 200 million citizens are eligible to vote and thus can participate in RegulationRoom (Farina and Newhart, 2013; Park et al., 2012). | the volume of data makes the task of its interpretation and summarisation extremely challenging. | contrasting |
train_19101 | As tweet-level geotagging remains rare, most prior work exploited tweet content, timezone and network information to inform geolocation, or else relied on off-the-shelf tools to geolocate users from location information in their user profiles. | such user location metadata is not consistently structured, causing such tools to fail regularly, especially if a string contains multiple locations, or if locations are very fine-grained. | contrasting |
train_19102 | Many existing geography-based Twitter visualisations are therefore limited to this highly biased subset of data. | tweet location is not a good proxy for a person's home location and can significantly distort the results of any study or visualisation which tries to capture information for different countries or regions. | contrasting |
train_19103 | The official Twitter documentation states that this record field is "[n]ot necessarily a location nor parseable". | we showed that the Edinburgh Geoparser can be adapted to carry out effective geolocation on user profiles. | contrasting |
train_19104 | Often, the proportion of tweets in favor of a target may not be similarly numerous as those against it. | we did not want scenarios where there are no tweets in favor of a target or no tweets against it. | contrasting |
train_19105 | It is clear that this method emphasized precision over recall, as it emulates a grep-style filtering program. | it does set a more challenging baseline than random assignment. | contrasting |
train_19106 | The tweet length is suitable for typical crowdsourcing tasks, thus, there is no need for pre-processing of Tweet2014DS1 and Tweet2015DS2 datasets. | to optimize the length of the news articles for the crowdsourcing task, each news article was split into text snippets, i.e., sentences. | contrasting |
train_19107 | The fact that about 84% of the total amount of tweets are considered relevant, out of which about 90% could indicate highly relevant tweets, shows that retrieving tweets based on relevant keywords or domain experts' seed words returns acceptable results. | there is still room for improvement, i.e., assessing the relevance of the tweets with regard to the event "whaling" is still necessary. | contrasting |
train_19108 | By performing this comparison we can state that usually, non-expert people have different ways to express or refer to a given event, in contrast to domain experts that have very specific terms to compose the space of an event. | this difference does not prove that the tweets cannot contain useful information, e.g., TWEET2 in Example 2, but it gives meaningful insights that the topic space given by the experts can be further enriched. | contrasting |
train_19109 | As a general trend, we observed that overlapping text snippets usually have higher similarity scores, while the non-overlapping text snippets have lower scores. | the relevance scores provided by the automated method have a much smaller interval, between 0 and 0.66. | contrasting |
train_19110 | used in similar context even if not together. | all the models are closest to the average scores, suggesting that the models learn a sort of combination of Similarity and Relatedness. | contrasting |
train_19111 | The proposed method was evaluated on a semantic similarity task (WS353 dataset), where the multimodal model was found to yield higher performance compared to the textual one. | the best performance was moderate (0.32 correlation coefficient). | contrasting |
train_19112 | (2011) trained multiple classifiers to handle coreference between event mentions of different syntactic types (e.g., verb-noun coreference, noun-noun coreference) on the OntoNotes corpus. | since event coreference links and entity coreference links are not distinguished in OntoNotes, Chen et al. | contrasting |
train_19113 | For the command recognition task, an equally-likely finite state grammar formed by all the unique possible command sentences was initially used. | in practice it was necessary to use an extended command grammar incorporating the background model to better handle inaccurate segmentations provided by the automatic SAD. | contrasting |
train_19114 | A considerable contribution to the recall decrease is due to the miss segmentation errors introduced by the SAD module (10.7%). | the inserted segmentation errors in combination with the challenging characteristics of the data contribute to the precision performance drop. | contrasting |
train_19115 | When using automatic segmentation, the command recognition performance was also remarkable in most of the acoustic conditions. | it dramatically degraded in the presence of overlapping and continuous voice-like noise, such as TV and radio. | contrasting |
train_19116 | For the emotion dimension estimation, the automatic crosscorpus emotion labeling was effective for some dimensions, showing only slight performance degradation. | we could not obtain sufficient performance for the emotion category estimation. | contrasting |
train_19117 | Speech-enabled interfaces have the potential to become one of the most efficient and ergonomic environments for human-computer interaction and for text production. | not much research has been carried out to investigate in detail the processes and strategies involved in the different modes of text production. | contrasting |
train_19118 | P19), and show much worse performance than in the translation mode. | it seems that the more time a translator needs for translation, the more likely he or she will be quicker with post-editing and dictation. | contrasting |
train_19119 | This might also explain an observation of Ciobanu (2014), who reports that "less experienced translators tend to stay away from ASR at the beginning of their careers", while "within the professional experience groups … ASR does have a positive impact on productivity": Translation students struggle often more with source text comprehension than expert translators, which may make it more difficult for them to overcome a word-by-word translation mode and to produce longer target text sequences, which are at the same time also crucial to reduce the ASR error rate and to enhance word recognition. | as outlined above, a number of parameters may play a role in a better acceptance and usage of the ASR technology in the translation community. | contrasting |
train_19120 | This route can complement existing tools for harvesting IGT from PDF documents, in particular ODIN (Lewis and Xia, 2010), which have to cope with much more noise in the source channel resulting from lossy pdf-to-text conversion. | parsing LaTeX comes with its own challenges, as detailed in section 5. | contrasting |
train_19121 | As there is no widely accepted orthographic standard for Swiss German, we use a set of recommendations proposed by Dieth (1986). | the implementation of the recommendations changed over the time. | contrasting |
train_19122 | The decrease in accuracy observed with Test1 was expected, since this document comes from a dialect that has not been seen in the training, which also shows in the higher proportion of unknown (New) words in this text. | automatic normalisation showed even better accuracy for Test2 than for cross-validation; this text was dialectologically more closely related to the ones used for training the tool. | contrasting |
train_19123 | Data inputs are referred to here as examples, and the desired output values as labels. | learning algorithms require examples in specific formats. | contrasting |
train_19124 | If this feature extractor were instead called with the word constituent for "Jane" as its argument, it will extract the same features because its left context is the same. | a feature extractor that used the right context of the focus would produce different results, since the NER constituent covers an additional word, and the right context for that constituent will begin after the second word. | contrasting |
train_19125 | Up to our knowledge, we are the first to provide the readability annotation feature as a collaborative online tool to help in constructing training corpora for different readability services. | mADAD can also be used as a general-purpose annotation tool for Arabic text. | contrasting |
train_19126 | A way to understand the model by Louis and Nenkova (2012) is to see it as an alignment model between syntactic items. | that model does not have any latent variables, which is possible under the assumption that all available alignment configurations have been directly observed in the training data. | contrasting |
train_19127 | A limitation of the previous guidelines and annotation was the provision for a single CoreSC concept per sentence. | it may be the case that more than one CoreSC concept (e.g. | contrasting |
train_19128 | The smallest unit to be annotated are still sentences, even though in principle individual clauses may fit better with CoreSCs and reduce the overlap in annotations. | we decided against the annotation of clauses, as recently employed clause recognition algorithms seem to be purpose-built for specific application areas (e.g. (Del Corro and Gemulla, 2013) or (Kanayama and Nasukawa, 2012)) and prior built clause detection mechanisms performed with F-measures up to 78.63% (Tjong et al., 2001), which in itself could introduce noise to the task of automatically identifying CoreSC concepts. | contrasting |
train_19129 | It means, however, that if there is disagreement on the first label potential agreements on the second and third labels do not count. | if two annotators agree on the first label but not on the rest, this can return a higher value than weighted kappa. | contrasting |
train_19130 | Table 3 shows that the annotators tend to disagree on when multiple annotations need to be assigned to a sentence which is why the resulting consensus largely (96%) consists of sentences with single annotations. | these single annotations per sentence may be more accurate overall as opposed to the previously published corpus as the final decision on the gold standard annotation has been derived from a conservative consensus, taking the priority of concepts and the most reliable annotator into account. | contrasting |
train_19131 | Table 4 shows that distributions of the different CoreSC categories are more or less the harmonised values from the different annotators. | the Model category ('Mod') stands out from the others in that it almost disappears from the gold standard. | contrasting |
train_19132 | The 8 features we used might not be as effective in capturing their distinct qualities. | it could also be that the content of satirical and non-satirical entertainment articles does not differ as much as is the case for the other topics, making it harder for the classifiers to differ between both classes. | contrasting |
train_19133 | WikiExtractor is designed to extract the text content or other kinds of information from the Wikipedia articles, while Sweble is a real parser that produces abstract syntactic trees out of the articles. | both WikiExtractor and Sweble are either limited or complex as users must adapt the output to the type of information they want to extract. | contrasting |
train_19134 | A similar German database is OpenThesaurus 4 , which is available under the GNU license. | it only provides synonym and association relations (Naber, 2004). | contrasting |
train_19135 | CL-ESA seems to show better results on comparable corpora, like Wikipedia. | CL-ASA obtains better results on parallel corpora such as JRC, Europarl or APR collections. | contrasting |
train_19136 | Statistical Machine Translation (SMT) relies on the availability of rich parallel corpora. | in the case of under-resourced languages or some specific domains, parallel corpora are not readily available. | contrasting |
train_19137 | The amount and diverse studies and efforts show that comparable corpora is certainly a useful resource to determine helpful material for SMT but also for related studies such as bilingual term extraction, cross-lingual information retrieval, etc. | most efforts have been related to European languages and less to Middle Eastern languages. | contrasting |
train_19138 | There have also been a few attempts at implementing tools to aid in correspondence discovery (Lowe and Mazaudon, 1994;Covington, 1996). | these tools only assist in finding examples for hypothesized correspondences and do not allow to easily find completely new ones. | contrasting |
train_19139 | Usage in our scenario: It is easy to see that two-part MDL naturally guards against overfitting by weighing each rule's complexity against its utility. | if the desired correspondences are not purely statistical in nature, as is the case here, then we may benefit from abusing the MDL formalism slightly. | contrasting |
train_19140 | We check for applicability in the most general way possible: by simply seeing whether the strings associated by a rule are present in any of the word pairs. | many of the applicable-but-not-found rules correspond to quite deep linguistic processes. | contrasting |
train_19141 | in бос -босс (bos -boss) 'boss' or дискусия -дискуссия (diskusija -diskussija) 'discussion', found in iteration 141). | the model did not recognize the following regular orthographic correspondences between Bulgarian and Russian (Gribble, 1987;Valgina et al., 2002;Ivanova et al., 2011): (л:лл) [l:ll] (алигатор -аллигатор (aligator -alligator) 'alligator', колега -коллега (kolegakollega) 'colleague'); (р:рр) [r:rr] (перон -перрон (peron -perron) 'platform' etc. | contrasting |
train_19142 | Our diachronically-based orthographic correspondences already include both mentioned correlates. | in the Pan-Slavic word list, there are long forms of adjectives as cognates for Russian: BG зъл (zăl) -RU злой (zloj) 'wicked'. | contrasting |
train_19143 | Among the BG-RU correspondences that the model suggested are the following correspondences of noun endings, sorted by frequency upon discovery 7 (iteration in brackets): e.g., (а#:а#) 149 (12) (for feminine); (я#:я#) 40 (38) (for feminine); (о#:о#) 36 (45) (for neuter) etc. | the last ending is ambiguous and may also be an adverb ending, for example, BG-RU:много -много (mnogo -mnogo) 'a lot of'. | contrasting |
train_19144 | This architecture enables multiple annotators to work on different tasks simultaneously. | the administrator manages only one central database. | contrasting |
train_19145 | The [Begin, End] time interval of the French translation fragments will allow the alignment tool to identify the right fragment (the one that includes the [Begin, End] time interval of the sign). | since the very same translation fragment may be linked to several and possibly numerous successive LSFB signs, the tool will then need to further slice the translation fragment in smaller fragments, each relating to one "clause-like" unit in LSFB. | contrasting |
train_19146 | The alignments order was shuffled and this enabled the corpus to be available under CC-BY license through META-SHARE. | this prevents the research in language units over the sentence level. | contrasting |
train_19147 | Additionally, unexposed broadcast data from prior LDC collection efforts contributed additional recordings for Indian English, Mandarin, Modern Standard Arabic and US English. | to CTS, identification of individual speakers in the broadcast data is unfeasible. | contrasting |
train_19148 | We present FlexTag, a highly flexible PoS tagging framework. | to monolithic implementations that can only be retrained but not adapted otherwise, FlexTag enables users to modify the feature space and the classification algorithm. | contrasting |
train_19149 | Training FlexTag Most other trainable taggers only support one input format and users are supposed to transform their data in the required format. | flexTag makes no assumptions about the input format and relies on the UIMA reader concept supporting all readers that are compatible with the DKPro type system. | contrasting |
train_19150 | In this case, we simply request the text of the current token and check whether it starts with an @ sign. | more complicated actions like accessing neighbouring tokens are easily possible. | contrasting |
train_19151 | Croatian lijep, Serbian lep) which was already encoded during the Croatian annotation. | after each round of annotation, the list of OOVs was regenerated, and the paradigm prediction model was retrained. [Table: hrLex, 99,680 lemmas, 4,971,257 surface forms; srLex, 105,358 lemmas, 5,327,361 surface forms.] even after finishing all the annotation rounds and greatly expanding the apertium lexicon, its format is still not ideal for our purposes. | contrasting |
train_19152 | Corpora are, additionally, cheaper resources to produce. | adding the large lexicons does push the results significantly further, a phenomenon much less observable with the HunPos tagger. | contrasting |
train_19153 | In total there are 163 contractions in TGermaCorp, which are summarized in Table 1. | since we do not assume that concatenated POS constitute proper parts of speech, we bifurcate them and use the split, "atomic" categories for analyses. | contrasting |
train_19154 | Our assessment shows that in terms of lexical similarity of sentences and their complexity the TGermaCorp and the Tiger treebank are comparable. | part of the diversity is due to the influence of proper names, which occur with different frequencies in various resources. | contrasting |
train_19155 | As the results presented show, models with Senna initialization outperformed those initialized with GloVe in almost all experimental setups; slightly on the POS and to a larger extent on the NER dataset. | since these vectors were trained on different datasets, we cannot conclude that the model that generated Senna is better suited to this task; we only compare the utility of the resulting word vectors. | contrasting |
train_19156 | Religious text such as the Holy Qur'an are fully diacritized. | in newswire text, 1.6% of all words have at least one diacritic indicated by their author, mostly to disambiguate the text for readers (Habash, 2010). | contrasting |
train_19157 | This is why the partial diacritization system performs better than the baseline even with a 0% diacritization rate (0.3% absolute increase on Star). | applying the partial diacritization system on a raw text performs worse than the baseline (0.4% absolute decrease on Star). | contrasting |
train_19158 | Similar to two-part conjunctions there are also pairs of clitics. | the clitics that can make up the first part of a pair can also occur on their own. | contrasting |
train_19159 | Data needed for training dedicated tools for these texts is often not available. | in case the non-canonical text is related to another language, e.g., the modern stage of the language, we can exploit this relatedness to facilitate tagging. | contrasting |
train_19160 | This is crucial to know since this means that neither tools developed for MHG (Dipper, 2010;Bollmann, 2013) nor for standard German will work reliably. | its relative closeness to both can nevertheless be beneficial. | contrasting |
train_19161 | Model transfer from MHG and stacking outperform all approaches without external resources with accuracy scores of 0.73 and 0.77, respectively. | the LSTM can outperform two of the extended approaches, namely model transfer from NHG and tritraining. | contrasting |
train_19162 | There is related work on the automatic discovery of concept signatures, described by overlapping (Lin and Pantel, 2002) or fuzzy (Velldal, 2005) word clusters. | although words in the same cluster have a strong relation, they are not exclusively synonyms, and thus a cluster cannot be seen as a wordnet synset. | contrasting |
train_19163 | Those numbers can be compared with TeP's, handcrafted, and thus a possible reference. | the average number of senses and synset size in CLIP 2.0 are closer to TeP's. | contrasting |
train_19164 | Such a query would, however, have been far less accurate since on the one hand, in the case of lyrics it would have included non-lyrical text within volumes of poems, such as prose (prefaces, introductions, explanatory texts, notes), indices, title pages etc. | poems contained by works of prose would have been evaluated as prose. | contrasting |
train_19165 | This pipeline architecture is appealing for various reasons, including modularity, modeling convenience, and manageable computational complexity. | it suffers from the error propagation problem: errors made in one sub-task are propagated to the next sub-task in the sequence. | contrasting |
train_19166 | In particular, when applying an MLN to different problem instances, the first-order formulas remain unchanged: all we need to change are the evidence predicates (i.e., the observations). | an ILP program cannot encode a problem instance compactly. | contrasting |
train_19167 | Then, they asked a single member in their research group to rate the credibility of those 1000 articles on a 5 Likert scale. | this corpus is in English, and was rated by a single person only, which makes it inapplicable in our case. | contrasting |
train_19168 | For example, judging the credibility of a tweet using only its text is not enough; instead one may additionally rely on the author background, expertise and external web references. | a blog post might contain enough cues in its text to assess its credibility. | contrasting |
train_19169 | For example, consider Figure 1, which shows random input vs. input selected using AL to consider the network structure. | this brings us to what we define as the chicken-and-egg corpus and model conundrum, which refers to how AL often happens in a closed-loop process, the underlying model or models directly influencing which data is selected for annotation, which improves the model's accuracy, and so on. | contrasting |
train_19170 | squared error loss (f(x) − y)². | the task of active learning is then to find a set of points to label X_L so as to minimize the expected prediction error: in our settings the task is unknown, i.e. | contrasting |
train_19171 | Predominantly, the purpose of the subgraph-based characteristics should be to ensure properties of the whole graph are preserved well within the subgraph. | note that the subgraph does not need to precisely reflect the characteristics of the overall graph (this is a goal of network sampling introduced in Section 3.3). | contrasting |
train_19172 | Given a relatively regular structure of reported speechalthough quite different in different languages, and even across varieties, see (Santos, 1998) for some discussion -, rule-based approaches to QE are often extremely successful. | purely formal marks that indicate the presence of a quotation, such as quotes in English, are not unique to this purpose, hence recognizing the specific verbs that are used in these contexts is highly relevant. | contrasting |
train_19173 | This QM tries to find out whether the Agent uses positive words during the conversation with the customer. | since agents do not use very negative (rude words for example) or very positive words, the association of words with sentiment labels is highly subjective. | contrasting |
train_19174 | Thus, for instance, to find sentences in a topic area, one must join a clutch of tables together (posts, discussions, topics, texts, and sentences). | this is balanced by the fact that we do not have to loop over the entire dataset to find objects of interest. | contrasting |
train_19175 | Corpora such as the HCRC MapTask corpus of dyadic information gap task-based conversations (Anderson et al., 1991), ICSI and AMI multiparty meeting corpora (Janin et al., 2003;McCowan et al., 2005), and resources such as recordings of televised political interviews (Beattie, 1983) have contributed greatly to our understanding of different facets of spoken interaction such as timing, turntaking, and dialogue architecture. | the speech in these resources, while spontaneous and conversational, cannot be considered casual talk, and the results obtained from their analysis may not transfer to the less studied 'unmarked' case of casual conversation. | contrasting |
train_19176 | Therefore, it may seem that the processing for the purposes of statistics and research (including, arguably, the improvement of the translation model) may be allowed. | according to WP29's opinion one of the key factors in assessing purpose compatibility should be 'the context in which the data have been collected and the reasonable expectations of the data subjects as to their further use'. | contrasting |
train_19177 | Since mobile devices have feature-rich configurations and provide diverse functions, the use of mobile devices combined with the language resources of cloud environments is highly promising for achieving wide-ranging communication that goes beyond the current language barrier. | there are mismatches between using resources of mobile devices and services in the cloud such as the different communication protocol and different input and output methods. | contrasting |
train_19178 | In addition, critical issues of composing, and integrating different types of services need to be solved. | these tasks are not easy for all developers. | contrasting |
train_19179 | The speech recognition service, translation service and text-to-speech service are required. | both mobile device and the cloud provide the speech recognition service and text-to-speech service. | contrasting |
train_19180 | Unfortunately, many tools and formats were created for just a single kind of annotation, for instance the Tiger-XML format (Lezius et al., 2002), which was created exclusively for constituents, or the MMAX2 format (Müller and Strube, 2006), which was created exclusively for coreferences. | in order to build multi-layer corpora we need to combine different kinds of annotation. | contrasting |
train_19181 | Thus, the maximum size of corpora it can process mainly depends on the number of nodes and edges in a corpus, not on token figures. | since ANNIS uses the PostgreSQL relational database 12 it is key to have a powerful server with sufficient main memory when querying corpora containing more than 1m nodes. | contrasting |
train_19182 | Dutch and Swedish data will not be used to create corpora because they are represented too sparsely in the original data sets. | based on the previous experience with similar crawl data, the English COCO1507 corpus is expected to be 110 billion tokens large. | contrasting |
train_19183 | By the time of this writing, they have been implemented and tested on small data sets but not on the actual large data sets. | both the COCOA data files and the Python tools will be made available still in 2016. [Table 2: Estimated overhead for end users involved in the corpus reconstruction process on a single Intel Core i7 CPU with a 100 Mbit/s downstream: download WARC and COCOA, 60 s; COCOA/WARC merger, 20 s; total, 80 s per file.] | contrasting |
train_19184 | This feature allows multiple users that have access on the same document to interact with each other (create, edit or delete annotations) and view the changes at the same time. | to this approach, annotation solutions such as GATE Teamware and WebAnno create a separate view of the annotated document for each annotator and only the curator of the annotation task is able to review the results of the annotation process. | contrasting |
train_19185 | For instance, within the META-SHARE project, the META-SHARE Commons set of licenses was produced to allow META-SHARE members and Extraneous Depositors to make their resources available to other META-SHARE network members only. | the abandon of these licenses is now highly recommended, due to the fact that they are now well and fully covered within the latest Creative Commons version 4.0. | contrasting |
train_19186 | LRE does not explicitly seek to work on low resource languages. | since LRE's goal is to develop robust technologies that perform well even as the number of linguistic varieties increases, and since the number of well-resourced varieties is relatively small, it is inevitable that LRE would include low resource varieties. | contrasting |
train_19187 | If the language has too few resources, the project could mire in LR creation. | if the language were too well resourced, the experience might not represent other low resource languages. | contrasting |
train_19188 | This type of content is more stable, since it is meant to stay unmodified overtime. | corporate-generated content also includes more dynamic textual types, such as press releases, news, event announcements, calendars and even blog entries. | contrasting |
train_19189 | Another strategy is to refine the SMT system by including a pre-processing step in which potential spelling errors are modelled (through a Confusion Network) and subsequently recovered by the decoder on the basis of a character n-gram language model (Bertoldi et al., 2010). | both approaches have drawbacks. | contrasting |
train_19190 | As a matter of fact, for an e-commerce website to be able to replicate this approach in a realistic situation, it would need to have a huge amount of translated reviews already available to train on (which would be applicable only to a handful of very big e-commerce players). | by testing on a different dataset, our approach is more robust and its applicability open to any e-commerce player, e.g. | contrasting |
train_19191 | To meet the demand, several benchmarks have been constructed for English TOEFL, BLESS (Baroni and Lenci, 2011), LENCI/Benotto and EVALution 1.0. | to date there is no dataset especially designed for DSMs in Mandarin Chinese. | contrasting |
train_19192 | The presence of a prefix (pro-) in prokuhao indicates that the described event is completed; i.e., verb aspect is encoded via a lexical derivation. | this encoding is not direct, as similar verb affixes are not formal aspect markers. | contrasting |
train_19193 | We come back to the cases where such forms are not marginal later. | changing grammatical aspect from perfective to imperfective without a prefix is realised through various changes in the verb stem. | contrasting |
train_19194 | pisati - napisati - *napisivati 'write', čistiti - očistiti - *očišćavati 'clean', čitati - pročitati - *pročitavati 'read'). | there is no obvious regularity as to which prefixes behave in this way. | contrasting |
train_19195 | pisnuti 'peep' is not derived from pisati 'write') In these cases, relatedness between derivations cannot be established based on a simple set of rules. | the fact that related forms tend to be morphologically similar and carry similar meanings constitutes a sound basis for various stochastic approaches involving machine learning. | contrasting |
train_19196 | If there is a shared synset node between these two paths and this node distance is short, the word in the sentence is regarded as 'Literal'. | it is possible to be a metonymic expression if there is no shared synset or the node distance is long. | contrasting |
train_19197 | It would be preferable to encode only the fact that Mary broke a vase. | because the relation CAPABLE-OF cannot account for the semantics of the verb "break", it generates an underspecified relation between Mary and the vase. | contrasting |
train_19198 | Attempts to address these limitations was tackled in our previous work (Goodwin and Harabagiu, 2013) where a qualified medical knowledge graph (QMKG) was presented. | the QMKG does not capture compositional semantics of medical concepts, nor the lexical relations between them. | contrasting |
train_19199 | This allowed us to leverage Spark's GraphX library (Xin et al., 2013) for graph-parallel computations. | processing such a large knowledge graph is challenging, even in a distributed architecture. | contrasting |