id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
---|---|---|---|
train_9000 | Since HPYLM is based on a Bayesian framework, we can integrate other probabilistic models theoretically for other problems and apply optimization methods in accordance with a Bayesian framework. | other smoothing methods has several parameters that need to be tuned manually. | contrasting |
train_9001 | Running a language model on a recurrent neural network (RNN) (Mikolov et al., 2010) is, of course, a reasonable choice because of the good prediction performance for closed-vocabulary task. | a neural network LM usually does not include a generative process, so it is difficult to apply to unsupervised training of a language model or lexicon acquisition from speech signals. | contrasting |
train_9002 | For example, the probability of p(sing|he, will) is estimated reliably. | the probability of p(sing|will, sh..), which includes a hesitation ("sh.."), is estimated unreliably because the hesitation does not appear in the corpus. | contrasting |
train_9003 | The reason the RNN outperformed our model might be due to the closed vocabulary set in this experiment. | the context information in RNN might be Figure 4: Weakness of segmental context model suffered from the contiguous noisy words that were caused by the combination of the noise word and OOVs, "<unk>", in the artificial noisy data, and RNN degraded prediction accuracy for artificial noisy data. | contrasting |
train_9004 | Since the type hierarchy of entities is typically built from knowledge bases such as DBpedia, which is regularly updated with new types (especially fine-grained types) and entities, it is natural to assume that the type hierarchy is growing rather than fixed over time. | current FNET systems are impeded from handling a growing type set for that information learned from training set cannot be transferred to unseen types. | contrasting |
train_9005 | WSABIE maps input features and labels to a joint space, where information is shared among correlated labels. | the joint embedding method still suffers from label noises which have negative impacts on the learning of joint embeddings. | contrasting |
train_9006 | Again, each of the contextual models perform much better than the no context baseline. | we see an interesting pattern when comparing the four contextual models; unlike in Turkish, the CRF model performs the best. | contrasting |
train_9007 | Their model was able to achieve an accuracy of 84% over ambiguous tokens in Turkish. | our proposed model uses long short-term memory (LSTM)-based architectures to capture longer range dependencies between a target word and its surrounding context. | contrasting |
train_9008 | The test set contains around 40% of the data set, with 8,055 sentences. | this data set is relatively new and we do not have good feature templates to implement the dense-CRF model. | contrasting |
train_9009 | Thus, it is required that which is solved by the condition of (11). | we analyze the requirement of the first term that (1 − cγ) t (a 0 − a ∞ ) ≤ ϵ q . | contrasting |
train_9010 | Our evaluations suggest that the model is more accurate and faster than alternative techniques. | it would still be good to analyze the performance of the model more deeply. | contrasting |
train_9011 | Incorrect dependency parsing and co-reference resolution will reduce the accuracy of extracting event information. | it also verifies that the method that summarize texts based on accurate event information is effective. | contrasting |
train_9012 | For example, the Jensen Shannon divergence (JS divergence), an information-theoretic measure not relying on human-written summaries, compares system summaries with source documents regarding their underlying probability distribution of n-grams (Lin et al., 2006). | unlike ROUGE, JS divergence is neither submodular nor factorizable (see, e.g., Louis and Nenkova (2013)), and consequently can not be optimized via ILP or submodular function optimization. | contrasting |
train_9013 | Recently, submodular functions have been extensively studied and simple algorithms have been proved to yield nice solutions (Krause and Golovin, 2014;Schrijver, 2003). | we might not want to restrict ourselves to particular kinds of functions, because often, the landscape of the objective function does not have easy to exploit properties (Bianchi et al., 2009). | contrasting |
train_9014 | For long-range search, the scout bees regularly look for new locations and investigate each new area for at least t rounds (where t is the hyper-parameter controlling the number of retry before becoming a scout bee). | in the Swarm Summarizer, the mid-range search is limited compared to the reproduction mechanism, because it is achieved only by either successfully applying several local movements, or by randomly scouting the mid-range areas, both of which are unlikely. | contrasting |
train_9015 | We know from section 4.1 that summaries produced by our algorithms and ICSI have many words and bigrams in common. | since extractive MDS is a combinatorial problem, we can compare two summaries by comparing which sentences have been selected. | contrasting |
train_9016 | All these models use word-level semantic annotations. | providing these word-level semantic annotations is costly since it requires specialised annotators. | contrasting |
train_9017 | We plan to extend our work beyond English data, as predictive parsing is language independent. | beuck and Menzel (2013) find a larger set of virtual nodes to be optimal for German for which we want to assess the merit to our method. | contrasting |
train_9018 | Another core problem for disfluency detection is to keep the generated sentences grammatical. | the sequence tagging methods and RNN method have no power of modeling the linguistic structural integrity. | contrasting |
train_9019 | The label "dislocated" is originally defined in the universal dependencies for languages such as Japanese to describe the syntactic relation of words in a topic-comment structure, but is not defined for Chinese. | in Chinese it is frequent to see the topic-comment structure in a sentence, for example: 1. | contrasting |
train_9020 | Research into character-level models is still in fairly early stages, and models that operate exclusively on characters are not yet competitive to word-level models on most tasks. | instead of fully replacing word embeddings, we are interested in combining the two approaches, thereby allowing the model to take advantage of information at both granularity levels. | contrasting |
train_9021 | Most research on SBD focus on languages that already have a well-defined concept of what a sentence is, typically indicated by sentence-end markers like full-stops, question marks, or other punctuations. | as we study more contexts of language use (e.g. | contrasting |
train_9022 | can only be decided based on capitalization, but especially in the social media datasets this signal is not very reliable. | to nouns, we do not expect adjectives to improve much, as comparative and superlative can be quite easily distinguished from the base form of the adjective and from each other. | contrasting |
train_9023 | A short sequence may temporarily be the best candidate and fulfill the goal, while longer (and possibly correct) sequences are incomplete and may be lost. | long sequences have more features, therefore their score may be arbitrarily inflated. | contrasting |
train_9024 | The preceding and following consonants defined by the onset and coda respectively may or may not be present in a syllable. | every syllable must have a nucleus. | contrasting |
train_9025 | (2013) used CRF using two level tagging (phone boundary and phone distance) to perform syllabification for Romanian language. | unlike their study, our approach uses only one level tagging as discussed in the section 5.2. | contrasting |
train_9026 | In order to generate unicode compatible text corpus, we deploy a Bengali OCR 3 . | the accuracy of the OCR is very poor i.e., 52.6% words accuracy. | contrasting |
train_9027 | Learning representations of words is a pioneering study in this school of research. | paragraph (or sentence and document) embedding learning is more suitable/reasonable for some tasks, such as sentiment classification and document summarization. | contrasting |
train_9028 | Theoretically, paragraph-based representation learning is expected to be more suitable for such tasks as information retrieval, sentiment analysis and document summarization (Huang et al., 2013;Le and Mikolov, 2014;Palangi et al., 2015), to name but a few. | to the best of our knowledge, unsupervised paragraph embedding has been largely under-explored on these tasks. | contrasting |
train_9029 | After that, well-developed text processing frameworks can then be readily applied. | such imperfect transcripts usually limit the associated efficacy. | contrasting |
train_9030 | A closer look at the output reveals that the baseline marked only those boundaries that included a clear pause, i.e., "safe" candidates. | the tagger marked not only those clear pauses, but also more subtle boundaries that involved an intensity decrease and not necessarily a pause. | contrasting |
train_9031 | First, we note that our experimental settings are rather different from previously considered settings for domain adaptation in many aspects: • Sufficient target data: In a typical setting for domain adaptation, one generally assumes that the source domain has a sufficient amount of labeled data but the target domain has an insufficient amount of labeled data. | we have sufficient amounts of labeled data for all domains: our goal is to effectively utilize all of them. | contrasting |
train_9032 | The Hindi-Urdu treebanking project is one such example where the influence of differences between Hindi and Urdu texts have led to the creation of separate treebanks for Hindi and Urdu (Bhatt et al., 2009;Bhat et al., 2015). | pursuing them separately in computational linguistics makes sense. | contrasting |
train_9033 | More importantly, if we choose a third script, we have to manually develop a reasonably-sized corpus of transliteration pairs for training the transliteration models. | transliteration pairs in Devanagari and Perso-Arabic scripts can be automatically extracted from the corpora available in these scripts (see §3.1.1 for more details). | contrasting |
train_9034 | Not surprisingly, the results shown in Table 4 (all triggers) and 5 (verbal triggers) are overall lower than when using gold features. | switching from surface to deep syntax leads to higher gain for predicted data than for gold data: 5.1 points (56.7 to 61.7) for all trigger, instead of 4.2 for gold data and 6.7 points (61.3 to 68) instead of 6.4 on gold data for verbal triggers. | contrasting |
train_9035 | The key idea of annotation projection can be summarized as follows: through word alignment in parallel text corpora, the annotations are transferred from the source (resource-rich) language to the target (under-resourced) language, and the resulting annotations are used for supervised training in the target language. | automatic word alignment errors (Fraser and Marcu, 2007) limit the performance of these approaches. | contrasting |
train_9036 | Many automatic word alignment tools are available, such as GIZA++ which implements IBM models (Och and Ney, 2000). | the noisy (non perfect) outputs of these methods is a serious limitation for the annotation projection based on word alignments (Fraser and Marcu, 2007). | contrasting |
train_9037 | Unsupervised RWR -General Relations (B3) -In this approach, we form the graph G for the sentence, similar to as mentioned in Section 3. | we only use path-types 1,2 and 6 of Table 1. | contrasting |
train_9038 | With efforts from Huet, an automated analyser for exhaustive syntactically valid analysis of a Sanskrit sentence, called as the 'Sanskrit Heritage Reader' is available (Goyal et al., 2012;. | the system provides all the possible syntactically valid segmentation and it requires human assistance to choose the relevant segmentations so as to form the semantically correct sentence. | contrasting |
train_9039 | Mongolian person name always express in one word. | when transliterating Chinese person into Mongolian, the person name length unchanged. | contrasting |
train_9040 | This best F-measure even surpassed the performance of POS and ORT feature combination about F 1 is 83.35 under SE method in the same condition. | the overall performance reduced when added all type clusters features to the feature set. | contrasting |
train_9041 | Quantifying semantic relationships between linguistic terms lies at the core of many NLP applications (Pilehvar and Navigli, 2015). | hard matching between words has long been an obstacle in identifying the relatedness of two sentences (ShafieiBavani et al., 2016b). | contrasting |
train_9042 | For large and sparse inputs, sparse operations can be used to construct output layer of an autoencoder. | input layer should also be reconstructed from dense output vectors for which sparse operations cannot be used in the original algorithm. | contrasting |
train_9043 | It is possible to build different DAE models and different kNN classifiers for each level-1 category, just like we do for DBN classification models. | doing so increases the number of models twice and it requires 4 kNN searches to classify a product, instead of 2. | contrasting |
train_9044 | Paragraph vectors obtained from PV-DM and PV-DBoW are shared across context words generated from the same paragraph but not across paragraphs. | a word is shared across paragraphs. | contrasting |
train_9045 | We intend to extend the gwBOVW approach to incorporate the path class label in some fashion during the embedding. | most results are shown on proprietary e-commerce datsets. | contrasting |
train_9046 | Previous supervised summarization systems often perform the two tasks in isolation. | since reference summaries are the trade-off between relevance and saliency, using them as supervision, neither of the two rankers could be trained well. | contrasting |
train_9047 | have become growingly popular in this area. | most current supervised summarization systems often perform the two tasks in isolation. | contrasting |
train_9048 | For generic summarization, people read the text with almost equal attention. | given a query, people will naturally pay more attention to the query relevant sentences and summarize the main ideas from them. | contrasting |
train_9049 | For example, (Wan and Xiao, 2009) adopted manifold ranking to make use of the within-document sentence relationships, the cross-document sentence relationships and the sentence-to-query relationships. | to these unsupervised approaches, there are also various learning-based summarization systems. | contrasting |
train_9050 | Sentence compression presupposes a prior step of sentence extraction; it is the only other way how abstractive summaries can be constructed. | this previous step is rarely talked about in the literature. | contrasting |
train_9051 | Bridging the gap between source and target language does come at an additional cost in performance. | there are a number of possible ways to attack this gap in future work, including using targetlanguage lexical resources if available, unsupervised mining of large amounts of parallel data for lexical entries, and also improving the parsing model itself with recent advances in CCG semantic parsing. | contrasting |
train_9052 | We borrow the term 'substitution' from TAG formalism (Joshi and Schabes, 1997). | when k-best constituent parsing outputs are kept for reducing error propagation, further search strategy for an optimal subset of these redundant subtrees will be discussed in Section 3. | contrasting |
train_9053 | Since k-best constituent parsing outputs are used, this work resembles the re-ranking works. | we do not perform k-best list re-ranking, e.g. | contrasting |
train_9054 | This tagging task can be performed with a precision above 93%, with a perceptron tagger. | Table 5: Results on CTB5, the same terms of Table 3. high recall of 'B' and 'H' is more crucial for our task, i.e. | contrasting |
train_9055 | This affirms that pre-word pauses carry constituency-like information. | typing behavior of users differs, as illustrated in Figure 2. | contrasting |
train_9056 | In fact, user keystroke biometrics are successfully used for author stylometry and verification in computer security research (Stewart et al., 2011;Monaco et al., 2013;Locklear et al., 2014). | also eye tracking data like scanpaths (the resulting series of fixations and saccades in eye tracking) are known to be idiosyncratic (Kanan et al., 2015). | contrasting |
train_9057 | For instance on the Ritter data, for 25/38 participants using their keystroke information as auxiliary task helps to improve overall chunking performance. | if we combine all data and train a single model, performance degrades on chunking. | contrasting |
train_9058 | (Note that all compounds are considered complex, while words that are not compounds are considered simplex-e.g., book, rain). | to previous approaches, which have primarily sought to identify the dictionary forms of compound constituents, this approach aims to identify the exact location of word boundaries in compounds. | contrasting |
train_9059 | On average, language modeling alone achieved an accuracy of ∼94%. | a language model coupled with linguistic constraints achieved a much higher accuracy, hovering around 97%. | contrasting |
train_9060 | The presence of constraint errors cautions us that a segmentation approach that uses phonotactic constraints is sensitive to loanwords. | it was only with loanwords where each candidate violated a constraint. | contrasting |
train_9061 | Were yksinomaan truly simplex, its syllabification would be *yk.si.no.maan. | it syllabifies as if it were a compound, with a syllable boundary falling in between the two stems: yk.sin.o.maan. | contrasting |
train_9062 | Despite the success of above methods in learning knowledge representations, most of them mainly consider knowledge base as a set of triples and models each triple separately and independently. | in reality, triples are connected to each other and the whole knowledge base could be regarded as a directed graph consisting of vertices (i.e., entities) and directed edges (i.e., relations). | contrasting |
train_9063 | So far, the translation of a graph context, π(•), takes the embedding results of each subject contained in the context equally. | in reality, different subjects may have different power of influence to represent the target subject. | contrasting |
train_9064 | We ascribe this result to the information bias of sentence embeddings generated by GRU, that is, GRU tends to pay more attention to the words in the end of a sentence. | for the task discussed by this paper, complete semantics provide more help to context-aware candidate selection as discussed in Section 2.1. | contrasting |
train_9065 | The primary metric for recommender system is prediction accuracy. | focusing solely on this metric is reported to limit user satisfaction by always recommending predictable items, such as a new comedy for a comedy fan who can discover it without recommendation. | contrasting |
train_9066 | We may consider selecting a typical entity among the existing entities in the category (such as dog for MAMMAL). | typical properties such as 'lifespan' can be missing with dog due to data sparsity. | contrasting |
train_9067 | Therefore, high P (lifespan|giraffe) does not mean giraffe has an unexpected property. | case B: Property is unexpected, but relevant (serendipitous) Figure 2(b) shows a distribution skewed to low P (a|e). | contrasting |
train_9068 | For example, if a method shows high performance on Q3 but not on Q2, its trivia quiz tends to deviate much from the image/topic. | the method is anyway presenting interesting trivia quiz. | contrasting |
train_9069 | Licence details: http:// creativecommons.org/licenses/by/4.0/ eyetracking data in English (Just et al., 1982). | to date, no similar correlation analyses have been conducted for Japanese. | contrasting |
train_9070 | The results have shown that the unsuitability of MLPs for cyberbullying, reaffirmed the suitability of SVMs for text classification. | in addition, the results show that decision tree based algorithms, Random Forest and J48, can also perform well in short text classification, which is contrary to most of the previously reported results. | contrasting |
train_9071 | In our Complex Word Identification study we learned that words which are simpler to non-native English speakers have much higher probabilities according to language models, both alone and in context, while those which are more complex to them tend to have a smaller number of senses in WordNet. | with what was reported by (Rello et al., 2013b) in experiments with readers who suffer from Dyslexia, we found no evidence of a relationship between the non-natives' perception of complexity and neither word length or number of syllables. | contrasting |
train_9072 | Considering 20cLM (see again Figure 5), it shows lower perplexity values for 20cA than for 19cA. | the difference is relatively small in comparison to the difference observed for 19cLM on 19th and 20th c. abstracts. | contrasting |
train_9073 | 6 The program can analyze poems and check if the predominant stress pattern is iambic or anapestic. | if the input poem's meter is not one of those two, the system forces each line into one of them. | contrasting |
train_9074 | AnalysePoems is another tool for identification of metrical patterns written by Plamondon (2006). | with other programs, its main goal is not to perform a perfect scansion, but to only identify the predominant meter in a poem. | contrasting |
train_9075 | Both classifiers predict an iambic pentameter most frequently, but, in the case of the Linear Support Vector Machine there are approximately 50 analyses that only appear once (with slight differences between them). | the number of unique analyses in the CRF results is around 30. | contrasting |
train_9076 | (2014) report leading results of 0.81 F -measure (harmonic mean of precision and recall) on SU boundaries with their DNN-CRF model, outperforming a DT-CRF baseline with 0.774 F -measure. | most of the above-mentioned systems have been trained and evaluated on native speaker telephone conversation and broadcast news dialogues: the Switchboard and Broadcast News datasets prepared for the RT-03/04 shared tasks in MDE by the National Institute of Standards and Technology, U.S. Department of Commerce. | contrasting |
train_9077 | The two parallel sentences don't share any word so that the exact match strategy fails to identify the alignment between them. | the alignment can be captured based on POS, syntactic role and semantic matches. | contrasting |
train_9078 | The differences among score ranges in argumentative essays are more obvious so that high quality sentence parallelism might be a useful indicator of well written argumentative essays. | long chunks appear much less in narrative essays across all score ranges. | contrasting |
train_9079 | In particular, with this features/dataset combination, with the glm library we see slight rises in Precision@5, F1-Score@15 and MAP from 24% to 24.4%, from 19.22% to 20.30% and from 0.127 o 0.136 respectively; for C5.0, while F1-Score rises from 18.95% to just 19.29%, Precision@5 and MAP shows a more significant improvement from 22.2% to 26.8% and from 0.123 to 0.137 respectively, thus supposedly showing a better ranking of the keyphrases found. | using the same feature set over the preprocessed documents still shows an improvement from the baseline, but with slightly lower scores. | contrasting |
train_9080 | Introducing more contextual information seems necessary for dealing successfully with a construction like this one. | in this particular case it is not entirely clear how to determine the lexical aspect of the event in an automated fashion. | contrasting |
train_9081 | We speculate that if a typological feature is prone to horizontal transmission, it is likely judged to be unstable by the exclusively vertical model. | this cannot be confirmed in a straightforward manner. | contrasting |
train_9082 | Like feature stability indices explained in Section 3.2, our model assumes that if the same feature value is shared by a group, the feature in question is stable. | the collection of such groups is implicitly represented as a single neighbor graph. | contrasting |
train_9083 | Therefore, Natural Language Processing (NLP) tasks that need to be sensitive to lexical meaning should treat MWEs as single units. | this is a challenging problem since many MWEs can have multimple morphosyntactic variants, which makes them difficult to recognise or generate. | contrasting |
train_9084 | For English, our original intention was to extract combinations from the Elhuyar English-Basque dictionary, in part because the Basque translations would be useful for the translation process in the MT system. | the dictionary contained too few combinations for this study, so instead we decided to use the Oxford Collocations Dictionary (Deuter, 2008). | contrasting |
train_9085 | The precisions of methods B and C were not as good as that of method A. | the evaluation on instances identified by both B and C methods reveals that detection quality is still very high when linguistic data specific to VNCs is combined with parsing (the second row of scores in Table 5). | contrasting |
train_9086 | 72.20% 99% Method B+C but not A 20.85% 97% Method B only 4.12% 93% Method C only 2.83% 83% Table 6: Identification precision for the additional VNCs detected in Spanish As the corpora and parsers we used were different for English and Spanish, the experiments in both languages are not really comparable. | it is evident that the improvement obtained for English was considerably higher than the one obtained for Spanish. | contrasting |
train_9087 | In this paper, we investigated the use of computational methods to analyze the language used in openended comments from student evaluations of teaching effectiveness in order to explore the possibility that gender bias exists in these evaluations. | to previous research that relies on numerical ratings, our results fail to reveal differences by instructor gender in overall student satisfaction, as expressed in written comments. | contrasting |
train_9088 | In the United States, Spontaneous Reporting Systems (SRSs) is the official channel supported by the Food and Drug Administration. | these system are typically under-reported and many ADRs are not recorded in the systems. | contrasting |
train_9089 | This problem can partially be solved by using bi-grams or trigrams. | this leads to the number of features exploding, and the models are thus easily overfitted. | contrasting |
train_9090 | The preposition error detection is one of error cases in NLP-TEA shared tasks. | the preposition correction is beyond the scope of NLP-TEA. | contrasting |
train_9091 | This phenomenon suggests that GRU-ME better fits the training data, but may suffer from overfitting. | the GRU model with a large hidden size is more robust when it is applied to another corpus. | contrasting |
train_9092 | Only for Zulu is the precision mediocre. | we want to choose a single grammar independently of the language, as we do not want to tune our approach to the language (we have no annotated tuning set). | contrasting |
train_9093 | As we can see, the 15 most frequent affixes found for English are indeed correct affixes of English, but among the subsequent 25 most frequent affixes, only seven are correct. | 5 for German, a language with much richer inflectional and derivational morphology, all but three affixes are correct among the top 40 (and the top 25 are all correct). | contrasting |
train_9094 | We note again that the performance of LIMS-Scholar depends on the quality of the scholar-seeded knowledge, so that one should be cautious with drawing conclusions. | it appears overall that the cascaded approach of LIMS is a perfectly adequate alternative to using the scholar-seeded knowledge required for LIMS-Scholar. | contrasting |
train_9095 | They announce the best results for German and Spanish datasets. | demir and Ozgur (2014) utilize pretrained word embeddings along with their corresponding word cluster ids to achieve top performance on Czech dataset. | contrasting |
train_9096 | In general, it does this in quite reasonable ways, for instance for tagging the Latin words pastor in in a list of parish priests. | since the foreign words in the SUC training corpus are mostly in English, a language scarcely attested in our early modern corpus, it sometimes overgenerates the UO tag in incorrect contexts, for instance for the word tree 'three' (modern spelling tre). | contrasting |
train_9097 | Other classifiers could be used to model this relationship. | dBN are expected to perform well on high dimensional word-embedding and are able to model non linear relationships between word pairs. | contrasting |
train_9098 | We see from Figure 5 that the performance of KEA improves as the number of training cases increases, for both MLC and CSDS. | on MLC is does worse than the random baseline, likely because it picks keyphrases that are not even among the candidate keyphrases. | contrasting |
train_9099 | It did not take the history information of the user into account. | the microblogs a user posted in recent history can represent their interests to some degree. | contrasting |
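
For readers who want to inspect rows like the ones above programmatically, here is a minimal sketch using the Hugging Face `datasets` library. The dataset identifier below is a placeholder (substitute the actual Hub path of this dataset); the field names `id`, `sentence1`, `sentence2`, and `label` are taken from the column header above.

```python
from datasets import load_dataset

# Placeholder Hub path -- replace with the actual identifier of this dataset.
ds = load_dataset("org-name/dataset-name", split="train")

# Each row carries an id, a sentence pair, and one of four relation labels
# (e.g. "contrasting", as in the sample rows shown above).
for row in ds.select(range(3)):
    print(row["id"], "->", row["label"])
    print("  sentence1:", row["sentence1"][:80])
    print("  sentence2:", row["sentence2"][:80])
```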