id | sentence1 | sentence2 | label |
---|---|---|---|
train_8900 | Because of integrating out all Gs in all of the priors, interdependency between samples of α ij,pq |B ij,pq = b or β ij,pq |B ij,pq = b is introduced, resulting in simultaneously obtaining multiple samples impractical. | blocked sampling, which obtains sentence-level samples simultaneously Johnson et al., 2007a) is attractive for the fast mixing speed and the easy application of standard dynamic programming algorithms. | contrasting |
train_8901 | Human-defined word class knowledge is useful to address this issue. | the manual word class taxonomy may be unreliable and irrational for statistical natural language processing, aside from its insufficient linguistic phenomena coverage and domain adaptivity. | contrasting |
train_8902 | It takes the empirical rules and probabilities from a Treebank. | due to the context-free assumption, PCFG does not always perform well (Klein and Manning, 2003). | contrasting |
train_8903 | On one hand, this procedure provides a more rational hierarchical subcategorization structure according to data distribution. | the order of the division criterions represents the priorities the grammar induction takes for each criterion. | contrasting |
train_8904 | It would be promising if this work stacks with the content word knowledge. | the work with content word knowledge have to handle the polysemous words in the semantic taxonomy, so they split the categories according to the data, and then find a way to map the subcategories to the node in the taxonomy, and constrain their further splitting. | contrasting |
train_8905 | For example, given a phrase pair ⟨"yingyun", "business"⟩, its orientation is more likely to be monotone if it is preceded by a noun phrase pair such as ⟨"xinyongka", "credit card"⟩. | the probability of the discontinuous orientation is higher if the previous phrase pairs contain verbs such as ⟨"gaishan", "improve"⟩. | contrasting |
train_8906 | Note that some words may not be aligned correctly, like "NULL-musiciens". | generating these tuples can be viewed as a language model process that exploits previous source and target words, and current source word contained in previous tuples like "les-musiciens". | contrasting |
train_8907 | For example, there exists slight difference between the word-bilingual-tuple-I and the word-bilingual-tuple-II for the English-French task. | for the English-German task, the word-bilingual-tuple-II significantly outperforms the word-bilingual-tuple-I by 0.4 BLEU scores. | contrasting |
train_8908 | Somewhat surprisingly, this is not the case. | both 5-and 7-gram class-based models perform slightly worse than the stream-based models. | contrasting |
train_8909 | The dependence of translation choice on domain suggests that the word alignments themselves can better be conditioned on domain information. | in the data selection setting, corpus C mix often does not contain useful domain markers, and C in contains only a small sample of in-domain sentence pairs. | contrasting |
train_8910 | Their estimate during training might be a reasonable selection cut-off threshold. | we found that it is not entirely clear whether these cut-off criteria might exclude other relevant/irrelevant pairs that are not exactly in-domain. | contrasting |
train_8911 | This work looks at modeling the relevance of sentence pairs from the mix-domain corpus to a task represented by an in-domain sample. | with previous work we cast this as a translation problem with a latent domain variable. | contrasting |
train_8912 | There is a growing interest in automatically predicting the gender and age of authors from texts. | most research so far ignores that language use is related to the social identity of speakers, which may be different from their biological identity. | contrasting |
train_8913 | Most of the studies on language and age focus on chronological age. | speakers with the same chronological age can have very different positions in society, resulting in variation in language use. | contrasting |
train_8914 | We observe that in the Wikipedia part of the Lassy Large corpus, modal clusters occur in the 1-2 order only 0.5% of the time. | they are considered to be grammatical. | contrasting |
train_8915 | This process avoids subjectivity beyond the definition of the construction. | it limits the variables that can be used in the study to the ones that are, or can be, automatically annotated in a corpus. | contrasting |
train_8916 | Clearly the model predicts better than that. | it should be noted that this is not a typical predictive task. | contrasting |
train_8917 | In section 4 we discussed our choice for the Wikipedia part of the Lassy Large corpus. | this corpus consists of other kinds of sources as well, and now that we have a highly automated way of building the model, it is relatively easy to test it on a different part of the corpus to see whether the same variables hold in a different domain of text. | contrasting |
train_8918 | While extractive techniques are generally preferred over abstractive ones (as abstraction can introduce disfluency), existing extractive summarizers are either supervised or based on heuristics of certain desired characteristics of the summarization result (e.g., maximize n-gram coverage (Nenkova and Vanderwende, 2005), etc.). | when it comes to online reviews, there are problems with both approaches: the first one requires manual annotation and is thus less generalizable; the second one might not capture the salient information in reviews from different domains (camera reviews vs. movie reviews), because the heuristics are designed for traditional genres (e.g., news articles) while the utility of reviews might vary with the review domain. | contrasting |
train_8919 | While there is a large overlap between text summarization and review opinion mining, most work focuses on sentiment-oriented aspect extraction and the output is usually a set of topics words plus their representative text units (Hu and Liu, 2004;Zhuang et al., 2006). | such a topic-based summarization framework is beyond the focus of our work, as we aim to adapt traditional extractive techniques to the review domain by introducing review helpfulness ratings as guidance. | contrasting |
train_8920 | The idea of using sLDA in text summarization is not new. | the model is previously applied at the sentence level (Li and Li, 2012), which requires human annotation on the sentence importance. | contrasting |
train_8921 | Text simplification has often been defined as the process of reducing the grammatical and lexical complexity of a text, while still retaining the original information content and meaning. | text can also be simplified in other ways; for instance, by removing peripheral information to reduce text length, through sentence compression or summarisation. | contrasting |
train_8922 | Out of 50 participants, 3 rated themselves as native. | they could get only about 28% of the answers correct, showing the fact that the participants had overestimated themselves. | contrasting |
train_8923 | Her choice of properties may be sufficient to distinguish it from all its distractors in the current context. | unlike (1), (2) is a composite utterance consisting of two communicative modalities, each of which contributes to the communicative intention (Enfield, 2009). | contrasting |
train_8924 | This combination is closely matched for accuracy by the combination involving descriptive features, Distance and DistProps. | dropping Distance (using only descriptive features and DistProps) results in worse performance. | contrasting |
train_8925 | Adding only DistProps to the descriptive features improved the accuracy of the Logistic classifier somewhat, though it had a greater impact on J48. | distance seems to have the greatest impact of the two physical features. | contrasting |
train_8926 | This combination also exceeds the combination of descriptives, DistProps and Distance, though only marginally. | this does remain the next best combination for J48, consistent with the results on the complete dataset. | contrasting |
train_8927 | The experiments show that many acrostics based on common English words can be generated in less than a minute. | we see our main contribution in the presented technology paradigm: a novel and promising combination of methods from Natural Language Processing and Artificial Intelligence. | contrasting |
train_8928 | Schwarzenegger himself characterized the appearance of that acrostic a "wild coincidence". | 1 such a coincidence is highly unlikely: Using the simplistic assumption that first letters of words are independent of each other if more than ten words are in between (line length in the Schwarzenegger letter) and calculating with the relative frequencies of first letters in the British National Corpus (Aston and Burnard, 1998), the probability for the acrostic in Figure 1 can be estimated at 1.15 • 10 −12 . | contrasting |
train_8929 | Other early methods use machine translation (Quirk et al., 2004). | the need for large and expensive parallel manual translation corpora cannot be circumvented by using multiple resources (Zhao et al., 2008). | contrasting |
train_8930 | Still, the rich variety of the full data set can form a semantically strong operator. | in our pilot experiments, the full PPDB patterns often decreased text quality unacceptably, such that we refrained to use PPDB as a single operator in our experiments. | contrasting |
train_8931 | Therefore, it is possible for translation retrieval to have access to a huge volume of monolingual corpora that are readily available on the Web. | the MT + IR pipeline suffers from the translation error propagation problem. | contrasting |
train_8932 | In Figure 1, the search graph has 9 nodes, 10 edges, 4 paths, and 3 distinct translations. | the translation option graph has 6 nodes, 9 edges, 10 paths, and 10 distinct translations. | contrasting |
train_8933 | Now that we have explored the role of syntax in this project, our next step is try to further improve our QE system by adding semantic information. | there are many other ways in which the research in this paper could be further extended. | contrasting |
train_8934 | For example, r 8 in Figure 1 has a good Chinese syntactic structure indicating the reordered translations of NP and VP. | such a rule would not normally be included in a Hiero grammar, as it would require consecutive source language non-terminals (see Figure 3). | contrasting |
train_8935 | We see that left-heavy binarization is very helpful and exp08 achieves overall improvements of 1.2 and 0.8 BLEU points on the newsire and web data. | right-heavy binarization does not yield promising performance. | contrasting |
train_8936 | However, all these methods still resort to rule extraction procedures similar to that of the standard phrase/hierarchical rule extraction method. | we use the GHKM method which is a mature technique to extract rules from tree-string pairs but does not impose those Hiero-style constraints on rule extraction. | contrasting |
train_8937 | In general, such a list could contain hundreds or even thousands of entities. | the mention should be linked to at most one entity in the list. | contrasting |
train_8938 | 2 Related Work Research on the extraction of event relations has concerned both the analysis of the temporal ordering of events and the recognition of causality relations. | the two research lines have progressed quite independently from each other. | contrasting |
train_8939 | Causality is a concept that has been widely investigated from a philosophical, psychological and logical point of view, but how to model its recognition and representation in NLP-centered applications is still an open issue. | information on causality could be beneficial to a number of natural language processing tasks such as question answering, text summarization, decision support, etc. | contrasting |
train_8940 | they involve a source and a target event. | while a list of relation types is part of the attributes for TLINKs (e.g. | contrasting |
train_8941 | Another possibility would be to automatically generate additional data from the Penn Discourse TreeBank corpus, where causality is one of the discourse relations annotated between argument pairs. | a further processing step would be needed to identify inside the argument spans the events between which a relation holds, which may introduce some errors. | contrasting |
train_8942 | to determine the source and target event. | the converse may not hold, since the causal links in the data set are very sparse, and only 2% of the total TLINKs overlap with CLINKs. | contrasting |
train_8943 | The types of entity mentions in a relation are important indicators for the very type of relation. | the coarse (only four types) entity types may not capture sufficient constraints to distinguish a relation. | contrasting |
train_8944 | In Figure 3(b), Aug top1 and Aug top2 achieve similar performances. | when adding one more type with k = 3, we obtain a lower curve, which contradicts the trend showed in the curves of the substitution method (Figure 3(a)). | contrasting |
train_8945 | Biber (1993) mentions, among other things, that it has to be decided for what target population a corpus is meant to be representative, that estimates concerning the quantities of various text types are required, and that decisions with regard to the number of individual text samples and their sizes have to be made. | there is no easy and well established way to verify the success of these measures. | contrasting |
train_8946 | To mine more event mentions, we use the simple trigger-entity pair to represent event pattern in this paper. | lots of event mentions still cannot be extracted due to the ellipsis of arguments. | contrasting |
train_8947 | The bootstrapping procedure of the document-centric view selects frequent patterns in relevant docu-ments and ignores those infrequent patterns both in relevant or irrelevant documents. | the number of infrequent patterns in Chinese is larger than that in English, due to its open and flexible sentence structure, as mentioned in Subsection 3.1. | contrasting |
train_8948 | Collections of relational paraphrases have been automatically constructed from large text corpora, as a WordNet counterpart for the realm of binary predicates and their surface forms. | these resources fall short in their coverage of hypernymy links (subsumptions) among the synsets of phrases. | contrasting |
train_8949 | WordNet (Fellbaum, 1998), on the other hand, is a very rich resource on synonymy and hypernymy. | its coverage of binary relations (as opposed to unary predicates, mostly nouns) is restricted to (mostly) single-word verbs. | contrasting |
train_8950 | SimRank with Fingerprints: Unfortunately, SimRank has very high computational complexity: the run-time of a straightforward implementation is O(Kn 4 ), where n is the number of vertices in the graph and K is the number of iterations in an iterative fixpoint computation (in the style of the Jacobi method). | there are much faster approximations of SimRank. | contrasting |
train_8951 | By the acyclicity of the WordNet hypernymy structure, the process yields a proper DAG. | the output contains redundant links (direct ones and transitive ones connecting the same pair of phrases); these are subsequently eliminated by a transitive reduction algorithm (Aho et al., 1972). | contrasting |
train_8952 | Example: in wmt12, the gold-based LMAE for the top 20% sentences is higher than 0.8 when the system is trained only on the official train set (0% of the post-edited sentences added), but reaches 0.4 when about half of the post-edited sentences are added to the training set. | the global MAE (which takes all the sentences into account) increases from 0.7 (0%) to 0.9 (50% of the post-edited sentences added): since the system assigns more scores in the top tail, it makes larger errors globally. | contrasting |
train_8953 | A head-dependents relation (HDR) is composed of a head and all its dependents, which can be viewed as an instance of a sentence pattern or phrase pattern. | since dependency trees are much flatter than constituency trees, the dependency-to-string model suffers more severe non-syntactic phrase coverage problem (Meng et al., 2013) than constituencybased models (Galley et al., 2004;Liu et al., 2006;Huang et al., 2006). | contrasting |
train_8954 | They acknowledge the utility of a graph structure in representing the content of an ontology, and assert that this structure preserves well the semantics of the ontology. | they do not proceed to examine the ontology in the context of a real-world semantic evaluation. | contrasting |
train_8955 | As traditional WSD algorithms are designed to generate output for every word in the text, recall and precision are the same value. | our algorithm works on the principle of semantic relevance, and there is no guaranteed output; senses with sufficient weight after spreading activation will be displayed. | contrasting |
train_8956 | The hyponymy relation (and converse hypernymy) which forms the ISA backbone of taxonomies and ontologies such as WordNet (Fellbaum, 1989), and determines lexical entailment (Geffet and Dagan, 2005), is asymmetric. | the cohyponymy relation which relates two words unrelated by hyponymy but sharing a (close) hypernym, is symmetric, as are synonymy and antonymy. | contrasting |
train_8957 | For example, we would expect to be able to replace any occurrence of cat with animal and so all of the contexts of cat must be plausible contexts for animal. | not all of the contexts of animal would be plausible for cat, e.g., "the monstrous animal barked at the intruder". | contrasting |
train_8958 | Looking at the results for the hyponym BLESS data set, we can see that the SVM methods do generally outperform the unsupervised methods. | the best performing model is svmSING, suggesting that, for this data set, it is best to try to learn the distributional features of more general terms, rather than comparing the vector representations of the two terms under consideration. | contrasting |
train_8959 | A common approach to tackling the MUC template filling task has involved the employment of pattern-based methods, e.g., Riloff (1996). | supervised learning approaches have constituted a more popular means of approaching the ACE tasks 2 . | contrasting |
train_8960 | (2013), that detects triggers, arguments and their roles. | in contrast to the structured perceptron employed in Li et al. | contrasting |
train_8961 | Evaluation is performed by taking into account the 33 event subtypes, rather than the 8 coarser-grained event types. | evaluation of events according to the GENIA specification considers only the correctness of complete events, after nested events have been broken down. | contrasting |
train_8962 | These results suggest that Wikipedia is a good resource from which to learn whether a food item is a brand or not. | this task could not be completely solved by WIKI since not all food items are covered by Wikipedia ( §4.9). | contrasting |
train_8963 | On the one hand, we produced an auxiliary categorization of food items according to the Food Guide Pyramid, and assumed that a food item is a type when it belongs to a category that is unlikely to contain brands. | we directly modelled the task of brand detection by using seeds provided by the output of the textual ranking features. | contrasting |
train_8964 | Note that the two contexts have no words in common, therefore syntagmatic (neighbor based) contextual features will fail to capture their similarity. | paradigmatic features such as the top substitutes "chairman", "directors", etc. | contrasting |
train_8965 | The figure shows that the performance of the instancebased induction model does not degrade as much as the word-based model as the ambiguity of the words increase. | only 14.94% of the instances in the PTB consists of words with GP greater than 1.5 and 45.71% consists of words with GP exactly 1. | contrasting |
train_8966 | Previous work tend to employ either character-level language models or dictionary-type word lists. | word-level language models have a potential of improving the accuracy and speed of decipherment. | contrasting |
train_8967 | Direct comparison of the execution times with the previous work is difficult because of variable computing configurations, as well as the unavailability of the implementations. | on ciphers of the length of 128, our MCTS version takes on average 197 seconds, which is comparable to 152 seconds reported by , and faster than our reimplementation of the bigram solver of Ravi and Knight (2008) which takes on average 563 seconds. | contrasting |
train_8968 | Various kernels, such as the convolution tree kernel (Qian et al., 2008), subsequence kernel and dependency tree kernel , have been proposed to solve the relation classification problem. | the methods mentioned above suffer from a lack of sufficient labeled data for training. | contrasting |
train_8969 | We will see that the word representation approach can capture contextual information through combinations of vectors in a window. | it only produces local features around each word of the sentence. | contrasting |
train_8970 | For example, both of the following instances S 1 and S 2 have the relationship Component-Whole. | these two instances cannot be classified into the same category because Component-Whole(e 1 ,e 2 ) and Component-Whole(e 2 ,e 1 ) are different relationships. | contrasting |
train_8971 | Twitter is one among these microblogging services that counts about a billion of active users and 500 million of daily messages 1 . | the analysis of this huge amount of information is still challenging, as language is very informal, affected by misspelling and characterized by slang and #hashtags, i.e. | contrasting |
train_8972 | Notice how no lexical nor syntactic property allows to determine the sentiment polarity. | if we look at the entire conversation that follows: it is easy to establish that a first positive tweet has been produced, followed by a second negative one so that the third tweet is negative as well. | contrasting |
train_8973 | Performance scores report the classification accuracy in terms of Precision, Recall and standard Fmeasure. | in line with SemEval-2013, we also report the F pnn 1 score as the arithmetic mean between the F 1 s of positive, negative and neutral classes. | contrasting |
train_8974 | For such languages, it is natural to want to exploit morphologically rich data to improve parsing performance. | the effect of this is an explosion in the number of possible models due to a huge number of potential features. | contrasting |
train_8975 | The natural language processing task of grammar induction in principle should provide models for how children do this. | previous work on grammar induction has learned from small datasets, and has dealt with the resulting data sparsity by modifying the input and using careful search heuristics. | contrasting |
train_8976 | Bisk and Hockenmaier (2013) used combinatory categorial grammar (CCG) to learn syntactic dependencies from word strings. | they initialise their model by annotating nouns, verbs, and conjunctions in the training set with atomic CCG categories using a dictionary, and so do not learn from words alone. | contrasting |
train_8977 | The former are widely studied by existing methods (Gupta et al., 2007;Aker et al., 2010;Ouyang et al., 2011;Galanis et al., 2012;Hong et al., 2015). | to our knowledge, the latter are firstly incorporated into a regression model in this paper. | contrasting |
train_8978 | (2016) partially address this challenge by linking a summary back to a group of sentences that support the summary. | this linkage is weak since it tells only that there is one sentence or more supporting the summary within the group, without explicitly telling which one(s). | contrasting |
train_8979 | We propose the same hybrid approach combining rule-based and supervised classifiers for the identification of causal relations. | while temporal order has a clear formalization in the NLP community, capturing causal relationships in natural language text is more challenging, for they can be expressed by different syntactic and semantic features and involve both situation-specific information and world knowledge. | contrasting |
train_8980 | The presented approach would probably have more impact if implicit causality was also considered, which we did not take into account because it is not annotated in the Causal-TimeBank corpus. | we plan to investigate this issue in the near future. | contrasting |
train_8981 | This matrix includes several low-correlated words, making several vertically irregular lines. | the time shift operation arranges the irregular words to match the IDSC reports, producing a beautiful horizontal line, as shown in Figure 2b. | contrasting |
train_8982 | In earlier studies, Lasso was employed to model influenza epidemics by Lampos and Cristianini (2010). | in the case of vocabulary size |V |, which is much larger order than sample size T , it has been observed empirically that the prediction performance of l 1 -penalized regression, the Lasso is dominated by the l 2 -penalized one. | contrasting |
train_8983 | Consequently, it caused a considerable decrease of the forecasting accuracy. | some words, such as "fever" and "symptom", showed consistently similar time shifts. | contrasting |
train_8984 | It is therefore not sensitive to outliers, non-linear relationships, or non-normally distributed data. | most intrinsic evaluations of STS systems only report the Pearson correlation. | contrasting |
train_8985 | Note: In case dissimilar pairs are more important for the targeted task, nCG and nDCG can simply be modified by reversing the order of vector m. • Accuracy is a common evaluation measure for many tasks. | as the STS scores are continuously valued, it is unclear how to compute it. | contrasting |
train_8986 | Specifically, we consider the case of three languages or modalities X, Z and Y wherein we are interested in generating sequences in Y starting from information available in X. | there is no parallel training data available between X and Y but, training data is available between X & Z and Z & Y (as is often the case in many real world applications). | contrasting |
train_8987 | Initially we considered the corpus released as part of WMT'12 (Callison-Burch et al., 2012) which contains roughly 44M English-French parallel sentences from various sources including News, parliamentary proceedings, etc. | our initial small scale experiments showed that this does not work well because there is a clear mismatch between the vocabulary of this corpus and the vocabulary that we need for generating captions. | contrasting |
train_8988 | The need to automatically process an increasing number of languages has made obvious the extreme dependency of standard development pipelines on in-domain, annotated resources that are required to train efficient statistical models. | for most languages, annotated corpora only exist for a restricted number of domains, when they exist at all. | contrasting |
train_8989 | PoS sequences thus seem to provide an appropriate level of abstraction for cross-lingual transfer. | contrary to what this intuition suggests, the transfer of the ADJ-NOUN dependency often fails in practice. | contrasting |
train_8990 | It shows that accuracies of related sources are only marginally mod- ified when source sentences are transformed according to WALS, which could be expected as related languages share most of their typological features. | large gains are obtained for distantly related languages. | contrasting |
train_8991 | Second, data selection mostly targets sequences very far from the target syntax: sentences that only disrespect a local preference of child position are less fluent, and consequently have a lower rank, than their hypothetical counterpart with switched positions; but they are not ungrammatical enough to be pushed into the 10% worst territory. | the data transformation approach is not restricted to preexisting n-grams, and it directly confronts the given sequence with its counterpart to keep only the most fluent, thus acknowledging local preferences. | contrasting |
train_8992 | by being based on some form of phonetic, graphematic, or semantic similarity measure (Jurish, 2010;Bollmann, 2012;Amoia and Martinez, 2013). | neural networks -and particularly deep networks with several hidden layers -are assumed to work best when trained on large amounts of data. | contrasting |
train_8993 | For bi-LSTM, this is the multi-task learning setup-using MTL improves the results by +0.7 pp (±2.8) on average, but again there is a high variance within the individual scores. | for the other methods, adding the 10,000 randomly selected samples to the training set actually decreases the average accuracy, by −0.4 pp for Norma and −2.0 pp for CRF. | contrasting |
train_8994 | The results on the Restaurant domain is similar to those on the Hotel domain, where the neural model significantly outperforms the discrete model. | the neural model gives similar results compared with the discrete model on the Doctor domain. | contrasting |
train_8995 | As shown in the figure, most black dots are on the top-right of the figure and most red dots are on the bottom-left, showing that both models are correct in most cases. | the dots are relatively more disperse in the x-axis, showing that the neural model is more confident in scoring the inputs. | contrasting |
train_8996 | First, the classifiers trained on Hotel reviews apply well to the Restaurant domain, which is reasonable due to the many shared properties among Restaurant and Hotel, such as the environment and location. | the performance on the Doctor domain is much worse, largely due to the difference in vocabulary. | contrasting |
train_8997 | An issue of G-LDA is that the word weights in Gaussian topics are measured by the Euclidean similarity between word embeddings. | the Euclidean similarity is not an optimal semantic measure, since most of word embedding algorithms use exponentiated cosine similarity as the link function (Li et al., 2016a). | contrasting |
train_8998 | The generative process of G-LDA is as follows: 3 MvTM G-LDA defines Gaussian topics, which measure word weights in topics by the Euclidean similarity between word embeddings. | the Euclidean similarity is not an optimal semantic measure of word embeddings. | contrasting |
train_8999 | The topics of LDA contain some noise words, e,g., "mr" and "don"; and G-LDA contains some less relevant words, e.g., the second topic of G-LDA is incoherent. | the topics of MvTM are more precise and clean. | contrasting |
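Each row of the table above follows the same pipe-delimited layout: an id, two sentences, and a label, with a trailing `|`. A minimal parsing sketch (assuming cell text contains no literal `|` character, which holds for the rows shown here; the field names mirror the table header):

```python
def parse_row(line: str) -> dict:
    """Parse one pipe-delimited table row into a record.

    Assumes the row has the four columns used in this dump
    (id, sentence1, sentence2, label) and that no cell text
    contains a literal '|'.
    """
    cells = [c.strip() for c in line.split("|")]
    # The trailing '|' on each row yields an empty final cell; drop it.
    if cells and cells[-1] == "":
        cells = cells[:-1]
    keys = ["id", "sentence1", "sentence2", "label"]
    return dict(zip(keys, cells))
```

For example, `parse_row("train_8900 | first sentence | second sentence | contrasting |")` yields a dict whose `"id"` is `"train_8900"` and whose `"label"` is `"contrasting"`.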