id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses: 4 values) |
---|---|---|---|
train_9700 | We extend the result in the sense that we show the interactions are not only non-linear but also highly complex. | we show that we do not need a non-stationary function to model the relationship (i.e. | contrasting |
train_9701 | The form of the model learned by Kernel Ridge Regression (KRR) is identical to SVR. | kRR uses squared error loss while SVR uses epsilon-insensitive loss. | contrasting |
train_9702 | Second, most features in the feature space are useful. | each of them should have a certain degree of relevance to the task. | contrasting |
train_9703 | So far, linguistic differences have mostly been studied in the context of dialects, usually African-American Vernacular English -AAVE (Jørgensen et al., 2015), using message-level data which offered insight into syntactic (Stewart, 2014) or lexical markers (Blodgett et al., 2016). | not all users from a racial or ethnic group use these markers or, more generally, an associated dialect and usage is different across sociodemographic traits -use of the AAVE is correlated with lower income and education (Rickford, 1999). | contrasting |
train_9704 | The two methods based on the data from (Blodgett et al., 2016) obtain overall good prediction results, with the message-level model surpassing the user-level model in all four cases. | the performance is consistently lower than the best classifier trained on our data set, despite this being an order of magnitude smaller. | contrasting |
train_9705 | Combining surname-based methods with the best text-based model results leads to significantly better results than using text alone, showing that these contain complementary information. | further adding the demographics (age, gender, income, education) as features does not add to the predictive performance, showing that the results are not impacted by any imbalance in the demographic makeup of our data set. | contrasting |
train_9706 | We presented a detailed study of user-level race/ethnicity prediction along the lines of previous work on predicting user traits from text (Burger et al., 2011;Rao et al., 2010;Pennacchiotti and Popescu, 2011;Schwartz et al., 2013;Sap et al., 2014;Volkova et al., 2014;Preoţiuc-Pietro et al., 2015b;Flekova et al., 2016b;Preoţiuc-Pietro et al., 2016;Preoţiuc-Pietro et al., 2017). | with previous research on race/ethnicity, we used labels obtained by directly surveying Twitter users, rather than distantly supervised geo-located data or perceived race labels, which lead to multiple biases and lower accuracies on real data. | contrasting |
train_9707 | Both methods are based on CNN + LSTM. | the proposed framework also has the model component to discriminate between multiple classes. | contrasting |
train_9708 | Similar to M M F D Concat , in this setting, we keep the automated feature extraction and the fakeness discrimination components. | we replace the interpretable multi-source fusion component by explicitly considering all the sources equally important. | contrasting |
train_9709 | Few would dispute the idiomaticity of "kicking the bucket", as the non-literal meaning in this multi-word expression is largely conventionalized. | in the sentence "One approach would be to draw the line by reference [...]" the expression "draw the line" could be classified as either metaphorical (because it still evokes the literal senses of its constituents) or idiomatic (as it is a fixed expression with lexicalized figurative sense). | contrasting |
train_9710 | Additionally, they make use of concreteness ratings, grounded in the Conceptual Metaphor Theory (Lakoff and Johnson, 1980). | as argued in our introduction, concreteness is also useful for the detection of other kinds of non-literal language. | contrasting |
train_9711 | Their semantic tasks also improve when introducing shortcut connections, i.e., feeding the word representations into all layers of their network. | alonso and Plank (2017) find generally mixed performance of MTL for semantic tasks. | contrasting |
train_9712 | Essentially, this amounts to using one network for multiple tasks, trained jointly. | sLUICE has separate layers for each task, and learns parameters which control the information flow between those task-specific layers. | contrasting |
train_9713 | In other words: both metaphor tasks, where training (and evaluation) data for a particular token or construction is sparse, profit heavily from inclusion of the idiom datasets. | using only the respective other English metaphor dataset does not help. | contrasting |
train_9714 | is wrongly classified as literal by both STL approaches, but correctly labeled as metaphoric by both MTL-all configurations, i.e., with the help of auxiliary tasks. | most of the newly identified metaphors (i.e., found by MTL-all but not by STL) differ between the approaches. | contrasting |
train_9715 | We attribute this to heavy feature-engineering on their part, using supersenses and concreteness information. | we perform on par with the state-of-the-art on tok-met (Do Dinh and Gurevych (2016): F1 = 0.56, our system: F1 = 0.56). | contrasting |
train_9716 | In this method, a considerably small fraction of tweets over a time period is collected. | the way in which Twitter makes this set of tweets is unclear, inducing a possible bias. | contrasting |
train_9717 | (Rehbein et al., 2013) proposes an efficient approach to collect German tweets using geolocation features with language filter. | the data encounters certain biases: 1. | contrasting |
train_9718 | After being introduced, some of them are adopted by other speakers belonging to the same community and possibly spread until they become community norms. | other innovations do not manage to make their way into the community conventions and just disappear after a certain period of time. | contrasting |
train_9719 | Identifying cognates and borrowings by means of computational approaches has attracted considerable attention in recent years (Hall and Klein, 2010;Tsvetkov et al., 2015;Ciobanu and Dinu, 2015). | few studies went beyond this step, and beyond the comparative method, to automate the process of proto-language reconstruction (Oakes, 2000;Bouchard-Côté et al., 2013). | contrasting |
train_9720 | (2014), as well as the paradigm generalization method to be proposed in section 4, operates on full paradigms, requiring that all of the inflection tables for a given part-of-speech have the same number of MSDs. | this is not the case in the CoNLL-SIGMORPHON shared task data: inflection tables of German nouns, for instance, encompass between four and eight forms. | contrasting |
train_9721 | Among the proposed methods, DAEME performs best in short-text classification tasks, while CAEME is competitive in semantic similarity measurement tasks. | aaEME performs overall well and obtains the best performance in word analogy, relation classification, and psycholinguistic score prediction tasks. | contrasting |
train_9722 | With the aid of recently proposed word embedding algorithms, the study of semantic similarity has progressed and advanced rapidly. | many natural language processing tasks need sense level representation. | contrasting |
train_9723 | With the pre-trained word embedding, some researches propose post-processing models that incorporate with the existing semantic knowledge into the word embedding model (Faruqui et al., 2015;Yu and Dredze, 2014). | word embedding models use only one vector to represent a word, and are problematic in some natural language processing applications that require sense level representation (e.g., word sense disambiguation, semantic relation identification, etc.). | contrasting |
train_9724 | Some researches use zero vector to represent the missing words, whereas some remove those missing words from the dataset. | within this research the reported performance can be compared due to the same missing word processing method and the same similarity computation method. | contrasting |
train_9725 | Since attention largely improves model performance for deterministic Seq2Seq models, it is tempting to include attention in the variational Seq2Seq as well. | our pilot experiment raises the doubt if a traditional attention mechanism, which is deterministic, may bypass the latent space in VED, as illustrated by a graphical model in Figure 1c. | contrasting |
train_9726 | This heuristic is proposed in hopes of better training the variational latent space at the beginning stages. | experiments show that such a simple heuristic does not help much, and is worse than the principled variational attention mechanism in terms of all BLEU and diversity metrics. | contrasting |
train_9727 | This is consistent with the evidence that variational latent space may serve as a way of regularization and improves quality. | a small γ_a only slightly improves diversity, and hence we did not choose this hyperparameter in Table 2. | contrasting |
train_9728 | However, variational attention outperforms deterministic attention in terms of both quality and diversity, showing that our model is effective in different applications. | we find the improvement is not so large as in the previous experiment. | contrasting |
train_9729 | This latter quantity is bounded by a different range for each measurement feature (see Table 1). | humans may prefer to give measurement values between 0 and 1, where 1 means "this happens as often as possible." | contrasting |
train_9730 | Prior to selection, each candidate measurement must be considered and a model retrained using t sampled candidate measurements. | by applying a number of additional approximations we can run on the weather sentiment dataset from Section 5.4 with over 22,000 candidate measurements. | contrasting |
train_9731 | Structural embeddings are important complements to existing neural architectures. | it is unclear whether they should be supplied as input to the encoder or be left out of the encoding process and directly concatenated with the encoder hidden states. | contrasting |
train_9732 | by the parser may affect the "Struct+2Way+Relation" results. | because the Gigaword dataset does not provide gold-standard annotations for parse trees, we could not easily verify this and will leave it for future work. | contrasting |
train_9733 | It also compares favorably to ground-truth summaries on "fluency" and "faithfulness." | the ground-truth summaries, corresponding to article titles, are judged as less satisfying according to human raters. | contrasting |
train_9734 | We find that the systems trained with maximum likelihood objectives produce less diverse output than those trained with additional adversarial objectives. | the adversarially-trained models only produce more types from the head of the vocabulary and not the tail. | contrasting |
train_9735 | We also found that GAN-based systems produce more diverse descriptions than MLE-based systems. | we caution that the GAN-based systems are the only ones in our evaluation that are designed with diversity in mind. | contrasting |
train_9736 | The fourth example shows a slightly challenging case, where both headlines make it difficult to understand the question. | the headline of our method included the term "winning bidder", so at least the answerer can assume that the question is about some sort of auction trouble. | contrasting |
train_9737 | (2015) proposed a neural headline generation model, there have been many studies on the same headline generation task (Takase et al., 2016;Chopra et al., 2016;Kiyono et al., 2017;Ayana et al., 2017;Raffel et al., 2017). | all of them are abstractive methods that can yield erroneous output, and the training for them requires a lot of paired data, i.e., news articles and headlines. | contrasting |
train_9738 | Furthermore, recent sophisticated neural models for extractive summarization (Cheng and Lapata, 2016;Nallapati et al., 2017) basically require large-scale paired data (e.g., article-headline) to automatically label candidates, as manual annotation is very costly. | such paired data do not always exist for real applications, as in our task described in Section 1. | contrasting |
train_9739 | Unlike the above tasks, SCT (Mostafazadeh et al., 2016) provides large-scale supervised training stories of temporal and causal relations, ensuring a high-quality evaluation for common sense knowledge understanding of mechanisms. | the published ROCStories could not be used directly in supervised learning. | contrasting |
train_9740 | Although these models generate better performance, the characteristics of question is still ignored. | research about visual question generation is much less (Ren et al., 2015;Vijayakumar et al., 2018;Mostafazadeh et al., 2016;Shijie et al., 2017). | contrasting |
train_9741 | Their method performs word sense discrimination and embedding learning by using a nonparametric estimate of the number of senses per word type. | our model does not assume that there is a finite set of discrete senses per word. | contrasting |
train_9742 | Meanwhile, a number of deep learning models are designed to take up the challenges, most of which focus on attention mechanism (Seo et al., 2017;Cui et al., 2017a;Kadlec et al., 2016). | how to represent word in an effective way remains an open problem for diverse natural language processing tasks, including machine reading comprehension for different languages. | contrasting |
train_9743 | Intuitively, word-level representation is good at catching global context and dependency relationships between words, while character embedding helps for dealing with rare word representation. | the minimal meaningful unit below word usually is not character, which motivates researchers to explore the potential unit (subword) between character and word to model sub-word morphologies or lexical semantics. | contrasting |
train_9744 | The results indicate that for a task like reading comprehension the subwords, being a highly flexible grained representation between character and word, tend to be more like characters instead of words. | when the subwords completely fall into characters, the model performs the worst. | contrasting |
train_9745 | Not only for machine reading comprehension tasks, character embedding has also benefited other natural language processing tasks, such as word segmentation, machine translation (Luong and Manning, 2016), tagging (Yang et al., 2016) and language modeling (Verwimp et al., 2017;Miyamoto and Cho, 2016). | character embedding only shows marginal improvement due to a lack of internal semantics. | contrasting |
train_9746 | proposes multi-dimensional/vectorial self-attention pooling on the top of self-attention network instead of BiLSTM. | both of them didn't consider multi-head self-attention. | contrasting |
train_9747 | There is no consensus on which set of labels should be uniformly used. | paraphrasing involves rewriting the noun compound as a paraphrase which conveys its meaning explicitly, e.g., orange juice: "juice made from orange" or "juice with orange flavour". | contrasting |
train_9748 | The advantage of prepositional paraphrasing as compared to labelling is that the set of prepositions is finite, limited and pre-defined. | the shortcoming is that the information is too coarse-grained for downstream tasks. | contrasting |
train_9749 | Among statistical approaches, supervised approaches rely on annotated data that needs to be sufficiently large and representative enough of the underlying problem. | such datasets are rare, and the ones that do exist are small and heavily skewed, which makes the learning more difficult. | contrasting |
train_9750 | The correct paraphrase of this noun compound is 'tree of apple'. | the system considers the paraphrase 'tree with apple' to be correct. | contrasting |
train_9751 | Such observations indicate the impact of discourse features. | sometimes contextual cues from the previous comments are not enough and misclassifications are observed due to lack of necessary commonsense and background knowledge about the topic of discussion. | contrasting |
train_9752 | The use of the words doctor, patient, and bronchial in the setup lead the listener to assume that the man is seeking medical advice. | the punchline, the doctor's wife's response along with her description, reveals the man's true intent. | contrasting |
train_9753 | Extended Lesk (Banerjee and Pedersen, 2003), for example, computes the relatedness between two words as a function of the size of the overlap of their glosses. | such measures do not fully address the shortcomings of word embedding similarities. | contrasting |
train_9754 | Any measure based on a distributional semantic approach, including both Extended Lesk and Word2Vec, at its core relies on word co-occurrence to extract relationships. | not all relationships are evidenced by co-occurrence. | contrasting |
train_9755 | By default, word association and Word2Vec features are computed across all word pairs in a document. | we also experiment computing these features only across pairs of humour anchor (HA) words. | contrasting |
train_9756 | Also the overall performance of this model is low and according to the FNC metric, TalosTree would even outperform TalosComb. | talostree returns almost no predictions for the DSG class, although it performs exceptionally well in terms of FNC. | contrasting |
train_9757 | Lexical cue words, such as reports, said, false, hoax play an important role in classification. | the systems fail when semantic relations between words need to be taken into account, complex negation instances are encountered, or the understanding of propositional content in general is required. | contrasting |
train_9758 | The features that have best positive correlations are the phrase ratio of verb phrases (PR VP) and the number of subordinating conjunctions (SBAR Count). | the features that have consistent negative correlations are the length ratio (LR NP) and the average length of noun phrases (AP L 2 NP). | contrasting |
train_9759 | Existing approaches in spam review detection mainly focused on extracting linguistic features and behavioral features. | linguistic features are ineffective when they are used to detect the real-life fake reviews (Mukherjee et al., 2013b;Wang et al., 2017b), and it usually requires a large number of samples to make the observations on behavior features. | contrasting |
train_9760 | Results shows that their model RE* benefits a lot from unlabeled data. | this is at the expense of efficiency since the training set size increases dramatically when the unlabeled data are added. | contrasting |
train_9761 | On one hand, our AE model trained on the small labeled data significantly outperforms the RE* model trained on the whole dataset, let alone our enhanced AE* model. | when RE* is reduced to RE, its performance drops a lot. | contrasting |
train_9762 | This suggests that the performance of RE* is highly dependent on the size of training data. | our model is less sensitive to the training size than RE. | contrasting |
train_9763 | These semi-supervised naive Bayes algorithms can be directly applied to text classification applications with a few labeled documents. | manually collecting a small number of labeled documents remains expensive in many text analysis applications. | contrasting |
train_9764 | For each algorithm, we independently run it 10 times, and show the average results in Tables 2 and 3. | comparison of dataless algorithms to SNB, we can observe that the proposed PL-DNB algorithm performs better in all settings. | contrasting |
train_9765 | In terms of Reuters, PL-DNB performs relatively stable as δ ∈ [0.1, 0.6], but worse when δ becomes larger than 0.6. | the performance of Newsgroup gets stable before δ achieves 0.9, especially when the set of S D is utilized. | contrasting |
train_9766 | As such questions might appear in one language but not in another, it is much tougher to exploit questions across languages. | striking features of an image are usually picked up by several questions. | contrasting |
train_9767 | Similarly, cognitive research on fluency showed that people rate stimuli that are processed more easily higher (Belke et al., 2010). | many modern artists like Picasso or Schönberg complicated the processability of their works using processes of abstraction in order to prevent such automated or fluid forms of art comprehensibility. | contrasting |
train_9768 | For most QA systems, raw text and structured knowledge graph are used as their knowledge. | raw text corpora are hard to understand for machine. | contrasting |
train_9769 | Jauhar (2017) also evaluates his model on Elementary School Science Questions (ESSQ) dataset. | the structure of those two datasets are quite different. | contrasting |
train_9770 | In this case, entire books, both in the training set and testing set, will be processed. | chapter lengths vary significantly from 158 terms up to 169,588 terms, and we must select a fixed length for each (given the nature of the applied models). | contrasting |
train_9771 | Additionally, this research has shown that across the corpus, there was a 6% gain in accuracy when training and evaluating on the first 5,000 words of each book over training and evaluating on the last 5,000. | there was an additional 2% increase in accuracy when training and evaluating on the random 5,000 word subtexts extracted from each book. | contrasting |
train_9772 | For MIT Restaurant, MIT Movie, CADEC, i2b2 2014 and i2b2 2006 (Figure 2), the neural network approaches have better recall than PRED and PRED-CCA across all dataset sizes; this results in the neural network approaches having a greater F1 score for these datasets in most cases (the exceptions being MIT Restaurant with under 500 sentences, and CADEC with 500 sentences). | for GUM, re3d, NIST IE-ER, MUC 6 (Figure 4) and Ritter (Figure 3), we observe that initially PRED and PRED-CCA have higher recall than the neural network approaches, but the neural networks' recall surpasses them once the target domain dataset is large enough. | contrasting |
train_9773 | But, unless we know where "Ganapathy Colony" is, the water level data cannot enhance situational awareness and inform disaster response applications such as storm surge modeling or forecasting. | pragmatic influences on writing style shorten names to reduce redundant content in social media. | contrasting |
train_9774 | Representing a question as a bag of words might be too simple. | this method works well in our setting. | contrasting |
train_9775 | The true answer path should be "Joseph P Kennedy Sr → New York County". | the middle entity decided by IRN is "Rosemary Kennedy" who is also a child of "Joseph P Kennedy Sr", but her death is not included in KB. | contrasting |
train_9776 | In traditional evaluations such as word similarity and word analogy, the aforementioned context-aware word embeddings work well since semantic information plays a vital role in these tasks, and this information is naturally addressed by word contexts. | in real-world applications, such as text classification and information retrieval, word contexts alone are insufficient to achieve success in the absence of task-specific features. | contrasting |
train_9777 | Music structure discovery is a research field in Music Information Retrieval where the goal is to automatically estimate the temporal structure of a music track by analyzing the characteristics of its audio signal over time. | only a few works have addressed such task lyrics-wise (Watanabe et al., 2016;Baratè et al., 2013). | contrasting |
train_9778 | The SSM_phon exhibits a small but measurable performance decrease from SSM_string, possibly due to phonetic features capturing similar regularities, while also depending on the quality of preprocessing tools and the rule-based phonetic algorithm being relevant for our song-based dataset. | despite lower individual performance, SSMs are still able to complement each other with the SSM_all model yielding the best performance. | contrasting |
train_9779 | In particular, feed-forward CNNs with word embedding have been proven to be a relatively simple yet powerful kind of models for text classification (Kim, 2014). | the reliance on large-scale corpus has been a formidable constraint for deep neural networks (DNNs) based methods due to their numerous parameters. | contrasting |
train_9780 | The units that tend to borrow more features from other tasks will more frequently activate the leaky gate. | the update gate will be more active to preserve more information in the current task. | contrasting |
train_9781 | Previous works focused on detecting claims in a small set of documents or within documents enriched with argumentative content. | pinpointing relevant claims in massive unstructured corpora, received little attention. | contrasting |
train_9782 | Relying on this formulation led to promising precision results in the challenging task of corpus-wide claim detection, albeit with low recall. | specifically, while each of the sentences in Table 2 contains a valid and relevant claim, only s1 satisfies their query; s2 satisfies only the first part of the query ('that' preceding the MC); s3 satisfies the second part of the query (CL token following the MC); and s4 only mentions the MC. | contrasting |
train_9783 | We test the performance of these DNNs over a distinct test set of 50 topics, also from (Levy et al., 2017). | in contrast to this previous work, we consider a much more relaxed query that only requires the MC to be mentioned in the sentence. | contrasting |
train_9784 | The first study focusing on morphologically rich languages to employ neural networks (Demir and Özgür, 2014) contains a regularized averaged perceptron (Freund and Schapire, 1999) and relies on handcrafted rules along with pretrained word embeddings. | they refrain from using output from external morphological disambiguators and only rely on the first and last few characters of a word as features. | contrasting |
train_9785 | We see that the JOINT2 model is performing better than just calculating two losses at the last layer as we did in the JOINT1 model. | applying the Welch's t-test between the JOINT1 and JOINT2 runs does not strongly imply this difference (p = .24). | contrasting |
train_9786 | (2017) mapped the dataset from Freebase to Wikidata. | our migrated SIMPLE-DBPEDIAQA dataset has roughly twice the number of mapped questions. | contrasting |
train_9787 | Next datasets with comparably high performance measures are Blogs and DailyDialogs. | crowdFlower and Electoral-Tweets seem to be the most challenging in the within-corpus setting. | contrasting |
train_9788 | In terms of annotation procedures, these experiments allow for almost no judgement, since most of the datasets use expert annotation and we only have few examples for the other two ways of annotation (crowdsourcing and distant supervision) being used. | we could observe that the crowdsourced datasets are more difficult, which might be due to a more noisy annotation. | contrasting |
train_9789 | Because of its relatively low scores, we don't experiment with more complex generative models. | a simple discriminative classifier such as Logistic Regression performs significantly better than Naïve Bayes on the random version for all the datasets. | contrasting |
train_9790 | When this condition is satisfied, the implication is that the system ranks the positive sentence below the negative sentence, or does not sufficiently rank the positive answer above the negative answer. | if the correct sentence has a score higher than the incorrect sentence by at least a margin m (i.e., h_pos − h_neg ≥ m), then the above expression has zero loss. | contrasting |
train_9791 | In other words, MRR is the average of the reciprocal ranks of results for the questions in Q. | if the set of correct candidate answers for a question q_j ∈ Q is {d_1, d_2, ..., d_{m_j}} and R_jk is the set of ranked retrieval results from the top result until you get to the answer d_k, then MAP is calculated as MAP(Q) = (1/|Q|) Σ_{j=1}^{|Q|} (1/m_j) Σ_{k=1}^{m_j} Precision(R_jk). When a relevant answer is not retrieved at all for a question, the precision value for that question in the above equation is taken to be 0. | contrasting |
train_9792 | The distant supervision assumption here is that if a string in text is included in a predefined dictionary of entities, the string might be an entity. | this kind of auto-generated data suffers from two main problems: incomplete and noisy annotations, which affect the performance of NER models. | contrasting |
train_9793 | Most previous studies on NER focus on a certain set of predefined NER types, such as organization, location, person, date, and so on, where a certain amount of labeled data is provided to train the models. | different applications require particular entity types, such as "Brand" and "Product" in Ecommerce domain, and "Company" for finance industry. | contrasting |
train_9794 | that is regarded as a positive instance with two "Product" names are correctly matched by the distant supervision method. | in practice we find that the automatically labeled NER data suffers from two problems, i.e., incomplete annotation and noisy annotation, which negatively affect the performance of NER systems. | contrasting |
train_9795 | In order to overcome the challenge of data deficiency, some approaches based on weakly supervised learning (Nadeau et al., 2006;Riloff and Jones, 1999) have been proposed and successfully expand training data and feature space. | it is difficult to implement these methods on Chinese tasks because of the lack of morphological variations such as capitalization and in particular the uncertainty in word segmentation, and it may cause a large number of matching errors. | contrasting |
train_9796 | To learn and tune the parameters of the local models for CoNLL and TAC we use their own training and development splits. | to learn and tune the parameters of the global model (the discrepancy propagator, the heuristic and pruning functions) we only use the CoNLL training and dev-sets. | contrasting |
train_9797 | SuperAgent considers both QA collections and reviews when answering questions. | there is no mutual coordination for the response generation from different information sources, and the response from SuperAgent cannot be presented as a ranked list of snippets. | contrasting |
train_9798 | Many sentiment analysis systems rely primarily on the sentiment of individual words. | precise sentiment analysis often requires lexical-semantic knowledge that goes beyond word-level sentiment, even when dealing with short phrases such as bigrams. | contrasting |
train_9799 | Other approaches aim to learn sentiment composition from sentiment-labeled texts. | sentiment-labeled texts might not be available for certain domains or languages. | contrasting |
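Each record above pairs sentence1 with a continuation sentence2 under the four-column schema in the header (id, sentence1, sentence2, label). Below is a minimal sketch of loading and inspecting such a split with the Hugging Face `datasets` library; the identifier `user/contrasting-pairs` is a hypothetical placeholder and should be replaced with this dataset's actual Hub ID or a local path.

```python
# Minimal sketch, assuming the dataset is published on the Hugging Face Hub.
# "user/contrasting-pairs" is a hypothetical placeholder identifier; replace
# it with the dataset's actual Hub ID or a local path.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/contrasting-pairs", split="train")

# Each row follows the schema shown in the header above:
# id, sentence1, sentence2, and a label drawn from 4 classes.
for row in ds.select(range(3)):
    print(row["id"], "|", row["sentence1"][:60], "|", row["label"])

# The header reports 4 label classes; inspect their distribution.
print(Counter(ds["label"]))
```

Filtering on the label column (e.g., `ds.filter(lambda r: r["label"] == "contrasting")`) isolates the class shown in this preview.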