id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (string, 4 classes) |
---|---|---|---|
train_3300 | However, with a dynamic programming decoder, their sequence labeling model can only extract local features. | our integrated approximated search and learning framework allows rich global features. | contrasting |
train_3301 | The idea of crosslingual transfer using the parser we examined above is straightforward. | to traditional approaches that have to discard rich lexical features (delexicalizing) when transferring models from one language to another, our model can be transferred using the full model trained on the source language side, i.e. | contrasting |
train_3302 | Contrary to the projection approach, CCA assigns embeddings for every word in the monolingual vocabulary. | one potential limitation is that CCA assumes linear transformation of word embeddings, which is difficult to satisfy. | contrasting |
train_3303 | Otherwise, the translational equivalence will be broken. | for PROJ, there is no such limitation. | contrasting |
train_3304 | This design biases the model to learn relations at a granularity optimized for the machine comprehension task. | to a generic discourse analyzer, our method can also utilize additional information available in the machine comprehension context. | contrasting |
train_3305 | 4 Ideally, we would be able to consider all possible pairs of sentences in a given paragraph. | to reduce computation costs in practice, we use a sentence window k and consider only sentences that are at most k away from each other. | contrasting |
train_3306 | On the MC500 test set, we again find that model 3, with a score of 63.75%, provides a gain of 3.5% over SWD and is comparable to the performance of RTE+SWD (63.33%) The importance of utilizing multiple relevant sentences to score answers is evident from the higher scores of models 2 and 3 on multi type questions in both test sets. | model 1, which retrieves only a single relevant sentence for each question, achieves the best scores on the single type questions up to 83.25% on MC160 test. | contrasting |
train_3307 | In MC160, the accuracy is even higher for what-questions (almost 80%). | the model does slightly worse on whyquestions, with only 60% accuracy. | contrasting |
train_3308 | This is likely because these questions often have characteristic words occurring in the sentences (such as here, there, after, before, him, her). | questions asking how, which and why have lower recalls since they often involve reasoning over multiple sentences. | contrasting |
train_3309 | Finally they reported an F-score of 8% for role linking. | being strongly lexicalized, their trained model seems heavily dependent on the training data. | contrasting |
train_3310 | (1) is non-convex due to the terms a and b that interact with each other, so it cannot be solved exactly using a standard optimization technique. | a method based on SVD provides an efficient and exact solution. | contrasting |
train_3311 | But the likelihood function under the Brown model is non-convex, making an MLE estimation of the model parameters difficult. | the hard-clustering assumption (Assumption 4.1) allows for a simple Theorem 4.1. | contrasting |
train_3312 | Though the shortest path can be selected, it ignores other related category nodes and loses rich information. | an ideal scheme should not only mirror the distance in the hierarchy, but also take into account all possible paths in order to capture the full aspects of relatedness. | contrasting |
train_3313 | These models share the property of producing a single general representation for each word, which can be utilized in a variety of tasks, from POS tagging to semantic role labelling. | here we attempt to decompose the representations into separate seman- tic and syntactic components. | contrasting |
train_3314 | that we can approximate smaller − small with bigger − big, or smaller − bigger with small − big. | knowing that opposite sides of the square in Figure 1 are parallel to each other still leaves open the question of what happens at the corners. | contrasting |
train_3315 | 's (2013b) method for making the word-analogy predictions in terms of addition and subtraction: smaller ≈ bigger − big + small. | in the case of the CBSOW and CBSOWM models, we use the novel approach described in Section 3.5: v smaller ≈ b small ⊕ s bigger . | contrasting |
train_3316 | There is also no Freebase type representing the concept 'municipalities.' | this word is associated with an entity in Freebase. | contrasting |
train_3317 | They report that 97% of the attributes in Freebase are commonly expressed as noun phrases. | unlike our work, they considered open information extraction and did not ground the extractions in an external KB. | contrasting |
train_3318 | Several semantic parsing methods use a domainindependent meaning representation derived from the combinatory categorial grammar (CCG) parses (e.g., (Cai and Yates, 2013;Kwiatkowski et al., 2013;Reddy et al., 2014)). | our query graph design matches closely the graph knowledge base. | contrasting |
train_3319 | These biases are not codified, which results in an idiosyncratic and mysterious user experience, a major drawback of natural language interfaces (Rangel et al., 2014). | our compact grammar precisely specifies the logical functionality. | contrasting |
train_3320 | Although this is a marked improvement in cost and scalability compared to annotated logical forms, it still requires non-trivial effort: the annotator must (i) understand the question and (ii) figure out the answer, which becomes even harder with compositional utterances. | our main source of supervision is paraphrases, which only requires (i), not (ii). | contrasting |
train_3321 | Table 2 might evoke rule-based systems (Woods et al., 1972;Warren and Pereira, 1982) or controlled natural languages (Schwitter, 2010). | there is an important distinction: the grammar need only connect a logical form to one canonical utterance; it is not used directly for parsing. | contrasting |
train_3322 | The simple RNN has the ability to capture context information. | the length of reachable context is often limited. | contrasting |
train_3323 | By using stock prices from Yahoo Finance, they annotated all the news in a transaction date with going up or down categories. | the weakness of this assumption is that all the news in one day will have the same category. | contrasting |
train_3324 | They were integrated into the regression model. | they concluded that their model does not successfully predict stock returns. | contrasting |
train_3325 | They concluded that the ratio of the emotional tweets significantly negatively correlated with Down Jones, NASDAQ and S&P 500, but positively with VIX. | they did not use their model to predict the stock price values. | contrasting |
train_3326 | Their model achieved around 75% accuracy. | their test period was short, from 8 th to 26 th in September 2012, containing only 14 transaction dates. | contrasting |
train_3327 | A sentiment time series was built based on these topics. | the time period of their whole dataset is rather short, only three months. | contrasting |
train_3328 | MaxEnt-LDA hybrid model can jointly discover both aspects and aspectspecific opinion words on a restaurant review dataset (Zhao et al., 2010), while FACTS, CFACTS, FACTS-R, and CFACTS-R model were proposed for sentiment analysis on a product review data (Lakkaraju et al., 2011). | one of the weaknesses of these methods is that there is only one opinion word distribution corresponding to one topic (aspect). | contrasting |
train_3329 | The results showed that the model using mood information outperformed the model without mood by 3.57%, 3.58%, 14.29% and 12.5% accuracy for XOM, EBAY, IBM and KO stock, respectively. | the performance on DELL stock was not improved. | contrasting |
train_3330 | For example, 'yahoo', 'ko' and 'finance' are highly associated with the distribution defined by hidden sentiment 1 and hidden topic 1. | it is rather difficult to guess which sentiment or topic in this joint distribution actually means. | contrasting |
train_3331 | As mentioned above, tags of parent nodes have impact on composition. | some phrases with the same tag should be composed in different ways. | contrasting |
train_3332 | Mention pair models that predict whether or not two mentions are coreferent have historically been very effective for coreference resolution, but do not make use of entity-level information. | we show that the scores produced by such models can be aggregated to define powerful entity-level features between clusters of mentions. | contrasting |
train_3333 | Even when using DAgger, this problem could exist to a lesser degree if the model heavily overfits to the training data. | the agent has a small number of parameters thanks to our model stacking approach, reducing the risk of this happening. | contrasting |
train_3334 | We assign each action the regret r associated with taking that action under the current policy as a cost: The "rolling out" procedure means we naively have to visit O(t 2 ) states each iteration instead of t, where t is the length of a trajectory. | the highly constrained action space described in section 3.1 combined with the use of memoization allows the algorithm to still run efficiently. | contrasting |
train_3335 | In total, there are only 56 features after the feature conjunction. | these features provide strong signal because they are directly related to the probabilities of mentions being coreferent. | contrasting |
train_3336 | These systems often use some form of pretraining for initialization, often word-embeddings learned from external tasks. | there has been little work of this form for coreference resolution. | contrasting |
train_3337 | While early work focused primarily on English (Soon et al., 2001;Ng and Cardie, 2002), efforts have been made toward multilingual systems, this being addressed in recent shared tasks Pradhan et al., 2012). | the lack of annotated data hinders rapid system deployment for new languages. | contrasting |
train_3338 | Since HIPTM can no longer access the votes in the test data, its performance drops significantly compared with VOTE. | it still quite strongly outperforms the two text-based baselines, showing that jointly modeling the voting behavior improves the text-based elements of the model. | contrasting |
train_3339 | The most Tea Party oriented frame node, M3, focuses on criticizing government overspending, a recurring Tea Party theme. | 13 Frame M1, least oriented toward the Tea Party, focuses on the downsides of a government shutdown, highlighting establishment Republican concerns about being held responsible for the political and economic consequences. | contrasting |
train_3340 | Several extensions of LDA have been proposed that assign topics not only to individual words but also to multi-word phrases, which we call topical collocations. | as we will discuss in section 2, most of those extensions either rely on a pre-processing step to identify potential collocations (e.g., bigrams and trigrams) or limit attention to bigram dependencies. | contrasting |
train_3341 | found that this modification allowed TNG to outperform LDACOL on a standard information retrieval task. | both LDACOL and TNG do not require words within a sequence to share the same topic, which can result in semantically incoherent collocations. | contrasting |
train_3342 | For example, if the target answer is a small number (say, 2), it is possible to count the number of rows with some random properties and arrive at the correct answer. | as the system encounters more examples, it can potentially learn to disfavor them by recognizing the characteristics of semantically correct logical forms. | contrasting |
train_3343 | For example, the decomposition grammar for s=3 for AMR #194 in the corpus has over 300 million rules. | many uses of decomposition grammars, such as sampling for grammar induction, can be phrased purely in terms of top-down queries. | contrasting |
train_3344 | Finally, we analyzed the asymptotic runtimes in Table 1 in terms of the maximum number d • s of in-boundary edges. | the top-down parser does not manipulate individual edges, but entire s-components. | contrasting |
train_3345 | These methods can efficiently estimate the co-occurrence statistics to model contextual distributions from very large text corpora and they have been demonstrated to be quite effective in a number of NLP tasks. | they still suffer from some major limitations. | contrasting |
train_3346 | These differences are further exhibited in the confusion matrices shown in Figure 4; when the classifier is trained using only monolingual features, it misclassifies 26% of ¬ pairs as ⌘, whereas the bilingual features make this error only 6% of the time. | the bilingual features completely fail to predict the A class, calling over 80% of such pairs ⌘ or ⇠. | contrasting |
train_3347 | Other approaches eliminate the discontinuities via tree transformations (Boyd, 2007;Kübler et al., 2008), sometimes as a pruning step in a coarse-to-fine parsing approach (van Cranenburgh and Bod, 2013). | reported runtimes are still superior to 10 seconds per sentence, which is not practical. | contrasting |
train_3348 | In existing parsers, features are commonly exploited from the parsing history, such as the top k elements on the stack. | such features are expensive in terms of search efficiency. | contrasting |
train_3349 | In practice, this global model is much stronger than the local MaxEnt model. | training this model without any approximation is hard, and the common practice is to rely on well-known heuristics such as an early update with beam search (Collins and Roark, 2004). | contrasting |
train_3350 | 7 Every experiment reported here was performed on hardware Feature We borrow the feature templates from Sagae and Lavie (2006). | we found the full feature templates make training and decoding of the structured perceptron much slower, and instead developed simplified templates by removing some, e.g., that access to the child information on the second top node on the stack. | contrasting |
train_3351 | This is particularly true with DP; it sometimes outperforms Z&C, probably because our simple features facilitate state merging of DP, which expands search space. | our main result that the system with optimal search gets a much higher score (90.7 F1) than beambased systems with a larger beam size (90.2 F1) indicates that ordinary beam-based systems suffer from severe search errors even with the help of DP. | contrasting |
train_3352 | Traditionally, CCG graphs are generated as a by-product by deep parsers with a core grammar (Clark et al., 2002;Clark and Curran, 2007b;Fowler and Penn, 2010). | modeling these dependencies within a CCG parser has been shown very effective to improve the parsing accuracy (Clark and Curran, 2007b;Xu et al., 2014). | contrasting |
train_3353 | This part of information is very similar to Semantic Role Labeling (SRL), whose goal is to find semantic roles for verbal predicates as well as their normalization. | functor-argument analysis grounded in CCG is approximation of underlying logic forms and thus provides bi-lexical relations for almost all words. | contrasting |
train_3354 | We also use the syntactic dependency trees provided by the CCGBank to obtain necessary information for graph parsing. | different from experiments in the CCG parsing literature, we use no grammar information. | contrasting |
train_3355 | The only underlying LSTM structure that has been explored so far is a linear chain. | natural language exhibits syntactic properties that would naturally combine words to phrases. | contrasting |
train_3356 | In bag-of-words models, phrase and sentence representations are independent of word order; for example, they can be generated by averaging constituent word representations (Landauer and Dumais, 1997;Foltz et al., 1998). | sequence models construct sentence representations as an order-sensitive function of the sequence of tokens (Elman, 1990;Mikolov, 2012). | contrasting |
train_3357 | For example, this allows the left hidden state in a binary tree to have either an excitatory or inhibitory effect on the forget gate of the right child. | for large values of N , these additional parameters are impractical and may be tied or fixed to zero. | contrasting |
train_3358 | TIME-FLOW is aware of the sequential direction, inherited from the space-awareness of CNN, but it is not sensitive enough about the prediction task, due to the uniform weights in the convolution. | tIME-ARROW, living in location-dependent parameters of convolution units, acts like an arrow pin-pointing the prediction task. | contrasting |
train_3359 | The work was done when Weiwei Guo was in Columbia University summarization systems adopt the extractionbased approach which selects some original sentences from the source documents to create a short summary (Erkan and Radev, 2004;Wan et al., 2007). | the restriction that the whole sentence should be selected potentially yields some overlapping information in the summary. | contrasting |
train_3360 | These features help them select better category-specific content for the summary. | the usability of such features depends on the availability of predefined categories in the summarization task, as well as the availability of training data with the same predefined categories for estimating feature weights. | contrasting |
train_3361 | Later, unified models are proposed to conduct sentence selection and redundancy control simultaneously (McDonald, 2007;Filatova and Hatzivassiloglou, 2004;Yih et al., 2007;Gillick et al., 2007;Lin and Bilmes, 2010;Lin and Bilmes, 2012;Sipos et al., 2012). | extraction-based approaches are unable to evaluate the salience and control the redundancy on the granularity finer than sentences. | contrasting |
train_3362 | For example, some background articles on Mubarak's step-down will likely explain the reasons behind it. | extracting such causal information can be difficult, as demonstrated by the still low results for discourse relation extraction (Lin et al., 2014;Braud and Denis, 2014). | contrasting |
train_3363 | That process leverages how good this connection is in the sub-graph (G*) which consists of X and its outgoing neighbors. | our proposed model uses the non-normalized value of the influence score to leverage how good this connection is on the entire graph instead of G*. | contrasting |
train_3364 | (2012) use 91 timelines from AFP as ground truth along with the AFP news corpus for feature extraction. | their dataset is not publically available. | contrasting |
train_3365 | 7 Overall, the recency salience measure provides a better fit than the frequency salience measure with respect to accuracies, suggesting that recency better captures speakers' representations of discourse salience that influence choices of referring expressions. | the models with frequency discourse salience have higher model log likelihood than the models with recency do. | contrasting |
train_3366 | Experiments with the adult-directed news corpus show a close match between speakers' utterances and model predictions. | experiments with child-directed speech show that the models were more likely to predict proper names where pronouns were used, suggesting that the estimates of discourse salience using simple measures were not sufficient to capture a conversation. | contrasting |
train_3367 | A major focus in computational social science has been the study of interpersonal relations through data. | social interactions are complicated, and we rarely have access all of the data that define the relationship between friends or enemies. | contrasting |
train_3368 | They have been good allies for the better part of the game. | immediately after this exchange, Austria suddenly invades German territory. | contrasting |
train_3369 | Previous studies mainly relied on shallow textual clues in Twitter posts in order to predict the number of flu infections, e.g., the number of occurrences of specific keywords (such as "flu" or "influenza") on Twitter. | such a simple approach can lead to incorrect predictions. | contrasting |
train_3370 | For example, given the sentence "I caught a cold," we would predict that the first person ("I," i.e., the poster) is the subject (carrier) of the cold. | we can ignore the sentence, "The TV presenter caught a cold" only if we predict that the subject of the cold is the third person, who is at a different location from the poster. | contrasting |
train_3371 | In addition to the subject labels, we annotated the text span that indicates a subject. | the subjects of diseases/symptoms are often omitted in tweet texts. | contrasting |
train_3372 | The likelihood of a positive episode is extremely high when the subject label of a disease/symptom is FIRSTPERSON (85.1%) or NEARBYPERSON (76.7%). | fAR-AWAYPERSON, NONHUMAN, and NONE subjects represent negative episodes (less than 5.0%). | contrasting |
train_3373 | Similarly, the subject labels improved the performance for "fever". | the subject labels did not improve the performance for headache and runny nose considerably. | contrasting |
train_3374 | That outcome may be achieved through strategies characterized in terms of conversation acts or language with particular stylistic characteristics. | individual acts by themselves lack the power to achieve a complex outcome. | contrasting |
train_3375 | Theory on coordination in groups and organizations emphasizes role differentiation, division of labor and formal and informal management (Kittur and Kraut, 2010). | identification of roles as such has not had a corresponding strong emphasis in the language technologies community, although there has been work on related notions. | contrasting |
train_3376 | What is similar between stances and personas on the one hand and roles on the other is that the unit of analysis is the person. | they are distinct in that stances (e.g., liberal) and personas (e.g., lurker) are not typically defined in terms of what they are meant to accomplish, although they may be associated with kinds of things they do. | contrasting |
train_3377 | Examples include such things as mixed membership stochastic blockmodels (MMSB) (Airoldi et al., 2008), similar unsupervised matrix factorization methods (Hu and Liu, 2012), or semi-supervised role inference models (Zhao et al., 2013). | these approaches do not standardly utilize an outcome as supervision to guide the clustering. | contrasting |
train_3378 | As in NBOW, each word type has an associated embedding. | the composition function g now depends on a parse tree of the input sequence. | contrasting |
train_3379 | Theoretically, word dropout can also be applied to other neural network-based approaches. | we observe no significant performance differences in preliminary experiments when applying word dropout to leaf nodes in RecNNs for sentiment analysis (dropped leaf representations are set to zero vectors), and it slightly hurts performance on the question answering task. | contrasting |
train_3380 | But if we consider that the classifier can only improve its goodness of fit with more features (the sets of features being nested as the support varies), it is likely that the lowest support will lead to the best test accuracy; assuming subsequent regularization to prevent overfitting. | this will come at the cost of an exponential number of features as observed in practice. | contrasting |
train_3381 | In cross-lingual POS tagging, mostly annotation projection has been explored (Fossum and Abney, 2005;Das and Petrov, 2011), since all features in POS tagging models are typically lexical. | using bilingual word representations was recently explored as an alternative to projectionbased approaches (Gouws and Søgaard, 2015). | contrasting |
train_3382 | (2014), in the context of monolingual dependency parsing, investigate continuous word representation for dependency parsing in a monolingual cross-domain setup and compare them to word clusters. | to make the embeddings work, they had to i) bucket real values and perform hierarchical clustering on them, ending up with word clusters very similar to those of ; ii) use syntactic context to estimate embeddings. | contrasting |
train_3383 | Traditionally, such scenarios call for dynamic programming for exact inference. | preliminary experiments showed that, for our model, a Viterbi search based segmenter, even supported by conditional random field (Lafferty et al., 2001) style training, yields similar results as the greedy search based segmenter in this section. | contrasting |
train_3384 | Recently, neural network models for natural language processing tasks have been increasingly focused on for their ability of alleviating the burden of manual feature engineering. | the previous neural models cannot extract the complicated feature compositions as the traditional methods with discrete features. | contrasting |
train_3385 | GPs usually outperform SVMs by a small margin. | these offer the advantages of not using the validation set and the interpretability properties we highlight in the next section. | contrasting |
train_3386 | doctors and nurses or accountants and assistant accountants. | with very few exceptions, we notice that only adjacent classes get misclassified, suggesting that our model captures the general user skill level. | contrasting |
train_3387 | This is expected because the vast majority of jobs in these classes require a university degree (holds for all of the jobs in classes 2 and 3) or are actually jobs in higher education. | classes 5 to 9 have a similar behaviour, tweeting less on this topic. | contrasting |
train_3388 | Differences arise either due to language use or due to the topics people discuss as parts of various social domains. | a large scale investigation of this hypothesis has never been attempted. | contrasting |
train_3389 | We acknowledge that the derivations of this study, similarly to other studies in the field, are reflecting the Twitter population and may experience a bias introduced by users self-mentioning their occupations. | the magnitude, occupational diversity and face validity of our conclusions suggest that the presented approach is useful for future downstream applications. | contrasting |
train_3390 | By formulating such ideas as search or MDL problems of given coding length 1 , word boundaries are found in an algorithmic fashion (Zhikov et al., 2010;Magistry and Sagot, 2013). | such methods have difficulty incorporating higher-order statistics beyond simple heuristics, such as word transitions, word spelling formation, or word length distribution. | contrasting |
train_3391 | Moreover, they usually depends on tuning parameters like thresholds that cannot be learned without human intervention. | statistical models are ready to incorporate all such phenomena within a consistent statistical generative model of a string, and often prove to work better than heuristic methods (Goldwater et al., 2006;Mochihashi et al., 2009). | contrasting |
train_3392 | (2014) proposed an intermediate model between heuristic and statistical models as a product of character and word HMMs. | these two models do not have information shared between the models, which is not the case with generative models. | contrasting |
train_3393 | For example, in Japanese can be segmented into not only / / / (plum/too/peach/too), but also into / / (plum/peach/peach), which is ungrammatical. | we could exclude the latter case 1 For example, Zhikov et al. | contrasting |
train_3394 | Since segment models like NPYLM have segment lengths as hidden states, they are called semi-Markov models (Murphy, 2002). | our model also has hidden part-of-speech, thus we call it a Pitman-Yor Hidden Semi-Markov model (PYHSMM). | contrasting |
train_3395 | In many cases PYHSMM found more "natural" segmentations, but it does not always conform to the gold annotations. | it often oversegments emotional expressions (sequence of the same character, for example) and this is one of the major sources of errors. | contrasting |
train_3396 | CTB was designed to serve syntactic analysis, whereas PD was developed to support information extraction systems. | the key challenge of exploiting the two resources is that they adopt different sets of POS tags which are impossible to be precisely converted from one to another based on heuristic rules. | contrasting |
train_3397 | Since a looser mapping function leads to a larger number of bundled tags and makes the model slower, we implement a paralleled training procedure based on Algorithm 1, and run each experiment with five threads. | it still takes about 20 hours for one iteration when using the complete mapping function; whereas the other three mapping functions need about 6, 2, and 1 hours respectively. | contrasting |
train_3398 | Particularly, when M ′ = 1K, the model converges very slowly. | from the trend of the curves, we expect that the accuracy gap between our coupled model with M ′ = 5K/20K and the baseline model should be much smaller when reaching convergence. | contrasting |
train_3399 | Examples for word embeddings are SENNA (Collobert and Weston, 2008), the hierarchical log-bilinear model (Mnih and Hinton, 2009), word2vec (Mikolov et al., 2013c) and GloVe (Pennington et al., 2014). | there are many other resources that are undoubtedly useful in NLP, including lexical resources like WordNet and Wiktionary and knowledge bases like Wikipedia and Freebase. | contrasting |
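
The schema above (id, sentence1, sentence2, label) maps directly onto the Hugging Face `datasets` API. The sketch below shows one way to load and inspect a dataset with this layout; the repository identifier `user/dataset-name` is a placeholder rather than the actual Hub path, and the split is assumed to be `train` to match the `train_*` ids in the rows above.

```python
# Minimal sketch of loading and inspecting a dataset with this schema.
# NOTE: "user/dataset-name" is a placeholder identifier, not the real Hub path.
from collections import Counter

from datasets import load_dataset

# Load the train split; each row has: id, sentence1, sentence2, label.
ds = load_dataset("user/dataset-name", split="train")

# Inspect one example, e.g. a pair labeled "contrasting" as in the table above.
example = ds[0]
print(example["id"], example["label"])
print(example["sentence1"])
print(example["sentence2"])

# The label column has 4 classes; check the overall distribution.
print(Counter(ds["label"]))

# Keep only the "contrasting" pairs, the class shown in this sample.
contrasting = ds.filter(lambda row: row["label"] == "contrasting")
print(f"{len(contrasting)} contrasting pairs out of {len(ds)} total")
```

Note that in every sampled pair, sentence2 opens in lowercase because the discourse connective (e.g., "However," or "In contrast,") has been removed; a classifier trained on this data must recover the relation from content alone rather than from the marker itself.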