id stringlengths 7-12 | sentence1 stringlengths 6-1.27k | sentence2 stringlengths 6-926 | label stringclasses 4 values |
---|---|---|---|
train_12100 | Attention in the literature computes a weighted average with internal attention weights. | we investigate different strategies to incorporate attention information into a neural network. | contrasting |
train_12101 | The underlying intuition is similar to attention for machine translation, which learns alignments between source and target sentences, or attention in question answering, which computes attention weights based on a question and a fact. | these sources for attention are still internal information of the network (the input or previous output predictions). | contrasting |
train_12102 | The Surface Web is the portion of the web that can be crawled and indexed by the standard search engines, such as Google or Bing. | despite their existence, there is still an enormous part of the web remained without indexing due to its vast size and the lack of hyperlinks, i.e. | contrasting |
train_12103 | These techniques can not be applied to Tor HS since the onion addresses are constructed with 16 random characters. | tools like Scallion 6 and Shallot 7 allow Tor users to create customized .onion addresses based on the brute-force technique e.g. | contrasting |
train_12104 | Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. | little is known on when MTL works and whether there are data characteristics that help to determine its success. | contrasting |
train_12105 | The very low value for POS indicates a distribution that, although Zipfian, has very few outliers as a result of the small label set. | dEPRELS, coming from the same corpus, has about three times as many labels, yielding a distribution that has fewer mid-values while still being less than 3. | contrasting |
train_12106 | The former offers the opportunity to change the POS inventory to the three times larger PTB inventory while using the same corpus. | the characteristics of the UD/UPOS we have used as POS throughout the article makes it a more suitable auxiliary source, in fact it systematically outperforms the other two. | contrasting |
train_12107 | Most earlier work had in common that it assumed jointly labeled data (same corpus annotated with multiple labels). | in this paper we evaluate multitask training from distinct sources to address data paucity, like done recently (Kshirsagar et al., 2015;Braud et al., 2016;Plank, 2016). | contrasting |
train_12108 | This indicates a clear advantage of CBOW embeddings over count-based representations for capturing attribute meaning at the word level. | this holds only for adjectives; noun embeddings in isolation perform much worse. | contrasting |
train_12109 | The majority of existing unsupervised approaches focus on optimizing the accuracy of the method, sacrificing its interpretability due to the use of opaque models, such as neural networks. | our approach places a focus on interpretability with the help of sparse readable features. | contrasting |
train_12110 | It lies at the core of language understanding and has already been studied from many different angles (Navigli, 2009;Navigli, 2012). | the field seems to be slowing down due to the lack of groundbreaking improvements and the difficulty of integrating current WSD systems into downstream NLP applications (de Lacalle and Agirre, 2015). | contrasting |
train_12111 | Since semi-supervised models have been shown to outperform fully supervised systems in some settings (Taghipour and Ng, 2015b;Başkaya and Jurgens, 2016;Iacobacci et al., 2016;Yuan et al., 2016), we evaluate and compare models using both manually-curated and automatically-constructed sense-annotated corpora for training. | to supervised systems, knowledgebased WSD techniques do not require any senseannotated corpus. | contrasting |
train_12112 | The main disambiguation clue seems to be given by its previous and immediate subsequent words (federal and government), which tend to co-occur with this particular sense. | knowledge-based WSD systems like UKB or Babelfy give the same weight to all words in context, underrating the importance of this local disambiguation clue in the example. | contrasting |
train_12113 | As described in the Introduction, we collected the historical multiple-choice questions from Gaokao all over the country in rencent five years. | quite a lot contain graphs or tables which require the techniques beyond natural language processing(NLP). | contrasting |
train_12114 | Although (Yih et al., 2015) employed knowledge base, but still failed on multiple sentences questions which is beyond the scope of semantic parsing. | the diversity of candidates in GKHMC makes these models fail to match the question with the right candidate. | contrasting |
train_12115 | These neural approaches aim to obviate the need for any feature engineering and instead focus on developing a neural architecture that learns the representations and the ranking. | while it is possible to view a purely neural approach as an alternative to machine learning involving domain knowledge in the form of handcrafted features, there is no reason why the two approaches cannot be applied in tandem. | contrasting |
train_12116 | The approach achieves state-ofthe-art results. | it requires unsupervised pretraining of the Paragraph Vector model on a relatively big in-domain dataset. | contrasting |
train_12117 | The system described in Section 3 with no interaction transformation (only the encodings are passed to the MLP) and without any external features (x ext in Section 3 and in Figure 1), referred to as GRU-MLP, outperforms the CR and the Random baselines and the systems based on the discourse features. | it performs slightly worse than the approach of (Bogdanova and Foster, 2016). | contrasting |
train_12118 | This approach thus represents a key example of complex reasoning over Horn clause chains using neural networks. | for multiple reasons detailed below it is inaccurate and impractical. | contrasting |
train_12119 | (2) The same previous work takes only a single path as evidence in inferring new predictions. | as shown in Figure 1b, multiple paths can provide ev-idence for a prediction. | contrasting |
train_12120 | Given a vector of scores, the LSE is calculated as and hence the probability of the triple is, The average and the LSE pooling functions apply non-zero weights to all the paths during inference. | only a few paths between an entity pair are predictive of a query relation. | contrasting |
train_12121 | This means that during the back-propagation step, every path will receive a share of the gradient proportional to its score and hence this is a kind of attention during the gradient step. | for averaging, every path will receive equal p 1 N q share of the gradient. | contrasting |
train_12122 | (2012) extend PRA by augmenting KB-schema relations with observed text patterns. | these methods do not generalize well to millions of distinct paths obtained from random exploration of the KB, since each unique path is treated as a singleton, where no commonalities between paths are modeled. | contrasting |
train_12123 | For the sake of these experiments we chose to stop after an hour of an annotator time (the initial lexicon expansion bootstrap and annotating/adjudicating 1,100 sentences). | the human annotator using RASCAL gets a fairly good sense of what kind of annotations are being spotted and what is being missed. | contrasting |
train_12124 | the arguments are convincing: Predicting multiple related tasks should allow us to exploit any correlations between the predictions. | in much of this work, an MTL model is only one possible explanation for improved accuracy. | contrasting |
train_12125 | 12 Computational complexity is not an issue for standard semantic benchmarks such as SimLex-999 or MEN: these data sets require only N gt similarity computations in total, where N gt is the number of word pairs in each benchmark (999 or 3000). | complexity plays a major role in the USF evaluation: the system has to compute |W c | • |V r | similarity scores, where |W c | ≈ 5, 000, and |V r | is large for large vocabularies (typically covering > 100K words). | contrasting |
train_12126 | Interestingly, the best scoring model is Glove, a model which uses document-level co-occurrence, which steers it towards learning topical similarity. | the worst performing model relies on dependency-based contexts which better capture functional similarity (Levy and Goldberg, 2014) and outperform other context choices in word similarity tasks on SimLex and SimVerb (Melamud et al., 2016;. | contrasting |
train_12127 | Quality is assessed for well-arranged discussions that seek agreement. | to the subjective nature of effectiveness, people are good in such an assessment (Mercier and Sperber, 2011). | contrasting |
train_12128 | In accordance with the moderate αvalues, full agreement ranges between 17.4% and 44.7% only. | we observe high majority agreement between 87.5% and 98% for all dimensions, even where scores are rather evenly distributed, such as for global acceptability (95.4%). | contrasting |
train_12129 | The coefficients of emotional appeal seem lower than expected, in particular for effectiveness (.31), indicating the limitation of a correlation analysis: As reflected by the 235 texts with majority score 2 for emotional appeal, many arguments make no use of emotions, thus obliterating effects of those which do. | clarity was scored 2 in most cases, too, so the very low value there (.14) is more meaningful. | contrasting |
train_12130 | The difference between the response and key is quantified by a similarity metric such as accuracy, and different system outputs are compared to each other by ranking their scores with respect to the similarity metric. | comparing the scores of the similarity metric does not paint the full picture of the differences between the outputs, as we will demonstrate. | contrasting |
train_12131 | To quantify the difference of two corefer-ence system outputs S 1 and S 2 , given a key K, we count how many of the mentions m are classified differently using a mention classification function c: The mention classification function c requires a class inventory which is not featured by the common evaluation metrics for coreference resolution. | 10 Therefore, we adapt the mention classification paradigm introduced in the ARCS framework for coreference resolution evaluation (Tuggener, 2014) which assigns one of the following four classes to a mention m given a key K and a system response S: one issue with ARCS is to determine a criterion for the TP class, i.e. | contrasting |
train_12132 | All these metrics ex-cept SPICE and WMD define the similarity over words or n-grams of reference and candidate descriptions by considering different formulas. | sPICE (Anderson et al., 2016) considers a scene-graph representation of an image by encoding objects, their attributes and relations between them, and WMD leverages word embeddings to match groundtruth descriptions with generated captions. | contrasting |
train_12133 | In this example, failure of SPICE is likely due to incorrect parsing or the failure of synonym matching. | failure of CIDEr is likely due to unbalanced tf-idf weighting. | contrasting |
train_12134 | A common way of assessing the performance of a new automatic image captioning metric is to analyze how well it correlates with human judgements of description quality. | in the literature, there is no consensus on which correlation coefficient is best suited for measuring the soundness of a metric in this way. | contrasting |
train_12135 | The correlations within COMPOS-ITE dataset are even very high for all the metrics that consider n-grams, namely BLEU, CIDEr, ME-TEOR and ROUGE. | the correlations of these metrics against SPICE and WMD are not that high. | contrasting |
train_12136 | On ABSTRACT-50S dataset, the CIDEr metric outperforms all other metrics in both HC and HI cases. | on PASCAL-50S dataset, the WMD metric gives the best scores in three out of four cases. | contrasting |
train_12137 | MT evaluation metrics may attribute high scores for these pairs since they mainly focus on lexical and syntactic matching. | as our examples demonstrate, meaning could easily be lost if we rely only on form related MT system evaluation metrics. | contrasting |
train_12138 | Connotation indicates cultural or emotional association carried by words that appear in sentences (Feng et al., 2013). | to the sentiment polarity, connotation polarity indicates subtle shades of sentiment beyond denotative or surface meaning of text. | contrasting |
train_12139 | Extracting meaning related features from text and using form related features for MT evaluation have been studied separately. | integrating meaning related features into MT quality evaluation can capture the meaning preservation from source to target languages. | contrasting |
train_12140 | Thus, the interpretation of, e.g., the AUX-VERB or DET-PRON distinctions might differ across treebanks. | we ignore these differences in our analysis and consider all treebanks to be equally compliant. | contrasting |
train_12141 | Furthermore, while words tend to be hard to align in a cross-lingual setting due to homonymy and polysemy, morpho-syntactic information tends to be much more robust to language barrier (depending on typological closeness), which make them particularly relevant for cross-lingual transfer. | with delexicalized parsing approaches (Mc-Donald et al., 2011), the proposed method uses delexicalization during word embedding learning, not during parsing. | contrasting |
train_12142 | [Clubbing and putting up eyes] P 1 , [it is not violent and it does respect human rights] P 2 !!! | #irony) IMPLICIT activation arises from a contradiction between a lexicalized proposition P 1 describing an event or state and a pragmatic context P 2 external to the utterance in which P 1 is false, not likely to happen or contrary to the writer's intention. | contrasting |
train_12143 | Note that inferring irony in both types of activation requires some pragmatic knowledge. | in case of IMPLICIT, the activation of irony happens only if the reader knows the context. | contrasting |
train_12144 | Sentiment Analysis is a broad task that involves the analysis of various aspect of the natural language text. | most of the approaches in the state of the art usually investigate independently each aspect, i.e. | contrasting |
train_12145 | This constraint, in addition to the social media context, leads to a specific language rich of synthetic expressions that allow the users to express their ideas or what happens to them in a short but intense way. | the application of automatic sentiment classification approaches, in particular when dealing with noisy texts, is subjected to the presence of sufficiently manually annotated dataset to perform the training. | contrasting |
train_12146 | Along the same vein, Brown cluster assignments have also been used as a general purpose lexicon that requires no human manual annotation (Rutherford and Xue, 2014). | these solutions still suffer from the data sparsity problem and almost always require extensive feature selection to work well (Park and Cardie, 2012;Lin et al., 2009;Ji and Eisenstein, 2015). | contrasting |
train_12147 | Most previous research has focused on inducing and evaluating models from the English RST Discourse Treebank. | discourse treebanks for other languages exist, including Spanish, German, Basque, Dutch and Brazilian Portuguese. | contrasting |
train_12148 | Our system is similar to these last approaches in learning a representation using a neural network. | we found that good performance can already be obtained without using all the words in the discourse units, resulting in a parser that is faster and easier to adapt, as demonstrated in our multilingual experiments, see Section 7. | contrasting |
train_12149 | These results show that using all the words in the units (Ji and Eisenstein, 2014;Li et al., 2014), is not as useful as using more contextual information, that is taking more DUs into account (left and right children of the CDUs in the stack). | the slight drop for Relation shows that we probably miss some lexical information, or that we need to choose a more effective combination scheme than concatenation. | contrasting |
train_12150 | One possible explanation is that the Portuguese corpus is in fact a mix of different corpora, with varied domains, and possibly changes in annotation choices. | the low results for German show the sparsity issue since it is the language for which we have the fewest annotations ("#CDU", see Table 1). | contrasting |
train_12151 | This leads to a large drop in performance associated with this relation, when one of these corpora is not in the training data, especially for the source-only system for the En-DT (from 93% in F 1 to 30%). | on the En-DT, we observe improvement for other relations either largely represented in all the corpora (e.g. | contrasting |
train_12152 | The authors generate 5 artificial tasks of dialog. | the reasoning capabilities are not explicitly addressed and the author explicitly claim that the resulting dialog system is not satisfactory yet. | contrasting |
train_12153 | (2014) implement a deep learning architecture and report an 0.810 F-score and 35.9 NIST-SU error rate on broadcast news speech using prosodic and lexical features using a DNN for prosodic features, combined with a CRF classifier. | scaling this to spontaneous speech and the challenges of incrementality explained here, is yet to be tested. | contrasting |
train_12154 | For disfluency detection, standard approaches use pre-segmented utterances to evaluate performance, so this result is difficult to compare. | in the simple task, the accuracy of 0.720 repair onset prediction is respectable (comparable to (Georgila, 2009)), and is useful enough to allow realistic relative repair rates, in line with our motivation. | contrasting |
train_12155 | Utterance boundaries were detected just over a second after the end of the last word of the previous utterance. | the fact that T T D uttSeg on the word level reaches 0.283 suggests the timebased average is being weighed down by occa-sional long silences, which could be thresholded in future work. | contrasting |
train_12156 | The original Mor-phoChains system learns to identify child-parent pairs of morphologically related words, where the child (e.g., stopping) is formed from the parent (stop) by adding an affix and possibly a spelling transformation (both represented as features in the model). | these spelling transformations are never used to output underlying morphemes, instead the system just returns a segmentation by post-processing the inferred child-parent pairs. | contrasting |
train_12157 | 2 The system could in principle learn that bake is the parent of baked with type suffix, which would imply the analysis bake +d. | we hope it learns instead the type delete, which implies the (correct) analysis bake +ed. | contrasting |
train_12158 | We also experiment with a larger number of topics, to see if we can profit from a finer grained topic defini-tion. | this advantage will be offset by a smaller training set since we split into more sets. | contrasting |
train_12159 | This shows that the finer grained splits model important information. | the topic expert model does not reach the accuracy of the baseline using the full training Table 3: Results of the dependency parsing experiments using gold POS tags. | contrasting |
train_12160 | The gain in UAS is considerably smaller: The topic modeling expert reaches 90.55% as opposed to 90.26% for the full baseline. | the topic modeling setting for the 10topic setting outperforms the random baseline but does not reach the full baseline, thus mirroring the trends we have seen before. | contrasting |
train_12161 | the shared tasks on morphologically rich languages (Seddah et al., 2013;Seddah et al., 2014). | the comparison of results achieved for different languages is not straightforward as most languages and databases apply a unique tagset, moreover, they were annotated following different guidelines. | contrasting |
train_12162 | When adapting the universal dependency labels to Hungarian, we could find a one-to-one correspondence between the original labels of the Szeged Treebank and the UD labels only in most of the cases, and these labels could be automatically converted to the UD format, making use of the dependency and morphological annotations found in the original treebank. | we encountered some problematic cases during conversion, which we will discuss below in detail. | contrasting |
train_12163 | According to the UD principles, the first token of the multiword expressions should be marked as the head. | in Hungarian, it is always the last element of the multiword expression that is inflected. | contrasting |
train_12164 | We believe that this level of accuracy is not sufficient for releasing the rest of the 80,000 sentences of the automatically converted Szeged Dependency Treebank. | some of the shortcomings of the automatic conversion could be corrected by exploiting annotation found in other versions of the Szeged Treebank. | contrasting |
train_12165 | Neural attention models have achieved great success in different NLP tasks. | they have not fulfilled their promise on the AMR parsing task due to the data sparsity issue. | contrasting |
train_12166 | (2015), which does not require a dependency parser and uses SHRG to formalize the string-tograph problem as a chart parsing task. | they still need a concept identification stage, while our model can learn the concepts and relations jointly. | contrasting |
train_12167 | In this paper, we haven't used any syntactic parser. | as shown in previous works (Flanigan et al., 2014;Wang et al., 2015b;Artzi et al., 2015;Pust et al., 2015), using dependency features helps improve the parsing performance significantly because of the linguistic similarity between the dependency tree and AMR structure. | contrasting |
train_12168 | Neural attention models have achieved great success in different NLP tasks. | they have not been as successful on AMR parsing due to the data sparsity issue. | contrasting |
train_12169 | There is only very recent work around generation of question answer pairs from knowledge graph (Seyler et al., 2015). | there are several works around question generation that have been proposed in past with different motivations. | contrasting |
train_12170 | A rigorous approach involves setting up a hypothesis testing scenario using the performance of the systems on query documents. | often the hypothesis testing approach needs to send a large number of document queries to the systems, which can be problematic. | contrasting |
train_12171 | Intuitively, maximising cumulative rewards eventually leads to the selection of the best arm since it is the optimal decision. | (Bubeck et al., 2009) gives a theoretical analysis that any strategies for optimising cumulative reward is suboptimal in identifying the best performing arm. | contrasting |
train_12172 | We can conclude that in various NLP tasks, characters have recently been introduced in several different manners. | the models investigated in related work are either not tested on a competitive baseline (Miyamoto and Cho, 2016) or do not perform better than our models (Kim et al., 2016). | contrasting |
train_12173 | The best result is achieved by adding the first 3 and the last 3 characters to the model ('both orders'), giving a perplexity of 85.69, 3.05%/1.87% relative improvement with respect to the w475/w650 baseline. | adding more characters in both orders causes a decrease in performance. | contrasting |
train_12174 | For Dutch on the other hand, adding some random noise to the word-level model gave small improvements. | the random models perform much worse than the CW models. | contrasting |
train_12175 | This ability was harnessed by Shen and Lee (2016) for DA classification. | they ignored the conversational dimension of the data, treating the utterances in a dialogue as separate instances -an assumption that results in loss of information. | contrasting |
train_12176 | As seen in Table 4, both models exhibit accuracy drops (and small increases in negative log-likelihood) on the Switchboard development set, but small accuracy increases (and negative log-likelihood drops) on the Switchboard training set -an indication of over-fitting. | as seen in Table 5, both models yield a negligible or no drop in accuracy on the MapTask development set, while both yield a drop in accuracy on the training set. | contrasting |
train_12177 | The models that employ a DA connection to compute the attention signal (HA-RNN, woUt-tRNN, woHid2Attn, woConvRNN) show a slight improvement in accuracy when using the correct DA as input, instead of the predicted DA. | wDA2DA shows large improvements when using the correct DA (3.5% on Switchboard and 6.8% on MapTask), becoming the best-performing model for both datasets. | contrasting |
train_12178 | Since all the slotvalue specific information is delexicalised, the encoded vector can be viewed as a distributed intent . | if a value cannot be delexicalised in the input, its ngram-like embeddings will all be padded with zeros. | contrasting |
train_12179 | If multiple matches are observed, the corresponding embeddings are summed. | if there is no match for a particular slot or value, the empty n-gram embeddings are padded with zeros. | contrasting |
train_12180 | To the best of our knowledge, this is the first end-to-end NNbased model that can conduct meaningful dialogues in a task-oriented application. | there is still much work left to do. | contrasting |
train_12181 | Their corpus, the Columbia Quoted Speech Corpus (CQSC), is the most wellknown corpus and was used by follow-up work. | a result of their Mechanical Turk-based labeling strategy was that this corpus contains many unannotated quotes (see Table 4). | contrasting |
train_12182 | Domain dependence is a well-studied topic for PropBank SRL. | to the best of our knowledge, there exists no analysis of the performance of modern FrameNet SRL systems when applied to data from new domains. | contrasting |
train_12183 | One could optimize a frameId system to work in the no-lexicon setting which does not rely on the lexicon knowledge at all. | in this setting the classification results are currently lower. | contrasting |
train_12184 | On the one hand, annotators achieved κ = 0.345 (z = 92.2, p < 0.0001) (fair agreement) 6 when choosing targets to be added or removed. | they achieved a similar score of κ = 0.341 (z = 77.7, p < 0.0001) (fair agreement) when annotating the sentiment of the resulting targets. | contrasting |
train_12185 | A derivation precisely describes how the grounded equation system was constructed from the word problem by the automatic solver. | the grounded equation systems and the solutions are less informative, as they do not explain which span of text aligns to the coefficients in the equations. | contrasting |
train_12186 | Annotating gold derivations from scratch for all problems is time consuming. | not all word problems require manual annotation -sometimes all numbers appearing in the equation system can be uniquely aligned to a textual number without ambiguity. | contrasting |
train_12187 | There are a number of examples where an LSTM paper reports results of a CNN paper for comparison, such as (Ling et al., 2015) (POS tagging for English) and (Gillick et al., 2016) (named entity recognition for English). | there is no direct comparison between CNN and LSTM based architectures in morphological tagging. | contrasting |
train_12188 | The output is generated from this input. | multi-source morphological reinflection, the task we introduce, is a generalization in which the model receives multiple form-tag pairs. | contrasting |
train_12189 | In high-resource languages, an electronic dictionary may have near-complete coverage of the lemmata of the language. | paradigm completion is especially crucial for neologisms and lowresource languages. | contrasting |
train_12190 | (2016)'s neural architecture for MT translates from any of N source languages to any of M target languages, using language specific encoders and decoders, but sharing one single attention-mechanism. | to our work, they obtain a single output for each input. | contrasting |
train_12191 | There are two phenomena that cause reentrancies in AMR: control, where a reentrant edge appears between siblings of a control verb, and co-reference, where multiple men-tions correspond to the same concept. | 6 dependency trees do not have nodes with multiple parents. | contrasting |
train_12192 | These two metrics are strictly related to the concept score. | since named entity recognition is the focus of dedicated research, we believe it is important to define a metric that specifically assesses this problem. | contrasting |
train_12193 | (2013) use a probabilistic parsing and grounding model to understand natural language instructions and extend their knowledge base by asking questions. | unlike this work, they do not use semantic parsing to leverage the compositionality of language, and also use a fixed hand-coded policy for dialog. | contrasting |
train_12194 | The HIS model allows tracking probabilities of the potentially large number of hypotheses. | it is difficult to learn a policy over this large a state space in a reasonable number of dialogs. | contrasting |
train_12195 | This can be avoided by training with a simulated user agent. | such agents are not always realistic and their design requires parameters to be set ideally from existing conversation logs. | contrasting |
train_12196 | We believe the agent performing only parser learning performs much better than the agent performing only dialog learning due to the relatively high sample complexity of reinforcement learning algorithms in general, especially in the partially observable setting. | the parser changes considerably even from a small number of examples. | contrasting |
train_12197 | This aligner achieves a 90% F 1 score on hand aligned AMR-sentence pairs. | the ISI Aligner (Pourdamghani et al., 2014) presents a generative model to align AMR graphs to sentence strings. | contrasting |
train_12198 | So instead of aligning to a concept's word lexicon, sometimes a concept aligns to its parent node (head word). | the lexicon features dominate the alignment probability in our Figure 5: The AMR annotation of sentence "In the first 6 rounds of competition, Mingxia Fu and Bin Chi are occupying the first and third positions respectively" E-M calculation. | contrasting |
train_12199 | In previous work, order information of context words (relative position of words in the contexts) was generally ignored and objectives similar to the SkipGram (henceforth: SKIP) model were used to learn v(e). | the bag-of-word context is difficult to distinguish for pairs of types like (restaurant,food) and (author,book). | contrasting |