id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_94300 | Similarly, to measure conciseness, we report how often summaries contain at least one non-restrictive relative clause (RelCl.) | for instance, a human writer places the underlined sentence in the input article next to the first sentence in the summary to improve topical coherence as they are about the same topic ("elections"). | neutral |
train_94301 | NeuSum ) uses a seq2seq model to predict a sequence of sentences indices to be picked up from the document. | our model consists of a sentence extraction model joined with a compression classifier that decides whether or not to delete syntax-derived compression options for each sentence. | neutral |
train_94302 | Our high-level approach to summarization is shown in Figure 1. | these summaries frequently contained short sentences to fill up the budget, and the collection of summaries returned tended to be less diverse than those found by beam search. | neutral |
train_94303 | Typically, an encoder encodes the input x i to a semantic representation c i , while a decoder controls or modifies the stylistic property and decodes the sentence x i based on c i and the pre-specific style l i . | the overall style transfer performance is still nonoptimal. | neutral |
train_94304 | The classifier objective L T style thus dominates Eqn. | explicitly learning precise stylized information within each domain is crucial to generate domain-specific styles. | neutral |
train_94305 | For example, consider a passage from the SQuAD dataset in Table 7, where except the question word who, the model sequentially copies everything from the passage and achieves a QBLEU score of 92.4. | one way to approach this is by re-visiting the passage and answer with the aim to refine the initial draft by generating a better question in the second pass and then improving it with respect to a certain aspect. | neutral |
train_94306 | • Reddit (Ouyang et al., 2017): is a collection of personal posts from reddit.com. | mmR is extremely biased to the importance aspect on XSum and Reddit. | neutral |
train_94307 | We looked into the individual word pairs marked as proper nouns in the DE and DA data, as these languages are related and RCSLS performs comparably on them otherwise, and did not find any patterns that could explain the large differences. | by more canonical we mean, for example, indefinite instead of definite forms of nouns and adjectives (see Ex. | neutral |
train_94308 | Alternative approaches for enrichment exists, of course, but we wonder how worthwhile further efforts would be. | the uncovered issues of high noise levels (proper nouns) and limited coverage (missing gold standard targets) clearly have a crucial impact on BDI results obtained on the MUSE dataset, and need to be addressed. | neutral |
train_94309 | Recently, morphological inflection has frequently been cast as a sequenceto-sequence task, mapping the characters of the input word together with the morphological features specifying the target to the characters of the corresponding inflected form (Cotterell et al., 2018). | closely is early stopping: a separate development or validation set is used to end training as soon as the loss on the development set L D (θ) starts increasing or model performance on the development set D starts decreasing. | neutral |
train_94310 | Development sets are impractical to obtain for real low-resource languages, since using all available data for training is often more effective. | hyperparameters are taken from Sharma et al. | neutral |
train_94311 | We obtain a different picture for TRANSL: results are equal for 3 languages, and better for DevLang for the remaining 2. | while longer training seems better for MORPH, not all performance loss can be explained by aborting training too early. | neutral |
train_94312 | Constraining search that way increases the run time as the γ-bounds are lower. | greedy and beam search both achieve reasonable BLEU scores but rely on a high number of search errors 5 to not be affected by a serious NMT model error: For 51.8% of the sentences, NMT assigns the global best model score to the empty translation, i.e. | neutral |
train_94313 | Among these, some works are implicitly on temporal commonsense, such as event durations (Williams, 2012;Vempala et al., 2018), typical temporal ordering (Chklovski and Pantel, 2004;Ning et al., 2018a,b), and script learning (i.e., what happens next after certain events) (Granroth-Wilding and Clark, 2016;Li et al., 2018). | for instance, given two events "going on a vacation" and "going for a walk," most humans would know that a vacation is typically longer and occurs less often than a walk, but it is still challenging for computers to understand and reason about temporal commonsense. | neutral |
train_94314 | 1 for the five phenomena studied here and Table 1 for basic statistics of it. | existing works have not studied all five types of temporal commonsense in a unified framework as we do here, nor have they developed datasets for it. | neutral |
train_94315 | Jia and Liang (2017) addressed this problem and proposed an adversarial version of the SQuAD dataset, which was created by adding a distractor sentence to each paragraph. | as shown in Section 2, the maximization of MI needs positive samples and negative samples drawn from joint distribution and the product of marginal distribution respectively. | neutral |
train_94316 | In this paper, we propose a meta-based algorithm for multi-hop reasoning (Meta-KGR) to address the above problems, which is explainable and effective for few-shot relations. | to the best of our knowledge, this work is the first research on fewshot learning for multi-hop reasoning. | neutral |
train_94317 | (2018) create an NLI test set specifically to show the deficiencies of state-of-the-art models in inferences that require lexical and world knowledge. | 2 Taking special account of both the switchable and associative instances suggests the following evaluation protocol for a given model. | neutral |
train_94318 | We do so by systematically examining threats to the validity of experiments involving recent CSR models. | when evaluating on SwAG, it is important to determine whether the prediction relies on an understanding of the context or on shallow patterns in the LM-generated counterfactuals. | neutral |
train_94319 | One is that these type of sentences can get a high generation probability since the generator is actually a language model. | considering the simplicity of the model and the ease of training, we adopt the neural constrained language model of as the generator. | neutral |
train_94320 | is the first endeavor to apply neural network to this task, which adopts a constrained neural language model (Mou et al., 2015) to guarantee that a pre-given word sense to appear in the generated sequence. | 1 Generating creative and interesting text is a key step towards building an intelligent natural language generation system. | neutral |
train_94321 | For generator, we first tag each word in the English Wikipedia corpus with one word sense using an unsupervised WSD tool 2 . | the generator can be any model that is able to generate a pun sentence containing a given word with two specific senses. | neutral |
train_94322 | In our joint model, grid search is used to determine β and results are shown in Figure 2. | further, Table 4 gives two examples of the generated questions on SQuAD dataset, by the base-line model and our joint model respectively. | neutral |
train_94323 | In this case, the symmetry of T is broken by larger unary potentials. | setup Unconditional generation is still considered a challenging task for both, GANs and latent stochastic models, and standard RNNs form a very competitive baseline (semeniuta et al., 2018). | neutral |
train_94324 | as ψ(w i , w j ) = A ij for some parameter matrix A ∈ R V ×V . | this confirms that our model can learn beyond pairwise interactions through the latent chain. | neutral |
train_94325 | 10 Charts (1a) and (1b) in Figure 2 show regard and sentiment scores for samples generated with a respect context. | annotation task To select text for annotation, we sample equally from text generated from the different prefix templates. | neutral |
train_94326 | For the complete set of categories, we measure inter-annotator agreement with fleiss' kappa; the kappa is 0.5 for sentiment and 0.49 for regard. | we define the regard towards different demographics as a measure for bias. | neutral |
train_94327 | (4) The evaluation of the proposed masking strategy on the two fact verification datasets indicates that, while the in-domain performance remains on par with that of the model trained on the original, lexicalized data, it improves considerably when tested in the out-of-domain dataset. | to mitigate this dependence on lexicalized information, we experiment with two strategies of masking. | neutral |
train_94328 | Also the distribution of labels was made similar to that of FNC. | with organization-c1, the misc-c1 entered commercial service. | neutral |
train_94329 | Idiosyncrasies Distorting Performance We investigate the correlation between phrases in the claims and the labels. | our analysis of the data demonstrates that this unexpectedly high performance is due to idiosyncrasies of the dataset construction. | neutral |
train_94330 | The aspect embeddings are initialized randomly and learned as parameters. | we find that the expressions of conflict opinions usually are lengthy and implicit. | neutral |
train_94331 | The key problems were that many of the regions were far too granular (e.g. | more complex tasks like Named-Entity Recognition often rely on contiguous, cleanly segmented text for successful processing. | neutral |
train_94332 | In addition, we include the results using neural models such as CNN and RNN in Table 4. | intuitively, words with higher topic coherence and lower degree of overlapping among different topics should be assigned higher reward in the next iteration of learning. | neutral |
train_94333 | In this work, a novel method for determining a stopping criterion is proposed that models the rate at which relevant documents occur using a Poisson process. | a range of studies have demonstrated that ranking the results of the Boolean query can support the identification of relevant studies by placing them higher in the ranking, e.g. | neutral |
train_94334 | Second, relevance judgments in nearly all newswire test collections are annotations on documents, not on individual sentences or passages. | we begin with BERT Large (uncased, 340m parameters) from Devlin et al. | neutral |
train_94335 | It performed worse (highest RBO .091), likely due to being more sensitive to unusual subword unit combinations generated by the MT system. | (2017) rather than weakly supervised BM25 scores (Dehghani et al., 2017), which would produce a smoothed tf.idf neural BM25 model that behaves differently than a standard neural IR model. | neutral |
train_94336 | Transformers (Vaswani et al., 2017) already make extensive use of linear maps in multi-headed attention and appear to be justified in doing so. | there is no evidence that models exclusively represent relationships in this manner. | neutral |
train_94337 | While the meanings of defining words are important in dictionary definitions, it is crucial to capture the lexical semantic relations between defined words and defining words. | neural pattern-based word-pair embedding models (Washio and Kato, 2018a,b;Joshi et al., 2019) unsupervisedly learn two neural networks: a word-pair encoder and pattern encoder, both of which encode the word-pair and lexico-syntactic pattern respectively into the same embedding space. | neutral |
train_94338 | To provide the definition decoder with information regarding lexical semantic relations, we use an additional loss function with word-pair embeddings as follows: where S is a set of stopwords. | lexical semantic relations in definitions are not explicit. | neutral |
train_94339 | In countries that speak multiple main languages, mixing up different languages within a conversation is commonly called codeswitching. | from the left image of figure 2, in word-level, the model mostly chooses the correct language embedding for each word, but also combines with different languages. | neutral |
train_94340 | Table 2 shows precision@1 results. | unlike GPA-based approaches, MPPA does not require a multi-way dictionary, but only bilingual dictionaries which are much easier to obtain even in an unsupervised manner. | neutral |
train_94341 | Since a parallel corpus, albeit small, is available, formality style transfer usually takes a seq2seq-like approach (Rao and Tetreault, 2018;Niu et al., 2018a;Xu et al., 2019b). | this work is supported in part by the National Natural Science Foundation of China (Grand Nos. | neutral |
train_94342 | In the past few years, style-transfer generation has attracted increasing attention in NLP research. | the preprocessed sentence serves as a Markov blanket, i.e., the system is unaware of the original sentence, provided that the preprocessed one is given. | neutral |
train_94343 | (2018) adopt an eraseand-replace approach and design their methods to erase the style-related words first and then fill in words of different style attributes. | previous works often suffer from content leaking problem. | neutral |
train_94344 | We follow the assumption that g(•, •) is a DAG consisting of N nodes and edges among them (Liu et al., 2019;Xie et al., 2019b;Pham et al., 2018). | we find that the model failed to optimize when n = 2. | neutral |
train_94345 | Figure 1 and 2 shows that JoBi not only consistently performs the best over the entire range of parameters, but also delivers a performance improvement that is especially large when the batch size or the negative ratio is small. | it should be noted that on FB15K-237, all JoBi models outperform all the baseline models, regardless of the base model used. | neutral |
train_94346 | Bilinear models such as DistMult and ComplEx are effective methods for knowledge graph (KG) completion. | in our preliminary experiments on baselines, we found that the choice of loss function had a large effect on performance, with negative log-likelihood (NLL) of softmax consistently outperforming both max-margin and logistic losses. | neutral |
train_94347 | Other often used dimensionalities include 200 (Tang et al., 2015;Ling et al., 2016;Nickel and Kiela, 2017) and 500 (Tang and Liu, 2009;Perozzi et al., 2014). | the exact definition of 'low' dimensionality is rarely explored. | neutral |
train_94348 | We further demonstrate our model's increased capabilities on humor identification problems, such as the previously created datasets for short jokes and puns. | this previous research has gone into many settings where humor takes place. | neutral |
train_94349 | Nodes will diverge because local gradients differ. | (2017) with eight layers of bidirectional LSTM consisting of 225M parameters. | neutral |
train_94350 | While the rule of thumb is to scale learning rate linearly with batch size (Goyal et al., 2017), the Transformer model is also sensitive to high learning rates (Aji and Heafield, 2019). | models are averaged periodically. | neutral |
train_94351 | To train a multilingual BERT model for our sequence prediction tasks, we add a softmax layer on top of the the first wordpiece (Schuster and Nakajima, 2012) of each token 3 and finetune on data 3 We experimented with wordpiece-pooling (Lee et al., 2017) which we found to marginally improve accuracy but at a cost of increasing implementation complexity to maintain. | mMiniBERT performs well and outperforms the state-of-the-art Meta-LSTM on the POS tagging task and on four out of size languages of the Morphology task. | neutral |
train_94352 | We wish to estimate p(x|y), the conditional probability of utterance x given dialogue act y. | let x and y denote an input sentence and the corresponding semantic representation respectively. | neutral |
train_94353 | On PTB, we also compare to two models using structural information in language modeling: parsing-reading-predict networks (PRPN; Shen et al., 2018a) predicts syntactic distance as structural features for language modeling; orderedneuron LSTM (ON-LSTM; Shen et al., 2018b) posits a novel ordering on LSTM gates, simulating the covering of phrases at different levels in a constituency parse. | a language model does not have access to future words, and hence running a backward RNN from right to left is less straightforward: one will have to start an RNN running at each token, which is computationally daunting (Kong et al., 2016). | neutral |
train_94354 | The first uses logic for semantic representation, including ATIS (Price, 1990;Dahl et al., 1994) and GeroQuery (Zelle and Mooney, 1996). | to this end, the tencent multilingual embeddings are chosen, which contain both Chinese and English words in a multi-lingual embedding matrix. | neutral |
train_94355 | Intuitively, the same semantic parsing task can be applied cross-lingual, since SQL is a universal semantic representation and database interface. | as discussed above, column names are selected by attention over column embeddings using sentence representation as a key. | neutral |
train_94356 | Thus, the parameters of the gating GCN are trained from the relevance loss and the usual decoding loss, a ML objective over the gold sequence of decisions that output the query y. Discriminative re-ranking Global gating provides a more accurate model for softly predicting the correct subset of DB constants. | finally, we compute two oracle scores to estimate future headroom. | neutral |
train_94357 | In transductive learning, because an unlabeled test set can be used for training, it is possible to adapt LMs directly to the word distributions of the test set. | we fine-tuned an LM on an unlabeled test set. | neutral |
train_94358 | Experiments using the SentEval suite showed that DCT embeddings outperform the commonlyused vector averaging on most tasks, particularly tasks that correlate with sentence structure and word order. | both EigenSent⊕Avg and ELMo performed better than all other models on SST-5. | neutral |
train_94359 | The BERT model architecture consists of multiple layers of Transformers and uses a specific input representation, with two special tokens, [CLS] and [SEP], added at the beginning of the input sentence pair and between the sentences (or bag of sentences) respectively. | the semantics of a sentence are usually more dependent on local context, rather than all sentences in a long docu- ment. | neutral |
train_94360 | (a) We observe significantly different levels of complexity dependent on the population ratio. | the cost function is then: whereV (h) refers to using the predicted value but not updating the value sub-network according to this loss function. | neutral |
train_94361 | We next examine what happens when we expose different linguistic communities to each other. | become more tightly coupled, as evident from the higher success rate among those in Fig. | neutral |
train_94362 | The first row is based on the balanced dataset, and the rest are based on the imbalanced dataset with different oversampling ratios. | we extracted instances using the same methods as described above, but we filtered out COMMENT/REPLY pairs in which a condescension-related word appeared. | neutral |
train_94363 | This model, called HEURISTIC LABELS, serves to demonstrate how well the abstractive model would perform with a perfect extractive step. | we believe this reflects the difficulty of evaluating this task and summarization in general. | neutral |
train_94364 | An example instance of the summary cloze task is presented in Figure 1. | being able to predict the next sentence of a summary is an important component of a full topic-focused MDS system and a worthwhile task on its own. | neutral |
train_94365 | The second recurrent layer consists of regular Gated Recurrent Units (GRUs), which are used to update the polished fact from q k to q k+1 using h k Tm . | hefei Prison proposed a commutation sentence since PERS had repentance and received two awards during his sentence. | neutral |
train_94366 | On one hand, the sections in the prototype summary that are not highly related to the prototype document are the universal patternized words and should be emphasized when generating the new summary. | we investigate the influence of the iteration number when facts are extracted. | neutral |
train_94367 | While the space of programs is exponential, we observed that abstract programs which are instantiated into correct programs are not very complex in terms of the number of production rules used to generate them. | each column is encoded by averaging the embeddings of words under its column name. | neutral |
train_94368 | The structural constraint discussed above now corresponds to assuming that each span in a question can be aligned to a unique row or column slot. | the representation of a slot is contextually aware of the entire abstract program (Dong and Lapata, 2018). | neutral |
train_94369 | In this work, we focus on only the DM formalism. | then, the next source node index d u i+1 is the same as the target node the module points to. | neutral |
train_94370 | In the case of UCCA, we sort the children based on their UCCA node ID. | in Table 2, we also conduct ablation study on beam search to investigate contributions from the model architecture itself and the beam search algorithm. | neutral |
train_94371 | Scalars p gen , p enc and p dec act as a soft switch to control the production of target node label from different sources. | mST algorithms have to be used to search for a valid prediction. | neutral |
train_94372 | As suggested by Dozat and Manning (2016), we project v i and h t to a lower dimension for reducing the computation cost and avoiding the overfitting of the model. | we experiment with the following traversal variants: (1) random, which sorts the sibling nodes in completely random order. | neutral |
train_94373 | LN is the layer normalization (Ba et al., 2016) and h 0 t is always initialized with s 0 . | the top node, in our case p2, only received errors from the top node's softmax. | neutral |
train_94374 | 6 The reason is that the random order potentially produces a larger set of training pairs since each random order strategy can be considered as a different training pair. | if we adopt the Smatch-weighted metric, our method achieves a better score i.e. | neutral |
train_94375 | We define x i to be the vector at node i (in the example trigram, the We skip the standard derivative for Ws. | the combined strategy benefits from both worlds. | neutral |
train_94376 | Empirically, we find that computation time is manageable when limiting the application of s t (•) to candidates that share the same entities as the unlabeled utterance. | the starting point is a user who needs a semantic parser for some domain, but has no data. | neutral |
train_94377 | It assumes that if two entities have a relation in KGs, then all sentences mentioning the two entities express this relation. | • Reside (Vashishth et al., 2018). | neutral |
train_94378 | NER is an essential pre-processing step for many natural language processing (NLP) applications, such as relation extraction (Bunescu and Mooney, 2005), event extraction (Chen et al., 2015), question answering (Mollá et al., 2006) etc. | we thank the anonymous reviewers for their insightful comments. | neutral |
train_94379 | Specifically, on OntoNotes and MSRA, 'w/o T-graph' obtains worse performance than others, showing that Tgraph is important. | • We achieve the state-of-the-art results in various popular Chinese NER datasets, and our model achieves a 6-15x speedup over the existing SOTA model. | neutral |
train_94380 | We can clearly see that removing any graph causes obvious performance degradation, but the importance of different graphs varies from dataset to dataset. | for instance, " ¬: : ::" (Beijing Airport) and ": : ::" (Airport) are the self-matched words of the character ": : :" (airplane). | neutral |
train_94381 | Compared with lattice LSTM (Zhang and Yang, 2018), Our model gains a 0.91% improvement in F1 score. | word information is very useful in Chinese NER, because word boundaries are usually the same as named entity boundaries. | neutral |
train_94382 | This work has been supported in part by National Science Foundation SMA 18-29268, DARPA MCS and GAILA, IARPA BETTER, Schmidt Family Foundation, Amazon Faculty Award, Google Research Award, Snapchat Gift and JP Morgan AI Research Award. | we found that, by explicitly adapting the model along label distribution shift, consistent improvements can be achieved on distant supervision but not on human annotations. | neutral |
train_94383 | We choose MLP as the interaction function in our DGLSTM-CRF according to performance on the development set. | how to make good use of the rich relational information as well as complex long-distance interactions among words as conveyed by the complete dependency structures for improved NER remains a research question to be answered. | neutral |
train_94384 | (2017)) yields the following estimator: Examining the expression above, we can see that for a fixed value of a, the numerator of the of the ratio grows with the frequency of w i in either language L 0 or L 1 . | these models did not achieve any substantial performance gain to justify their additional complexity. | neutral |
train_94385 | Evaluations on multiple text classification tasks show that ProSeqo significantly improved accuracy compared to state-of-the-art on-device neural network SGNN (Ravi and Kozareva, 2018) on short text with +3.4% for MRDA, +4.5% on SNIPS, +5.3% on SWDA and +8.9% on ATIS; and long documents with +23% on Amazon product reviews, +33.9% on AG news, +35.9% on Y!A. | for ATIS and SNIPs, the most recent state-of-art approaches use joint intent and slot prediction model (Hakkani-Tur et al., 2016;Liu and Lane, 2016;Goo et al., 2018), where the slot model recognizes the enti- AG Y!A AMZN ProSeqo (our on-device model) 91.5 72.4 62.3 SGNN (Ravi and Kozareva, 2018)(on-device) 57.6 36.5 39.3 fastText-full (Joulin et al., 2016) 92.5 72.3 60.2 CharCNNLargeWithThesau. | neutral |
train_94386 | (Liu and Lane, 2016) showed that joint intent and slot model improves upon the individual ones. | we noticed that the test set of the Y!A dataset in (Yang et al., 2016) was smaller compared to the data used in (Zhang et al., 2015;Joulin et al., 2016) and our experiments. | neutral |
train_94387 | However, this sample-wise comparison may be severely disturbed by the various expressions in the same class. | we choose it as an interaction function in this paper. | neutral |
train_94388 | In the distance metric learning models (Matching Networks, Prototypical Networks, Graph Network and Relation Network), all the learning occurs in representing features and measuring distances at the sample-wise level. | we aim to perform meta-learning on the training set, and extract transferable knowledge that will allow us to deliver better few-shot learning on the support set and thus classify the test set more accurately. | neutral |
train_94389 | 1 1 https://cogcomp.seas.upenn.edu/page/ publication_view/883 The plague in Mongolia, occurring last week, has caused more than a thousand isolation Supervised text classification has achieved great success in the past decades due to the availability of rich training data and deep learning techniques. | unfortunately, some classes are still challenging, such as "evacuation", "infrastructure", and "regime change". | neutral |
train_94390 | Then those explanations are parsed into formal constraints which are further combined with unlabeled data to yield new label oriented classifiers through posterior regularization. | in this work, we only tried to make use of the class names and their definitheir probabilities after softmax, then do softmax to get new probabilities. | neutral |
train_94391 | We expect that the constrained models should have accuracies at least on par with the baseline (though one of the key points of this paper is that accuracy by itself is not a comprehensive metric). | our goal is to build models that minimize inconsistency with domain knowledge stated in logic. | neutral |
train_94392 | When label supervision is limited (i.e. | the λ for M dataset can be much higher. | neutral |
train_94393 | The example in §1 is an allowed label assignment. | with t-norms to relax logic, we can systematically convert rules as in (1) into differentiable functions, which in turn serve as learning objectives to minimize constraint violations. | neutral |
train_94394 | It produces undesirable outcomes: the encoder yields meaningless posteriors that are very close to the prior, while the decoder tends to ignore the latent codes in generation (Bowman et al., 2015). | the joint distribution is typically assumed as a standard multivariate Gaussian. | neutral |
train_94395 | Recently, researchers have also extended this method to unsupervised learn-ing (Hsu et al., 2019) and online learning scenarios . | we propose to learn lowrank embeddings of sentences by tensor decomposition to capture their contextual semantic similarity, which works well regardless of the size of documents (Hosseinimotlagh and Papalexakis, 2018). | neutral |
train_94396 | On CPU, InferSent is about 65% faster than SBERT. | we compute average GloVe embeddings using a simple for-loop with python dictionary lookups and NumPy. | neutral |
train_94397 | These two options are also provided by the popular bert-as-a-service-repository 3 . | sentEval fits a logistic regression classifier to the sentence embeddings. | neutral |
train_94398 | Let us consider the AUC risk, i.e., bipartite ranking risk (Narasimhan and Agarwal, 2013): Figure 1: An overview of the framework. | we also provide the results with varying thresholds and heuristic thresholds in Appendix C, where the trends of performance for each method do not differ much from Table 4. | neutral |
train_94399 | Unlike supervisedlearning, where we have positive and negative data, the threshold information is provided from labels and we can draw the decision boundary accordingly. | let sym be a symmetric loss such that sym (z) + sym (−z) = K, where K is a positive constant. | neutral |
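
The rows above are pipe-delimited with a trailing separator. As an illustration only, the sketch below parses rows in that layout into simple id / sentence1 / sentence2 / label records. The file path `train.txt`, the `NLIPair` record name, and the assumption that sentences contain no literal `|` characters are hypothetical and not part of the dataset card.

```python
from dataclasses import dataclass
from typing import Iterator, TextIO


@dataclass
class NLIPair:
    pair_id: str    # e.g. "train_94300"
    sentence1: str  # premise-like sentence
    sentence2: str  # hypothesis-like sentence
    label: str      # one of the 4 label classes, e.g. "neutral"


def iter_pairs(handle: TextIO) -> Iterator[NLIPair]:
    """Yield NLIPair records from a pipe-delimited dump like the table above."""
    for line in handle:
        line = line.strip()
        # Skip blank lines and the markdown-style header/separator rows.
        if not line or line.startswith(("id", "---")):
            continue
        # Each data row has four fields followed by a trailing "|".
        # Note: this naive split assumes sentences contain no literal "|".
        fields = [field.strip() for field in line.split("|")]
        if len(fields) < 4:
            continue  # ignore malformed rows
        yield NLIPair(pair_id=fields[0], sentence1=fields[1],
                      sentence2=fields[2], label=fields[3])


if __name__ == "__main__":
    with open("train.txt", encoding="utf-8") as handle:  # hypothetical path
        for pair in iter_pairs(handle):
            print(pair.pair_id, pair.label)
```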