Column      Type            Values / length range
id          stringlengths   7 to 12 characters
sentence1   stringlengths   6 to 1.27k characters
sentence2   stringlengths   6 to 926 characters
label       stringclasses   4 values
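The records below follow this schema in flattened form: each example occupies four consecutive lines (id, sentence1, sentence2, label). The sketch below is a minimal, hypothetical illustration of how such a dump could be regrouped into typed records; the `Record` dataclass and `parse_records` helper are illustrative names rather than any released loader, and only the "neutral" class is attested in the rows shown here.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class Record:
    # Field names and length ranges mirror the schema above; the label column
    # has 4 classes, of which only "neutral" appears in the rows shown here.
    id: str         # e.g. "train_98300", 7-12 characters
    sentence1: str  # 6 to ~1.27k characters
    sentence2: str  # 6 to 926 characters
    label: str      # one of 4 string classes

def parse_records(lines: Iterable[str]) -> Iterator[Record]:
    """Group a flat stream of non-empty lines into 4-line records."""
    buffer: List[str] = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        buffer.append(line)
        if len(buffer) == 4:
            yield Record(*buffer)
            buffer.clear()

# Usage with the first record of the dump below.
if __name__ == "__main__":
    raw = [
        "train_98300",
        "(2015) exploit graph information for knowledge base completion.",
        "this also makes the model overfit on our data as the number of "
        "OpenIE text predicates observed with each entity pair is rather "
        "small (1.4 on average in our datasets).",
        "neutral",
    ]
    for record in parse_records(raw):
        print(record.id, record.label)
```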
train_98300
(2015) exploit graph information for knowledge base completion.
this also makes the model overfit on our data as the number of OpenIE text predicates observed with each entity pair is rather small (1.4 on average in our datasets).
neutral
train_98301
This suggests that M itself provides important information for recognizing entity types.
the corresponding PR curves can be found in the appendix ( Figure 5).
neutral
train_98302
As these type vectors are independent, the label correlations are only implicitly captured by sharing the model parameters that are used to extract f .
it achieves a 15.3% relative F1 improvement and also less inconsistency in the outputs.
neutral
train_98303
A.1 Performance on the whole testing data over time: The performance on the whole testing data over time is shown in Figure 3.
then we apply K-Means (the number of clusters equals the budget given to the specific task) to cluster all the samples of the task.
neutral
train_98304
Practically, with the growth of the number of tasks, it is difficult to store all the task data 1 .
we improve the basic EMR with two motivations: (1) previous lifelong learning approaches work on the parameter space.
neutral
train_98305
Therefore, in lifelong learning research, the learner is usually constrained on the memory size, denoted as a constant B.
we also show that the EMR outperforms GEM on many benchmarks, suggesting that it is likely to be among the top-performing lifelong learning algorithms, and it should not be ignored for comparison when developing new lifelong learning algorithms.
neutral
train_98306
• We propose an alignment model which aims to alleviate the catastrophic forgetting problem by slowing down the fast changes in the embedding space for lifelong learning.
they do not suit many lifelong settings in NLP.
neutral
train_98307
Moreover, sequence models such as LSTM cannot be applied to such long sequences.
this type of approach tends not to work well for NLP applications as the semantic concepts/classes in NLP are often more complex and cannot be easily described by a set of pre-defined attributes.
neutral
train_98308
This decrease is likely due to the fact that AIDA-light was last updated in 2014 while the CNN/DailyMail datasets contain articles collected until the end of April 2015.
the default approach we use is to concatenate the claim and context into a linear sequence of tokens during preprocessing (shown in Figure 4a).
neutral
train_98309
For the growth of any company or application it is necessary for the customer care agents to be cordial and amicable to the customer.
our reward function r(y), used for evaluating y against the gold standard output, is the weighted mean of the two terms: (i) BLEU metric m1: ensures the content matching between the reference and the decoded outputs.
neutral
train_98310
The second layer leverages the external knowledge to compute F k , which consists of pair-wise knowledge scores f k among all candidates n. To enhance the efficiency of the model, a softmax pruning module is applied to select high-confidence candidates into the second layer.
moreover, a softmax pruning module is placed in between the two layers to select high-confidence candidates.
neutral
train_98311
Several baselines are compared in this work.
the first layer encodes the contextual information for computing F c .
neutral
train_98312
For this knowledge, we create a knowledge base by counting how many times a predicate-argument tuple appears in a corpus and use the resulting number to represent the preference strength.
for End2end, we use their released code 13 and replace its mention detection component with gold mentions for the fair comparison.
neutral
train_98313
The second application of t 4 has n−1 choices of q, and the ith application of t 4 has n − (i − 1) choices.
different ways to consume the q states, each producing a unique DAG.
neutral
train_98314
(2015); Leviant and Reichart (2015); Mrkšić et al.
an example output for all three methods is shown in Table 1.
neutral
train_98315
We propose a general framework for learning subword-informed word representations that allows for easy experimentation with different segmentation and composition components, also including more advanced techniques based on position embeddings and self-attention.
sms is less useful for entity typing, where almost all best-performing configurations are based on morf (see also Figure 4).
neutral
train_98316
We notice that Sent2Vec trigrams model dominates the word-similarity tasks as well as the semantic analogy tasks.
the reported results are given as mean and standard deviation for those five models.
neutral
train_98317
We show that the addition of MIL regularizers for generating explanations using thresholded attention improved precision and recall hypothesis explanations.
this is achieved by penalizing the smallest nonzero attention weight, which has the effect of encouraging at least one weight to be close to zero.
neutral
train_98318
This behaviour is also more pronounced in the premise sentence highlights rather than the hypothesis.
similar improvements were not realized for the premise sentence.
neutral
train_98319
Furthermore, the mean score for our system (79.2) is close to the mean of the best performing models (81.0), which are different systems, while using simpler features and learning algorithm.
in this paper, we are interested in both monolingual and cross-lingual CWI; in the latter, we build models to make predictions for languages not seen during training.
neutral
train_98320
The embedding based approaches suffer from accumulation of the embedding and training error (Balasubramanian and Lebanon, 2012); in the proposed approach, however, we have removed the embedding step and considered the training error minimization at the label subset selection step.
in (Bhatia et al., 2015b), the authors perform local embedding of the label vectors.
neutral
train_98321
The content of web pages was represented using the Boolean bag-ofwords model.
for the Bibtex dataset, our proposed method is competitive with the best results, and for Eurlex and Wiki10-31k, our method is substantially better than both SLEEC and FastXML, a notable achievement for a single model approach.
neutral
train_98322
• Relation extraction within instances is equal to the query "what is the relation between head and tail at time spot t i ?".
suppose we are to predict their relation after their divorce.
neutral
train_98323
The models in previous work (Zeng et al., 2015;Lin et al., 2016;Luo et al., 2017) generally include two parts, encoding and fusion.
for NYT-10 experiments, we adopt a single query without temporal encoding to compare results with other baseline methods since the dataset only contains one label for each mention set.
neutral
train_98324
Experiments and results, which demonstrate the performance of our framework, are presented in section 3.
our proposed framework leverages not only the word embeddings but also other semantic knowledge.
neutral
train_98325
In particular, given the word pair (w, w′) and their provided contexts (c, c′), we define: Following the notation used in 3.2, K is the number of topics returned by the trained LDA model, x j is the word embedding trained on the subcorpus corresponding to the j-th topic after being projected to the unified vector space, p(j|w, c) denotes the posterior probability of topic j returned by LDA given as input the context c of word w, d denotes the cosine similarity between the two input representations, and finally x̄(w) = u argmax 1≤j≤K p(j|w,c) (w) is the vector representation of word w that corresponds to the topic with the maximum posterior for c. Intuitively, a higher score in MaxSimC indicates the existence of more robust multi-topic word representations.
the topic model splits the corpus into K (possibly overlapping) subcorpora.
neutral
train_98326
Afterward, following a soft clustering scheme each sentence is included in a topic-specific corpus when the posterior probability for the corresponding topic exceeds a predefined threshold.
this method is restricted to languages where such lexical resources exist and depends on the lexical coverage and quality of such resources.
neutral
train_98327
Interestingly, for these points the predicted scores for the baseline word vectors were nearly correct, and retrofitting pushed them to overpredict similarity.
interestingly, although Bruni et al.
neutral
train_98328
w rand is a random token in the sentence that is not the head of w mod .
we also see that the OpenAI transformer significantly underperforms the ELMo models and BERT.
neutral
train_98329
While it may be surprising that GloVe initialization is harmful, it is well known that pretrained word embeddings do not necessarily capture syntactic relationships (as evident in the poor performance of k-means clustering).
given random word y ∈ V , we set x ∈ V 2H to be an ordered list of H left and H right words of y.
neutral
train_98330
Thus we focus on the numerator term. By the product rule, its gradient with respect to q is a sum of two terms.
x N ), it assumes a uniform distribution over consecutive word pairs (x i−1 , x i ) and optimizes the following empirical objective, where #(c, c′) denotes the number of occurrences of the cluster pair (c, c′) under C. While this optimization is intractable, Brown et al.
neutral
train_98331
Each objective involves entropy estimation which is a nonlinear function of all data and does not decompose over individual instances.
we investigate two training objectives.
neutral
train_98332
This in turn means that J var = H(Z) − H(q, p) is a lower bound on I(X, Z), hence the name.
the first term is (using (10) again) the second term is (as a sum over batches) In contrast, the numerator term estimated as an average over minibatches is and the two terms of its gradient with respect to q (corresponding to (12) and (13)). The difference between (12) and (14) is where the difference between (13) and (15) is Adding these differences gives the second result.
neutral
train_98333
2018, whose hyperparameters were tuned by us.
in the unsupervised case, the standard approach is to maximize the log marginal likelihood; this summation is intractable because z t fully depends on all previous actions [z 1 , . . . , z t−1 ].
neutral
train_98334
We also look at ELMo's output when given the entire sentence.
the formula for the compatibility function (and its normalized form e) is defined as follows: Where the bilinear projection φ is defined as: For the composition function a we used either a Tree-LSTM (Tai et al., 2015) or a 2-layer MLP (see Appendix A.2 for more precise definitions on both methods).
neutral
train_98335
It contains text and named entity labels in English, Spanish, German and Dutch.
the table lists the F1 score for each entity type, as well as the overall F1 score.
neutral
train_98336
We can observe a noticeable gap between unsupervised and supervised methods, but the gap is narrowing as the rank increases.
unsupervised NMT: The current NMT systems (Sutskever et al., 2014; Cho et al., 2014a; Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) are known to easily overfit and result in an inferior performance when the training data is limited (Koehn and Knowles, 2017; Isabelle et al., 2017; Sennrich, 2017).
neutral
train_98337
Although the M may contain potential parallel sentences t′ for s, we cannot directly use (s, t′) as ground-truth sentence pairs to train the translation model V s→t because the NMT system is sensitive to noise (Cho et al., 2014a; Cheng et al., 2018).
note that (1) all the encoders share the same parameters (same for decoders); (2) the decoding processes are nondifferentiable, so the language modeling loss and the comparative translation loss are used to train the learning modules before and after the decoding processes, respectively.
neutral
train_98338
The Effect of Extraction Number k As shown in Table 1, the number k of the extracted-and-edited sentences plays a vital role in our approach.
recent neural-based methods (Chu et al., 2016;Grover and Mitra, 2017;Grégoire and Langlais, 2018) learn to identify parallel sentences in the semantic spaces.
neutral
train_98339
Based on the semantic information of the source sentence s, we can further improve the extracted results with this editing mechanism.
owing to the shared encoders and decoders in language modeling, the semantic spaces of two languages are already strongly connected in our scenario.
neutral
train_98340
(2019) and propose to aggregate the multi-layer representations, and Dehghani et al.
especially, among these probing tasks, 'TrDep' and 'Toco' tasks are related to syntactic structure modeling.
neutral
train_98341
Figure 4 shows the LCR curves for MultiWoz, with a trend similar to the previous section: the word-level models can only achieve task reward improvement by sacrificing their response decoder PPL.
DealOrNoDeal is a negotiation dataset that contains 5805 dialogs based on 2236 unique scenarios.
neutral
train_98342
The basic idea is to use an ROC-style curve to visualize the tradeoff between achieving higher reward and being faithful to human language.
(2) to our best knowledge, our work is the first comprehensive study of the use of latent variables for RL policy optimization in dialog systems.
neutral
train_98343
Defining action spaces for conversational agents and optimizing their decision-making process with reinforcement learning is an enduring challenge.
the word-level RL's success rate also improves to 79%, but the generated responses completely deviate from natural language, increasing perplexity from 3.98 to 17.11 and dropping BLEU from 18.9 to 1.4.
neutral
train_98344
Now as we move along each direction, we can see our model gradually transforms the response toward the corresponding responses of each direction.
this essentially makes the inference process simple and robust in that one can choose arbitrary directions to generate diverse responses.
neutral
train_98345
• It's a it's a luxury • Well that's interesting.
re-ranking can be extremely 1 An implementation of our model is available at https://github.com/golsun/SpaceFusion 2 For simplicity, we omitted the response at the center: "I would love to play this game".
neutral
train_98346
(Choi et al., 2018) 10.0m 93.1 86.0 600D Residual stacked enc.
given a sentence, the goal of SRL is to identify the arguments of each target verb into semantic roles, which can benefit many downstream NLP tasks.
neutral
train_98347
6 The annotators were not provided with knowledge from any external lexical resource (such as WordNet).
we used two simple binary classifiers in our experiments on top of all comparison systems (except for the LSTM baseline).
neutral
train_98348
• (2.7) Our main contribution is the introduction of ### without requiring either supervision or feature engineering.
we do so in order to analyze and disentangle the sources of review updates during the rebuttal stage.
neutral
train_98349
• We conduct t-test and get the p value as ###, which shows good agreement.
• (2.6) In response to your general remark: we can see how our discussion and conclusions would lead a reader to conclude that; rather, this paper is an exploration in an area that is, as you say, worth exploring.
neutral
train_98350
BERT (Devlin et al., 2018) fine-tuned on QQP achieves over 90% accuracy on QQP, but only 33% accuracy on PAWS data in the same domain.
in our experiments, we show the limitations of this modeling choice on PAWS pairs.
neutral
train_98351
In 1953, the team also toured in Australia.
models that do not capture non-local contextual information fail even with PAWS training examples.
neutral
train_98352
Table 1 summarizes the properties of these corpora.
Transformer and LSTM outperform all the other models in the highest and the lowest error-rated corpora, respectively.
neutral
train_98353
The original ICNALE is not error annotated.
for example, the performance of the Transformer ranges from an F0.5 score as low as 36.20 on CoNLL-2013 to as high as 60.06 on JFLEG.
neutral
train_98354
The following factors are considered while selecting our model.
single-corpus evaluation is deemed weak regardless of its diversity.
neutral
train_98355
To reduce model complexity, we replace the fully-connected structure with a star-shaped topology, in which every two non-adjacent nodes are connected through a shared relay node.
the maximum dependency path length of Star-Transformer is O(1) with a constant two via the relay node.
neutral
train_98356
During training, the encoder and the classifier play a co-operative game, while the encoder and the discriminator play an adversarial game.
instead of classifying sentences independently with a Softmax layer, our second method is to model them jointly with a CRF layer (Lafferty et al., 2001).
neutral
train_98357
This requirement is again shared by § 185 to § 187 StGB.
the independent classification shows already that the amount of data is insufficient.
neutral
train_98358
a standard form available in a dictionary.
the results suggest that our method works well for detecting cyber security events from noisy short text.
neutral
train_98359
A corpus of 6,000 tweets describing software vulnerabilities is annotated with authors' opinions toward their severity.
experts mark it as of medium severity.
neutral
train_98360
As such, the performance of all the considered models on this dataset is lower than that on the Twitter and Weibo datasets.
each feature type is processed by a feature branch with a number of fully-connected (FC) layers.
neutral
train_98361
By integrating these hidden layers on top of a deep network, which produces the MRF potentials, we obtain our deep MRF model for fake news detection.
we select T = 5 as it produces the best trade-off between accuracy and computational complexity.
neutral
train_98362
By sharing parameters, the networks regularize each other, and the network for one task can benefit from repre- sentations induced for the others.
gold Adv MTL LSTM Sentence (1) 5 5 5 7 But, star gazer, we had guns then when the Constitution was written and enshrined in the BOR and now incorporated into th 14th Civil Rights Amendment.
neutral
train_98363
n}, and their average vector, s = 1/n Σ i w i , s ∈ R d , as follows: Figures 1(a)-1(c) show strong positive correlation between divergence and document length show macro F1 performance of the deep averaging network developed in (Joulin et al., 2017) across datasets: performance considerably drops for higher values of information loss/divergence, e.g.
they can cause efficiency issues because of their large size (d × k); note that the typical value for embedding dimension d is 300 (Pennington et al., 2014;Mikolov et al., 2013).
neutral
train_98364
All NOT -0.00 0.00 0.72 1.00 0.84 0.52 0.72 0.
although 'he is' was poor in offensive content in the trial dataset (15%), we kept it as a keyword in order to avoid gender bias, and we found that in the full dataset it was more offensive (32.4%).
neutral
train_98365
It includes offensive tweets targeted at organizations, situations, events, etc., thus making it more challenging for models to learn discriminative properties for this class.
to the best of our knowledge, no prior work has explored the target of the offensive language, which might be important in many scenarios, e.g., when studying hate speech with respect to a specific target.
neutral
train_98366
Figure 2 (A) shows a Tree-LSTM unit.
in order to show the effect of KB concept embeddings, we visualize the probabilities of word transcription to be predicted for each event type.
neutral
train_98367
Micro Table 5: Comparison with different traditional classifiers in AD:Control classification task.
next, we do two ablation studies on the arrangements of modalities.
neutral
train_98368
As an example, a recent RE method called BRAN (Verga et al., 2018) proposed to use encoder of Transformer (Vaswani et al., 2017) for obtaining token representations and then used these representations for RE.
we experimented with different ablations of BRAN and noticed an improvement in results for the DCN dataset upon removing the multi-head self-attention layer.
neutral
train_98369
The specifications of these source data sets are given in Table 1.
source data set Yelp is more similar to CADEC than PubMed and MIMIC from both PPL and WVV perspectives.
neutral
train_98370
Results from all models outperform the baseline SVR model: Pearson's 2 http://nlp.stanford.edu/data/glove.
given the results presented so far, mixing annotators may be beneficial given their respective trade-offs of precision and recall.
neutral
train_98371
Our results, in fact, point to a future direction which applies the edit-tree formalism, but alleviates the edit-tree explosion by exploiting the relationships between the edit-tree classes potentially using representation learning methods.
12 The results of this experiment are reported in Table 5.
neutral
train_98372
Comparing encoder representations to decoder representations, it is interesting to see that in several cases the decoder side representations performed better than the encoder side ones, even though the former were trained using a uni-directional LSTM.
we trained and evaluated the semantic and the syntactic classifiers on existing annotated corpora.
neutral
train_98373
The transition is the multiplicative attention function with h (enc) and h (dec) j as input.
figure 3: We present performance (in accuracy) averaged over the 20 languages from UD we consider.
neutral
train_98374
However, in the low-resource case, we would expect direct supervision on the sort of features we desire to extract to work better.
the gap between the performance of our model and Lematus-ch20 is larger when fewer training data are available, especially for ambiguous tokens.
neutral
train_98375
Each individual word is denoted as w i .
the task is quite nuanced as the proper choice of the lemma is context dependent.
neutral
train_98376
This is in line with previous work which shows that the linguistic nature of the target task is important when predicting typological features with language embeddings (Bjerva and Augenstein, 2018a).
unlike principles, parameters are the parts of linguistic structure that are allowed to vary.
neutral
train_98377
Word clusters distinguish themselves from word embedding models by their ability to learn from little data (Bansal et al., 2014;Qu et al., 2015); for example, in cases like (Bansal et al., 2014), word clusters outperform other kinds of representations, including word embeddings.
word clusters differentiate themselves from word embeddings by requiring estimation of many fewer parameters, and by their ability to derive qualitative representations from smaller corpora (Qu et al., 2015;Bansal et al., 2014).
neutral
train_98378
Since the vocabulary size is fixed, as the number of clusters k increases, purity can increase even if Where MI and H stand for Mutual Information and Entropy, respectively.
measuring on the Universal Dependencies corpora, we find that the percentage of polyclass words (i.e.
neutral
train_98379
In our experiments, purity measures the percentage of vocabulary that is labeled correctly.
for experiments on larger corpora, we use the unlabeled EuroParl corpus (Koehn, 2005).
neutral
train_98380
In Figure 3, we show PoS purity for clusters induced over Universal Dependencies (UD) corpora, where we consider all polyclass words as clustered incorrectly.
a value of, say 0.3, does not indicate that 30% of the points have been properly separated.
neutral
train_98381
We show the statistics in Table 5.
though many variations have been proposed, the underlying structures of multilingual topic models are similar.
neutral
train_98382
Interestingly, the H-distances of test documents are at a less ideal value, although they are slightly decreasing in most of the languages except AR.
multilingual topic models are generally extended from Latent Dirichlet Allocation (LDA; Blei et al., 2003).
neutral
train_98383
This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.
each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.
neutral
train_98384
McDonald and Nivre (2007) and McDonald and Nivre (2011) have shown that history-based features enhance transition-based parsers as long as they do not suffer from error propagation.
this can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.
neutral
train_98385
Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.
Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.
neutral
train_98386
In this work, we propose a method for density matching for bilingual word embedding (DeMa-BWE).
(2) A strong and robust unsupervised self-learning method SLunsup from (Artetxe et al., 2018b).
neutral
train_98387
Layer 1 and 2 alignments are based on anchors from the first and second LSTM layer output respectively.
we emphasize that by constraining w s→t to be orthogonal, we also preserve relations between e i,c and ê i,c that represent the contextual information.
neutral
train_98388
4 is no longer well-defined, as there are many corresponding vectors to both the source and the target words.
4 Balto arrives , distracts the bear , saves Aleu , they both escape and the bear disappears .
neutral
train_98389
2 As Table 1 shows, the shift of these words is indeed slightly higher than it is for other words.
we thank the MIT NLP group and the reviewers for their helpful discussion and comments.
neutral
train_98390
Through reinforcement learning ERD is able to learn the minimum number of posts required to identify a rumour.
and lastly, CM is optimised by minimising the cost: We train CM and RDM in an alternating fashion, i.e.
neutral
train_98391
In this paper, we follow a commonly accepted definition of rumour, that it is an unverified statement, circulating from person to person and pertaining to an object, event, or issue of public concern and it is circulating without known authority for its truthfulness at the current time, but it may turn out to be true, or partly or entirely false; alternatively, it may also remain unresolved (Peterson and Gist, 1951;Zubiaga et al., 2018).
our aim is to identify rumours as early as possible, while keeping a reasonable detection accuracy.
neutral
train_98392
As expected, ICES contains neighbors of characters which are merely visually similar without representing the same underlying character (such as Λ as a neighbor of A, or ⅼ as a neighbor of i).
for instance, at p = 0.5, POS improves by about 20pp, while AT alone had an effect of only 12pp and the effect of CE was even negative.
neutral
train_98393
For this, we compute the difference in the performance decrease normalized by the test performance on the clean data.
vELMo embeddings better capture similarity between visually similar words.
neutral
train_98394
The compression c(x, y) becomes The challenge here is that choices are not made independently from one another.
we first represent each word using two different embedding channels.
neutral
train_98395
(2018) also examines diversity and quality (which they call precision and recall) in the context of generative image models.
Proof: The lower bound falls out of the definition of L*.
neutral
train_98396
Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).
to fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.
neutral
train_98397
We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).
the sentence embedding sent emb(s) for an utterance s is a Input: Do you go get coffee often Baseline Response: I do, when I am not playing the piano.
neutral
train_98398
Therefore, Goyal et al.
recently, energy based neural structured prediction models (Amos et al., 2016;Belanger and McCallum, 2016;Belanger et al., 2017) were proposed that define an energy function over candidate structured output space and use gradient based optimization to form predictions making the overall optimization search aware.
neutral
train_98399
Since, the local normalizer is easy to compute, likelihood maximization based training is a standard approach for training these models.
we demonstrate the effect of both sources of label bias through our experiments on two common sequence tasks: CCG supertagging and machine translation.
neutral