column      type            length / values
id          stringlengths   7 – 12
sentence1   stringlengths   6 – 1.27k
sentence2   stringlengths   6 – 926
label       stringclasses   4 values
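The schema above describes four fields per record: an id, a sentence pair, and one of four label classes (only "neutral" appears in the rows excerpted here). Below is a minimal sketch of how such records might be inspected, assuming the split has been exported as one JSON object per line; the file name "train.jsonl" and the loading code are illustrative assumptions, not part of any official loader for this dataset.

```python
# Minimal sketch, assuming the split was exported as JSON Lines with the
# four fields listed in the schema above (id, sentence1, sentence2, label).
# The path "train.jsonl" is a hypothetical export location.
import json
from collections import Counter

label_counts = Counter()
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        # Each record carries a sentence pair and an NLI-style label.
        assert {"id", "sentence1", "sentence2", "label"} <= row.keys()
        label_counts[row["label"]] += 1

# The schema declares 4 label classes; the excerpt below shows only "neutral".
print(label_counts)
```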
train_94200
The input to the first autoencoder is the word embedding of the natural language.
we collect all the answers where they have a single code block and their corresponding questions are labeled with an SQL tag from 'Posts.xml'.
neutral
train_94201
An example question and four query substructures with highest prediction probabilities are shown in the top of Figure 4.
figure 3 depicts the framework of SubQG, which contains an offline training process and an online query generation process.
neutral
train_94202
Path Ranking Algorithm (PRA) is the first path-based reasoning method.
when the agent selects a valid action, Line 12 is executed.
neutral
train_94203
For each (h, r) pair, there is a ground truth t and about 10 generated false t. It is divided into a training set and a test set.
the pretrained embedding dimension is set to 100.
neutral
train_94204
Many links could be missing among the entities in a KG.
valid action means that there is an output relation connected with the current entity, while invalid action denotes that there is no relation.
neutral
train_94205
Given a knowledge graph G = (E, R, T), where E, R and T are the entity set, relation set and KG triple set, respectively, and a news text snippet for which entity linking has been performed to build the mentioned entity set L ⊂ E, the text-based knowledge graph updating task is to read the news snippet S and update T accordingly to get the final triple set T′ and the updated graph G′ = (E, R, T′).
this requires far more complicated updating rules.
neutral
train_94206
Finally, to accommodate the fixed input dimension of the discriminator D, we can simply select a subset P_e ⊆ Ω_E with the top-N occurrence frequency, where N is normally much smaller than |Ω_E|.
as for the longer paths, despite their potential utility values, they are more likely to contain worthless or even noisy inference steps, thus we learn them only in the training phase.
neutral
train_94207
Nevertheless, the evidential paths they used are gathered by random walks, which might be inferior and noisy.
automated reasoning on KGs has been a longstanding task for natural language processing.
neutral
train_94208
Finally, through such a trial-and-error process, the well-trained policy-based agent can be used to find evidential paths for predictions.
recently, DeepPath (Xiong et al., 2017) and MINERVA (Das et al., 2018) were proposed to address the problem above by using reinforcement learning, where they are committed to learning a policy which guides the agent to find more superior evidential paths to maximize the expected reward.
neutral
train_94209
The classical approach is to find the optimal reward function by inverse reinforcement learning (IRL) (Russell, 1998;Ng et al., 2000) to explain expert behaviors.
to encourage the agent to find more diverse evidential paths, it is desirable to train the agent by imitating each trajectory instead of each of its state-action pairs.
neutral
train_94210
Considering the model complexity, we also add the ℓ2-norm of all learnable parameters to the final loss function.
only using BiLSTM fusion without deep fusion, the accuracy drops by 2.1% on the test set of SciTail dataset.
neutral
train_94211
When one sentence learns attention to another sentence, the attention is performed between two parallel layers and oriented to the intermediate representations from the preceding layer of another one.
given the representations of P and Q, each cross attention P(t)→Q will use the original semantics H_Q^0 of Q for interaction, where t ∈ {1, ..., T} and t = 1 represents P using the original representation H_P^0.
neutral
train_94212
The first five models in Table 3 are all implemented in (Khot et al., 2018).
semantics to be paid attention are uncertain and unstable for matching because semantics are changed at different layers.
neutral
train_94213
Moreover, the methods stated above assume information from different paths between an entity pair only contributes to the relation inference linearly.
its simple structure has flaws in dealing with complicated relations like 1-to-N, N-to-1 and N-to-N.
neutral
train_94214
In addition, to show the effect of collaboration, we train a model variation with the parameters of the extractor frozen.
cPL denotes our proposed final model (with all the components).
neutral
train_94215
A straightforward solution for the OKGR problem is to directly add extracted facts (by a pre-trained relation extraction model) to the graph.
similarly, we train the extractor on the corpus labeled by distant supervision.
neutral
train_94216
It first uses PCNN-ATT to extract relational triples from the corpora, and augments KG with the triples whose prediction confidences are greater than a threshold.
the value of γ determines how the parameters are updated and to what extent the internal states will be influenced by the future.
neutral
train_94217
Inspired by the coattention mechanism, we propose an attention-based inferer (ABI) to estimate the mean and standard deviation of a distribution belonging to p_θ(·) or q_φ(·) by capturing semantic interactions of input sequences.
variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generating diversified inferences (Bowman et al., 2015; Zhao et al., 2017).
neutral
train_94218
To simulate human reasoning process, ADIN is stacked with multiple asynchronous inference sub-layers, and each sub-layer consists of two inferential modules in an asymmetrical manner.
furthermore, we study the effect of two inferential modules in one asynchronous inference sub-layer.
neutral
train_94219
In this paper, we propose an asynchronous deep interaction network (ADIN) for natural language inference.
different from a simple semantic matching task, reasoning should be asynchronous and fully interpretable (Yi et al., 2018).
neutral
train_94220
This may be due to many reasons, such as more typos and emojis within the original reviews, and so forth.
40%,60% great pizza , large slices !
neutral
train_94221
This is useful for reasons such as marketing, overall enjoyment of interaction, and mental health therapy.
sTE can adjust text semantics (e.g.
neutral
train_94222
(1 -very negative, 3 -neutral, 5 -very positive) Each participant evaluates every piece of text.
nouns typically carry more, and phrases the most (since they consist of multiple words).
neutral
train_94223
Subsequently, Vaswani et al.
because English and French share many similarities, both KECG and baselines get satisfactory results on DBP15K FR-EN.
neutral
train_94224
We find probes on ELMo2 to be strikingly more selective than those on ELMo1, consistent across all probes, both for part-of-speech tagging and dependency head prediction.
they take this as evidence that gains in part-of-speech probing accuracy on the trained representations over the untrained representations are due to linguistic properties, not memorization.
neutral
train_94225
Constraining the number of training examples is effective for part-of-speech, suggesting that learning each linguistic task requires fewer samples than our control task.
(2017) defined completely random tasks related to Rademacher complexity (Bartlett and Mendelson, 2001) to understand the capacity of neural networks to overfit, showing that they are expressive enough to fit random noise, but still function as effective models.
neutral
train_94226
The IB method introduces the new random variable T, the tag sequence that compresses X, by defining the conditional distribution p_θ(t | x).
the Maryland Advanced Research Computing Center provided computing facilities.
neutral
train_94227
, x_n, it is possible that x_i contains not only information about word i but also information describing word i + 1, say, or the syntactic constructions in the vicinity of word i.
Table 1: Statistics of the datasets used in this paper.
neutral
train_94228
We encode each gold-segmented sentence in our treebank via the ELMo model for that language, which yields a tensor S_ELMo ∈ R^(N×L×D), where N is the number of words in the sentence, L = 3 is the number of ELMo layers, and D = 1024 is the ELMo vector dimensionality.
a closer look at the breakdown per language reveals that this picture is slightly distorted by different sentence length distributions in different languages.
neutral
train_94229
The scoring is based on the top three words on the stack and the first word of the buffer, and the input to the MLP includes the BiLSTM vectors for these words as well as their leftmost and rightmost dependents (up to 12 words in total).
one of the most striking results, however, is that both parsers improve their accuracy on longer sentences, with some models for some languages in fact being more accurate on medium-length sentences than on shorter sentences.
neutral
train_94230
Semantic parsing is the task of mapping natural language to machine interpretable meaning representations, which in turn can be expressed in many different formalisms, including lambda calculus (Montague, 1973), dependency-based compositional semantics (Liang et al., 2011), frame semantics (Baker et al., 1998), abstract meaning representations (AMR; Banarescu et al.
we also conducted various ablation studies to examine the contribution of individual features to crosslinguistic semantic parsing.
neutral
train_94231
By leveraging a multilingual BERT self-attention model pretrained on 104 languages, we found that fine-tuning it on all datasets concatenated together with simple softmax classifiers for each UD task can meet or exceed state-of-the-art UPOS, UFeats, Lemmas, (and especially) UAS, and LAS scores, without requiring any recurrent or language-specific components.
we performed a high-level visual analysis of the BERT attention weights to see if they have changed on any discernible level.
neutral
train_94232
We display an average of scores over all (89) treebanks with a training set.
an interesting observation is that for Lemmas and UFeats, the classifier prefers to also incorporate the information of the first 3 layers.
neutral
train_94233
The OpenBookQA dataset provides the core science fact used to create the question.
collecting knowledge gaps for such questions and common-sense knowledge to capture these gaps are interesting directions of future research.
neutral
train_94234
It is worth noting that recent large-scale language models (LMs) (Devlin et al., 2019;Radford et al., 2018) have now been applied on this task, leading to improved state-of-the-art results (Sun et al., 2018;Banerjee et al., 2019;Pan et al., 2019).
the baseline models were developed for this dataset using hyper-parameter tuning; we do not perform any additional tuning.
neutral
train_94235
We present the test accuracies on the two question sets in Table 4 (test accuracy on the OBQA-Short subset and the OBQA-Full dataset, assuming the core fact is given).
we take a different approach of using this background knowledge in an explicit inference step (i.e.
neutral
train_94236
No Relation Score: We ignore the entire relation-based score (score r ) in the model and only rely on the fact-relevance score.
these tables are expensive to create and these questions often need multiple hops (sometimes up to 16 (Jansen et al., 2018)), making reasoning much more complex.
neutral
train_94237
The grounded schema graphs are usually much more complicated and noisier, unlike the ideal case shown in the figure.
for each question concept c_i ∈ C_q and answer concept c_j ∈ C_a, we can efficiently find paths between them that are shorter than k concepts.
neutral
train_94238
Due to this property of SQuAD, translating the triple (p, q, a) is non-trivial: when p and a are translated into the target language as p_t and a_t, a_t ⊆ p_t may no longer hold.
due to the commonality of BidAF and BERT, taking question and passage as input and generating passage representations for answer span prediction, we can apply BERT to our approach, as shown in Figure 2 (Center), by modifying Refinery module of modeling confidence score.
neutral
train_94239
Therefore, we need to find the answer span a_t in the target language by matching with a_s in the source language s. We overviewed a high-precision baseline where a_s and p_s are independently translated, and a_t is found only when the translation of a_s is exactly and consecutively found in that of p_s.
our method is extensible to any language with an existing open-source NMT, leveraging only its attention scores (Bahdanau et al., 2014).
neutral
train_94240
This material is based upon work supported by the National Science Foundation under Grant No.
two sentences within paradigm form a minimal pair if they differ in one feature (licensor, NPI, or scope), but have different acceptability.
neutral
train_94241
For example, a popular view states that NPIs are licensed if and only if they occur in downward entailing (DE) environments (Fauconnier, 1975;Ladusaw, 1979), i.e.
a result showing generally good performance for a model should be regarded as possibly hiding actual differences in performance that a different task would reveal.
neutral
train_94242
For the regularizer in eq.
(2017) with the aid of a bilingual lexicon.
neutral
train_94243
However, even F still has d^2 entries, which is far too many to be practical.
let t ∈ [0, 1]^d be a vector of typological features for language ℓ ∈ D_t ∪ D_e.
neutral
train_94244
We finetune a BERT model on the IMHO+context dataset described in Section 3.2 using both the masked language modeling and next sentence prediction objectives.
we assume we have ADUs and predict components and relations at that level.
neutral
train_94245
On inspection, this model is mostly predicting SUPPORTED for most adversarial instances.
it has been observed in related NLP tasks that as models become more complex, it is difficult to fully understand and characterize their behaviour (Samek et al., 2017).
neutral
train_94246
Miculicich Werlen and Popescu-Belis (2017) collect such a list for English-French for the pronouns it and they.
generated noisy example 2: He was creative, generous, funny, loving and talented, and we will miss him dearly.
neutral
train_94247
We use zero-padding (shown as shaded boxes) to make P_r and P_s fixed-length.
it is available for each source language for which English translations are generated in the WMT tasks, e.g., Czech, French, German, etc.
neutral
train_94248
P_s) as query vectors, the rows of K_r (resp.
the focus was on cross-lingual pronoun prediction, which required choosing the correct pronouns in the context of an existing translation, i.e., this was not a realistic translation task.
neutral
train_94249
We also confirmed that the multilingual redundancy of Wikipedia was effective: using more languages led to significantly better performances.
such causalities satisfy the above three desiderata.
neutral
train_94250
Figure 5 (a) shows the effect of travel status on review summarization.
model Testing: At test time, we use beam search with a beam size of 4.
neutral
train_94251
To perform the aspect-level evaluation, we first define seven aspects: location, service, room, value, facility, food, and hotel, where hotel describes the overall attitude.
extensive experiments on TripAtt show that ASN achieves state-of-the-art performance on review summarization.
neutral
train_94252
Therefore, we take gender, age and travel status as our attribute information, and collect nearly 3 million attribute-review-summary triplets.
1 and 2 show strategies based on attribute embedding, and represent the Attribute Selection strategy and the Attribute Prediction strategy, respectively.
neutral
train_94253
Second, users also explicitly label their travel status when booking hotels, such as traveled solo or traveled on business.
the best performance is obtained when the attribute-specific vocabulary size is 100, which is why we set our attribute-vocabulary size to 100.
neutral
train_94254
Furthermore, they find that the Average Word Embedding sentence encoder works at least as well as encoders based on CNN and RNN.
adding a representation of the whole document (i.e.
neutral
train_94255
(ii) We test our method on the Pubmed and arXiv datasets and results appear to support our goal of effectively summarizing long documents.
more recently, inspired by the success of LSTM-minus in both dependency and constituency parsing, Liu and Lapata (2017) extend the technique to discourse parsing.
neutral
train_94256
Specifically, a gate p_gen ∈ [0, 1] is introduced to switch between copy mode and generation mode.
besides relying on parallel data and text as in previous work, our model attends to relevant external knowledge, encoded as a temporary memory, and combines this knowledge with the context representation of data before generating words.
neutral
train_94257
This result demonstrates that the relevance of external knowledge and the sparseness of the original data are the main factors affecting system performance. (Available at https://github.com/hitercs/WikiInfo2Text)
in the references, nationality of a person is frequently mentioned.
neutral
train_94258
Note that the rewards usually reflect the quality of extracted summary and measured by standard evaluation protocol.
then an earlier and more detailed sentence about "fans rioting" is appended to the summary by performing re-reading.
neutral
train_94259
Inspired by the reading cognition of human beings, we propose HER, a two-stage method, to mimic how people extract summaries.
they still sequentially process text and tend to extract earlier sentences over later ones due to the sequential nature of selection (Dong et al., 2018).
neutral
train_94260
In the dynamically masking method, we order the weights from big to small at first, then go on masking two neighbors until the ratio between them is over a threshold.
the training objective is to let the relevant part attended by attention c contribute more to the summarization, while let the irrelevant or less relevant parts attended by attention o contribute less.
neutral
train_94261
Similar to Hermann et al.
inspired by the phrase-based translation models, Yao et al.
neutral
train_94262
The loss can be calculated as follows, where y^(1) and y^(2) are the outputs of the two tasks.
in addition, both MS and MT can significantly help to produce better summaries.
neutral
train_94263
We use the LDC MT dataset, which belongs to the news domain similar to our En2ZhSum, during the training of CLS+MT.
then we employ LexRank (Erkan and Radev, 2004), a strong and widely used unsupervised summarization method, to summarize the translated document.
neutral
train_94264
Although end-to-end deep learning has made great progress in natural language processing, no one has yet applied it to CLS due to the lack of large-scale supervised dataset.
in this work, we introduce a novel approach to directly address the lack of data.
neutral
train_94265
We also visualize in Figure 3 a comparison between Pointer-Gen+ARL-SEN and Pointer-Gen+RL-SEN according to how sensational the test set headlines are.
in this paper, we propose a model that generates sensational headlines without labeled data.
neutral
train_94266
How to be a county party secretary like JIAO Yulu?
the results on the test set show that the baseline classifier gets 60% accuracy, which is worse than the proposed classifier (which achieves 65%).
neutral
train_94267
And the color of orange/black further indicates the better model and its score.
the following shows an example of a RL model maximizing the sensationalism score by generating a very unnatural sentence, while its sensationalism scorer gave a very high score of 0.99996: 十个可穿戴产品的设计原 则这消息消息可惜说明 Ten design principles for wearable devices, this message message pity introduction.
neutral
train_94268
A headline generation model can be trained with MLE, RL or a combination of MLE and RL.
none of these work considered the sensationalism of generated outputs.
neutral
train_94269
The training model is also optimized in a way that adapts to different data, which is based on a novel method of distantly-supervised learning guided by reference summaries and testing set.
first, this graph provides a huge concept space with multi-word terms that cover concepts of worldly facts as concepts, instances, relationships, and values.
neutral
train_94270
The morphological realisation (MR) module consists in producing inflected word forms based on lemmas coupled with morphological features.
since for the system predictions we do not have a parse tree, we additionally record the distance between head and dependent (in the reference data) and we compare it with the distance between the same two items in the system output.
neutral
train_94271
For the exact match, most languages score above average (from 0.51 to 0.71).
the third line (WO+MR (S_-c)) shows the BLEU scores for our WO+MR model, i.e., when the MR model is applied to the output of the WO model.
neutral
train_94272
but keeps the words "hot" and "meat".
although our model needs the matched pseudoparallel corpus as a starting point, it has high tolerance to recover from occasional low-quality matches.
neutral
train_94273
This indicates that subsequent iterative refinement indeed allows for 30−50% low-quality pairs in the initial pseudo-parallel corpus.
the average difference gap between the calculated score, 52.63, and a perfect BLEU score, 100, shows that the five human rewritten sentences contain significant lexical differences.
neutral
train_94274
Additionally, both rewards require reference summaries.
we leave the learning of a generic summarisation evaluation metric for future work.
neutral
train_94275
The extractor, on the other hand, applies an actor-critic RL algorithm on top of a pointer network.
bERT+MLP+Pref significantly outperforms (p < 0.05) all the other models that do not use BERT+MLP, as well as the metrics that rely on reference summaries (see Table 1).
neutral
train_94276
Generating accurate and diverse paraphrases automatically is still very challenging.
to the best of our knowledge, this is the first work on using GAN with VAEs for paraphrase generation task.
neutral
train_94277
Backpropagation of gradients from the discriminative model to the generative model is difficult for sequence generative models.
our proposed adversarial training and two-path loss take an apparently positive effect on alleviating the exposure bias problem and generating diverse but realistic predictions.
neutral
train_94278
(2018) utilized results from pre-executed operations to improve the fidelity of generated texts.
since the record in the row is not sequential, we use a self-attention network which is similar to Liu and Lapata (2018) to model records in the context of other records in the same row.
neutral
train_94279
2019, we conducted the following human evaluation experiments at the same scale.
unlike the unordered nature of rows and columns, the history information is sequential.
neutral
train_94280
Following notations in Section 2.3, β_{t,i} ∝ exp(score(d_t, row_i)) obtains the attention weight with respect to each row.
we keep the most recent history information sequence within the history window and denote it as hist(r_{i,j}).
neutral
train_94281
The resultant summary consists of salient words or phrases that are selected by integer linear programming (ILP).
sTDs and sTDs/NC take both elements into consideration and perform much better than these baselines.
neutral
train_94282
In fact, abstractive approaches for MDS are more like words or phrases recombination.
many studies attempt to construct well connected graphs, e.g., group sentences into clusters (Wan and Yang, 2008;Banerjee et al., 2015) and construct dense features with distributed embeddings (Yasunaga et al., 2017).
neutral
train_94283
Recently, neural generative models have shown promising results in paraphrase generation.
all the data are manually labeled as to whether the two sentences convey identical meaning.
neutral
train_94284
We should lower the reward of that good output for other generators, but reducing the reward of Y* to zero will affect the stability of other generators.
rewards will be increased as the generators increase the generation probability of better sequences while decreasing the chances of worse sequence generation.
neutral
train_94285
Therefore, the word embeddings used as additional input play an important role in helping the model to deal with language information.
the complete model has slightly more parameters than the model without graph encoders (57.6M vs 61.7M).
neutral
train_94286
Abstractive summarization is a task to generate a short summary that captures the core meaning of the original text.
perfect consistency between attention weights occasionally disturbs the model to generate proper outputs.
neutral
train_94287
Table 1 shows the evaluation results with their consistency.
our proposed HCL with non-normalized attention weights can accurately compute this inconsistency, contrary to the HCL with normalized attention weights.
neutral
train_94288
Feedback comment generation in general is the task of generating feedback comments given a text (referred to as essay, hereafter).
then, each sentence is tokenized and put into lowercase.
neutral
train_94289
We find that our model can capture distant answer-relevant dependencies such as "mean temperature" while the proximity-based baseline model wrongly takes neighboring words of the answer like "coldest" in the generated question.
Table 5: Performance for the average relative distance between the answer fragment and other non-stop sentence words that also appear in the ground truth question (BLEU is the average over BLEU-1 to BLEU-4).
neutral
train_94290
We find that the performance drops at most 36% when the relative distance increases from "0 ∼ 10" to "> 10".
different from Heilman and Smith (2010) which directly use the simplified sentence for generation and Cao et al.
neutral
train_94291
In the first case, there are two subsequences in the input and the answer has no relation with the second subsequence.
to address this issue, we extract the structured answer-relevant relations from sentences and pro-pose a method to jointly model such structured relation and the unstructured sentence for question generation.
neutral
train_94292
Then they adopt an overgenerate-and-rank approach to select the best candidate considering several features.
they explicitly encode the relative distance between sentence words and the answer via position embedding and positionaware attention.
neutral
train_94293
This idea was followed by See et al.
we believe that especially in 5 http://www.statmt.org/wmt17/translation-task.html 6 http://opus.nlpl.eu/EMEA.php 7 Our baseline scores differ from those provided by the task organisers because of differences in truecasing strategies.
neutral
train_94294
Besides, the results indicate that our generated reviews have no statistical difference in terms of the helpfulness scores from those written by consumers towards certain products, with average helpfulness scores 3.10 and 3.03 for machinegenerated and real-world reviews respectively.
besides the statistical and semantic metrics, we also design an empirical study to test the personalized performance of our generated reviews.
neutral
train_94295
The proposed RevGAN model follows a three-staged process: In Stage 1, we propose to use Self-Attentive Recursive Autoencoder for mapping the discrete user reviews and product descriptions into continuous embeddings for the advantage of capturing the ''essence" of textual information and the convenience for subsequent optimization processes.
we condition the sentiment labels on the discriminator to artificially change the rules that the discriminator works, and force it backpropagate loss functions that update generator policy correspondingly.
neutral
train_94296
It is defined as the fraction of unique n-grams in the summary that are novel, normalized by the length ratio of the generated and reference summaries.
this shows the possible benefits that can be obtained by exposing the model to the evaluation data in unsupervised setups.
neutral
train_94297
(Paulus et al., 2017) combined supervised and reinforcement learning, demonstrating improvements over competing approaches both in terms of ROUGE and on human evaluation.
our experiments show that the best performing set of metrics consists of ROUGE-L in conjunction with QA_conf and QA_fscore, both computed at an article level, and hence unsupervised.
neutral
train_94298
We adopted the following automatic metrics.
results The annotation results in Table 4 show that our model significantly outperforms baselines in both metrics.
neutral
train_94299
Among the 1,000 generated texts, 79.0% of texts have scores above 4.
at time step t, the plan decoder makes a binary prediction for each input item by estimating P(d_i ∈ g_t | g_{<t}, x, z_p), where σ denotes the sigmoid function, h_i is the vector of input item d_i, and h_t^p is the hidden state of the plan decoder.
neutral