Columns:
id — string, length 7–12
sentence1 — string, length 6–1.27k
sentence2 — string, length 6–926
label — string, 4 classes
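The columns above can be read directly with the Hugging Face `datasets` library. The sketch below is a minimal, hypothetical loading example: the dataset path "your-org/this-dataset" is a placeholder, not the real identifier of this corpus.

    # Minimal sketch (assumes the Hugging Face `datasets` library is installed).
    # The dataset path below is a hypothetical placeholder, not this corpus's real ID.
    from collections import Counter
    from datasets import load_dataset

    ds = load_dataset("your-org/this-dataset", split="train")  # hypothetical path

    # Each record carries the four columns described above.
    row = ds[0]
    print(row["id"], row["label"])
    print(row["sentence1"][:80])
    print(row["sentence2"][:80])

    # Distribution over the 4 label classes (this excerpt lists only "neutral" rows).
    print(Counter(ds["label"]))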
train_91800
(2018) and Qiu et al.
figure 1 shows an example of feedback comments on preposition use.
neutral
train_91801
On the one hand, it would require a successful parsing to recognize the sources of the errors.
this limitation is particularly problematic for beginner to intermediate learners.
neutral
train_91802
In that case, the retrieved feedback comment will likely be inappropriate no matter how high its similarity is (in the above example, a feedback comment concerning in is applied to at).
we then tested a rule-based and two neural-based baselines on the dataset, showing that retrieval-based methods were promising.
neutral
train_91803
Actual examples are: I/People (don't) agree the/this statement/opinion/that/thinking and (more) harmful (not only) for anyone/people/smokers/nonsmokers.
they encounter the tremendous difficulty of covering a wide variety of errors and of maintaining a large set of rules.
neutral
train_91804
As shown in Figure 2, our criteria can select answer-relevant relations (waved in Figure 2), which is especially useful for sentences with extraneous information.
then we formalize the task under the encoder-decoder framework with gated attention and dual copy mechanism.
neutral
train_91805
Such capability improves the interpretability of our model because the model is given not only what to be asked (i.e., the answer) but also the related fact (i.e., the answer-relevant relation) to be covered in the question.
• Hybrid (Sun et al., 2018) is a hybrid model which considers the answer embedding for the question word generation and the position of context words for modeling the relative distance between the context words and the answer.
neutral
train_91806
In this paper, we proposed RevGAN that automatically generates personalized product reviews from product embeddings as opposed to labels, which could output results targeting specific products and users.
experimental results support that our proposed model indeed achieves significantly better generation results compared to the prior literature.
neutral
train_91807
The learning rates for generator and conditional discriminator are fixed at 5e-5 and 1e-5 respectively.
regarding long-text generation tasks, for all the models above, the computation complexity might be too high, thus failing to provide satisfying results.
neutral
train_91808
For (2) and (3), we set γ (Eq.
scores render the ability of the various metrics to reproduce human preferences (in terms of readability and relevance).
neutral
train_91809
Second, to capture the dependencies on $s_{<t}$, we explicitly model the dependencies among local latent variables by inputting $z^s_{t-1}$ to the sentence decoder, so that $z^s_t$ is conditioned on $z^s_{<t}$ and is expected to model smooth transitions in a long text.
generally speaking, to write a long text, a human writer first arranges contents and discourse structure (i.e., high-level planning) and then realizes the surface form of each individual part (low-level realization).
neutral
train_91810
At the time of writing this paper, these models are the top performing models on Yelp, Amazon and Captions, with the D&R model of showing state-of-the-art performance.
we ask annotators to rate each pair of generated sentences given the source sentence, on content preservation, style transfer strength, fluency, and overall success.
neutral
train_91811
1 Prime Min Bertie Ahern of Ireland calls for general election on May 24.
moreover, entity mentions connecting sentences have also been used to extract non-adjacent yet coherent sentences (Siddharthan et al., 2011;Parveen et al., 2016).
neutral
train_91812
We found that absolute grammaticality judgments were hard to achieve on Mechanical Turk; Turkers' ratings of grammaticality were very noisy and they did not consistently rate true article sentences above obviously noised variants.
we implement a deletion-based BiLSTM model for sentence compression (Wang et al., 2017) and run the model on top of our extraction output.
neutral
train_91813
where h i is a representation of the i-th sentence in the document.
we turn to other methods as described in the next two paragraphs.
neutral
train_91814
Details of the human evaluation instruction are included in Appendix A.3.
typically, an encoder encodes the input $x_i$ to a semantic representation $c_i$, while a decoder controls or modifies the stylistic property and decodes the sentence $x_i$ based on $c_i$ and the pre-specified style $l_i$.
neutral
train_91815
The auto-encoding reconstruction objective of the source domain is: where the encoder-decoder model (E, D) is shared in both domains.
we would like to thank the reviewers for their constructive comments.
neutral
train_91816
The test accuracy of all these classifiers used for evaluation are reported in Appendix A.1.
the corresponding objective can be written as: Figure 1: Illustration of the proposed DAST-C (left) and DAST (right) model.
neutral
train_91817
Over the past couple of years, several variants of the encodeattend-decode model have been proposed.
we observe that the increase in Q-Metric for refined questions is because the RefNet model can correct/add the relevant Named Entities in the question.
neutral
train_91818
Table 4 shows a comparison of existing and proposed summarization systems on the set of corpora in §5 except for Newsroom.
lin and Bilmes (2011) claim that many existing summarization systems are instances of mixtures of such sub-aspect functions; for example, maximum marginal relevance (MMR) (Carbonell and Goldstein, 1998) can be seen as a combination of diversity and importance functions.
neutral
train_91819
Social posts and news articles are biased to the position aspect while the other two aspects appear less relevant.
• AMI (Carletta et al., 2005): is documented meeting minutes from a hundred hours of recordings and their abstractive summaries.
neutral
train_91820
Alternative approaches for enrichment exists, of course, but we wonder how worthwhile further efforts would be.
we refer to linguistic literature to argue that proper nouns, having no lexical meaning but rather just a referential function, cannot reliably be used in the evaluation of word-level translation systems.
neutral
train_91821
In the original paper, early stopping is done by training for 50 epochs, and the best model regarding development accuracy is applied to the test set.
for this task, all languages are both development and target languages.
neutral
train_91822
For example, Beam-10 yields 15.9% fewer search errors (absolute) than greedy decoding (57.68% vs. 73.58%), but Beam-100 improves search only slightly (53.62% search errors) despite being 10 times slower than beam-10.
exact search may not be practical, but it allowed us to discover deficiencies in widely used NMT models.
neutral
train_91823
This paper systematically studies this temporal commonsense problem.
given the lack of evaluation standards and datasets for temporal commonsense, our second contribution is the development of a new dataset dedicated for it, MCTACO (short for multiple choice temporal common-sense).
neutral
train_91824
To the best of our knowledge, there are no other available systems for the "stationarity", "typical time", and "frequency" phenomena studied here.
all other systems drop by almost 30% from $F_1$ to EM, possibly another piece of evidence that they only associate surface forms instead of using one representation for temporal commonsense to classify all candidates.
neutral
train_91825
We observe that accuracy is stable across the different subsets for the single LM and GPT-2 117M with full scoring.
the proliferation of artificial-intelligence technologies that interact with human users (e.g., dialogue systems, recommendation systems, information retrieval tools) has led to renewed interest in common-sense reasoning (CSR).
neutral
train_91826
(2018), is a rule-based system that uses search engines to gather evidence for the candidate resolutions without relying on the entities themselves.
their performance drops significantly on the non-associative subset, when information related to the candidates themselves does not give away the answer.
neutral
train_91827
We carefully analyze the generated results of Pun-GAN with low overall scores in human evaluation.
the generation probability of the whole sentence x is formulated as To give a warm start to the generator, we pretrain it using the same general training corpus in the original paper.
neutral
train_91828
In this work, we propose to incorporate language modeling as an auxiliary task to help QG via multi-task learning.
some other works try to generate questions from a text without answers as input (subramanian et al., 2018;.
neutral
train_91829
Here, we use the term "demographic" to refer to a group of people with the same gender, race, or sexual orientation.
we present a systematic study of biases in natural language generation (NLG) by analyzing text generated from prompts that contain mentions of different demographic groups.
neutral
train_91830
We demonstrated the utility of our approach in a cross-domain evaluation of textual entailment for fact verification.
first, we note that indeed the fully lexicalized model, which performs well in-domain, transfers very poorly to a new domain.
neutral
train_91831
Basic NER: Token sequences which are labeled as NEs are replaced by the corresponding label, e.g., location, person.
since our approach is implemented as a data pre-processing step, it can potentially help any neural method that learns from text and is likely to be used out of domain.
neutral
train_91832
Overlap Aware NER (OA-NER): This technique additionally captures the lexical overlap between the claim and evidence sentences with entity ids.
we split the training dataset into two partitions, with 40,904 records for training and 9,068 records for development.
neutral
train_91833
Finally, the composed sentence vector is used for sentiment prediction.
these methods belong to the biased model, where the words in the tail of a sentence are more heavily emphasized than those in the header for building sentence representations.
neutral
train_91834
In addition, our approach is easy to train and can be used as a drop-in complement for any text-spotting algorithm that outputs a ranking of word hypotheses.
in [431]: #plt.figure(figsize=(1,0.0008)) data = np.array([[0.65966403,0.80212915],[0.3854018,0.25270265],[0.1903537,0.5453093]]) #labels = np.array([['way','SWE'],['DSWE','D'],['DSWE','D']]) figure(figsize=(3,2))
neutral
train_91835
Tritraining (Zhu, 2006) uses 3 classifiers to vote their classification results and labels them if all the classifiers agree with the prediction.
after the early-stopping in meta-epoch, we simply add all the unlabeled data with its pseudo-labels to training set, which finally brings significant performance gain.
neutral
train_91836
Whereas mass processing of articles was once largely limited to keyword searches and citation crawling, modern natural language processing techniques can now deeply search for specific and broad concepts, explore relationships, and automatically extract useful information from text.
further, if two proposed regions overlap, their size and position can help determine which is more likely to fit a true document region more closely.
neutral
train_91837
The architecture of the neural topic model is shown in Figure (1) and we describe the model in more details below.
with the sampled $\theta$, for each document $d \in N_d$, we can estimate the Evidence Lower Bound (ELBO) with a Monte Carlo approximation using $L$ independent samples: By minimising the ELBO in Eq.
neutral
train_91838
Examples include neural topic models which are typically built upon variational autoencoder (VAE) with an objective of minimising the error of reconstructing original documents based on the learned latent topic vectors.
we evaluate our model on the 20 Newsgroups † consisting of 18K documents, and NIPS (Tan et al., 2017) consisting of 6.6k documents.
neutral
train_91839
Here, the action aims to select words with high coherence scores and filter background words.
the topic features have only 30 dimensions so the performance is limited in comparison with CNN and RNN.
neutral
train_91840
By training translation models using varying amounts of training data, we are able to explore performance across a wide range of resource levels and examine the reliability of performance patterns.
there is no established guidance on how to optimize the resulting MT-IR system.
neutral
train_91841
They can capture semantic and syntactic properties of words as linear substructures, allowing relationships to be expressed as geometric translations (Mikolov et al., 2013b).
it is surprising that linear maps also outperform geometric translations, given that they are ultimately learned using translation vectors.
neutral
train_91842
So far, we have obtained the adjective-noun pairs and value polarity relations for nouns.
syntaxSQLNet and SQLNet solve the text-to-SQL task using a sequence-to-set structure.
neutral
train_91843
Bosc and Vincent (2018) proposed a method that learns the long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) that encodes a word definition into an embedding.
while the baseline CPAE outperformed our model on the two out of five datasets pertaining to word relatedness, our model consistently outperformed the baseline on the similarity benchmarks.
neutral
train_91844
Relationships between words captured in embeddings are studied in terms of similarity or relatedness (Hill et al., 2015).
the model with the proposed method chose the correct words at both the generalized class and the details.
neutral
train_91845
As far as semi-automatic approaches are concerned, Mihalcea and Moldovan (2001) conceived eXtended WordNet, a resource providing disambiguated glosses by means of a classification ensemble combined with human supervision.
supervision depends heavily on large quantities of reliable sense-annotated data which -especially for languages other than English -are poorly available.
neutral
train_91846
in sentiment analysis task.
further, more interpretability would be obtained by our method due to the structurally sparse attention distribution.
neutral
train_91847
Our model outperforms BERT BASE by a stable margin especially in SST-2, SenTube-A, SenTube-T, and SciTail, with the improvements of 1.2%, 2.1%, 1.7%, and 1.5%, respectively.
the attention matrices of models with different setups are visualized in Figure 1.
neutral
train_91848
UMH (Alaux et al., 2019), was also evaluated on this benchmark.
to gradient based methods, our approach avoids word sampling and hyper-parameters that need to be tuned.
neutral
train_91849
2) LSTM-AutoEncoder (Ryu et al., 2017): Recent work on OOD detection that uses only ID examples to train an autoencoder for OOD detection.
we use squared hinge loss and L2 regularization.
neutral
train_91850
We expect optimizing only on L in and L ood will lead to lower confidences on ID classification, because the system tends to mistakenly reduce the scale of F in order to minimize the loss for OOD examples.
for example, Table 1 shows some of the utterances a chat-bot builder provided for training.
neutral
train_91851
Some entities, such as R & B, are changed to lower case incorrectly, especially if not recognized as a proper noun.
this paper focuses on harnessing pre-trained neural networks with rulebased systems.
neutral
train_91852
Following the previous works, we compute the BLEU score (Papineni et al., 2002) used in machine translation.
we follow previous works and apply three automatic evaluation metrics and one human evaluation for three indicative aspects: attribute compatibility, content preservation, and fluency.
neutral
train_91853
To this end, we compute the softmax normalization over all incoming edges for node i: It provides us a way to compare all incoming edges in the same manner, rather than making a local decision via a bundle of edges from node j.
this model reduces the architecture search problem to learn continuous variables $\{\alpha_{i,j}^{k}\}$, which can be implemented using efficient gradient descent methods.
neutral
train_91854
In this paper, we study differentiable neural architecture search (NAS) methods for natural language processing.
we follow the same setup as in language modeling but with a different learning rate (0.1) and a different hidden layer size (256).
neutral
train_91855
, e n } and a set of relations R = {r 0 , .
note that results with full softmax (Appendix D) demonstrate that our implementation of baselines is very competitive. Our implementation of ComplEx performs significantly better than ConvE (Dettmers et al., 2018) on two out of the three datasets, and comes close to the results of Lacroix et al.
neutral
train_91856
When pretraining is combined with annealing, PPL improves substantially.
the girl is drinking water with a bucket .
neutral
train_91857
BIOBERT is trained on PubMed abstracts and PMC full text articles, and CLIN-ICALBERT is trained on clinical text from the MIMIC-III database .
this corpus consists of 18% papers from the computer science domain and 82% from the broad biomedical domain.
neutral
train_91858
We found that all of the models, including humans, relied more on the punchline of the joke in their predictions (Table 2).
when we look at the general population's opinion as well, we find a stark difference between their preferences and those of the Reddit users (Table 2).
neutral
train_91859
To enhance the capability of domain adaptation for the sentence generator, we propose to first interpret dialogue acts in natural language so that new dialogue acts can be understood by the generator.
table 1 gives some examples of atomic templates used in the DSTC 2&3 dataset.
neutral
train_91860
PaLM provides an intuitive way to inject structural inductive bias into the language model-by supervising the attention distribution.
their contribution is orthogonal to ours and not head-to-head comparable to the models in the punctuation removal.
neutral
train_91861
We observe no significant trend of favoring one branching direction over the other.
meaningful span representations are crucial in span-based tasks (Lee et al., 2017; Peng et al., 2018c; Swayamdipta et al., 2018, inter alia). We experiment with a strong LSTM implementation for language modeling (Merity et al., 2018), see §3.
neutral
train_91862
Our split has less training data and more test instances in the "Hard" category and less in "Easy" and "Medium".
(2018a)'s model on their English dataset but under our split.
neutral
train_91863
(2018b), we make training, development and test sets split in a way that no database overlaps in them as shown in Table 1.
the characterbased model is less sensitive to question sentences, which is likely because characters are less sparse compared with words.
neutral
train_91864
Thus, the parameters of the gating GCN are trained from the relevance loss and the usual decoding loss, an ML objective over the gold sequence of decisions that output the query y. Discriminative re-ranking: Global gating provides a more accurate model for softly predicting the correct subset of DB constants.
experimental setup: We train and evaluate on SPIDER (Yu et al., 2018b), which contains 7,000/1,034/2,147 train/development/test examples, using the same pre-processing as .
neutral
train_91865
and for 1 ≤ k < N .
we report the performance in probing tasks in Table 2.
neutral
train_91866
We observe that before finetuning, the attention patterns on [SEP] tokens and periods is almost identical between sentences.
in this work, we show that pretrained language models, BERT (Devlin et al., 2018) in particular, can be used for this task to capture contextual dependencies without the need for hierarchical encoding nor a CRF.
neutral
train_91867
It is clear that our approach outperforms BERT+TRANSFORMER.
we found it better to train our model to directly predict the ROUGE scores, and the loss function we used is Mean Square Error.
neutral
train_91868
First, we reward the agent when it correctly guesses the answer after communication, $r^{\text{self}}_{\text{after}} = \mathbb{1}_{y^* = \hat{y}}$, where $\mathbb{1}$ is an indicator function, in order to encourage it to incorporate information received from the other agent.
we consider two linguistic communities of population sizes N 1 and N 2 , which are trained independently from each other as fully-connected communities and have developed separate communication protocols.
neutral
train_91869
This helped the annotators learn how to approach the task.
there is wide support for the idea that women experience more condescension than men do (Hall and Braunwald, 1981; Harris, 1993; McKechnie et al., 1998; Cortina et al., 2002; Trix and Psenka, 2003), as reflected in the recent lexical innovation mansplaining, which can be roughly paraphrased as 'a man condescending to a woman'.
neutral
train_91870
Condescending language use is caustic; it can bring dialogues to an end and bifurcate communities.
we then used Expectation-Maximization, as in Dempster et al.
neutral
train_91871
The results are largely consistent with the extractive model scores, with CONTENTSELEC-TOR performing the best, the topic causing no statistically significant difference, and a large gap remaining between current models and a perfect system (the heuristic labels).
because we are interested in topic-focused summarization, in this work, it is also assumed that a high-level topic is provided as input, such as in Figure 1.
neutral
train_91872
We use the ABS model of Rush et al.
we believe this concept warrants further exploration in future work.
neutral
train_91873
Core to our approach is the principle of the Information Bottleneck (Tishby et al., 1999), producing a summary for information X optimized to predict some other relevant information Y.
this choice is motivated by linguistic cohesion, in which we expect more broadly relevant information to be common between consecutive sentences, while less relevant information and details are often not carried forward.
neutral
train_91874
In Figure 1, the alignment matrix will only have $A_{0,6,8} = 1$ and $A_{1,2,3} = 1$ which indicates that the first slot is aligned with "nation of Turkey", and the second slot is aligned with "silver medals".
the original dataset is annotated with SQL queries, but we only use the execution result for training.
neutral
train_91875
For instance, programs like "select( previous( next( previous(argmax [all rows], column:silver) column:bronze)" are unlikely to have a corresponding question.
similar ideas of using abstract meaning representations have been explored with fully-supervised semantic parsers (Dong and Lapata, 2018;Catherine Finegan-Dollak and Radev, 2018) and in other related tasks (Goldman et al., 2018;Herzig and Berant, 2018;Nye et al., 2019).
neutral
train_91876
Considering the large search space of possible programs, an alignment model that takes into account the full range of correspondences between program operations and question spans would be very expensive.
we instead optimize the following objective: where E[A] are the marginals of A with respect to p(A|x, t, h).
neutral
train_91877
4 We find that 83.6% of questions in WIK-ITABLEQUESTIONS are covered by at least one consistent program.
we set the maximal number of production rules for generating abstract programs to 6 and 9, which leads to search time of around 7 and 10 hours for WIKISQL and WIKITABLEQUES-TIONS respectively, using a single CPU.
neutral
train_91878
It is interesting to explore the use of some revision mechanisms when the initial steps go wrong.
an updated graph $G_t$ is produced and $G_t$ will be processed for the next time step.
neutral
train_91879
We use a standard semantic parser provided by AllenNLP (Gardner et al., 2017), based on a sequence-to-sequence model (Sutskever et al., 2014).
we additionally pruned equivalent examples: if we generated a logical form with the structure A B, we pruned B A.
neutral
train_91880
Others used a variational autoencoder by modeling unlabeled utterances (Kociský et al., 2016) or logical forms (Yin et al., 2018) as latent variables.
on two datasets, our method leads to 70.6 accuracy on average on the true distribution, compared to 51.3 in paraphrasing-based data collection.
neutral
train_91881
After the paraphrase process was completed, we manually fixed typos in all generated para- phrases.
conversing with a virtual assistant in natural language is one of the most exciting current applications of semantic parsing, the task of mapping natural language utterances to executable logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Liang et al., 2011).
neutral
train_91882
Lawrence and Riezler (2018) and Berant et al.
we train $s_{t+1}$ from all the examples generated in iterations $0 \cdots t$. In every iteration more and more examples are labeled, and a better scoring function is trained.
neutral
train_91883
We design ablation studies as follows: 1) w/o C: without the word-character Containing graph (C-graph).
by using a single NVIDIA GeForce GTX 1080 Ti GPU, we randomly select 10 training and testing epochs as samples.
neutral
train_91884
For example, as shown in Figure 1, the boundaries and semantic knowledge of the self-matched word "北京机场" (Beijing Airport) can help the character "机" (airplane) to predict an "I-LOC" tag, instead of "O" or "B-LOC" tags.
this work is also supported by the CCF-Tencent Open Research Fund.
neutral
train_91885
Since aligning word-character lattice structure for batch training is usually non-trivial, the lattice LSTM (Zhang and Yang, 2018) suffers from slow speeds in training and testing.
the output of the $j$-th layer is a new set of node features, $NF^{(j+1)} = \{f_1, f_2, \ldots, f_N\}$.
neutral
train_91886
The left side shows the overall architecture, including an encoding layer, a graph layer, a fusion layer, and a decoding layer.
t-graph performs poorly without cooperating with other graphs.
neutral
train_91887
On DS datasets, the four methods targeting label distribution shift achieve consistent performance improvements, with an average F1 improvement of 3.83 on KBP and 1.72 on NYT.
this phenomenon verifies that shifted label distribution is making a negative influence on model performance.
neutral
train_91888
We observe that, on human-annotated dataset, neural relation extraction models outperform feature-based models by notable gaps, but these gaps diminish when the same models are applied to DS datasets-neural models merely achieve performance comparable with feature-based models.
in the train set of KBP, 74.25% of instances are annotated as NONE, and 85.67% in the test set.
neutral
train_91889
Low-Resource NER Following Cotterell and Duh (2017), we emulate truly low-resource condition with 100 sentences for training.
our models are optimized by mini-batch stochastic gradient descent (SGD) with learning rate 0.01 and batch size 10.
neutral
train_91890
Currently, research efforts have derived useful discrete features from dependency structures (Sasano and Kurohashi, 2008;Cucchiarelli and Velardi, 2001;Ling and Weld, 2012) or structural constraints (Jie et al., 2017) to help the NER task.
furthermore, every entity type (i.e., row) has a most related dependency relation (with more than 17% occurrences).
neutral
train_91891
Note that, the bilingual data needed for this approach is coarsely taken from the same domain.
task-specific models use all three splits for training, hyper-parameter tuning and evaluation, respectively, while models trained on bilingual data only use the test split for evaluation (so table 2: AUC performance for our approach in comparison to the task-specific models directly trained on the evaluation datasets (results are comparable to supervised state-of-the-art).
neutral
train_91892
The main thesis underlying this work is that texts composed by members of a specific cultural group are distinct from those composed by members of another.
the translation and document representation components of the compound transformation are complementary in the sense that when translation is of high quality, a naive document representation suffices.
neutral
train_91893
For each training size n, we sampled n training instances to train the models and tested on the entire dev/test set.
it should also be noted that this model misses the sentiment polarity inversion expressed by the word despite, due to its simple sequence-agnostic nature.
neutral
train_91894
The iteration number iter used in dynamic routing algorithm is 3.
next, an Induction Module executes a dynamic routing procedure, in which the matrix transformation can be seen as a map from the sample space to the class space, and then the generation of the class representation depends entirely on the routing-by-agreement procedure rather than on any parameters, which gives the proposed model a robust induction ability to deal with unseen classes.
neutral
train_91895
However, this sample-wise comparison may be severely disturbed by the various expressions in the same class.
munkhdalai and Yu (2017) proposed the Meta Network, which learnt the meta-level knowledge across tasks and shifted its inductive biases via fast parameterization for rapid generalization.
neutral
train_91896
A prediction is incorrect if it disagrees with what is known to be true.
second, we develop a systematic framework for mitigating inconsistency in models by compiling the invariants into a differentiable loss functions using t-norms (Klement et al., 2013;Gupta and Qi, 1991) to soften logic.
neutral
train_91897
With the notion of errors, we can now focus on how to train models to minimize them.
the λ for M dataset can be much higher.
neutral
train_91898
5 Here, we will focus on the product t-norm to allow comparisons to previous work: as we will see in the next section, the product t-norm strictly generalizes the widely used cross entropy loss.
we count the percentage of unlabeled examples where the symmetry/transitivity loss is positive.
neutral
train_91899
Indeed, the output that copies input gives maximal BLEU yet clearly fails in terms of the style transfer.
to generate plausible sentences with specific semantic and stylistic features, every sentence is conditioned on a representation vector z which is concatenated with a particular code c that specifies the desired attribute, see Figure 3.
neutral