id: string (lengths 7–12)
sentence1: string (lengths 6–1.27k)
sentence2: string (lengths 6–926)
label: string (4 classes)
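For orientation, here is a minimal sketch of how records with this schema could be loaded and inspected using the Hugging Face `datasets` library. The dataset identifier `"user/contrast-pairs"` is a placeholder assumption, not the real one; substitute the actual hub id or local path.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library.
# "user/contrast-pairs" is a hypothetical placeholder identifier.
from datasets import load_dataset

ds = load_dataset("user/contrast-pairs", split="train")

# Each record follows the schema above:
#   id (str), sentence1 (str), sentence2 (str), label (one of 4 classes)
row = ds[0]
print(row["id"], row["label"])
print(row["sentence1"])
print(row["sentence2"])

# For example, keep only the pairs labeled "contrasting":
contrasting = ds.filter(lambda r: r["label"] == "contrasting")
print(f"{len(contrasting)} contrasting pairs")
```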
train_16200
Explanatory models can potentially help human moderators make quicker and more consistent decisions about whether to remove comments (Lakkaraju et al., 2016).
we propose that truly minimal explanations are liable to give only a partial portrait of why a comment is objectionable, making it harder to render a fair holistic decision.
contrasting
train_16201
When working with a recurrent unit that has no internal bias term, this behavior is entirely determined by the bias term of the final sigmoid output layer, σ(wx + b), which with typical random initialization of b results in a default predicted value of roughly 0.5.
this 0.5 default value is not always optimal or semantically appropriate to the predictive task.
contrasting
train_16202
In S3, "Information" and "those services" are considered more vague by humans than "often depends."
if those terms are removed from the sentence, yielding "The [..] we obtain from [..] often depends on your settings or their privacy policies."
contrasting
train_16203
Note that for a left leaning article (see Figure 3a), the model focuses on sentences involving gun-control, feminists, and transgender.
a visualization of sentence attention scores for an article which the model predicted as "right-leaning" (see Figure 3b) reveals a focus on words like god, religion, etc.
contrasting
train_16204
Most of these studies attempt to extract efficient features from text content (Liu and Hsieh, 2006;Lin et al., 2012;Aletras et al., 2016;Sulea et al., 2017) or case annotations (e.g., dates, terms, locations, and types) (Katz et al., 2017).
these conventional methods can only utilize shallow textual features and manually designed factors, both of which require massive human effort and usually suffer from generalization issues when applied to other scenarios.
contrasting
train_16205
In response to the growth in online hate, there has been a trend of developing automatic hate speech detection models to alleviate online harassment (Warner and Hirschberg, 2012;Waseem and Hovy, 2016).
a common problem with these methods is that they focus on coarse-grained classifications of hate speech.
contrasting
train_16206
Our work is most closely related to (Van Hee et al., 2015), which focuses on fine-grained cyberbullying classification.
this study only focuses on seven categories of cyberbullying while our dataset consists of 40 classes.
contrasting
train_16207
Because of their predictive power, socio-demographics and other extralinguistic information can additionally be leveraged when building the model itself.
a central challenge in integrating community attributes is that they have very different properties than linguistic features and can be lost, in essence, like a needle in a haystack.
contrasting
train_16208
Authoritarian countries such as Russia and China have received a great deal of attention for trying to control and distort the spread of information through "fake news" and censorship.
authoritarian governments might also use subtle tactics of media manipulation that are much harder to detect, like flooding communication channels with irrelevant information or highlighting particular viewpoints of an event to distract public attention (Rozenas and Stukal, Forthcoming;Munger et al., 2018;King et al., 2017).
contrasting
train_16209
Censorship can be detected by checking what content is no longer available.
we have no systematic way of identifying more subtle forms of media manipulation.
contrasting
train_16210
About 11,900 articles are hand-annotated with frames: annotators highlight spans of text related to each frame in the codebook and assign a single "primary frame" to each document.
the MFC, like other prior framing analyses, relies heavily on labor-intensive manual annotations.
contrasting
train_16211
This indicates that the MLE objective is helpful for stabilizing the training and improving model performance, as we expected.
further increasing α does not bring more gain.
contrasting
train_16212
(2013b) propose to merge them into a single token in a pre-processing step.
for that purpose, they use a scoring function based on their co-occurrence frequency in the training corpus, with a discounting coefficient δ that penalizes rare words, and iteratively merge those above a threshold. We also need to learn representations for compositional n-grams in our scenario, as there is not always a 1:1 correspondence for n-grams across languages, even for compositional phrases.
contrasting
train_16213
Nevertheless, these systems work on a word-by-word basis and have only been shown to work in limited settings, often being evaluated on word-level translation.
our method builds a fully featured phrase-based SMT system, and achieves competitive performance in a standard machine translation task.
contrasting
train_16214
(2018), who use a separate encoder for each language, sharing only a subset of their parameters, and incorporate two generative adversarial networks.
our results in Section 6.1 show that our SMT-based approach obtains substantially better results.
contrasting
train_16215
is a conflict answer to Question 1.
when this answer text is considered as a sequence, it is very likely to be predicted as positive or negative rather than conflict.
contrasting
train_16216
(2014) investigate the identification of topic-relevant claims, an approach that was later extended with evidence extraction to mine supporting statements for claims (Rinott et al., 2015).
both approaches are designed to mine arguments from Wikipedia articles; it is unclear whether their annotation scheme is applicable to other text types.
contrasting
train_16217
In general, we observe an overall lower score for trl models that use the DIP2016 corpus compared to those using the SemEval corpus.
to the mtl model, for trl models all parameters are transferred to the main task, not just parameters that represent shared knowledge.
contrasting
train_16218
Dynamic aspect extraction is advantageous since it assumes nothing more than a set of relevant reviews for a product and may discover unusual and interesting aspects (e.g., whether a plasma television has protective packaging).
it suffers from the fact that the identified aspects are fine-grained, have to be interpreted post hoc, and must be manually mapped to coarse-grained ones.
contrasting
train_16219
Emotion Corpora: There are several affective datasets such as SemEval-2017 Task 4 (Rosenthal et al., 2017) and Olympic games dataset (Sintsova et al., 2013).
these datasets are limited in quantity.
contrasting
train_16220
As shown in Table 4, their model performs as well as other traditional methods.
our model (CARER) significantly outperforms theirs (↑20%).
contrasting
train_16221
As shown in Figure 3, due to the limited coverage of the LIWC lexicon, such resources may not be feasible on evolving, large-scale datasets.
word2vec contains over 3 million unique word embeddings and has been proven effective for text classification.
contrasting
train_16222
(2016) partially motivate the ranking-based variant through the importance sampling viewpoint of Bengio and Senécal (2008).
there are two critical differences: 1) the algorithm of Bengio and Senécal (2008) does not lead to the same objective L^n_R as in the ranking-based variant of NCE; instead, it uses importance sampling to derive an objective that is similar but not identical; 2) the importance sampling method leads to a biased estimate of the gradients of the log-likelihood function, with the bias going to zero only as K → ∞.
contrasting
train_16223
It refers to the critique (Ranzato et al., 2016) that MLE-trained models tend to have suboptimal performance, as they are trained to maximize a convenient objective (i.e., the maximum likelihood of word-level correct next-step prediction) rather than a desirable sequence-level objective that correlates better with the common discrete evaluation metrics, such as ROUGE (Lin and Och, 2004) for summarization, BLEU (Papineni et al., 2002) for translation, and word error rate for speech recognition, not log-likelihood.
training models that directly optimize such discrete metrics as the objective is hard due to the non-differentiable nature of the corresponding loss functions (Rosti et al., 2011).
contrasting
train_16224
Towards sequence level optimization, previous works (Ranzato et al., 2016;Paulus et al., 2018) employ reinforcement learning (RL) with a policy-gradient approach which works around the difficulty of differentiating the reward function by using it as a weight.
REINFORCE is known to suffer from high sample variance and credit assignment problems, which make the training process difficult and unstable, besides resulting in models that are hard to reproduce (Henderson et al., 2018).
contrasting
train_16225
Most neural sequence generation models are trained with the objective of maximizing the probability of the next correct word.
this results in a major discrepancy between training and test settings of these models because they are trained with cross-entropy loss at word-level, but evaluated based on sequence-level discrete metrics such as ROUGE (Lin and Och, 2004) or BLEU (Papineni et al., 2002).
contrasting
train_16226
Moreover, the words should match the leave-one-out method's selections, which closely align with human perception (Li et al., 2016b;Murdoch et al., 2018).
rather than providing explanations of the original prediction, our reduced examples more closely resemble adversarial examples.
contrasting
train_16227
Most reduced inputs are nonsensical to humans ( Figure 2) as they lack information for any reasonable human prediction.
models make confident predictions, at times even more confident than the original.
contrasting
train_16228
In this example, the leave-one-out method will not highlight "Broncos".
this is not a failure of the interpretation method but of the model itself.
contrasting
train_16229
This method adjusts the softmax function's temperature parameter using a small held-out dataset to align confidence with accuracy.
because the output is calibrated using the entire confidence distribution, not individual values, this does not reduce overconfidence on specific inputs, such as the reduced examples.
contrasting
train_16230
This approach has become popular because it avoids exposure bias, and directly optimizes a measure of summary quality.
it also has a number of downsides.
contrasting
train_16231
(2018) recently proposed an extractive summarization approach based on deep Q learning, a type of reinforcement learning.
their approach is extremely computationally intensive (a minimum of 10 days before convergence), and was unable to achieve ROUGE scores better than the best maximum likelihood-based approach.
contrasting
train_16232
One potential cause of this high variance can be seen by inspecting (3), and noting that it basically acts to change the probability of a sampled index sequence to an extent determined by the reward R(i, a).
since ROUGE scores are always positive, the probability of every sampled index sequence is increased, whereas intuitively, we would prefer to decrease the probability of sequences that receive a comparatively low reward, even if it is positive.
contrasting
train_16233
Two of the most commonly used assumptions are that simple words are associated with shorter lengths and higher frequencies in a corpus.
these assumptions are not always accurate and are often the major source of errors in the simplification pipeline (Shardlow, 2014).
contrasting
train_16234
(2016) used a list of 3,000 most common English words; Lee and Yeung (2018) used an ensemble of vocabulary lists of different complexity levels.
to the best of our knowledge, there is no previous study on manually building a large word-complexity lexicon with human judgments that has shown substantial improvements on automatic simplification systems.
contrasting
train_16235
In this situation, the original system should be extended to support new user actions based on user feedback.
adding new intents or slots will change the predefined ontology.
contrasting
train_16236
RL methods have shown great potential in building a robust dialog system automatically.
RL-based approaches are rarely used in real-world applications because of the maintainability problem (Paek and Pieraccini, 2008;Paek, 2006).
contrasting
train_16237
The first strategy requires a new interaction environment.
building a user simulator or hiring real users once the system needs to be extended is costly and impractical in real-world applications.
contrasting
train_16238
The user simulator can provide infinite simulated experiences without additional cost, and the trained system can be deployed and then fine-tuned through interactions with real users (Su et al., 2016;Zhao and Eskenazi, 2016;Williams et al., 2017;Dhingra et al., 2017;Liu and Lane, 2017;Peng et al., 2017b;Budzianowski et al., 2017;Peng et al., 2017a;Tang et al., 2018).
due to the complexity of real conversations and biases in the design of user simulators, there always exists a discrepancy between real users and simulated users.
contrasting
train_16239
Respectively, the policy model is trained via real experiences collected by interacting with real users (direct reinforcement learning), and simulated experiences collected by interacting with the learned world model (planning or indirect reinforcement learning).
the effectiveness of DDQ depends upon the quality of simulated experiences used in planning.
contrasting
train_16240
Theoretically, larger amounts of high-quality simulated experiences can boost the performance of the dialogue policy more quickly.
the world model by no means perfectly reflects real human behavior, and the generated experiences, if of low quality, can have negative impact on dialogue policy learning.
contrasting
train_16241
From the table, with fewer simulated experiences, the difference between DDQ and D3Q may not be significant, where DDQ agents achieve about 50%-60% success rate and D3Q agents achieve higher than 68% success rate after 100 epochs.
when the number of planning steps increases, more fake experiences significantly degrade the performance of DDQ agents, where DDQ(10, fixed θ_G) suffers from bad simulated experiences after 300 epochs and achieves a 0% success rate.
contrasting
train_16242
Recent work (Vinyals and Le, 2015;Bordes et al., 2016;Serban et al., 2016) has shown that dialog models can be trained in an end-to-end manner with satisfactory results.
human dialog has some unique properties that many other learning tasks do not have.
contrasting
train_16243
The system asks questions to fill the missing fields and eventually generate the correct corresponding API call.
the system asks for information in a deterministic order (Cuisine → Location → People → Price) to complete the missing fields.
contrasting
train_16244
The system must propose options to users by listing the restaurant names sorted by their corresponding rating (from higher to lower) until users accept.
each restaurant has a different rating.
contrasting
train_16245
Synthesized data can also be an option to obtain a large dataset.
these are often built from templated responses which make it meaningless for dialogue models to learn.
contrasting
train_16246
Otherwise a status action of "no flight found" will be returned.
the task logic for customers with a goal of "change" would be slightly different.
contrasting
train_16247
During the self-play experiments we perform similar predictions on the dialogue states.
instead of asking the models to predict those states given ground truth history, we now ask the models to predict given the generated dialogues.
contrasting
train_16248
Paraphrase generation is an important task in NLP, which can be a key technology in many applications such as retrieval-based question answering, semantic parsing, query reformulation in web search, and data augmentation for dialogue systems.
due to the complexity of natural language, automatically generating accurate and diverse paraphrases is still very challenging.
contrasting
train_16249
As a result, end-to-end neural text generation has drawn increasing attention from the natural language research community (Mei et al., 2016;Lebret et al., 2016;Wiseman et al., 2017;Kiddon et al., 2016).
a critical issue for neural text generation has been largely overlooked.
contrasting
train_16250
In fact, other values like -2 or -3 are close to -1, and the word "edges" is also applicable to them.
directly establishing the lexical choices on various sparse numeric values is not easy (Reiter et al., 2005;Smiley et al., 2016;Zarrieß and Schlangen, 2016).
contrasting
train_16251
(2016) have built a biography generation dataset from Wikipedia.
a recent study by Perez-Beltrachini and Gardent (2017) shows that existing datasets lack a few properties, such as syntactic and semantic diversity.
contrasting
train_16252
Table 2 shows that 11.7% of facts in the game summaries can be inferred based on the input data.
this dataset focuses on generating long text, and 27.1% of facts are unsupported, which brings difficulties to the analysis of fidelity for the generated text.
contrasting
train_16253
Some have focused on conditional language generation based on tables (Yang et al., 2017), short biographies generation from Wikipedia tables (Lebret et al., 2016;Chisholm et al., 2017) and comments generation based on stock prices (Murakami et al., 2017).
none of these methods consider incorporating the facts that can be inferred from the input data to guide the process of generation.
contrasting
train_16254
The split is done at article level.
we keep all samples instead of only keeping the sentence-question pairs that have at least one non-stop-word in common (with 6.7% of pairs dropped), as in (Du et al., 2017).
contrasting
train_16255
In order to test a model's real semantic parsing performance on unseen complex programs and its ability to generalize to new domains, an SP dataset that includes a large amount of complex programs and databases with multiple tables is a must.
compared to other large, realistic datasets such as ImageNet for object recognition (Deng et al., 2009) and SQuAD for reading comprehension (Rajpurkar et al., 2016), creating such SP dataset is even more time-consuming and challenging in some aspects due to the following reasons.
contrasting
train_16256
This is not a big problem if we use one single dataset, because we have enough domain-specific examples to know which columns are the default.
it would be a serious problem in cross-domain tasks, since the default return values differ across domains and people.
contrasting
train_16257
The attention and copying mechanisms do not help much either.
SQLNet and TypeSQL, which utilize SQL structure information to guide the SQL generation process, significantly outperform other seq2seq models.
contrasting
train_16258
Recent success in Deep Learning motivated researchers to use neural networks instead of human designed rules and templates to generate meaningful sentences from structured information.
these supervised models work well only when provided with massive amounts of labeled data or when restricted to a limited domain.
contrasting
train_16259
Nevertheless, there are two major differences between the training procedure of a DAE and an inference instance in NLG: First, we do not need to predict any content information in NLG as all of the content information is already provided by the structured data.
a DAE training instance can also remove content words from the sentence.
contrasting
train_16260
Recent neural networkbased approaches employ the sequence-tosequence model which takes an answer and its context as input and generates a relevant question as output.
we observe two major issues with these approaches: (1) The generated interrogative words (or question words) do not match the answer type.
contrasting
train_16261
In this framework, a good discriminator that can assign reasonable reward for the generator is a critical component.
directly applying a classifier as the discriminator like most existing GAN models (e.g., SeqGAN (Yu et al., 2017)) cannot achieve satisfactory performance.
contrasting
train_16262
Currently, a popular model for text generation is the sequence-to-sequence model (Sutskever et al., 2014;.
the sequence-to-sequence model tends to generate short, repetitive, and dull text (Luo et al., 2018).
contrasting
train_16263
In this paper, to handle this problem, we propose to use adversarial training (Goodfellow et al., 2014;Denton et al., 2015;Li et al., 2017), which has achieved success in image generation (Radford et al., 2015;Gulrajani et al., 2017;Berthelot et al., 2017).
training GANs is a non-trivial task, and some previous studies investigate methods to improve training performance, such as Wasserstein GAN (WGAN) and Energy-based GAN (EGAN) (Salimans et al., 2016;Gulrajani et al., 2017;Zhao et al., 2017;Berthelot et al., 2017).
contrasting
train_16264
It shows that the classifier still cannot tell the difference between the text with low novelty.
the analysis of experimental results shows that our proposed discriminator can better distinguish high-novelty text from low-novelty text without the saturation problem.
contrasting
train_16265
However, this could be computationally expensive because the time complexity is O(T^2).
our discriminator can calculate the reward of all words with a time complexity of O(T), which is more computationally efficient.
contrasting
train_16266
As we can see, the reward distribution of SeqGAN saturates and cannot distinguish the novelty of the text accurately.
DP-GAN has a strong ability to resist reward saturation and can give a more precise reward for text in terms of novelty.
contrasting
train_16267
If the accuracy of the classifier is too high, the classifier cannot give a reasonable reward to the generator for generating real and diverse text.
the language-model based reward given by DP-GAN better reflects the novelty of the text.
contrasting
train_16268
The above example might give the impression that named entities are essential but other words are not.
this is misleading and may not always be the case.
contrasting
train_16269
Note that an alternate way of collecting human judgments would have been to take the output of existing AQG systems and ask humans to assign answerability scores to these questions based on the presence/absence of the above-mentioned relevant information.
when we asked human evaluators to analyze 200 questions generated by an existing AQG system, they reported that the quality was poor.
contrasting
train_16270
In our initial evaluations, we also tried showing the actual source (image or document) to the annotators.
we realized that this did not allow us to do an unbiased evaluation of the quality of the questions.
contrasting
train_16271
To accept two arguments, i.e., input sentence X and style id k, we can directly concatenate the one-hot representation of the style id, one_hot(k), and the embedding vector h_T obtained by the bi-LSTM encoder, and then feed the concatenated vector [one_hot(k), h_T] instead of h_T into the decoder model without changing anything else.
there is no theoretical guarantee that the output sentence generated by the decoder is strongly correlated with the style id input one_hot(k).
contrasting
train_16272
By disentangling poems in different styles, our model can learn a more compact generation model for each style and write more fluent and meaningful poems.
by fixing the style when generating three lines in a quatrain, SPG is able to generate more coherent poems.
contrasting
train_16273
One aspect that is central to a question is the context that is relevant to generate it.
this context changes for every image.
contrasting
train_16274
Advances in image captioning (Xu et al., 2015;Fang et al., 2015;Karpathy and Fei-Fei, 2015;Vinyals et al., 2015) are effective in generating sentencelevel descriptions.
sentences generated by these approaches are usually generic descriptions of the visual content and ignore background information.
contrasting
train_16275
We require images with well-aligned, news-style captions for training.
we want to test our model on real social media data and it is difficult to collect these informative captions for social media data.
contrasting
train_16276
The text-summarization model generates results from documents retrieved by hashtags, so it tends to include some long phrases common in those documents.
the templates generated by our model are based on the language model trained from the news captions, which has a different style from Flickr captions.
contrasting
train_16277
not expected to describe all objects in the scene.
describing objects that are not present in the image has been shown to be less preferable to humans.
contrasting
train_16278
We use a beam size of 5, and for all models trained with cross-entropy, it outperforms lower beam sizes on CHAIR.
when training models with the self-critical loss, beam size sometimes leads to worse performance on CHAIR.
contrasting
train_16279
This is somewhat surprising, as a model which has access to image information at each time step should be less likely to "forget" image content and hallucinate objects.
it is possible that models which include image inputs at each time step with no access to spatial features overfit to the visual features.
contrasting
train_16280
All three metrics do penalize sentences for mentioning incorrect words, either via an F score (METEOR and SPICE) or cosine distance (CIDEr).
if a caption mentions enough words correctly, it can have a high METEOR, SPICE, or CIDEr score while still hallucinating specific objects.
contrasting
train_16281
Likewise, our work is related to other work which aims to build better evaluation tools (Anderson et al., 2016;Cui et al., 2018).
we focus on carefully quantifying and characterizing one important type of error: object hallucination.
contrasting
train_16282
Recent neural sequence-to-sequence models have shown significant progress on short text summarization.
for document summarization, they fail to capture the long-term structure of both documents and multi-sentence summaries, resulting in information loss and repetitions.
contrasting
train_16283
The Mask Only model with increased supervision on the copy mechanism performs very similarly to the Multi-Task model.
bottom-up attention leads to a major improvement across all three scores.
contrasting
train_16284
While we observed both cases quite frequently in generated summaries, the fraction of very long copied phrases decreases.
either with or without bottom-up attention, the distribution of the length of copied phrases is still quite different from the reference.
contrasting
train_16285
As for the other two models, they have better ROUGE scores on the Trunc version.
as the example in Table 4 shows, higher ROUGE scores do not necessarily mean better summaries. (In Table 4, the entities in different colors indicate two important roles in the text.)
contrasting
train_16286
Most existing document summarisation techniques require access to reference summaries to train their systems.
obtaining reference summaries is very expensive: Lin (2004) reported that 3,000 hours of human effort were required for a simple evaluation of the summaries for the Document Understanding Conferences (DUC).
contrasting
train_16287
Simpson and Gurevych (2018) proposed to use an improved Gaussian process preference learning (Chu and Ghahramani, 2005) for learning to rank arguments in terms of convincingness from crowdsourced annotations.
such Bayesian methods scale poorly and suffer from high computation time.
contrasting
train_16288
The neural encoder-decoder framework has recently been exploited to summarize single documents, but its success can in part be attributed to the availability of large parallel data automatically acquired from the Web.
parallel data for multi-document summarization are scarce and costly to obtain.
contrasting
train_16289
If a text piece has been attended to during summary generation, it is unlikely to be used again.
the attention value assigned to a similar text piece in a different position is not affected.
contrasting
train_16290
These issues may be alleviated by improving the encoder-decoder architecture and its attention mechanism (Cheng and Lapata, 2016;Tan et al., 2017).
in these cases the model has to be re-trained on large-scale MDS datasets that are not available at the current stage.
contrasting
train_16291
Precision and recall are commonly used to evaluate recommendation systems (Karypis, 2001) and information retrieval task (Zuva and Zuva, 2012).
we only care about whether the image appears in the reference image set.
contrasting
train_16292
In other words, some images are noise.
our text input is long text, and it contains enough information for text generation.
contrasting
train_16293
One drawback of our method is that its expressive power is basically the same as that of each single model.
this alternatively means that the lower bound of the quality of each output is guaranteed with the worst case of the outputs of single models, while the current runtime-ensemble method can perform worse than each single model for the worst case input.
contrasting
train_16294
For example, Stahlberg and Byrne (2017) proposed a method of unfolding an ensemble of multiple translation models into a single large model once and shrinking it down to a small one.
all methods require extra implementation on a deep-learning framework, and it is not easy to apply them to other models.
contrasting
train_16295
Although various evaluation measures have been proposed, ROUGE-N (Lin, 2004) and Basic Elements (BE) (Hovy et al., 2006) remain the de facto standard measures, since they strongly correlate with various manual evaluations and are easy to use.
the evaluation scores computed by these automatic measures are not so useful for improving system performance because they merely confirm if the summary contains small textual fragments and so they do not address semantic correctness.
contrasting
train_16296
Thus, PEAK regards subjectpredicate-object triples as alternatives to SCUs and constructs a pyramid by clustering semantically equivalent triples.
the performance of subject-predicate-object triple extraction is not satisfactory for practical demands, and the semantic similarity utilized for clustering the triples does not correlate well with human judgment (see Section 2).
contrasting
train_16297
Thus, to construct high quality pyramids, PEAK is required to not only accurately extract the triples but also measure the semantic similarity between them accurately.
in general, both extracting the triples and measuring the semantic similarity are still challenging NLP tasks.
contrasting
train_16298
(C)) due to the mismatch between the word distributions of the summaries in the source and target domains.
the discriminator still regularizes the generated word sequence.
contrasting
train_16299
(2017) proposed a multitask neural architecture where the three tasks are trained together with the same representation.
they do not model comment-comment interactions in the same question-comment thread, nor do they train task-specific embeddings, as we do.
contrasting