id
stringlengths
7
12
sentence1
stringlengths
6
1.27k
sentence2
stringlengths
6
926
label
stringclasses
4 values
train_5500
The BidAF model is the strongest (23.5), better than TFIDF between the question and the support document to select sentences.
these approaches are limited by the support document, as an oracle computed on the full web sources achieves 54.8.
contrasting
train_5501
We find only 19% of multi-task model answers are fully accurate; even if the model output answers the question, it can generate a sentence with an incorrect statement.
the extractive model copies sentences from humanwritten text.
contrasting
train_5502
However, answering visual questions requires not only information about the visual content but also common knowledge, which is usually too hard to directly learn from only a limited number of images with human annotated answers as supervision.
comparatively little previous VQA research has worked on enriching the knowledge base.
contrasting
train_5503
Both of them provide significant improvements to the baseline model.
there is still a reasonable gap between generated and human captions.
contrasting
train_5504
Human evaluation with GT captions also shows better performance than with GenP captions as seen in Table 4.
the results from the man- Table 3: TextQA with GT model outperforms TextQA with GenP (we run each model five times with different seeds and average the scores.
contrasting
train_5505
More precisely, we share the features (dimensions) between the paired source and target embeddings (vectors).
in contrast to the previous studies, we also model the private features of the word embedding to preserve the private characteristics of words for source and target languages.
contrasting
train_5506
The proposed method has a limitation in that each word can only be paired with one corresponding word.
synonym is a quite common phenomenon in natural language processing tasks.
contrasting
train_5507
In the vanilla model, as shown in Figure 6(a), only the similar monolingual embeddings are clustered, such as the English words "died" and "killed", and the Chinese words "zhuxi" (president) and "zongtong" (presi-dent).
in the proposed method, no matter whether the similar source and target words are paired or not, they tend to cluster together; as shown in Figure 6(b) and 6(c).
contrasting
train_5508
Abusive behavior is an omnibus term that often includes harassment, threats, racial slurs, sexism, unexpected pornographic content, and insults-all of which can be directed at other users or at whole communities Nobata et al., 2016).
nLP has largely considered a far narrower scope of what constitutes abuse through its selection of which types of behavior to recognize (Waseem et al., 2017;Schmidt and Wiegand, 2017;Fortuna and nunes, 2018).
contrasting
train_5509
Such community-specific norms and context are important to take into account, as NLP researchers are doubling down on context-sensitive approaches to define (e.g., Chandrasekharan and Gilbert, 2019) and detect abuse (e.g., .
not all community norms are socially acceptable within the broader world.
contrasting
train_5510
Giving a dialogue agent this ability would enable it to continuously improve and adapt over its lifetime, rather than requiring additional annotation costs for each and every improvement.
naively training a dialogue agent on its own conversations yields poor results.
contrasting
train_5511
Hashimoto and Sassano (2018) used user responses to detect mistakes made by a deployed virtual assistant, showing that model mistakes can be identified in chit-chat, weather, or web search domains.
they did not explore how to use these identified mistakes to improve the model further; their agent was not equipped to feed itself.
contrasting
train_5512
4 The boost in quality is naturally most pronounced when the HH DIALOGUE training set is small (i.e., where the learning curve is steepest), yielding an increase of up to 9.4 accuracy points, a 31% improvement.
even when the entire PER-SONACHAT dataset of 131k examples is used-a much larger dataset than what is available for most dialogue tasks-adding deployment examples is still able to provide an additional 1.6 points of accuracy on what is otherwise a very flat region of For each column, the model using all three data types (last row) is significantly better than all the others, and the best model using only one type of self-feeding (FEEDBACK examples or HB DIALOGUE examples) is better than the supervised baseline in the first row (p < 0.05).
contrasting
train_5513
A straightforward method to introduce such a classifier is to build a sentence-level emotion discriminator as follows: where W ∈ R K×d is a weight matrix and K denotes the number of emotion categories.
it is infeasible to enumerate all possible sequences as the search space is exponential to the size of vocabulary, and the length of Y is not known in advance.
contrasting
train_5514
Such systems work well if all possible combinations of user inputs and conditions are considered in the training stage (Paek and Pieraccini, 2008;Wang et al., 2018).
as shown Figure 1: An example of task-oriented dialogue system.
contrasting
train_5515
For each element h n in bi-RNNs outputs, we compute a scalar selfattention score as follows: The final utterance representation E(x) is the weighted sum of bi-RNNs outputs: (3) After getting the encoding of each sentence in C t , we input these sentence embeddings to another GRU-based RNNs to obtain the context embedding E(C t ) as follows: In the existing work (Williams et al., 2017;Bordes et al., 2016;Li et al., 2017), after getting the context representation, the dialogue system will give a response y t to the user based on p(y t |C t ).
the dialogue system may give unreasonable responses if unexpected queries happen.
contrasting
train_5516
4(a), the system will refuse to respond if JSD avg is higher than a threshold τ 1 .
the dialogue model tends to give close weights to all response candidates in the early stage of training, as shown in Fig.
contrasting
train_5517
To simulate the new unconsidered user needs, one possible method is to delete some question types in the training set of existing datasets (e.g., bAbI tasks (Bordes et al., 2016)) and test these questions in the testing phase.
the dialogue context plays an important role in the response selection.
contrasting
train_5518
It is possible for the dialogue model to retrieve responses directly without any preprocessing.
the fact that nearly all utterances contain entity information would lead to a slow model convergence.
contrasting
train_5519
These advances in task-oriented dia-logue systems have resulted in impressive gains in performance.
prior work has mainly focused on building task-oriented dialogue systems in a closed environment.
contrasting
train_5520
The intended outcome is that the agent produces utterances consistent with its given persona.
these models still face the consistency issue, as shown in Figure 1.
contrasting
train_5521
pick aŷ such that: Unlike Markovian processes, no sub-exponential algorithm exists to find the optimal decoded sequence, and thus we instead use approximations.
arg-max The simplest approach to decoding a likely sequence is to greedily select a word at each timestep:ŷ because this deterministic approach typically yields repetitive and short output sequences, and does not permit generating multiple samples, it is rarely used in language modelling.
contrasting
train_5522
The goal of diverse decoding strategies is to generate high-quality candidate sequences which span as much of the space of valid outputs as possible.
we find there to be a marked trade-off between diversity and quality.
contrasting
train_5523
Top-10 random sampling has the highest fluency, coherence, and interestingness, as well as significantly lower perplexity than other random sampling methods.
this trend did not extend to image captioning, where top-10 random sampling results in both worse SPICE scores and lower diversity measures than setting the temperature to 0.7.
contrasting
train_5524
Related contents of the candidates are appropriately integrated into the response, but the model is discouraged as the response is different from the ground-truth.
rather than just provide materials for the generation, N-best response candidates also contain references for evaluating responses.
contrasting
train_5525
We propose a Retrieval-Enhanced Adversarial Training method for neural response generation in dialogue systems.
to existing approaches, our REAT method directly uses response candidates from retrieval-based systems to improve the discriminator in adversarial training.
contrasting
train_5526
To deal with the generic response problem, various methods have been proposed, including diversity-promoting objective function (Li et al., 2016a), enhanced beam search (Shao et al., 2016), latent dialogue mechanism (Zhou et al., , 2018a, Variational Autoencoders (VAEs) based models Serban et al., 2017), etc.
these methods still view multiple responses as independent ones and fail to model multiple responses jointly.
contrasting
train_5527
incorporate topic information to generate informative responses.
these models suffer from the deterministic structure when generating multiple diverse responses.
contrasting
train_5528
This is a step towards having pretraining objectives that explicitly consider and leverage discourse-level relationships.
it is still unclear whether language modeling is the most effective method of pretrained language representation, especially for tasks that need to exploit multi-turn dependencies, e.g.
contrasting
train_5529
10 The figure indicates that the conversations often extend over days, or even more than a month apart (note the point in the topright corner).
our annotations rarely contain links beyond an hour, and the output of our model rarely contains links longer than 2 hours.
contrasting
train_5530
The discriminator in GAN is often used to evaluate the generated utterances and guide dialogue learning.
these methods mainly focus on the surface information of generated utterances to guide the dialogue learning, and fail to consider the utterance connection within the dialogue history.
contrasting
train_5531
Existing neuralbased dialogue systems only consider this signal in a weak and implicit way, where they use hierarchical encoders to model the dialogue history (Sordoni et al., 2015a;Serban et al., 2016;Li et al., 2017;Serban et al., 2017;Xing et al., 2018).
we argue that these methods are mainly designed to model the overall semantic context information of the dialogue history but not good at modeling intermediate sequential order.
contrasting
train_5532
Rather than measuring the relative probabilities of past forms across different verbs, CR@5 considers the relative rankings of different past forms for each verb.
cR@5 also yielded unstable results: 39-47% on A&H's data, and 29-44% on K&c's data, as shown in Figure 3.
contrasting
train_5533
In the previous experiment, we saw that individual models often rank implausible past tenses higher than plausible ones.
we see here that on aggregate nearly all the model's proposed past tenses are those suggested by A&H.
contrasting
train_5534
φ A is therefore sensitive to the actual SA scores, and to the popularity of mentioned concepts.
φ 1 is only sensitive to changes in the set of activated concepts.
contrasting
train_5535
On the one hand, this indicates that our experiments are not able to confirm the forward semantic priming hypothesis.
given the good results of AEoS, our experiments confirm the backwards priming hypothesis and sentence wrap-up.
contrasting
train_5536
End-to-end training with Deep Neural Networks (DNN) is a currently popular method for metaphor identification.
standard sequence tagging models do not explicitly take advantage of linguistic theories of metaphor identification.
contrasting
train_5537
This approach is not tailormade for metaphors; it is the same procedure to that used in other sequence tagging tasks, such as Part-of-Speech (PoS) tagging (Plank et al., 2016) and Named Entity Recognition (NER) (Lample et al., 2016).
we have available linguistic theories of metaphor identification, which have not yet been exploited with Deep Neural Network (DNN) models.
contrasting
train_5538
This is necessary because it will be concatenated with the purple attentive context representation, in encoding space; we found that performance is better when both meanings are in the encoding space.
the RNN HG does concatenate vectors from two different spaces; this works because they are representations of the same word, rather than word versus context.
contrasting
train_5539
RNN HG performs slightly worse than RNN MHCA.
it exceeds RNN MHCA by 0.3% on the VUA VERB track (F1=70.8%).
contrasting
train_5540
In this approach, we take 2 × 150 dimension BiLSTM hidden states so that h t and g t are aligned in dimensionality.
such an approach yields 73.7%, 70.0%, 78.9% and 71.8% F1 scores on VUA ALL POS, VUA VERB, MOH-X and TroFi datasets, which is worse than the concatenation approach (RNN HG) in Table 2.
contrasting
train_5541
It supports our arguments that the noise of treating non-target words as literals in TroFi negatively impact our models' ability to learn the difference between literals and metaphors.
all words in VUA news are annotated, so that the advantages of our models are more obvious.
contrasting
train_5542
Metaphors may be misclassified as literal by RNN HG.
rNN MHCA may flag the clash between literals and their contexts, if there are many metaphors in the contexts, so that literal target words may be misclassified as metaphoric.
contrasting
train_5543
Diachronic word embeddings have been widely used in detecting temporal changes.
existing methods face the meaning conflation deficiency by representing a word as a single vector at each time period.
contrasting
train_5544
It is well known that word meaning can be represented with a range of senses.
existing methods only assign one embedding to a 1840 1860 1880 1900 1920 1940 1960 1980 word for a time period, thus they face challenges in representing senses and tracking the change of them.
contrasting
train_5545
(2018) propose global anchor method for detecting linguistic shifts and domain adaptation.
the above methods could only assign one neural embedding to a word at each time period, which cannot model the change of the word senses.
contrasting
train_5546
We thus measure the discrepancy between the conditional distribution p(z B t+1 |m A t ) and the marginal distribution p(z B t+1 ) not taking m A t into account.
we want to assess agent B's behaviour under other possible received messages m A t .
contrasting
train_5547
In their setup, fully symmetric agents are encouraged to use flexible, multi-turn communication as a problem-solving tool.
the independent complexities of navigation make the environment somewhat cumbersome if the aim is to study emergent communication.
contrasting
train_5548
In informal experiments, we found shallow CNNs incapable to handle even the simplest random split.
it is hard to train very deep LSTMs, and it is not clear that the latter models need the same depth CNNs require to "view" long sequences.
contrasting
train_5549
(The prior simply corresponds to add-λ smoothing.)
in many cases we do have missing data.
contrasting
train_5550
2017;, with the possible exception of Phillips and Pearl 2014a,b).
very little previous research compares the performance of a wide range of algorithms using diverse and cognitively plausible segmentation methods within a large set of typologically diverse languages and closely matched corpora, with unified coding criteria for linguistic units.
contrasting
train_5551
If chance is defined as the highest of the two baselines (p=0, 1/6), 1 algorithm performed above chance in all 8 languages (DiBS).
if we relax this criterion, AG, FTPa, FTPr, MIr and MIa also performed above chance for nearly all languages.
contrasting
train_5552
An error analysis would be beyond the scope of this paper.
three categories of incorrect cases have been measured and can be found online.
contrasting
train_5553
First, no algorithm performed systematically below chance level in our study.
we cannot say that they all performed above chance for all languages either.
contrasting
train_5554
More deeply embedded sentences appear to be shorter (in number of words), and this is in accordance with the hypothesis that they impose a heavier processing load than shallower clauses.
surprisingly, when sentence complexity (in number of clauses) is accounted for, there is no clear bias against deeper embeddings.
contrasting
train_5555
Modern deep learning algorithms often do away with feature engineering and learn latent representations directly from raw data that are given as input to Deep Neural Networks (DNNs) McCann et al., 2017;Peters et al., 2018).
it has been shown that linguistic knowledge (manually or semi-automatically encoded into lexicons and knowledge bases) can significantly improve DNN performance for Natural Language Processing (NLP) tasks, such as natural language inference (Mrkšić et al., 2017), language modelling (Ahn et al., 2016), named entity recognition (Ghaddar and Langlais, 2018) and relation extraction (Vashishth et al., 2018).
contrasting
train_5556
Transfer learning results: For each of the re- maining pages in our data we first utilize our pretrained model from the last step.
we train the dense layers with randomly selected 20% datapoints from the page to be tested.
contrasting
train_5557
There has also been no work to understand the possibility of predicting edit quality based on edits in pages in a common category.
there has been no work to leverage advanced machinery developed in language modeling toward predicting edit quality.
contrasting
train_5558
We can see that the IR baseline tends to output responses with more overlapped terms with the query due to the use of Jaccard similarity.
the obtained responses may not be relevant to the query, as shown in the first case.
contrasting
train_5559
A strong thesis statement can help lay a strong foundation for the rest of the essay by organizing its content, improving its comprehensibility, and ensuring its relevance to the prompt.
an essay with a weak thesis statement lacks focus.
contrasting
train_5560
In this regard, while different adjectives such as heavy and strong can convey basically the same meaning (e.g., 'intensification' in heavy load and in strong fragrance), great has different senses in great loss and in great time (with 'intensification' and 'positive' meanings, respectively).
to interpret the meaning of a sentence, a system should take into account the properties of these expressions: for instance, the meaning of the verb [to] take in the collocation take [a] cab is different from the same verb in a free combination such as take [a] pencil, so natural language understanding or abstract meaning representation systems could benefit from the correct identification of collocations (Bonial et al., 2014;O'Gorman et al., 2018).
contrasting
train_5561
To evaluate the extraction, some researchers use manual selection of true collocations from ranked lists, while others take advantage of examples extracted from collocation dictionaries.
most of these approaches are carried out only in one language, and they do not always permit to obtain precise recall values.
contrasting
train_5562
On the one hand, this allows for accurate precision and recall values to be obtained, also taking into account ambiguous combinations which may be collocations or not depending on the context.
a gold-standard enables the research community to evaluate different strategies in a more comparable way.
contrasting
train_5563
In this respect, we could miss some true collocations incorrectly labeled with a wrong dependency relation.
the annotated cases were manually checked, and therefore they have a correct syntactic analysis (except for human errors).
contrasting
train_5564
On the one hand, the base of each collocation has a numerical id followed by the syntactic pattern (e.g., obj, amod) and by its lexical function.
the collocate is labeled with the same id as the base it depends on.
contrasting
train_5565
"Answer 0: the trophy","Answer 1: the suitcase".
wSC estimates common sense indirectly and it does not consider explanation on why one option is true while the other is wrong.
contrasting
train_5566
We conjure that both of them have the ability to judge whether a sentence is with or against common sense.
for Sen-Making, ELMo does better than BERT; BERT beats ELMo in Explanation.
contrasting
train_5567
Fine-tuned ELMo has an obvious improvement in Sen-Making and a non-obvious improvement in Explanation, probably because introducing knowledge will help models to identify common sense but cannot help them in inference.
fine-tuning makes BERT perform the same in Sen-Making and even worse in Explanation.
contrasting
train_5568
Language models (LMs) have been used as baselines in several humor recognition studies (Shahaf et al., 2015;Yang et al., 2015;Cattle and Ma, 2018).
to most previous humor recognition studies, we didn't engineer any linguistic features.
contrasting
train_5569
In contrast to most previous humor recognition studies, we didn't engineer any linguistic features.
lM-based approach can be seen as indirect reflection of some common humor features such as incongruity, unexpectedness, or nonsense.
contrasting
train_5570
Since the self-attention module is just a two-layer feed-forward network, the computational complexity of training CVDD is very low.
evaluating a pre-trained model for obtaining word embeddings may add to the computational cost (e.g.
contrasting
train_5571
NN may be the most straightforward approach.
it is often challenged by a phenomenon called hubness (Radovanovic et al., 2010).
contrasting
train_5572
We define a distance matrix D, between the mapped source embeddings and target embeddings, where "dist" is some distance metric.
for the ith source word, Nearest Neighbor (NN) criterion determines (the index of) its translation in the Y set, by arg min several works (Radovanovic et al., 2010;Dinu et al., 2014) have observed that the accuracy of NN is often significantly degraded by a phenomenon called hubness.
contrasting
train_5573
2017show that ISF significantly outperforms NN in BLI tasks.
it is still not clear why ISF works so well.
contrasting
train_5574
The quantity, exp(−β j / ), normalizes the j-th column of the kernel matrix exp(−D i,j / ).
(ISF) simply sets the column normalizer as the column sum, i exp(−D i,j / ).
contrasting
train_5575
While interacting with the human annotator, one may decide to split the total annotation budget B between two types of queries: (i) those which improve the underlying student learner based on the suggestions of the policy, and (ii) those which improve the policy.
this approach may not make the best use of the annotation budget, as it is not clear whether the budget used to improve the policy (via the second type of queries) would pay back the improvement which could have been achieved on the student learner (via the queries of the first type).
contrasting
train_5576
Existing approaches for learning word embeddings often assume there are sufficient occurrences for each word in the corpus, such that the representation of words can be accurately estimated from their contexts.
in real-world scenarios, out-of-vocabulary (a.k.a.
contrasting
train_5577
In another work (Khodak et al., 2018), a simple linear function is used to infer embedding of an OOV word by aggregating embeddings of its context words in the examples.
the simple linear averaging can fail to capture the complex semantics and relationships of an OOV word from its contexts.
contrasting
train_5578
However, their goal is to get a contextualized embedding based on a given sentence, with word or sub-word embeddings as input.
our work utilizes multiple contexts to learn OOV embeddings.
contrasting
train_5579
In our network, both the source specific part and the common part contribute positively to the source classification objective (i.e., minimize the mis-classification loss).
for the domain divergence objective, the common part contributes positively (i.e., tries to minimize divergence) whereas the source specific part contributes negatively (i.e., tries to maximize divergence).
contrasting
train_5580
• /σ is the correlation of two HO estimators of R in different twofold CVs, where j ̸ = j ′ and k, k ′ = 1, 2.
the six confusion matrices in M are correlated because the three partitions are performed on a single text corpus and the training sets contain overlapping samples.
contrasting
train_5581
The posterior distributions make an exact comparison possible (Zhang and Su, 2012;Wang and Li, 2016).
the distribution of F 1 is difficult to tackle, because it is a complex function.
contrasting
train_5582
The latest work is Bidirectional Beam Search (BiBS) (Sun et al., 2017) which proposes an approximate inference algorithm in BiRNN for image caption infilling.
this method is based on some unrealistic assumptions, such as that given a token, its future sequence of words is independent of its past sequence of words.
contrasting
train_5583
Moreover, GSN and BiBS can be only applied to decoders with bidirectional structures, while almost all sequence generative models use a unidirectional decoder.
our proposed inference method decouples from these assumptions and can be applied to the unidirectional decoder.
contrasting
train_5584
We can see that the complete sentences generated by them are better than all other algorithms.
biRNN-GSN uses the bidirectional structure as the decoder, which makes it challenging to apply to most sequence generative models, but the proposed method is gradient-based, which can be broadly used in any sequence generative models.
contrasting
train_5585
Based on the gold parse tree (Figure 1), "an extensive presence" is the maximum span of the first coreferring mention in Example 1.
the corresponding maximum boundary for this same mention is "an extensive presence, of course in this country" based on the system parse tree (Figure 2).
contrasting
train_5586
Compared to manually annotated minimum spans: • MINA is applicable to any English coreference corpus.
3 manually annotated minimum spans can be only used in their own corpora.
contrasting
train_5587
Sequence-to-sequence paradigms (Sutskever et al., 2014) provide the flexibility that the output sequence can be of a different length than the input sequence.
they still require the output vocabulary size to be fixed a priori, which limits their applicability to problems where one needs to select (or point to) an element in the input sequence; that is, the size of the output vocabulary depends on the length of the input sequence.
contrasting
train_5588
Remark: In our earlier attempts, we experimented with a self-attention based encoder-decoder with positional encoding similar to (Vaswani et al., 2017) to reduce the encoding time from O(n) (linear) to O(1) (constant) time.
the perfor-mance was inferior to the RNN-based encoder.
contrasting
train_5589
SPADE and DCRF are both sentence-level parsers.
dPLP and 2-Stage Parser are document-level parsers, and they do not report sentence-level performance.
contrasting
train_5590
Using event similarity alone is too coarse to support many relevant inferences.
if the relation between the events is given, more clues can be applied to support reliable inferences.
contrasting
train_5591
For example, a simple baseline method is randomly choosing a question asked for another paragraph, and using it as an unanswerable question.
it would be trivial to determine whether the retrieved question is answerable by using wordoverlap heuristics, because the question is irrelevant to the context (Yih et al., 2013).
contrasting
train_5592
In summary, the above mentioned systems aim to generate answerable questions with certain context.
our goal is to generate unanswerable questions.
contrasting
train_5593
", and then answer "What is the former name of that animal?".
when considering the evidence paragraphs, the question is solvable in a single hop by finding the only paragraph that describes an animal.
contrasting
train_5594
We report the F1 score of single-paragraph BERT on these new distractors in Table 4: the accuracy declines from 67.08 F1 to 46.84 F1.
when the same procedure is done on the training set and the model is re-trained, the accuracy increases to 60.10 F1 on the adversarial distractors.
contrasting
train_5595
As shown in Table 4, the original model's accuracy degrades significantly (drops to 40.73 F1).
similar to the previous setup, the model trained on the adversarially selected distractors can recover most of its original accuracy (increases to 58.42 F1).
contrasting
train_5596
1 Document Retrieval Knowledge bases (KBs) are considered as an essential resource for answering factoid questions.
accurately constructing KB with a welldesigned and complicated schema requires lots of human efforts, which inevitably limits the coverage of KBs (Min et al., 2013).
contrasting
train_5597
As a matter of fact, KBs are often incomplete and insufficient to cover full evidence required by open-domain questions.
the vast amount of unstructured text on the Internet can easily cover a wide range of evolving knowledge, which is commonly used for open-domain question answering .
contrasting
train_5598
Recently, text-based QA models along (Seo et al., 2016;Xiong et al., 2017;Yu et al., 2018) have achieved remarkable performance when dealing with a single passage that is guaranteed to include the answer.
they are still insufficient when multiple documents 1 https://github.com/xwhan/Knowledge-Aware-Reader.
contrasting
train_5599
With the entity linking annotations in passages, we fuse the entity knowledge with the token-level features in a similar fashion as the query reformulation process.
instead of applying a standard gating mechanism (Yang and Mitchell, 2017;Mihaylov and Frank, 2018), we propose a new conditional gating function that explicitly conditions on the question q .
contrasting