id: string (length 7-12)
sentence1: string (length 6-1.27k)
sentence2: string (length 6-926)
label: string (4 classes)
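The four fields above describe each record: an example id, a pair of sentences drawn from paper text, and one of four relation labels (every row shown below carries the contrasting label). Below is a minimal sketch for reading such records, assuming the split has been exported as a JSON Lines file with these field names; the file name train.jsonl is a hypothetical placeholder, not part of this listing.

import json

def read_pairs(path="train.jsonl"):
    # Yield one record per line; each record is expected to carry the
    # fields listed above: id, sentence1, sentence2, label.
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

if __name__ == "__main__":
    for record in read_pairs():
        print(record["id"], record["label"])
        print("  sentence1:", record["sentence1"])
        print("  sentence2:", record["sentence2"])
        break  # preview only the first pair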
train_21700
This can be partially seen from Table 3, where the OOV words are better handled by CharCNN in terms of in-vocabulary nearest neighbors.
to fully validate the abovementioned expectation we conduct additional experiments: we train two models, CharCNN and MorphSum, on PTB and then we evaluate them on the test set of Wikitext-2 (245K words, 10K wordtypes).
contrasting
train_21701
one can predict the mean decrease in PPL from the TTR of a text with a simple linear regression: The empirical perplexities in Table 2 are way above the current state-of-the-art on the same datasets (Melis et al., 2018).
the approach of Melis et al.
contrasting
train_21702
In fact, the best model exhaustively generates all candidate strings, where the first part is a prefix of C 1 and the second part is a suffix of C 2 and scores them to pick the best candidate while using a backward model learned to generate the components given the blend.
we propose a more straightforward model that explicitly incorporates inherent linguistic constraints entirely obviating the need for decoding using exhaustive candidate generation yet yielding competitive performance.
contrasting
train_21703
In the absence of such information, just clipping to one syllable yields better performance.
when this information is exactly known (CLIPPHONE(O)), we note an improvement as expected (µ = 2.79).
contrasting
train_21704
We differ from all of these works in several ways.
to the blending model proposed by Gangal et al.
contrasting
train_21705
The most straightforward solution is to treat language as a sequence of characters (Sutskever et al., 2011).
models that operate at two levels, a character level and a word level, have better performance (Chung et al., 2016).
contrasting
train_21706
To make training and inference with our model tractable, we have assumed independence between previous adjacent events and the next word generation given the previous surface word forms (§2.1).
thus, the posterior probability over the analysis is only determined by the left context; subsequent decisions are independent of the process used to generate a word at time t. Since disambiguating information may be present in either direction, we introduce a model variant that conditions on information in both directions.
contrasting
train_21707
When the model went to deeper flat NER layers, the performance dropped gradually as the number of gold entities decreased.
the performance for predictions on each flat NER layer was different in terms of extended evaluation metrics.
contrasting
train_21708
Such models use simple reading mechanisms to encode the premise and hypothesis independently.
such a complex task requires explicit modeling of dependency relationships between the premise and the hypothesis during the encoding and inference processes to prevent the network from losing relevant, contextual information.
contrasting
train_21709
The main idea is to represent the entities and relations in a vector space, and one can use machine learning techniques to learn the continuous representation of the knowledge graph in the latent space.
even though steady progress has been made in developing novel algorithms for knowledge graph embedding, there is still a common challenge in this line of research.
contrasting
train_21710
In a typical RL setting, each action performed by the agent will change its state, and the agent will perform a series of actions (called an epoch) until it reaches certain states or the number of actions reaches a certain limit.
in the analogy above, actions do not affect the state, and after each action we restart with another unrelated state, so each epoch consists of only one action.
contrasting
train_21711
Hence, it is assumed that the context of the situation is explicitly expressed in words.
language understanding involves implicit knowledge, which is not mentioned but still seems obvious to humans, e.g., 'people can sit back on a bench, but companies cannot', 'companies are in cities'.
contrasting
train_21712
Same as SimpleFrameId, our system is based on pretrained embeddings to build the input representation out of the predicate context and the predicate itself.
different to SimpleFrameId, our representation of the predicate context is multimodal: beyond textual embeddings we also use IMAGINED and visual embeddings.
contrasting
train_21713
Regarding the setting with lexicon, the null hypothesis cannot be rejected at a significance level of α = 0.05 (p = 0.2181).
concerning accuracy scores without using the lexicon, the null hypothesis is rejected at a significance level of α = 0.05 (p < 0.0001).
contrasting
train_21714
This work builds on our initial study of semantic divergences (Carpuat et al., 2017), where we provide a framework for evaluating the impact of meaning mismatches in parallel segments on MT via data selection: we show that filtering out the most divergent segments in a training corpus improves translation quality.
we previously detected mismatches using a cross-lingual entailment classifier, which is based on surface features only, and requires manually annotated training examples (Negri et al., 2012).
contrasting
train_21715
The generation of the output sequence is conditioned on the previous words and the input.
when certain sequences are very common, the language modelling conditional probability will prevail over the input conditioning.
contrasting
train_21716
This is a reasonable approach, under the assumption that we are modeling a single underlying distribution in the data.
in many applications and for many natural language datasets, there exist multiple underlying distributions, characterizing a variety of language styles.
contrasting
train_21717
a sequence of {field, value} pairs and use a standard seq2seq model for this task.
such a model is too generic and does not exploit the specific characteristics of this task as explained below.
contrasting
train_21718
We observe that our final model gives the best performance -though the bifocal attention model performs poorly as compared to the basic seq2seq model on French.
the overall performance for French and German is much lower than that for English.
contrasting
train_21719
For example, most English descriptions start with name followed by date of birth but this is not the case in French.
this is only a qualitative observation and it is hard to quantify this characteristic. If the proposed model indeed works well then we should see attention weights that are consistent with the "stay on" and "never look back" behavior.
contrasting
train_21720
some question-answer pairs are correct and some are wrong.
this kind of dataset is hard to obtain in most situations because of the lack of manual annotation efforts.
contrasting
train_21721
The application of Generative Adversarial Network (GAN) (Goodfellow et al., 2014; in this scenario regards every generated question-answer pair as a negative instance.
generative Domain-Adaptive Nets (gDAN) regards every generated question-answer pair appended with special domain tag as a positive instance.
contrasting
train_21722
GCN (competitive) is analogous to (Goodfellow et al., 2014), where all the generated questions are regarded as negative instances (with label as zero).
gCN (collaborative) is analogous to , where the generated questions are regarded as positive instances.
contrasting
train_21723
Our main observation from Table 1 is that simply regarding all the generated questions as negative instances ("competitive") could not bring performance boost.
regarding the generated questions as positive ones ("collaborative") improves the QA model.
contrasting
train_21724
Several researchers adopted this architecture for the reading comprehension (RC) style QA tasks, because it can extract contextual information from each sentence and use it in finding the answer (Xiong et al., 2016;Kumar et al., 2016).
none of this research is applied to the QA pair ranking task directly.
contrasting
train_21725
The word with the highest probability is then selected as the answer.
in multiple-choice QA, C is in the form of open, natural language sentences instead of a single word.
contrasting
train_21726
Both EntNet and QRN find a final answer by decoding the final vector(s) into a vocabulary entry via softmax classification.
many of the best performing factoid QA systems, e.g., (Seo et al., 2017a;Clark and Gardner, 2017), return an answer by finding a span of the original paragraph using attention-based span prediction, a method suitable when there is a large vocabulary.
contrasting
train_21727
These words are closely related to the category of dataset "Office Product", which implies both models can get a good interpretation of user reviews.
when we carefully compare the two tables, there exist some differences.
contrasting
train_21728
The hidden layers and word embeddings are opaque and difficult from which to draw conclusions.
a generative model that represents words explicitly as probability distributions allows for easier post-analysis.
contrasting
train_21729
We empirically set the cutoff based on dev set performance to optimize F1.
a detection system may desire to optimize recall at the expense of precision, thus choosing a lower λ and forcing the system to predict attacks more often.
contrasting
train_21730
This seems particularly appropriate when dealing with uncooperative annotators, as might be encountered, for example, in crowdsourcing (Snow et al., 2008;Zhang et al., 2016).
with a team of trained annotators, we believe that honest disagreements could contain valuable information better not ignored.
contrasting
train_21731
The aspects which correlate most strongly with the final recommendation are substance (which concerns the amount of work rather than its quality) and clarity.
soundness/correctness and originality are least correlated with the final recommendation.
contrasting
train_21732
Surprisingly, however, most of the reviews are not made publicly available.
we collected and organized PeerRead such that it is easy for other researchers to use it for research purposes, replicate experiments and make a fair comparison to previous results.
contrasting
train_21733
We focus on the task of abstractive summarization of a long document.
to extractive summarization, where a summary is composed of a subset of sentences or words lifted from the input text as is, abstractive summarization requires the generative ability to rephrase and restructure sentences to compose a coherent and concise summary.
contrasting
train_21734
Although it was able to capture the statistics of the player correctly (e.g., 15 penalties, 16 attempts), it still missed the player who scored the only goal in the game (i.e., kevin mirallas).
multi-agent model was able to generate a concise summary with several key facts.
contrasting
train_21735
On the other hand multi-agent model was able to generate a concise summary with several key facts.
similar to the single-agent model, it failed to capture the player who scored the only goal in the game.
contrasting
train_21736
As a result, encoding word orders, as is done by the encoders except AvgEmb, might bring noise to keyphrase extraction on Weibo dataset.
avgEmb is the worst encoder on Twitter dataset, as word order is crucial in English.
contrasting
train_21737
The generated texts are merely lists of bi-grams and not meaningful sentences which cannot be considered to be summaries.
they achieve superhuman ROUGE recall scores.
contrasting
train_21738
Extractive summarization uses sentence-level features (Yang et al., 2017) that have been leveraged for producing query-focused or topic-based summaries.
for RNN-based frameworks, such tuned summary generation is non-obvious due to the absence of explicit content based features.
contrasting
train_21739
Our contribution is a modified encoder to encode the article in a topic-sensitive manner.
for the sake of completeness, we shall provide an overview of the entire network, and will reuse notations from their work to a large extent.
contrasting
train_21740
As an alternative to the proposed architecture, one can append the topic vector to the initial hidden state of the encoder, or the initial state of the decoder.
in our experiments, these approaches did not produce the desired tuning.
contrasting
train_21741
Our last baseline is a topic-signature based approach which also works by extracting sentences from the article which are aligned to the target topic.
the selection of sentences is based on topic signatures as described by Lin and Hovy (2000) and Conroy et al.
contrasting
train_21742
Both methods outperformed Seq2Seq without syntactic features in terms of translation quality.
both methods fail to provide an entire parse tree until the decoding phase is finished.
contrasting
train_21743
(2017) consider higherorder dependency relationships in Seq2Seq by incorporating a graph convolution technique (Kipf and Welling, 2016) into the encoder.
the dependency information of the graph convolution technique is still given in a pipeline manner.
contrasting
train_21744
This approach is known to have three advantages: its applicability to many useful submodular objective functions, the efficiency of the greedy algorithm, and the provable performance guarantee.
when it comes to compressive summarization, we are currently missing a counterpart of the extractive method based on submodularity.
contrasting
train_21745
Thanks to the submodularity of their objective function, their method enjoys a (1/2)(1 - e^(-1))-approximation guarantee.
because of the costly DP procedure, their method is less scalable than the standard greedy methods such as the extractive method (Lin and Bilmes, 2010) and ours.
contrasting
train_21746
Relation statements, which are strings intended for human readers, are similar to the 3-tuples, "relations", from prior work on information extraction (Banko et al., 2007).
in this work, we show that the assumptions underlying the extraction of 3-tuples for machines (§3) lead to poor performance in summarizing mention sets for people (§5).
contrasting
train_21747
Our compression-based method achieves higher yield than off-the-shelf relation extractors.
because all sentences in a mention set include (t_1) and (t_2), it is always possible to generate a very large candidate set by simply extracting all spans between (t_1) and (t_2) from the mention set, regardless of whether such relation statements are coherent.
contrasting
train_21748
A good summary for this mention set should describe a central event from this time period: when General Cedras fled to the Dominican Republic.
jean-Bertrand Aristide -United States are mentioned together in 67 months in the corpus, covering a number of important events spread across decades (figure 3c).
contrasting
train_21749
Then, the recall is defined as and measures how much of the desired content was returned by the system.
precision is defined as and measures how much of the returned content was actually desirable.
contrasting
train_21750
A possible explanation is that there are short sentences in the input documents which are considerably redundant to other high precision sentences.
overall the trend in the results (increasing evaluation scores with increasing α, which means increasing impact of ROUGE precision) substantiates the general hypothesis of this paper, namely that sentence selection measures should target precision instead of recall.
contrasting
train_21751
The word drop probability is normally set to 0.25, since using a higher probability may degrade the model performance (Bowman et al., 2016).
we observe that these tricks do not solve the degeneracy for the VHRED in conversation modeling.
contrasting
train_21752
As z^conv is independent of the conditional structure, it does not suffer from the data sparsity problem.
the expressive power of hierarchical RNN decoders makes the model still prone to ignore latent variables z^conv and z^utt_t.
contrasting
train_21753
This indicates that the text analyzed by both models encodes some information about egregiousness.
for the recall and hence the F1-score, the EGR model relatively improved the text-based model by 41% and 18%, respectively.
contrasting
train_21754
In addition, many enterprises have started to use conversational chat platforms such as Slack 2 to enhance team collaboration.
multiple conversations may
contrasting
train_21755
Messages in the same conversation may have higher similarity scores (Shen et al., 2006;Mayfield et al., 2012) or similar context messages (Wang and Oard, 2009).
similarity thresholds for determining new topics vary depending on context.
contrasting
train_21756
Unsupervised approaches (Wang and Oard, 2009) estimate the relationship between messages through unsupervised similarity functions such as cosine similarity, and assign messages to conversations based on a predefined threshold.
supervised methods exploit a set of user annotations (Elsner and Charniak, 2008;Mayfield et al., 2012;Shen et al., 2006;Du et al., 2017;Mehri and Carenini, 2017) to adapt to different datasets.
contrasting
train_21757
Many studies also focus on similar tasks aside from conversation disentanglement, such as entailment prediction (Mueller and Thyagarajan, 2016;Wang and Jiang, 2017) and question-answering (Severyn and Moschitti, 2015;Amiri et al., 2016;Yin et al., 2016).
most of their models are complicated and require a larger amount of labeled training data; limited conversational data can lead to unsatisfactory performance as shown in Section 4.
contrasting
train_21758
In order to model the relation between the query entity pair, we assume that there exists an underlying latent variable (paths connecting two nodes) in the KG, which carries the equivalent semantics of their relations.
due to the intractability of connections in large KGs, we propose to use variational inference to maximize the evidence lower bound.
contrasting
train_21759
Large-scale knowledge graphs support a lot of downstream natural language processing tasks like question answering, response generation, etc.
there are a large number of important facts missing in existing KGs, which has significantly limited the capability of KG applications.
contrasting
train_21760
Since our model only deals with the relation classification problem (e_s, ?, e_d) with e_d as input, it is hard for us to directly compare with MINERVA (Das et al., 2018).
here we compare with the chain-RNN (Das et al., 2016) and CNN Path-Reasoner models; the results are shown in Table 4.
contrasting
train_21761
A TLINK denotes a temporal relation between mentions, i.e., events, time expressions and document creation time (DCT) (Setzer, 2002).
annotating TLINKs is painful work, because annotation candidates are quadratic in the number of mentions in a document.
contrasting
train_21762
For solving this, many dense annotation schemata are proposed to force annotators to annotate more or even complete graph pairs.
dense annotation is time-consuming, and unstable human judgments on "salient" pairs are not improved at all.
contrasting
train_21763
Theoretically, a TORDER between two mentions with any distance in a document can be automatically computed.
it is important to make the new data in a comparable manner to the existing data.
contrasting
train_21764
"), and exchanging them is syntactically valid.
this naive swapping of attribute markers can result in ungrammatical outputs.
contrasting
train_21765
Instead, we train DELETEONLY to reconstruct the sentences in the training corpus given their content and original attribute value by maximizing objective (3). For DELETEANDRETRIEVE, we could similarly learn an auto-encoder that reconstructs x from c(x, v_src) and a(x, v_src).
this results in a trivial solution: because a(x, v_src) and c(x, v_src) were known to come from the same sentence, the model merely learns to stitch the two sequences together without any smoothing.
contrasting
train_21766
In contrast, TEMPLATE-BASED is good at preserving the content because the content words are guaranteed to be kept.
it makes grammatical mistakes due to the unsmoothed combination of content and attribute words.
contrasting
train_21767
Manual inspection shows that on AMAZON, some product genres are associated with either mostly positive or mostly negative reviews.
our systems produce, for example, negative reviews about products that are mostly discussed positively in the training set.
contrasting
train_21768
Many of the examples exhibit complex transformations while preserving both the input semantics and grammaticality, even when the target syntax is very different from that of the source (e.g., when converting a declarative to question).
the failure cases demonstrate that not every template results in a valid paraphrase, as nonsensical outputs are sometimes generated when trying to squeeze the input semantics into an unsuitable target form.
contrasting
train_21769
Building a TSA model that can automatically determine the sentiment of a tweet has received significant attention over the past several years.
since most state-of-the-art TSA models use machine learning to tune their parameters, their performance -and relevance to a real-world implementation setting -is highly dependent on the dataset on which they are trained.
contrasting
train_21770
Higher coverage may be needed still to identify and understand these annotator disagreements.
the differences between these two subsets would be masked using the 3x coverage commonly found in other datasets.
contrasting
train_21771
Another related direction is to train on disparate annotations of the same task (Peng et al., 2017).
the different nature of our tasks requires a modelling of their label spaces.
contrasting
train_21772
Sentence-level and text-level architectures are either adapted to sequential input data (typical for RNN, LSTM, GRNN and related architectures) or spatially arranged input data (as with CNN architectures).
for word embeddings (the default input for word emotion induction) there does not seem to be any meaningful order of their components.
contrasting
train_21773
points better on both dimensions.
the COMMON model was trained on much more data than the embeddings Sedoc et al.
contrasting
train_21774
All of this previous work only identifies affective events and their polarities.
our work aims to identify the reason for the affective polarity of an event.
contrasting
train_21775
Goals could be very specific to a character in a particular narrative story.
but many types of goals originate from universal needs and desires shared by most people (Max-Neef et al., 1991).
contrasting
train_21776
Our gold standard data set is relatively small, so supervised learning that relies entirely on manually labeled data may not have sufficient coverage to perform well across the human need categories.
the AffectEvent dataset contains a very large set of events that were extracted from the same blog corpus, but not manually labeled with affective polarity.
contrasting
train_21777
For example, the words "abandon" and "damage" belong to the Affect category (corresponding to our Emotion category) in LIWC.
based on our definition the event "my house was damaged" actually belongs to the Finance category.
contrasting
train_21778
A solution to this task will represent a substantial step towards automatic warrant reconstruction.
we present experiments with several neural attention and language models which reveal that current approaches based on the words and phrases in arguments and warrants do not suffice to solve the task.
contrasting
train_21779
Some argue that the distinction of warrants from premises is clear only in Toulmin's examples but fails in practice, i.e., it is hard to tell whether the reason of a given argument is a premise or a warrant (van Eemeren et al., 1987, p. 205).
freeman (2011) provides alternative views on modeling an argument.
contrasting
train_21780
(2017) also experimented with reconstructing implicit knowledge in short German argumentative essays.
to our work, they used expert annotators who iteratively converged to a single proposition.
contrasting
train_21781
In an analysis of this dataset Sugawara and Aizawa (2016) found, though, that only 6.2% of the questions require causal reasoning, 1.2% logical reasoning, and 0% analogy.
these reasoning types often make up the core of argumentation (Walton, 2007a).
contrasting
train_21782
Since their logical formalism builds upon an enhanced version of Aristotle's syllogisms, its applicability to natural language argumentation remains limited (see our discussion above).
to our data source, a few synthetic datasets for general natural language reasoning have been recently introduced, such as answers to questions over a described physical world (Weston et al., 2016) or an evaluation set of 100 questions in the Winograd Schema Challenge (Levesque et al., 2012).
contrasting
train_21783
As a result, trying to give a plausible reasoning for the opposite claim ¬C either leads to nonsense or to a proposition that resembles a rebuttal rather than a warrant (Toulmin, 1958).
if both W and AW are available, they usually capture the core of a reason's relevance and reveal the implicit presuppositions (examples follow further below).
contrasting
train_21784
We perform further analysis to see the effect of joint-structured model on the sentence-level task under sparsely-labeled conditions in Section 5.4.
for the document-level task, the joint model (Joint) performs better than Joint doc and all the baseline approaches.
contrasting
train_21785
Our results show that the models are robust mainly where the semantics of the new instances do not change significantly with respect to the sampled instances and thus the class labels remain unaltered; i.e., the models are insensitive to our transformation to input data.
when the class labels change, the models significantly drop accuracy.
contrasting
train_21786
Furthermore, recent works on adversarial perturbation have tackled this problem (Goodfellow et al., 2015;Feinman et al., 2017).
most previous approaches require either annotations generated by each individual annotator (Guan et al., 2017), or both task-specific and instance-type (genuine or adversarial) labels for training (Hendrik Metzen et al., 2017;Zheng et al., 2016), or noise-free data (Xiao et al., 2015).
contrasting
train_21787
The two frameworks, however, differ in their views on the theory of learning as we describe below: Curriculum learning is inspired by the learning principle that humans can learn more effectively when training starts with easier concepts and gradually proceeds with more difficult ones (Bengio et al., 2009).
leitner system is inspired by spaced repetition (Dempster, 1989;Cepeda et al., 2006), the learning principle that effective and efficient learning can be achieved by working more on difficult concepts and less on easier ones.
contrasting
train_21788
Annotators are instructed to label a post as "relevant" if it describes a patient's experience (including sign and symptoms, treatments, etc.,) with respect to the cancer.
"irrelevant" posts are defined as generic texts (such as scientific papers, news, etc.,) that discuss cancer in general without describing a real patient experience.
contrasting
train_21789
This may decrease the training performance of Lit, especially with greater amount of noise in data.
this training strategy increases the spotting performance of Lit as spurious instances seem to occur in lower queues of Leitner more frequently, see Figure 4.
contrasting
train_21790
By contrast, without incremental commitment to each structure, competing material slows down the process, because it would lead to combinatory explosion.
later results establish nuance.
contrasting
train_21791
A commonly implied assumption is that lemma retrievals shouldn't interfere with phonological processes (e.g., Schriefers et al., 1990), though it is difficult to know if a speech slowdown is due to a phonological or semantic interference due to our experimental setup.
since in our experiment, effects are still observed at large distances from the target words, either phonological forms can be retrieved in a non-incremental way (possibly even before lemmas for other words are retrieved), or the retrieval of the lemma does interfere with phonological encoding in some way; for instance, by activating related phonological forms.
contrasting
train_21792
It is relatively smaller than the other corpora (Section 2).
it is the largest, if not the only, corpus for the evaluation of passage completion on multiparty dialog that still gives enough instances to develop meaningful models using deep learning.
contrasting
train_21793
It should be noted that the character anonymization process makes it harder for people to find the answer.
it is also possible that some participants of the evaluation may enter the answer randomly (i.e., the results may not truly reflect human performance).
contrasting
train_21794
Although these cues are usually parts of sentences in long utterances, since Bi-LSTM is based on only words, it still is able to locate them correctly.
our model encodes each utterance and then feeds encoded vectors to LSTMs, so the high level representation of the cues are mixed with other information, which hinders the model's ability to find the exact string matches.
contrasting
train_21795
We then experimented this model on our original length dataset.
even after extensive hyperparameter tuning on the development set, this model did not achieve results comparable to those of either Bi-LSTM or our models, so we did not make a further analysis on this model.
contrasting
train_21796
These ranking-based retrieval strategies have been well applied as an important approach to dialogue systems, yet the set of scripted responses is limited and falls short in generalization.
statistical machine translation (SMT) systems have been applied to dialogue systems (Ritter et al., 2011), taking user's query as a source language sentence and the chatbot's response as a target language sentence.
contrasting
train_21797
The domain adaptation from document comprehension to dialog generation is feasible by taking the rich context of the speakers as a "document", current user's query as a "question" and the chatbot's response as an "answer".
there are still several major challenges for this domain adaptation.
contrasting
train_21798
Existing evaluation metrics of dialog agents measure the quality of the generated sentences only by referring to the existing responses, which obeys the same principle with NMT models' metrics.
one essential difference between NRG and NMT lies in the fact that, a large group of responses can be considered as relevant to a given query in conversations, while the number of references to a translation result is quite limited for NMT models.
contrasting
train_21799
From this benchmarking table, it can be observed that the attention mechanism is helpful for decoders to improve the relevance of the generated responses, since the Attention-Seq2Seq performs better than the basic Seq2Seq on the dataset, in terms of all the three metrics.
the relative gain of the attention layer is limited, indicating that modeling relation of query and response by attention module is not able to directly solve the learning paradigm of conversations.
contrasting