id: string (length 7–12)
sentence1: string (length 6–1.27k)
sentence2: string (length 6–926)
label: string (4 classes)
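Each record below follows this schema, one field per line: id, sentence1, sentence2, label. As a minimal sketch (not part of the original dataset release), the records could be loaded and the column summary above recomputed roughly as follows, assuming the samples are stored as JSON Lines in a hypothetical file named `contrastive_pairs.jsonl`:

```python
import json

PATH = "contrastive_pairs.jsonl"  # hypothetical filename; adjust to the actual data location

# Load one JSON object per line with the fields: id, sentence1, sentence2, label.
records = []
with open(PATH, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            records.append(json.loads(line))

# Recompute the per-column summary shown above:
# min/max string length for the text fields, distinct classes for the label.
for column in ("id", "sentence1", "sentence2"):
    lengths = [len(r[column]) for r in records]
    print(f"{column}: string (length {min(lengths)}-{max(lengths)})")

label_classes = sorted({r["label"] for r in records})
print(f"label: string ({len(label_classes)} classes)")
```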
train_16100
We work with z in the same way as with other weights W: we use a log-uniform prior and approximate the posterior with a fully-factorized normal distribution with trainable mean and variance.
since z is a one-dimensional vector, we can sample it individually for each object in a mini-batch to reduce the variance of the gradients.
contrasting
train_16101
framework in (Luong et al., 2015) for the P2C task.
to related work that simply extended the source side with different sized context windows to improve translation quality (Tiedemann and Scherrer, 2017), we add the entire input utterance according to the IME user choice at the previous time (referred to as the context hereafter).
contrasting
train_16102
Reducing the size of the embedding m leads to a significant reduction in the number of parameters, proportional to |V|, and the acceleration of softmax computation.
the size of the additional matrix L is only n × m and contributes very little to the overall size of the model.
contrasting
train_16103
Moreover, an LSTM cell carries over information both through the hidden state and a memory state; the latter is affected by tying only indirectly (see Hochreiter and Schmidhuber (1997) for details on LSTM architectures).
our experiments show that, in practice, two-layer LSTM LMs are still affected by tying despite these caveats.
contrasting
train_16104
The skipgram algorithm shows a small degradation of performance for the tied+L architecture with respect to the non-tied one; note that, as explained in Section 2.3, tying makes the most sense for CBOW.
the fact that standard tying obtains much worse results (similarly to the results of Press and Wolf, 2017) shows that the linear mapping substantially relaxes the tying constraint.
contrasting
train_16105
Most of these methods use an additional encoder (Wang et al., 2017) to extract contextual information from previous source-side sentences.
this requires additional parameters and it does not exploit the representations already learned by the NMT encoder.
contrasting
train_16106
The Zh-En subtitles corpus is a compilation of TV subtitles designed for research on context (Wang et al., 2018).
to the other sets, it has three references to compare.
contrasting
train_16107
Neural machine translation (NMT) typically makes use of a recurrent neural network (RNN) -based encoder and decoder, along with an attention mechanism (Bahdanau et al., 2015;Cho et al., 2014;Kalchbrenner and Blunsom, 2013;Sutskever et al., 2014).
it has been shown that RNNs require some supervision to learn syntax (Bentivogli et al., 2016;Linzen et al., 2016;Shi et al., 2016).
contrasting
train_16108
(2016b), our networks operate at the subword level using byte pair encoding (BPE) with a shared vocabulary on the source and target sides.
the parser operates at the word level.
contrasting
train_16109
The parse2seq and multi-source systems require parsed source data at inference time.
the parser may fail on an input sentence.
contrasting
train_16110
Attempts to measure the impact of translation divergences in MT have focused on the introduction of noise in sentence alignments (Goutte et al., 2012), showing that statistical MT is highly robust to noise, and that performance only degrades seriously at very high noise levels.
neural MTs seem to be more sensitive to noise (Chen et al., 2016), as they tend to assign high probabilities to rare events (Hassan et al., 2018).
contrasting
train_16111
Accuracies obtained for various combinations of negative examples, where we see that non-divergent words in parallel and unpaired sentences (columns P and U) are easy to spot, as long as the model has seen these types of examples in training.
the accuracy drops dramatically when the model is not trained with unpaired sentences (rows PR, PI and PRI).
contrasting
train_16112
This solution, both with ISF and CSLS criteria, is applied with a transformation W learned using the square loss.
replacing the loss in Eq.
contrasting
train_16113
Most of the Neural Machine Translation (NMT) models are based on the sequence-to-sequence (Seq2Seq) model with an encoder-decoder framework equipped with the attention mechanism.
the conventional attention mechanism treats the decoding at each time step equally with the same matrix, which is problematic since the softness of the attention for different types of words (e.g.
contrasting
train_16114
Our work is most similar to the work of Zoph and Knight (2016) and Anastasopoulos and Chiang (2018), which share a decoder and two separate attention models to read from two different sources.
we share information at the level of reconstructed frames.
contrasting
train_16115
Saying "I am happy" in English, does not encode any additional knowledge of the speaker that uttered the sentence.
many other languages do have grammatical gender systems and so such knowledge would be encoded.
contrasting
train_16116
The increasing amount of work on automatic author classification (or 'author profiling') reaching relatively high accuracies on domain-specific data corroborates these findings (Rangel et al., 2013;Santosh et al., 2013).
determining the gender of an author based solely on text is not a solved issue.
contrasting
train_16117
Human translators rely on contextual information to infer the gender of the speaker in order to make the correct morphological agreement.
most current MT systems do not; they simply exploit statistical dependencies on the sentence level that have been learned from large amounts of parallel data.
contrasting
train_16118
(2) ... je suis heureuse que...
we also encountered cases where the gender-informed system fails to produce the correct agreement, as in (4), where both the BASE and the TAG system produce a male form ('embarassé') instead of the correct female one ('embarassée' or 'gênée').
contrasting
train_16119
From the experiments, we see that informing the NMT system by providing tags indicating the gender of the speaker can indeed lead to significant improvements over state-of-the-art baseline systems, especially for those languages expressing grammatical gender agreement.
while analyzing the EN-FR translations, we observed that the improvements are not always consistent and that, apart from morphological agreement, the gender-aware NMT system differs from the baseline in terms of word choices.
contrasting
train_16120
This suggests that increasing the splitting factor k in Equation 1 might improve the model performance.
it also reduces the efficiency in terms of GPU memory usage.
contrasting
train_16121
We also observe gains of 0.7 and 1.0 BLEU for RNMT+ models, on En→De and Cs→En respectively, as indicated by Tables 3 and 4.
experiments comparing wide models against deeper ones are inconclusive.
contrasting
train_16122
In contexts that are not copy-prone, minimal copying occurs.
as they are placed in increasingly copy-prone contexts, even these words that the system has learned it should translate are being copied.
contrasting
train_16123
Lowercase words are the least frequently copied (average copy rate of 40.2%), uppercase words are the most copied (94.4%), and the natural case falls in the middle (81.7%).
changing casing changes the BPE segmentation, and uppercase words tend to be split into more pieces: a mean of 4.4 segments, as compared to means of 3.1 (lowercase) and 2.9 (natural case).
contrasting
train_16124
The highlighted fragments of the source sentence and the matched TM source sentence are not actually the same in terms of their surface forms.
they are semantically close and can be translated into the same target translation.
contrasting
train_16125
We call this phenomenon the "beam search curse", which is listed as one of the six biggest challenges for NMT (Koehn and Knowles, 2017).
there has not been enough attention on this problem.
contrasting
train_16126
Therefore, their explanations are not satisfactory.
previous work adopts several heuristics to address this problem, but with various limitations.
contrasting
train_16127
By default, OpenNMT-py (Klein et al., 2017) stops when the topmost beam candidate stops, because there will not be any future candidates with higher model scores.
this is not the case for other rescoring methods; e.g., the score of length normalization (5) could still increase.
contrasting
train_16128
A general tendency is that ATT performs better compared to MAX, AVG and CON as the number of views increases (lines 30 and 34) and so the average number of views without information (i.e., missing views) for an entity increases.
to MAX, ATT can combine different views.
contrasting
train_16129
In contrast to MAX, ATT can combine different views.
to CON and AVG, ATT can ignore some of them based on low attention weights.
contrasting
train_16130
of learning a high-quality mapping from the utterance to its intent directly, so that such mapping can be further capitalized to measure the compatibility of an utterance with emerging intents.
the diverse semantic expressions may impede the learning of such mapping.
contrasting
train_16131
Meanwhile, utterances of the same emerging intent but with nuances in expressions result in their proximity in the t-SNE space.
we do observe less satisfying cases where the model mistakes an emerging intent DecreaseScreenBrightness (No.
contrasting
train_16132
All these methods heavily rely on numerous carefully hand-engineered features such as lexical (bag-of-words (BOW)), semantic (hypernyms, synonyms), structural (part of speech (POS) tags, lemmas, orthographic shapes, headings), statistical (statistical distributions of token types) and sequential (sentence position, surrounding features, predicted labels) features.
current emerging artificial neural network (ANN) based models have removed the need for manually selected features; instead, features are self-learned from the token and/or character embeddings.
contrasting
train_16133
This observation implies that the overall representation ability of TMN is enhanced as the complexity of the model increases by combining more hops.
this enhancement will reach saturation when the hop number exceeds a threshold, which is 5 hops for most datasets in our experiment.
contrasting
train_16134
In our prior work (Rios and Kavuluru, 2018), we combine matching networks with a sophisticated thresholding strategy.
in Rios and Kavuluru (2018) we did not explore the few- and zero-shot settings.
contrasting
train_16135
We find that the word embedding derived label vectors work best for ESZSL on zero-shot labels.
this setup is outperformed by GRALS derived label vectors on the frequent and few-shot labels.
contrasting
train_16136
ZAGCNN outperforms ACNN by almost 5% and ZACNN by 1% in R@10 on few-shot classes.
ACNN still outperforms all other methods on frequent labels, but by only 0.3% when compared with ZAGCNN.
contrasting
train_16137
Both R@k and P@k give more weight to frequent labels, thus it is expected that ACNN outperforms ZAGCNN for frequent labels.
we also find that ACNN outperforms our methods with respect to Macro-F1.
contrasting
train_16138
In recent years, some neural models have made remarkable progress in this task.
they are all based on maximum likelihood estimation, which only learns common patterns of the corpus and results in loss-evaluation mismatch.
contrasting
train_16139
Some neural models are proposed and achieve significant improvement.
existing models are all based on maximum likelihood estimation (MLE), which brings two substantial problems.
contrasting
train_16140
Poems generated by Base center on a few topics, which again demonstrates the claim: MLE-based models tend to remember the common patterns.
human-authored poems spread over more topics.
contrasting
train_16141
The word 'fishing jetty' is confusing without any necessary explanation in the context.
poem (2) describes a clearer scene and expresses some emotion: a lonely man takes a boat from morning till night and then falls asleep solitarily.
contrasting
train_16142
Combining the virtues of probability graphic models and neural networks, Conditional Variational Auto-encoder (CVAE) has shown promising performance in many applications such as response generation.
existing CVAE-based models often generate responses from a single latent variable which may not be sufficient to model high variability in responses.
contrasting
train_16143
The CVAE based models incorporate stochastic latent variables into decoders in order to generate more relevant and diverse responses (Serban et al., 2017;Shen et al., 2017).
existing CVAE-based models normally rely on the unimodal distribution with a single latent variable to provide the global guidance to response generation, which is not sufficient to capture the complex semantics and high variability of responses.
contrasting
train_16144
As neural network based models dominate the research in natural language processing, Seq2Seq models have been widely used for response generation (Sordoni et al., 2015).
Seq2Seq models suffer from the problem of generating generic responses, such as "I don't know" (Li et al., 2016a).
contrasting
train_16145
VAE (Kingma and Welling, 2013) is one of the most successful models (Serban et al., 2017;Shen et al., 2017;Cao and Clark, 2017).
VAE-based models only use a single latent variable to encode the whole response sequence, thus suffering from the model collapse problem (Bowman et al., 2016).
contrasting
train_16146
erated by CVAE+BOW and our model.
we found that CVAE+BOW tends to copy the given queries (the first and fourth example in Table 4) and repeatedly generate redundant tokens (the second example).
contrasting
train_16147
Hybrid, DRESS, and DRESS-LS are good at generating shorter sentences, but they are not as good at choosing shorter words.
SBMT-SARI, DCSS, and DMASS all generate shorter words.
contrasting
train_16148
Therefore, we believe that, by optimizing language model as a goal for the reinforcement learning, DRESS and DRESS-LS are tuned to simplify sentences by shortening the sentence lengths.
with the help of an integrated external knowledge base, SBMT-SARI and our models have more capability to generate shorter words in order to simplify sentences.
contrasting
train_16149
The neural text generation community has also recently been interested in "controllable" text generation (Hu et al., 2017), where various aspects of the text (often sentiment) are manipulated or transferred (Shen et al., 2017; Zhao et al., 2018).
here we focus on controlling either the content of a generation or the way it is expressed by manipulating the (latent) template used in realizing the generation.
contrasting
train_16150
After training, we could simply condition on a new database and generate with beam search, as is standard with encoder-decoder models.
the structured approach we have developed allows us to generate in a more template-like way, giving us more interpretable and controllable generations.
contrasting
train_16151
Whereas there has been much recent interest in learning continuous latent variable representations for text (see Section 2), it has been somewhat unclear what the latent variables to be learned are intended to capture.
the latent, template-like structures we induce here represent a plausible, probabilistic latent variable story, and allow for a more controllable method of generation.
contrasting
train_16152
Neural text generation, including neural machine translation, image captioning, and summarization, has been quite successful recently.
during training time, typically only one reference is considered for each example, even though there are often multiple references available, e.g., 4 references in NIST MT evaluations, and 5 references in image captioning data.
contrasting
train_16153
There are many recent efforts in improving the generation accuracy, e.g., ConvS2S (Gehring et al., 2017) and Transformer (Vaswani et al., 2017).
all these efforts are limited to training with a single reference even when multiple references are available.
contrasting
train_16154
For example, the NIST Chinese-to-English and Arabic-to-English MT evaluation datasets (2003–2008) have in total around 10,000 Chinese sentences and 10,000 Arabic sentences each with 4 different English translations.
for image captioning datasets, multiple references are more common not only for evaluation, but also for training, e.g., the MSCOCO (Lin et al., 2014) dataset provides 5 references per image and PASCAL-50S and ABSTRACT-50S (Vedantam et al., 2015) even provide 50 references per image.
contrasting
train_16155
These models embed entities and relations into latent vectors and complete KGs based on these vectors, such as TransE (Bordes et al., 2013), TransH (Wang et al., 2014) and TransR (Lin et al., 2015b).
most of the existing works simply embed relations into vectors.
contrasting
train_16156
Organizing scientific information into structured knowledge bases requires information extraction (IE) about scientific entities and their relationships.
the challenges associated with scientific IE are greater than for a general domain.
contrasting
train_16157
The prior-knowledge-based s_0 helps the agent narrow down the candidate space more quickly when the target object is a popular object.
it also becomes misleading when the target object is not popular, and makes it even harder for the agent to correct its confidence in the target object.
contrasting
train_16158
However, it also becomes misleading when the target object is not popular, and makes it even harder for the agent to correct its confidence in the target object.
the uniform distribution s_0 makes the agent keep track of the target object based only on the user's answers.
contrasting
train_16159
Besides, Zhao and Maxine (2016) also explores Q20 in their dialogue state tracking research.
they only use a small toy Q20 setting where the designed questions are about 6 person attributes in the Knowledge Base (KB).
contrasting
train_16160
Despite their simplicity, embedding-based models achieved state-of-the-art performance on KGQA (Das et al., 2018).
such models ignore the symbolic compositionality of KG relations, which limits their usage in more complex reasoning tasks.
contrasting
train_16161
In particular, we expect reward shaping to accelerate the convergence of RL (to a better performance level) as it propagates prior knowledge about the underlying KG to the agent.
a fair concern for action dropout is that it can be slower to train, as the agent is forced to explore a more diverse set of paths.
contrasting
train_16162
We observe that in general our proposed enhancements are effective in improving query-answering over both relation types (more effective for to-many relations).
adding the ConvE reward shaping module on WN18RR hurts the performance over both to-many and to-one relations (more for to-one relations).
contrasting
train_16163
We define a learning algorithm as transductive if its goal is to generalize from specific training examples to specific test examples (Vapnik, 1998).
inductive inference learns a general model that is independent of any test set.
contrasting
train_16164
Our example in §1 is that the model has difficulties producing initial letters not seen during training.
within each paradigm, forms are generally similar; thus, input subset sources contain valuable information about how to generate output subset targets.
contrasting
train_16165
On average, MED+PT clearly outperforms SIG17, the strongest baseline: by .0796 (.5808-.5012) on SET1, .0910 (.7486-.6576) on SET2, and .0747 (.8454-.7707) on SET3.
looking at each language individually (refer to Appendix A for those results), we find that MED+PT performs poorly for a few languages, namely Danish, English, and Norwegian (Bokmål & Nynorsk).
contrasting
train_16166
In contrast, the performance of MED, the neural model, is relatively independent of the choice of source; this is in line with earlier findings (Cotterell et al., 2016).
even for MED+PT, adding SHIP (i.e., MED+PT+SHIP) slightly increases accuracy by .0061 (.7547-.7486) on SET2, and .0029 (.8483-.8454) on SET3 (L53).
contrasting
train_16167
MED does not perform well for either SET1 or SET2.
on SET3 it even outperforms SIG17 for a few languages.
contrasting
train_16168
In contrast, on SET3 it even outperforms SIG17 for a few languages.
MED loses against MED+PT in all cases, highlighting the positive effect of paradigm transduction.
contrasting
train_16169
The correlation is not perfect because languages have different degrees of morphological regularity.
the overall trend is clearly recognizable.
contrasting
train_16170
Character-level features are currently used in different neural network-based natural language processing algorithms.
little is known about the character-level patterns those models learn.
contrasting
train_16171
Spanish While there is no single clear-cut rule for the Spanish gender, in general the suffix a denotes the feminine gender in adjectives.
there exist many nouns that are feminine but do not have the suffix a. Teschner and Russell (1984) identify d and ión as typical endings of feminine nouns, which our models also identified, for example as ad$ or ió/sió.
contrasting
train_16172
For Turkish, lemma performs better than lemma+morph, perhaps because the morphological analyzer outputs so many redundant properties which reduce the distance between words that are not particularly similar.
morph helps and lemma hurts in Hindi, perhaps because the morph analyzer outputs only a small number of highly informative properties, but is a poor general-purpose lemmatizer.
contrasting
train_16173
(2018) treated different argumentation formalisms as different tasks and combined respective extraction tasks and datasets in an MTL setting.
to these efforts that combine several AM subtasks or formalisms with joint optimization and MTL models, in this work we examine the dependencies between argumentative components and other rhetorical aspects of scientific writing.
contrasting
train_16174
The neural model described above generates an unlabeled temporal dependency tree, with each parent being the most salient reference time for the child.
it doesn't model the specific temporal relation (e.g.
contrasting
train_16175
(2017), both of which extract related spans of words (entity mentions for coreference resolution, and events or time expressions for temporal dependency parsing).
our temporal dependency parsing model differs from Lee et al.'s coreference model in that the ranking model for coreference only needs to output the best candidate for each individual pairing and cluster all pairs that are coreferent to each other.
contrasting
train_16176
", the emphasized text span is considered a causal explanation which indicates pessimistic personality -a negative event where the author believes the cause is pervasive.
in "My parser failed because I barely worked on the code.
contrasting
train_16177
We used bidirectional LSTMs for causality classification and causal explanation identification, since the discourse arguments for causal explanation can show up either before or after the effected events or results, and we want our model to be optimized for both cases.
there is a risk of overfitting because the dataset is relatively small for the high complexity of the model, so we added a dropout layer (p=0.3) between the Word-level LSTM and the DA-level LSTM.
contrasting
train_16178
In other words, it's possible the linear model will not perform as well if the training size is increased substantially.
a linear model could still be used to do a first-pass, computationally efficient labeling, in order to shortlist social media posts for further labeling from an LSTM or more complex model.
contrasting
train_16179
Multimodal learning has shown promising performance in content-based recommendation due to the auxiliary user and item information of multiple modalities such as text and images.
the problem of incomplete and missing modality is rarely explored and most existing methods fail in learning a recommendation model with missing or corrupted modalities.
contrasting
train_16180
The authors did not set the problem in a supervised learning setup and instead find entities closest in terms of similarity of documents containing them.
in Section 3.2 we justify a supervised learning setup of the task.
contrasting
train_16181
The collection has a total of 1,003 books.
the dataset did not include book covers.
contrasting
train_16182
However, the visual features are not able to contribute much when combined with strong textual features that were already performing well.
for the MT setting, the performance decreases for most of the feature combinations with the addition of the visual modality.
contrasting
train_16183
After the subscription, users will receive a notification when a new comment arrives in that thread.
the speed of content generation in a well-known discussion forum is breakneck.
contrasting
train_16184
On one hand, the agent can choose the action with the highest estimated Q-value to exploit its current knowledge of the Q-function.
the agent can choose a non-greedy action to get more information about the Q-value of other actions.
contrasting
train_16185
The Q-value prediction of the permutation (1, 3, 2) almost triples that of the permutation (2, 3, 1).
any permutation of the comments does not change the Q-value prediction when we use our DRRN-Attention model.
contrasting
train_16186
We surmise that this information is related to the sentiment-bearing word "better" of the aspect "coffee", because a comparison using the word "better" implies the presence of a good ("coffee") and a bad ("cosi sandwiches") object.
this semantics is misconstrued by IAN, which leads to aspect misclassification.
contrasting
train_16187
We exploit this property to obtain state-of-the-art performance in aspect-based sentiment analysis in two distinct domains: restaurant and laptop.
there remains plenty of room for improvement in the memory network, e.g., for generation of better aspect-aware representations.
contrasting
train_16188
An intuitive way is to define the global error function for the network on the training set.
some important characteristics of relevant emotion ranking, such as the ranking itself and the exclusion of irrelevant emotions, are not considered in the classical back-propagation algorithm (Rumelhart et al., 1988).
contrasting
train_16189
In addition, existing works train each instance separately.
we observe that the interactions among the aspects, which have the same context words, could bring extra useful information.
contrasting
train_16190
The main advantage of the deep learning approach is that it is effective in exploring both linguistic and semantic relations between words, and thus can overcome the problems of the lexicon-based approach.
current deep learning approaches for sentiment analysis usually face a major shortcoming, i.e., being limited by the quantity of high-quality labeled data.
contrasting
train_16191
For instance, the winning approach for SwissCheese (SemEval 2016) uses an ensemble of 6 CNN models along with a meta-classifier (random forest classifier).
our proposed model is a single neural model.
contrasting
train_16192
Traditionally, sentiment analysis (Pang and Lee, 2005, 2008) has been applied to a wide variety of texts (Hu and Liu, 2004; Liu, 2012; Turney, 2002; Akhtar et al., 2016, 2017; Mohammad et al., 2013).
multi-modal sentiment analysis has recently gained attention due to the tremendous growth of many social media platforms such as YouTube, Instagram, Twitter, Facebook, etc. (Poria et al., 2016, 2017d; Zadeh et al., 2016).
contrasting
train_16193
We evaluate our proposed approach on two benchmark datasets, namely the CMU Multi-modal Opinion-level Sentiment Intensity (CMU-MOSI) corpus (Zadeh et al., 2016) and the CMU-MOSEI corpus. Each utterance in the CMU-MOSI dataset has been annotated as either positive or negative, whereas in the CMU-MOSEI dataset labels are in the continuous range of -3 to +3.
in this work we project the instances of CMU-MOSEI into a two-class classification setup, with values ≥ 0 signifying positive sentiments and values < 0 signifying negative sentiments.
contrasting
train_16194
For the MOSEI dataset we observe that the precision and recall for the positive class (84% precision & 88% recall) are quite encouraging.
the same are comparatively on the lower side for the negative class (68% precision & 58% recall).
contrasting
train_16195
We count the total occurrences for each noun sense (synset) of the candidate and match the candidate to the most frequent synset.
such a method is not good enough for our problem, as shown in the results later.
contrasting
train_16196
L = [l_1, l_2, ..., l_{25}] are the 25 golden aspect terms, where L^{(h)} = [l_{5h-4}, ..., l_{5h}] are from the h-th human annotator.
the hard accuracy is defined as: counting the number of exact matches makes the accuracy score discrete and coarse.
contrasting
train_16197
(2015b) used forensics features for detecting multimedia fabrication to verify rumors.
these features did not lead to noticeable improvement.
contrasting
train_16198
Previous rumor verification datasets are mainly monolingual, such as English (Derczynski et al., 2017) or Chinese (Wu et al., 2015).
textual information in the native language where the rumor happened can be more helpful when it comes to verifying worldwide rumors.
contrasting
train_16199
Those missing features make TFB perform poorly in the task setting.
TFB performs better than two of the baselines in the event setting.
contrasting
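All samples in this excerpt carry the "contrasting" label; the schema above indicates four label classes in the full dataset. Purely as a usage illustration (the split and model choice are assumptions, not an official baseline), a simple classifier over the sentence pairs could be sketched as follows, reusing the `records` list loaded in the sketch above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Concatenate each sentence pair into a single string; predict the discourse label.
texts = [r["sentence1"] + " [SEP] " + r["sentence2"] for r in records]
labels = [r["label"] for r in records]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels
)

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
baseline.fit(X_train, y_train)
print(classification_report(y_test, baseline.predict(X_test)))
```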