Columns:
  id         string, lengths 7–12
  sentence1  string, lengths 6–1.27k
  sentence2  string, lengths 6–926
  label      string, 4 classes
train_15800
Word-by-word translation always outputs a target word for every position.
there are plenty of cases in which multiple source words should be translated to a single target word, or in which some source words should not be translated to any word at all in order to produce a fluent output.
contrasting
train_15801
Neural LMs globally score the entire candidate plaintext sequence (Mikolov et al., 2010).
using a neural LM for decipherment is not trivial because scoring the entire candidate partially deciphered plaintext is computationally challenging.
contrasting
train_15802
Greydanus (2017) frames the decryption process as a sequence-to-sequence translation task and uses a deep LSTM-based model to learn the decryption algorithms for three polyalphabetic ciphers including the Enigma cipher.
this approach requires supervision, unlike our approach, which uses a pre-trained neural LM.
contrasting
train_15803
Previous work on using context for sentence classification used LSTM and CNN network layers to encode the surrounding context, giving an improvement in classification accuracy (Lee and Dernoncourt, 2016).
the use of CNN and LSTM layers imposes a significant computational cost when training the network, especially if the size of the context is large.
contrasting
train_15804
More traditional approaches employ an off-line structure predictor (e.g., a parser) to define the computation graph (Tai et al., 2015;Chen et al., 2017), sometimes with some parameter sharing (Bowman et al., 2016).
these off-line methods are unable to jointly train the latent model and the downstream classifier via error gradient information.
contrasting
train_15805
1 Code available at https://github.com/sweetpeach/ReCode. Tree-based approaches (Yin and Neubig, 2017; Rabinovich et al., 2017) represent code as Abstract Syntax Trees (ASTs), which has proven effective in improving accuracy as it enforces the well-formedness of the output code.
representing code as a tree is not a trivial task, as the number of nodes in the tree often greatly exceeds the length of the NL description.
contrasting
train_15806
Word n-grams are obvious candidates when generating a sequence of words as output, as in NMT.
in syntax-based code generation, the generation target is ASTs with no obvious linear structure.
contrasting
train_15807
(2016) proposes a sequence-to-sequence (Seq2Seq) network to model the SQL query and natural language jointly.
since the SQL is designed
contrasting
train_15808
Most existing work on graph convolutional neural networks focuses more on node embeddings than on graph embeddings (GE), since the focus is on the node-wise classification task.
graph embeddings that convey the entire graph information are essential to the downstream decoder, which is crucial to our task.
contrasting
train_15809
One possible reason is that in our graph encoder, the node embedding retains the information of neighbor nodes within K hops.
in the tree encoder, the node embedding only aggregates the information of descendants while losing the knowledge of ancestors.
contrasting
train_15810
CDMM assigns a high attention value to the word to predict Order and Accept compared to Discuss.
the attention values differ according to the speaker.
contrasting
train_15811
Accept as compared to the other classes when the speaker is king.
when officials use this word, CDMM assigns a high attention value to the word in the Order class.
contrasting
train_15812
Table 4 demonstrates the performance of our model over different window sizes K. We can see that all these results are better than the performance of our model without the attention mechanism.
a proper restriction window is helpful for the attention mechanism to take better effect.
contrasting
train_15813
Video content on social media platforms constitutes a major part of the communication between people, as it allows everyone to share their stories.
if someone is unable to consume video, either due to a disability or network bandwidth, this severely limits their participation and communication.
contrasting
train_15814
Feature-modulating methods like FiLM (De Vries et al., 2017;Perez et al., 2018) control image-comprehension process using modulation-parameters generated from the question, allowing models to be trained end-to-end.
the image-comprehension program in visual reasoning tasks can be extremely long and sophisticated.
contrasting
train_15815
The result shows that visual features can guide the comprehension of question logics with textual modulation.
question-based modulation parameters enable the ResBlocks to filter out irrelevant objects.
contrasting
train_15816
More importantly, most state-of-the-art systems can only predict one most likely relation for a single entity pair.
it is very common that one sentence may contain multiple entity pairs and describe multiple relations.
contrasting
train_15817
To filter such strings, we need to rely on indicators that may reside in a different line such as a heading "Dissertations supervised".
if we only train the model on webpage-level input, the model may be dominated by the longest line on the homepage, such as biography information.
contrasting
train_15818
First, we inject character information into the model through character-level CNNs; these give the model a deeper ability to recognize character correspondences between the context and entity title.
these convolutional filters struggle to learn useful features in this noisy context and ultimately do not help performance.
contrasting
train_15819
Think The Warriors The gold title is The Warrior (film) and the base model correctly places 90% of its attention weight on the word movie when calculating attention for this title.
the character-level CNN model only places 60% of its attention weight on it, distributing its attention values more evenly across the rest of the words.
contrasting
train_15820
Fact salience is close to automatic text summarization (Erkan and Radev, 2004); both must detect the most prominent information in the text.
while text summarization generates summaries for humans, fact salience output must be interpretable by machines.
contrasting
train_15821
As finding and annotating such potential duplicates manually is very tedious and costly, automatic methods based on machine learning are a viable alternative.
many forums do not have annotated data, i.e., questions labeled by experts as duplicates, and thus a promising solution is to use domain adaptation from another forum that has such annotations.
contrasting
train_15822
We can see that the Wasserstein and the classification-based methods perform very similarly, after proper hyper-parameter tuning.
Wasserstein yields better stability, achieving an AUC variance 17 times lower than that of classification across hyper-parameter settings.
contrasting
train_15823
Several question answering datasets have been proposed (Berant et al., 2013;Joshi et al., 2017;Trischler et al., 2017;Rajpurkar et al., 2018, inter alia).
all of them were limited to answering individual questions.
contrasting
train_15824
(2018) study the problem of sequential question answering, and introduce a dataset for the task.
we differ from them in two aspects: 1) They consider question-answering over structured knowledge-bases.
contrasting
train_15825
In the second iteration, a number of test samples (to which the classifier associated higher confidence scores) are added to the training set for another round of training.
the ground-truth labels of the added test samples are not necessary.
contrasting
train_15826
Therefore, we propose a shared-private (SP) model as shown in Fig 1.c, where we employ a shared LSTM layer to extract shared sentiment features for both sentiment and emotion classification tasks, and a target-specific LSTM layer to extract specific emotion features that are only sensitive to our emotion classification task.
as pointed out by Liu et al.
contrasting
train_15827
The task of sentiment modification requires reversing the sentiment of the input and preserving the sentiment-independent content.
aligned sentences with the same content but different sentiments are usually unavailable.
contrasting
train_15828
For instance, when the source text is "This is a wonderful movie", we expect an output like "This movie is disappointing".
the generated sentence may be "The waiters are very rude", which has little relevance to the source text.
contrasting
train_15829
(2017) augment the unstructured variables z in vanilla VAE with a set of structured variables c each of which targets a salient and independent semantic feature of sentences, to control sentence sentiment.
all of these works attempt to implicitly separate the non-emotional content from the emotional information in a dense sentence representation.
contrasting
train_15830
The main reason is that these methods using adversarial learning attempt to implicitly separate the emotional information from the context information in a sentence vector.
without parallel data, it is difficult to achieve such a goal.
contrasting
train_15831
This approach allows the system to learn from data sets across multiple domains, increasing the flexibility of the sentiment classifier.
the weakness of this approach is that the system uses a bigram bag-of-words as input, making it unable to learn long-distance syntactic phenomena.
contrasting
train_15832
We note a limitation of our experiment: the selection of arbitrary adjectives.
the 15 adjectives are ones commonly used to describe humans, and the result was quite consistent.
contrasting
train_15833
Similar to the FakeNews, (Rashkin et al., 2017;Vlachos and Riedel, 2014) focused on political statements from Politifact.com to verify the degree of truthfulness.
they assume that the gold standard documents containing the evidence are already known, which overly simplifies the task.
contrasting
train_15834
Our DA_rte model must correctly decide whether a claim is NEI, when the evidence retrieved is irrelevant and insufficient.
NEI claims have no annotated evidence and thus cannot be used to train RTE.
contrasting
train_15835
NoScoreEv is a simple classification accuracy that only considers the correctness of the verification label.
ScoreEv is a stricter measure that also considers the correctness of the retrieved evidence.
contrasting
train_15836
The number of these actions is regarded as a measure of popularity.
popularity is not determined solely by the content of a post, e.g., a text or an image it contains, but depends heavily on its context, e.g., timing and authority.
contrasting
train_15837
Categories of news articles (Isonuma et al., 2017) and ratings of online reviews (Xiong and Litman, 2014) were used as distant labels in extractive summarization.
these have been used as supplementary labels to enhance conventional summarization models, whereas we present labels with which a model can be trained on their own.
contrasting
train_15838
We assume that measures of popularity reflect the informativeness required for a summary (Erkan and Radev, 2004), and validate whether popularity can be used as a distant label for extractive summarization.
popularity is not solely determined by content, e.g., a text or an image, but is highly affected by contexts, e.g., timing, and authority (Cheng et al., 2017;Burghardt et al., 2017;Suh et al., 2010;Hessel et al., 2017;Jaech et al., 2015).
contrasting
train_15839
(Rotter, 1966) to identify an external locus of control when authors express that they feel controlled by other people or the environment.
authors communicate an internal locus of control when they ascribe the control of their decisions and circumstances to themselves.
contrasting
train_15840
Conventionally, neural language models are trained by minimizing perplexity (PPL) on grammatical sentences.
we demonstrate that PPL may not be the best metric to optimize in some tasks, and further propose a large margin formulation.
contrasting
train_15841
Moreover, "a decade as" appears 2,280 times, but "its defeat is" appears only 24 times.
this is undesirable because if there is another hypothesis that happens to be the same as the reference, it will not be ranked as the best candidate.
contrasting
train_15842
All methods reduce the WER over the baseline without rescoring.
LMLM and rLMLM are notably better than the other two methods.
contrasting
train_15843
Both of them demonstrate superior performance over their minimum-PPL counterparts.
the performance gain from LMLM to rLMLM is small, although rLMLM is built on more pairwise comparisons and requires more training efforts.
contrasting
train_15844
These pre-trained layers brought significant improvements to various NLP benchmarks, yielding up to 30% relative error reductions.
due to high variability of language, gigantic NNs (e.g., LSTMs with 8,192 hidden states) are preferred to construct informative LMs and extract multifarious linguistic information (Peters et al., 2017).
contrasting
train_15845
Hence, we propose to compress the model by layer selection, which retains useful layers for the target task and prunes irrelevant ones.
for the widely-used stacked-LSTM, directly pruning any layers will eliminate all subsequent ones.
contrasting
train_15846
Since the training of language models needs nothing but the raw text, it has almost unlimited corpora.
conducting training on extensive corpora results in a huge dictionary, and makes calculating the vanilla softmax intractable.
contrasting
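As an aside on why a huge dictionary makes the vanilla softmax intractable (an illustrative formula of our own, not taken from the cited work): the normalizer sums over the entire vocabulary V, so every training step costs time linear in |V|.

    % vanilla softmax over vocabulary V; the denominator is the expensive part
    p(w \mid h) = \frac{\exp(h^{\top} e_{w})}{\sum_{w' \in V} \exp(h^{\top} e_{w'})}

Sampled, hierarchical, or adaptive softmax variants are the usual workarounds; which one the authors adopt is not stated in this excerpt.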
train_15847
The ideal choice for R would be the L_0 regularization of z, i.e., R_0(z) = |z|_0.
it is not continuous and cannot be efficiently optimized.
contrasting
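For reference, the L_0 penalty in the pair above simply counts the non-zero entries of z, which is why it is discontinuous and hard to optimize; a common continuous surrogate, shown here only as an illustration and not necessarily the relaxation used in the cited work, is the L_1 norm.

    % L_0 counts non-zero entries (discontinuous); L_1 is a convex, continuous surrogate
    R_0(z) = |z|_0 = \sum_i \mathbf{1}[z_i \neq 0]
    \qquad
    R_1(z) = |z|_1 = \sum_i |z_i|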
train_15848
The model significantly outperforms a strong "Frequency" baseline in our experiments.
there are other discourse relations beyond lexical similarity.
contrasting
train_15849
Adding each voting feature individually produces mixed results.
adding all voting features improves all metrics.
contrasting
train_15850
2. it introduces weighted soft count kernels.
the PageRank baseline also does embedding tuning but produces poor results, thus the second change should be crucial.
contrasting
train_15851
The models predict a starting point (s) and duration (d) for each given temporal entity (t_1, e_1, and t_2) in the input.
DCT duration d_DCT is modeled as a single variable that is learned (initialized with 1).
contrasting
train_15852
Notice that if we set margin m_τ to 0, the second case becomes an L1 loss |x − ŷ|.
we use a small margin m_τ to promote some distance between ordered points and prevent confusion with equality.
contrasting
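One way to read the margin description above, as a sketch under our own assumptions rather than the exact loss of the cited work: points that should be equal are penalized only once their gap exceeds the margin m_τ, while ordered points are pushed at least m_τ apart.

    % equality case: reduces to the L1 loss |x - \hat{y}| when m_\tau = 0
    L_{=}(x, \hat{y}) = \max(0,\; |x - \hat{y}| - m_\tau)
    % ordering case (x should precede \hat{y}): enforce a gap of at least m_\tau
    L_{<}(x, \hat{y}) = \max(0,\; m_\tau - (\hat{y} - x))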
train_15853
The combination of losses L * shows mixed results, and has lower performance for S-TLM and C-TLM, but better performance for TL2RTL.
it is the slowest to compute and less interpretable, as it is a combined loss.
contrasting
train_15854
The GCNs are designed to capture the dependencies between shortcut arcs, while the number of GCN layers limits the ability to capture local graph information.
in such cases, we find that leveraging local sequential context helps to expand the information flow without increasing the number of GCN layers, which means LSTMs and GCNs may be complementary.
contrasting
train_15855
The construction of large-scale Knowledge Bases (KBs) like Freebase (Bollacker et al., 2008) and Wikidata (Vrandečić and Krötzsch, 2014) has proven to be useful in many natural language processing (NLP) tasks like question-answering, web search, etc.
these KBs are not exhaustive.
contrasting
train_15856
(2017) also attempt to mitigate noise in DS through their joint entity typing and relation extraction model.
KBs like Freebase readily provide reliable type information which could be directly utilized.
contrasting
train_15857
Traditional approaches to the task of ACE event detection primarily regard multiple events in one sentence as independent ones and recognize them separately by using sentence-level information.
events in one sentence are usually interdependent and sentence-level information is often insufficient to resolve ambiguities for some types of events.
contrasting
train_15858
Because of the ambiguity, a traditional approach may mislabel fired in S1 as a trigger of End-Position event.
if we know died triggers a Die event in S1, which is easier to disambiguate, we tend to predict that fired triggers an Attack event.
contrasting
train_15859
Some works (Li et al., 2013;Yang and Mitchell, 2016;Liu et al., 2016b) rely on a set of elaborately designed features and complicated natural language processing (NLP) tools to capture event interdependency.
these methods lack generalization, require a large amount of human effort, and are prone to the error propagation problem.
contrasting
train_15860
There have been some feature-based studies (Ji and Grishman, 2008;Liao and Grishman, 2010;Huang and Riloff, 2012) that construct rules to capture document-level information for improving sentence-level ED.
they suffer from two problems: (1) The features they used often need to be manually designed and may involve error propagation from existing NLP tools; (2) Sentence-level and document-level information are integrated by a large number of fixed rules, which are complicated to construct and far from complete.
contrasting
train_15861
(2016) exploits a neural-based method to detect multiple events collectively.
they only use the sentence-level information and neglect document-level clues, and can only capture the interdependencies between the current event candidate and its previously predicted events.
contrasting
train_15862
Including them during training distracts other parsing objectives (compare Core + PP with only analyzing Core in §6).
they do permit improvements on precision for PP attachment by 3.30, especially with our proposed joint decoding.
contrasting
train_15863
(2017) extend HMM or dependency model with valence (DMV) (Klein and Manning, 2004) with multinomials that use word (or tag) embeddings in their parameterization.
they do not represent the embeddings as latent variables.
contrasting
train_15864
This is partially because automatically parsing from words is difficult even when using unsupervised syntactic categories (Spitkovsky et al., 2011a).
inducing dependencies from words alone represents a more realistic experimental condition since gold POS tags are often unavailable in practice.
contrasting
train_15865
Thus, results from related work trained on gold tags are not directly comparable.
to measure how these systems might perform without gold tags, we run three recent state-of-the-art systems in our experimental setting: UR-A E-DMV (Tu and Honavar, 2012), Neural E-DMV (Jiang et al., 2016), and CRF Autoencoder (CRFAE) (Cai et al., 2017).
contrasting
train_15866
Inspecting the two clusters and the overlapping region in Figure 5, we find that the nouns in the separated clusters are words that can appear as subjects and for which verb agreement is therefore important to model.
the nouns in the overlapping region are typically objects.
contrasting
train_15867
Our approach can also be viewed in connection with generative adversarial networks (GANs) (Goodfellow et al., 2014), a likelihood-free framework for learning implicit generative models.
it is nontrivial for a gradient-based method like GANs to propagate gradients through discrete structures.
contrasting
train_15868
On the one hand, Kuncoro et al. (2017) proposed a top-down transition-based algorithm, which creates a phrase structure tree in the stack by first choosing the non-terminal on the top of the tree, and then considering which should be its child nodes.
to the bottom-up approach, this top-down strategy adds a lookahead guidance to the parsing process, while it loses rich local features from partially-built trees.
contrasting
train_15869
Liu and Zhang (2017a) report that the top-down approach is on par with the bottom-up strategy in terms of accuracy and the in-order parser yields the best accuracy to date on the WSJ.
despite being two adequate alternatives to the traditional bottom-up strategy, no further work has been undertaken to improve their performance.
contrasting
train_15870
This will serve as the basis for our reduction of constituent parsing to sequence labeling.
to go from theory to practice, we need to overcome two limitations of the theoretical encoding: non-surjectivity and the inability to encode unary branches.
contrasting
train_15871
Examples of practical tasks that can be formulated under this framework in natural language processing are PoS tagging, chunking or named-entity recognition, which are in general fast.
to our knowledge, there is no previous work on sequence labeling methods for constituent parsing, as an encoding allowing it was lacking so far.
contrasting
train_15872
While their purpose is also different from ours, as they use this mapping to generate training data for a parsing algorithm based on recursive partitioning using realvalued distances, their encoding could also be applied with our sequence labeling approach.
it has the drawback that it only supports binarized trees, and some of its theoretical properties are worse for our goal, as the way to define the inverse of an arbitrary label sequence can be highly ambiguous: for example, a sequence of n−1 equal labels in this encoding can represent any binary tree with n leaves.
contrasting
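To quantify that ambiguity (our own remark, not a claim from the cited work): the number of binary trees with n leaves is the Catalan number, so a single sequence of n−1 equal labels can stand for exponentially many distinct trees.

    C_{n-1} = \frac{1}{n} \binom{2n-2}{n-1}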
train_15873
We could achieve such a bigram from the unordered source tree.
that realization is not in fact appropriate for French, so that ordered tree would not be a useful training tree for French.
contrasting
train_15874
It ensures that each model's expected bigram counts match those in the POS sequences.
these maximum-likelihood estimates might overfit on our finite data, u and B.
contrasting
train_15875
The most commonly used framework for active learning is pool-based active learning, where the learner has access to the entire pool of unlabeled data at once, and can iteratively query for examples.
sequential active learning is a framework in which unlabeled examples are presented to the learner in a stream (Lewis and Gale, 1994).
contrasting
train_15876
Uncertainty sampling is a common method in pool-based active learning to identify the best example to improve a classifier.
for a given predicate p, we use this to choose the best label query involving that predicate, picking that object o ∈ O_A which is closest to the hyperplane of the classifier for p. It is more challenging to narrow down the number of predicates.
contrasting
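As an aside, the "closest to the hyperplane" selection described in the pair above is a standard uncertainty-sampling heuristic; the sketch below is a hypothetical Python illustration using scikit-learn's decision_function, not the authors' code, and the variable names are our own.

    import numpy as np
    from sklearn.svm import LinearSVC

    def closest_to_hyperplane(classifier, candidate_features):
        """Index of the candidate whose feature vector is nearest to the
        decision hyperplane of the binary classifier for a given predicate."""
        # decision_function returns a signed score proportional to the margin;
        # the smallest absolute value marks the most uncertain candidate.
        margins = np.abs(classifier.decision_function(candidate_features))
        return int(np.argmin(margins))

    # Hypothetical usage: pick the object o in O_A to query for predicate p.
    # clf_p = LinearSVC().fit(X_labeled, y_labeled)
    # query_index = closest_to_hyperplane(clf_p, X_candidates)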
train_15877
MovieQA (Tapaswi et al., 2016) is most similar to our dataset, with both multiple choice questions and timestamp annotation.
their questions and answers are constructed by people posing questions from a provided plot summary, then later aligned to the video clips, which makes most of their questions text oriented.
contrasting
train_15878
For example, given an example sentence from TEMPO-TL (e.g., "The cross is seen for the first time then window is first seen in room"), the model does not need to reason about the ordering of "cross seen for the first time" and "window is seen for the first time" because both moments only happen once in the video.
when considering the sentence "The adult hands the little boy a stick then they begin to walk" (from Figure 3), "begin to walk" could refer to multiple video moments.
contrasting
train_15879
Rare word representation has recently enjoyed a surge of interest, owing to the crucial role that effective handling of infrequent words can play in accurate semantic understanding.
there is a paucity of reliable benchmarks for evaluation and comparison of these techniques.
contrasting
train_15880
Many of these English word similarity datasets have been translated to other languages to create frameworks for multilingual (Leviant and Reichart, 2015) or crosslingual (Camacho-Collados et al., 2017) semantic representation techniques.
these datasets mostly target words that occur frequently in generic texts and, as a result, are not suitable for the evaluation of subword or rare word representation models.
contrasting
train_15881
been regarded as the de facto standard evaluation benchmark for subword and rare word representation techniques.
our analysis shows that RW suffers from multiple issues: (1) skewed distribution of the scores, (2) low-quality and inconsistent scores, and as a consequence, (3) low inter-annotator agreement.
contrasting
train_15882
Knowledge-based methods (Lesk, 1986;Moro et al., 2014;Basile et al., 2014) exploit the lexical knowledge like gloss to infer the correct senses of ambiguous words in the context.
supervised feature-based methods (Zhi and Ng, 2010;Iacobacci et al., 2016) and neural-based methods (Kågebäck and Salomonsson, 2016;Raganato et al., 2017a) usually use labeled data to train one or more classifiers.
contrasting
train_15883
As shown in Table 1, the local word "football" is crucial for distinguishing the sense of word "play".
in more complex sentences such as "Investors played it carefully for maximum advantage", sentence-level information is necessary.
contrasting
train_15884
We use the validation set (SE7) to find the optimal hyper parameters of our models: the word embedding size d w , the hidden state size d s of LSTM, the optimizer, etc.
since there are no adverbs and adjectives in SE7, we randomly sample some adverbs and adjectives from the training dataset into SE7 for validation.
contrasting
train_15885
While the basic senses of tight-e.g., being physically close together or firmly attached-conflict with the more abstract budget, the metaphoric use as meaning limited can be readily understood.
the use of penumbra in (2) is more creative and novel.
contrasting
train_15886
It is only recently that any work has introduced larger-scale novel metaphor annotations (Parde and Nielsen, 2018).
to our approach, they collect annotations on a relation level (see also Section 2).
contrasting
train_15887
On the one hand, this is a sensible approach because generally the context of a word determines its metaphoricity (and indeed, its novelty in case of metaphoric use).
such annotations lack the flexibility and ease of use of tokenbased annotations for which the context is not defined a priori.
contrasting
train_15888
For example, in this way the metaphor "[...] the artistic temperament which kept her tight-coiled as a spring [...]" (0.514) is treated as novel, while "To quench [thirst] is more than to refresh [...]" (0.424) is treated as conventionalized.
since we provide the scores, this threshold can be adjusted to suit a given application.
contrasting
train_15889
Another artifact of using Wikipedia as the background corpus can be seen on the left: try is only annotated as metaphoric in infinitive-compounds that are decidedly conventionalized (e.g., "trying to look"), yet it appears comparatively seldom in Wikipedia.
we chose to use Wikipedia instead of the BNC in order to have an out-of-domain comparison with a more contemporary, larger background corpus.
contrasting
train_15890
Third, they update the memory according to whether the returned value is strictly the same as the target.
synonyms are common in natural language text.
contrasting
train_15891
Similarly to MPSG, AdaGram introduces latent variables for word sense indexes in the input text.
unlike MPSG, AdaGram does not assume a fixed number of word senses.
contrasting
train_15892
2) disambiguated skip-gram adopts the softmax model used in MPSG.
in contrast to the previous works, disambiguated skip-gram learns a parametric model for the conditional probability distribution over senses of the center word given the context words (Eq.
contrasting
train_15893
This does not mean, however, that the number of senses learned by AdaGram is independent of model hyper-parameters.
the number of senses learned by AdaGram is directly controlled by the hyper-parameter α in the Dirichlet process used to define the prior over word meanings (Bartunov et al., 2016).
contrasting
train_15894
We choose to optimize this objective with a biased but low-variance gradient estimator.
parallel to this work there has been significant progress in gradient-based training of models with discrete latent variables.
contrasting
train_15895
Training PIXIE amounts to inferring the posterior over the visual latent factors, p_θ(z|x, y), as well as finding the decoder's parameters θ, the word and context embeddings, e and v, that maximize the likelihood (6).
as in many complex probabilistic models, the likelihood (due to the integral over z) and the posterior are intractable.
contrasting
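For context, the standard workaround for this kind of intractability is variational inference: introduce an approximate posterior q_φ(z|x, y) and maximize an evidence lower bound instead of the exact likelihood. Whether PIXIE uses exactly this bound is not stated in the excerpt; the inequality below is the generic form.

    \log p_\theta(x, y) \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\!\left[ \log p_\theta(x, y \mid z) \right] - \mathrm{KL}\!\left( q_\phi(z \mid x, y) \,\|\, p(z) \right)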
train_15896
learn two separate functions f_a and f_b) to predict Y_a and Y_b in a single-task learning setup.
if T_a and T_b are related somehow, either explicitly or implicitly, TL and MTL can improve the generalization of either task or both (Caruana, 1997; Pan and Yang, 2010; Mou et al., 2016).
contrasting
train_15897
Word embeddings such as Word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been widely recognized for their ability to capture linguistic regularities (including syntactic and semantic relations).
no linguistic property of their prepositional embeddings is known; to the best of our knowledge, we propose the first sense-specific prepositional embeddings and demonstrate their linguistic regularities.
contrasting
train_15898
Our model unifies these two approaches into an autoencoder.
we have a different goal: that of creating or improving word representations.
contrasting
train_15899
We emphasize that our method trains exclusively on the definitions and is thus applicable to any electronic dictionary.
in order to evaluate the quality of embeddings on unseen definitions, WordNet relations come in handy: we use the sets of synonyms to split the dictionary into a train set and a test set, as explained in Section 7.
contrasting