id         stringlengths  7-12
sentence1  stringlengths  6-1.27k
sentence2  stringlengths  6-926
label      stringclasses  4 values
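For reference, a minimal sketch of iterating over records with this schema. It assumes the split is stored as a JSON Lines file; the file name train.jsonl and the helper read_pairs are illustrative assumptions, and only the four fields come from the listing above.

```python
import json

# Hypothetical file name: the actual location of this split is not stated above.
DATA_PATH = "train.jsonl"

def read_pairs(path):
    """Yield (id, sentence1, sentence2, label) tuples, i.e. the four columns
    listed in the schema; label is one of 4 string classes (the preview below
    shows only 'contrasting')."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            yield (record["id"], record["sentence1"],
                   record["sentence2"], record["label"])

if __name__ == "__main__":
    # Count examples per label class.
    counts = {}
    for _id, s1, s2, label in read_pairs(DATA_PATH):
        counts[label] = counts.get(label, 0) + 1
    print(counts)
```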
train_21500
More specifically, the RMN and HTMM agree on environmental descriptions (e.g., boats, outdoors) and graphic sexual scenes (Figure 5, middle).
the RMN is more sophisticated with interpersonal relationships.
contrasting
train_21501
This task was originally paired with a visual question answering task much simpler than the one just discussed, and is appealing for a number of reasons.
to the VQA dataset, GeoQA is quite small, containing only 263 examples.
contrasting
train_21502
Almost all these works focus on reducing the marginal distribution distance between different domain features in an unsupervised manner to make them indistinguishable.
considering that a word is not evenly distributed when conditioned on different labels, the discriminative properties of features from different domains may not be similar, which means that close source and target samples may not have the same label.
contrasting
train_21503
we conduct CRF parameter transfer by minimizing [...]. It turns out that a similar regularization term is applied in our CRF parameter transfer method and in the regularization framework (RF) for domain adaptation (Lu et al., 2016).
RF is proposed to generalize the feature augmentation method in (Daume III, 2007), and these two methods are discussed only from the parameter perspective.
contrasting
train_21504
Also, previous work solves FETC as a multi-label classification followed by ad-hoc post-processing.
our solution is more elegant: we use public word embeddings to train a single-label model that jointly learns representations for entity mentions and their context.
contrasting
train_21505
Current FETC systems sidestep the issue by either ignoring out-of-context labels or using simple pruning heuristics like discarding training examples with entities assigned to multiple types in the knowledge graph.
both strategies are inelegant and hurt accuracy.
contrasting
train_21506
To the best of our knowledge, the first use of a hierarchical loss function was originally introduced in the context of document categorization with support vector machines (Cai and Hofmann, 2004).
that work assumed that weights to control the hierarchical loss would be solicited from domain experts, which is inapplicable for FETC.
contrasting
train_21507
That is, the performance upper bounds of our proposed model are no longer 100%: for example, the best strict accuracy we can get in this setting is 88.28% for FIGER(GOLD).
as the strict accuracy of state-of-the-art methods is still nowhere near 80% (Table 3), the evaluation we perform is still informative.
contrasting
train_21508
For example, our model trained on D f iltered will misclassify S5 as Title, while the model trained on D raw can make the correct prediction.
there are still some errors that can't be fixed with our model.
contrasting
train_21509
For BREE and BRET, the definition follows directly from the fact that these are entity-pair and template-centered instantiations of BREX, respectively.
the disjunctive matching of instances for an extractor with entity pair and template seeds in BREJ (Figure 3 line "(i)" ) boosts the likelihood of finding positive instances.
contrasting
train_21510
In other words, config 9 (Table 5) is a combination of both weighted negative and scaled positive extractions.
we also investigate ignoring w_n (= 1.0) in order to demonstrate the capability of BREJ with only scaling positives and without weighting negatives.
contrasting
train_21511
In our work, we tackle the problem of vowel system typology, i.e., we propose a generative probability model of which vowels a language contains.
to previous work, we work directly with the acoustic information (the first two formant values) rather than modeling discrete sets of phonemic symbols (IPA).
contrasting
train_21512
Also, we can notice that Wixarika has more unique words than the rest of our studied languages.
Nahuatl has the highest number of unique morphemes, with 810.
contrasting
train_21513
MTT-U) is a single element in the input sequence (the one representing the task).
this information enables the model to handle each given instance correctly at inference time.
contrasting
train_21514
In the case of random strings, again, adding more training data seems to help more.
using corpus data seems to hurt performance and the more such examples we use, the worse accuracy we obtain.
contrasting
train_21515
For the similar problem of Chinese word segmentation, Zhang and Clark (2008) trained a model jointly on part-of-speech tagging.
we are not aware of any prior work on multi-task training or data augmentation for neural segmentation models.
contrasting
train_21516
Ruzsics and Samardzic (2017) extended the standard encoder-decoder architecture for canonical segmentation to contain a language model over segments and improved results.
a big difference to our work is that they still used more than ten times as much training data as we have available for the indigenous Mexican languages we are working on here.
contrasting
train_21517
It preserves information which is very close to what the decoder actually needs.
there might be some missing pieces of information or some incompatibility between the decoder and the table, so we do not freeze the morphology table during training, but let the decoder update it with respect to its needs in the forward and backward passes.
contrasting
train_21518
The main findings about the behaviour of the table are as follows: • The model assigns high attention weights to stem-C for almost all time steps.
the weights assigned to this class for t_1 and i_5 are much higher than those of affix characters (as they are part of the stem).
contrasting
train_21519
Translation information stored in the baseline decoder is not sufficient for selecting the right character 'G', so the decoder wrongly starts with 'i' and continues along a wrong path up to generating the whole word.
our decoder's information is accompanied with signals from the affix table which force it to start with a better initial character, whose sampling leads to generating the correct target word.
contrasting
train_21520
First, we note that adding any combination of acoustic-prosodic features (individually or in sets) improves performance over the text-only baseline.
certain combinations of acoustic-prosodic features are not always better than their subsets.
contrasting
train_21521
Our approach does not use a separate disfluency detection module; we hypothesized that the location-sensitive attention model helps handle these differences based on analysis of the text-only results (Table 1).
more explicit modeling of disfluency pattern match characteristics in a dependency parser (Honnibal and Johnson, 2014) leads to better disfluency detection performance (F = 84.1 vs. 76.7 for our text only model).
contrasting
train_21522
Although there were no pauses between the words in this sentence, the audio sample showed that the word 'own' was both lengthened and raised in intonation, giving the prosody-enhanced parser (right) a signal that 'own' lies on a syntactic boundary.
the text-only parser (left) had no such information and made an NP-attachment error.
contrasting
train_21523
everything being an object of had).
in the context of this conversation (the speaker was talking about another person in an informal manner), 'and everything' acts more like filler, e.g.
contrasting
train_21524
2017, and the performance of the base model was comparable to the one reported.
we were unable to reproduce the significant gains that were reported when using the reverse model (italicized in Table 3).
contrasting
train_21525
They used sequence-to-sequence models to transcribe Spanish speech and translate it in English, by jointly training the two tasks in a multitask scenario where the decoders share the encoder.
to our work, they use a large corpus for training the model on roughly 163 hours of data, using the Spanish Fisher and CALLHOME conversational speech corpora.
contrasting
train_21526
We introduce a new corpus of speeches from campaign events in the months leading up to the 2016 U.S. presidential election and develop new models for predicting moments of audience applause.
to existing datasets, we tackle the challenge of working with transcripts that derive from uncorrected closed captioning, using associated audio recordings to automatically extract and align labels for instances of audience applause.
contrasting
train_21527
The intuition behind our model is that addressing certain parts of the OH's reasoning often has little impact in changing the OH's view, even if the OH realizes the reasoning is flawed.
some parts of the OH's reasoning are more open to debate, and thus, it is reasonable for the model to learn and attend to parts that have a better chance to change an OH's view when addressed.
contrasting
train_21528
Challenger 1 addresses the OH's general statement and provides a new fact, which received a ∆.
challenger 2 addresses the OH's issue about race but failed to change the OH's view.
contrasting
train_21529
For most OH replies, the (non-)existence of a ∆ indicates whether a comment to which the OH replied changed the OH's view.
an OH's view is continually influenced as they participate in argumentation, and thus a ∆ given to a comment may not necessarily be attributed to the comment itself.
contrasting
train_21530
Moreover, prior work has mainly borrowed metrics from machine translation (MT) and paraphrase communities for evaluating style transfer.
it is not clear if those metrics are the best ones to use for this task.
contrasting
train_21531
To fill the gap, implicit discourse relation prediction has drawn significant research interest recently and progress has been made (Chen et al., 2016) by modeling compositional meanings of two discourse units and exploiting word interactions between discourse units using neural tensor networks or attention mechanisms in neural nets.
most existing approaches ignore wider paragraph-level contexts beyond the two discourse units that are examined for predicting a discourse relation in between.
contrasting
train_21532
Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance.
the performance on the three small classes (Comp, Cont and Temp) remains low.
contrasting
train_21533
This result indicates that the Pseudo-melody model uses the information of a given melody to make a better prediction of its lyrics word sequence.
the Heuristic model had the worst performance, despite training with a large amount of raw lyrics.
contrasting
train_21534
This result is consistent with the perplexity evaluation result.
regarding the "Grammaticality" and "Meaning" evaluation, workers gave high scores to the Lyrics-only and Pseudo-melody models that are well-trained on a large amount of text data.
contrasting
train_21535
These results indicate our pseudo data learning strategy contributes to generating high-quality lyrics.
the quality of lyrics automatically generated is still worse than the quality of lyrics that humans produce, and it still remains an open challenge for future research to develop computational models that generate high-quality lyrics.
contrasting
train_21536
Such frameworks enable a decoder to retrieve from a memory during generation.
less research has been done to take care of the memory contents from different sources, which are often of heterogeneous formats.
contrasting
train_21537
(2016) discuss different knowledge representations for a simple factoid QA task and show that classic structured KBs organized in a Key-Value Memory style work the best.
dealing with heterogeneous memory is not trivial.
contrasting
train_21538
Combining the two mechanisms together gives HS-AttnHist the best performance in Enrichment.
HS-AttnHist still suffers from the repetition issue, to a certain extent.
contrasting
train_21539
In recent work, Zhang and Lapata (2017) propose a reinforcement learning-based text simplification model which jointly models simplicity, grammaticality, and semantic fidelity to the input.
to these methods, Narayan and Gardent (2016)'s sentence simplification approach does not need a parallel corpus for training, but rather uses a deep semantic representation as input for simplification.
contrasting
train_21540
In Example 7.2, the top paraphrase identified by both AddCos-PPDB and AddCos-Simple PPDB for the word "monitor" is "track", which is a reasonable substitute.
in Example 7.3, AddCos-Simple PPDB model was able to identify a good simple substitute, when none of the other models were able to identify a suitable word with comparable complexity.
contrasting
train_21541
Previous works for factoid QG (Serban et al., 2016) claim to solve the issue of small-sized QA datasets.
encountering an unseen predicate / entity type will generate questions made out of random text for those out-of-vocabulary predicates that a QG system has never seen.
contrasting
train_21542
General features: Most of the general features are based on (Attali and Burstein, 2006).
due to lack of tools for processing the Portuguese language, we implemented the following features, which are sub-divided as follows: Grammar and style: Features include the number of grammar errors and misspellings.
contrasting
train_21543
form document representations using citation relations, which are not available for unfinished or new documents.
our method does not need to be re-trained as the corpus of potential candidates grows.
contrasting
train_21544
That is, the textual embeddings for the query text and abstract share the same weights.
we had a significantly larger amount of data to train NNRank on OpenCorpus, and found that non-Siamese embeddings are beneficial.
contrasting
train_21545
We found that in the verification of answer-options, our annotations were in high agreement (98%) with those obtained from mechanical turk.
that was not the case for the verification of multi-sentence questions.
contrasting
train_21546
This method uses the top documents retrieved for each query to learn the association between the query terms and those occurring in the retrieved documents.
to retrieving ranked lists for every query as in (Zamani and Croft, 2017), we capture the semantic context of query words with the help of other useful cues for task-relatedness, e.g.
contrasting
train_21547
Informally speaking, the objective function aims to maximize the similarity between two word vectors w and v that are members of the same semantic context.
it minimizes the similarity between the word vector w and a word vector u randomly sampled from outside its context, as defined by the semantic relation S of Equation 2.
contrasting
train_21548
In order to compare our results with these studies, we use the same subset of 1424 queries from the AOL query log for the evaluation of task extraction effectiveness as used in these earlier studies.
since the purpose of these studies was only to extract tasks from a single session, in their annotation scheme two queries only qualified as part of the same task if they appeared within the same session.
contrasting
train_21549
The authors report F1 scores of 23.9% and 56.6% for their definition extraction methods.
we assign meaning exclusively to variables, using denotations from a pre-computed dictionary of mathematical types, rather than freeform text.
contrasting
train_21550
The type dictionary distributed by Stathopoulos and Teufel (2016) contains 10,601 automatically detected types from the MREC.
the MREC contains 2.9 million distinct technical terms, many of which might also be types.
contrasting
train_21551
The TY/typed model yields almost double the MAP performance of its untyped counterpart (TY/untyped, .083 MAP).
the RT/typed and RT/untyped models perform comparably (no significant difference) but poorly.
contrasting
train_21552
Although learning paradigms like Transfer Learning (Pan and Yang, 2010) attempt to incorporate knowledge from one task into another, these techniques are limited in scalability and are specific to the task at hand.
humans have the intrinsic ability to elicit required past knowledge from the world on demand and infuse it with newly learned concepts to solve problems.
contrasting
train_21553
Incorporating Inductive Biases (Ridgeway, 2016) based on the known information about a domain onto the structure of the learned models, is an active area of research.
our motivation and approach is fundamentally different from these works.
contrasting
train_21554
The convolution-based model helped to reduce the space of entities and relationships over which attention had to be generated.
more sophisticated techniques using similarity based search (Wang et al., 2014a;Mu and Liu, 2017) can be pursued towards this purpose.
contrasting
train_21555
We can see that our fixnorm approach does not completely solve the mistranslation issue, since it translates Entoni Fauchi to UNK UNK (which is arguably better than James Chan).
fixnorm+lex gets this right.
contrasting
train_21556
As we can see in Figure 2, because of the alignment shift, both tied and fixnorm incorrectly replace the two unknown words (in bold) with But Deutsche instead of Deutsche Telekom.
under fixnorm+lex and the model of Arthur et al.
contrasting
train_21557
Some have focused on reducing the number of UNKs by enabling NMT to learn from a larger vocabulary (Jean et al., 2015;Mi et al., 2016); others have focused on replacing UNKs by copying source words (Gulcehre et al., 2016;Gu et al., 2016;Luong et al., 2015b).
these methods only help with unknown words, not rare words.
contrasting
train_21558
With such large systems, NMT showed that it can scale up to immense amounts of parallel data in the order of tens of millions of sentences.
such data is not widely available for all language pairs and domains.
contrasting
train_21559
Alternatively, we could train monolingual embeddings in a shared space and use these as the input to our MT system.
since these embeddings are trained on a monolingual objective, they will not be optimal for an NMT objective.
contrasting
train_21560
For instance, the influence can be approximately ranked as Es ≈ Pt > Fr ≈ It > Cs ≈ El > De > Fi, which is interestingly close to the grammatical relatedness of Ro to these languages.
Cs has a strong influence although it does not fall in the same language family as Ro; we think this is due to the geographical influence between the two languages, since Cs and Ro share similar phrases and expressions.
contrasting
train_21561
Sequence-to-sequence models are usually trained with a simple token-level likelihood loss (Bahdanau et al., 2014).
at test time, these models do not produce a single token but a whole sequence.
contrasting
train_21562
Dirichlet Multinomial Regression (DMR) and other supervised topic models can incorporate arbitrary document-level features to inform topic priors.
their ability to model corpora is limited by the representation and selection of these features, a choice the topic modeler must make.
contrasting
train_21563
We calculated this topic quality metric on the top 20 most probable words in each topic, and averaged over the most coherent 1, 5, 10, and over all learned topics.
models were selected to only maximize average NPMI over all topics.
contrasting
train_21564
Model fitting is performed by back-propagation of a max-margin cost.
we use neural networks to learn feature representations for documents, not as a replacement for the LDA generative story.
contrasting
train_21565
Sample topic and discourse assignments z and d at the message level and the word type switcher x at the word level, using the distributions computed according to the parameters optimized in Step 1. Step 2 is analogous to Gibbs Sampling (Griffiths, 2002) in probabilistic graphical models, such as LDA (Blei et al., 2003).
distinguishing from previous models, the multinomial distributions in our models are not drawn from a Dirichlet prior.
contrasting
train_21566
(2014) examined principal roles in 80 discussions from the Wikipedia: Article for Deletion pages (focusing on stubbornness or ignoredness, among others) and found several typical roles, including 'rebels', 'voices', or 'idiots'.
to our data under investigation (Change My View debates), Wikipedia talk pages do not adhere to strict argumentation rules with manual moderation and have a different pragmatic purpose.
contrasting
train_21567
This corresponds to observations made in comments in newswire where 'weightier' topics tended to stir incivility (Coe et al., 2014).
'stupidity' (or 'reasonableness') does not seem to play any significant role.
contrasting
train_21568
This design ensures that an arc is associated with every word.
in our setting for scene graph generation, there may be no arc for some of the words, especially empty words.
contrasting
train_21569
Neutrality is comparably easier to achieve by reusing terms in the whole set of targets as decoys.
a decoy may hardly meet QoU and IoU simultaneously.
contrasting
train_21570
[Figure 1: AMR graph for "He described her as a curmudgeon" (nodes: describe-01, he, she, curmudgeon; edges: ARG0, ARG1, ARG2).] "His description of her: curmudgeon" and "She was a curmudgeon, according to his description" should result in the same AMR graph as shown in Figure 1.
in practice, things are different.
contrasting
train_21571
As such, one cannot expect to use AMR in the transparent way mentioned above to identify paraphrase relations.
we demonstrate in this paper that AMR can be used in a "softer" way to detect such relations.
contrasting
train_21572
(2016), which learns word representations by adopting a ranking-based loss function.
none of these models includes any contextual information beyond the neighbouring words.
contrasting
train_21573
The performance of the CNN model improves if POS vectors are considered together with GloVe word vectors in input, both when such POS vectors are randomly initialized (w_i p_r) and independently trained with the GloVe model (w_i p_i).
the best performance is achieved when word and POS tag vectors are jointly trained with our attr2vec model (w_j p_j).
contrasting
train_21574
Thus, these are time points that the model perceives as having increased semantic change.
there is a weakness to this analysis.
contrasting
train_21575
In the original dataset, there are 200 words categorized into 17 classes.
we remove words that do not rank in the top 20,000 by frequency in any decade in our training data to ensure that the synthetic words do not lack context words at a given time.
contrasting
train_21576
The statistic in M[w, c] is usually related to pointwise mutual information (Levy et al., 2015a): [...], and thus a larger co-occurrence count #(w, c).
the derivation has two flaws: (1) c could contain negative values and (2) lower #(w, c) could still lead to larger PMI(w, c) as long as #(w) is small enough.
contrasting
train_21577
At this point, the gradients coming from negative sampling (the second term) decrease the embedding values of both x and y by the same amount.
the embedding of the hypernym y would receive positive gradients from the first term that are higher than or equal to those of x in every dimension, because #(x, c) ≤ #(y, c).
contrasting
train_21578
In theory, word embeddings obtained by these joint models could be as good as representations produced by models which finetune input vector space.
their performance falls behind that of fine-tuning methods (Wieting et al., 2015).
contrasting
train_21579
Lower absolute scores for Italian and German compared to the ones reported for English are due to multiple factors, as discussed previously by : 1) the AR model uses less linguistic constraints for DE and IT; 2) distributional word vectors are induced from smaller corpora; 3) linguistic phenomena (e.g., cases and compounding in DE) contribute to data sparsity and also make the DST task more challenging.
it is important to stress the consistent gains over the vector space specialised by the state-of-the-art ATTRACT-REPEL model across all three test languages.
contrasting
train_21580
Fixed-length context windows S running over the corpus are used in word embedding methods as in C-BOW (Mikolov et al., 2013b,a) and GloVe (Pennington et al., 2014).
here we have k = |V| and each cost function f_S : R^k → R only depends on a single row of its input, describing the observed target word for the given fixed-length context S. For sentence embeddings, which are the focus of our paper here, S will be entire sentences or documents (therefore variable length).
contrasting
train_21581
(2017) who also use additive compositionality to obtain sentence embeddings.
in contrast to our [...]. [Table 4: Comparison of the performance of the unsupervised and semi-supervised sentence embeddings by Arora et al. (2017) with our models.]
contrasting
train_21582
An important assumption of such work is that different tree structures lead to different semantic representations even for the same sentence.
they all resort to external syntactic resources, such as parse trees or Treebank annotations (Marcus et al., 1993), which limits their broader applications.
contrasting
train_21583
A desirable solution would be to automatically and dynamically induce the tree structures for target-specific sentence representations.
the challenge is that the absence of external supervisions makes it difficult to evaluate the quality of the tree structures and train the parameters.
contrasting
train_21584
By comparing the decoded outputs with the ground-truth labels, the prediction errors can back-propagate to update parameters of the encoder-decoder network.
it is no longer applicable in our setting, as we do not have any explicit supervision from external syntactic resources.
contrasting
train_21585
Sentiment-based method gives the highest F1 score on the positive class.
its performance is not consistent on the negative class, which suggests that it tends to misclassify the sentence as positive.
contrasting
train_21586
(2014) cascades parse errors to later stages, which will hurt performance on downstream tasks.
the adapted tree structures in Dong et al.
contrasting
train_21587
Literature survey suggests a wide range of research on sentiment analysis (at the document or sentence level) is being carried out in recent years (Turney, 2002;Kim and Hovy, 2004;Jagtap and Pawar, 2013;Poria et al., 2016;Kaljahi and Foster, 2016;Gupta et al., 2015).
most of these studies are focused on resource-rich languages like English.
contrasting
train_21588
For a missing word representation, the literature suggests two possible solutions: a) zero vector (Bahdanau et al., 2017) or b) random vector (Dhingra et al., 2017).
in both cases the resultant vector could be completely out of context and often does not fit well with the others.
contrasting
train_21589
In a model that takes into account only the simple polar word score, the sentence would have high relevance towards the positive sentiment.
the sequence information of the phrase "far from liking and recommending" dictates the negative sentiment of the sentence.
contrasting
train_21590
"I'm far from liking and recommending this phone to anyone."
to A3, architecture A2 does not rely on the sequence information of the extracted features and lets the network learn on its own.
contrasting
train_21591
Another limitation of our work is that 7.2M sentences is not a big number in terms of word embedding computation.
the underlying method performs considerably better compared to the state-of-the-art systems, even with all these constraints.
contrasting
train_21592
SMT error & corpus size), the proposed method with bilingual embeddings (76.29%) performs considerably well, at par with the monolingual embeddings created from a very large corpus of 53M sentences (77.74%).
the monolingual WE computed using the same amount of corpus (i.e.
contrasting
train_21593
First, we follow Katiyar and Cardie (2016) which set aside 132 documents for development and used the remaining 350 documents for 10-fold CV.
in the 10-fold CV setting, the size of the test sets is 3 times smaller than the dev set size (Table 2, row 3), and, consequently, results in high-variance estimates on the test sets.
contrasting
train_21594
As we have seen so far, many holders that are subjects or A0 roles, and targets that are A1 roles, are properly labeled by both models.
a considerable amount of such holders and targets are not correctly predicted (Table 7-8, col. 2, rows 2-3).
contrasting
train_21595
Finally, for example 7 our models make plausible predictions.
the gold holder is always the entity from the coreference cluster that is the closest to the opinion.
contrasting
train_21596
More closely related is our prior work (Vyas and Carpuat, 2016) where we used lexical context based embeddings to detect cross-lingual lexical entailment.
the focus of this work is on hypernymy, a more well-defined relation than entailment.
contrasting
train_21597
• reverse noising: For reverse noising, we simply train a reverse model from Y → X using our parallel noisy-clean corpus and run standard beam search to generate noisy targets Ỹ from clean inputs Y.
we find vanilla reverse noising tends to be too conservative.
contrasting
train_21598
We observe that token noising, despite matching the frequency of errors, fails to generate realistic errors (Figure 4).
reverse noising yields significantly more convincing errors, but the edit distance between synthesized examples is significantly lower than in real data (Figure 5).
contrasting
train_21599
To test this, we plot MAP for our best self-training model and various QA baselines as we vary the proportion of labeled training set in Figure 3.
we keep the unlabeled text fixed (10K Wikipedia paragraphs).
contrasting