id: string (length 7-12)
sentence1: string (length 6-1.27k)
sentence2: string (length 6-926)
label: string (4 classes)
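Each record in the preview below occupies four consecutive lines in the fixed order id, sentence1, sentence2, label. As a convenience, here is a minimal parsing sketch under that assumption; the file name pairs.txt is a hypothetical placeholder, and the four schema lines above would need to be stripped before parsing.

```python
# Minimal sketch: group the flattened preview into 4-field records.
# Assumptions: each record spans four consecutive non-empty lines in the
# order id, sentence1, sentence2, label; "pairs.txt" is a hypothetical
# placeholder for wherever this preview is stored, with the schema
# header removed.

from typing import Dict, List


def parse_records(lines: List[str]) -> List[Dict[str, str]]:
    """Group consecutive non-empty lines into id/sentence1/sentence2/label records."""
    fields = ("id", "sentence1", "sentence2", "label")
    cleaned = [ln.strip() for ln in lines if ln.strip()]
    usable = len(cleaned) - len(cleaned) % len(fields)  # drop a trailing partial record
    return [
        dict(zip(fields, cleaned[i:i + len(fields)]))
        for i in range(0, usable, len(fields))
    ]


if __name__ == "__main__":
    with open("pairs.txt", encoding="utf-8") as f:
        records = parse_records(f.readlines())
    print(records[0]["id"], records[0]["label"])  # e.g. train_4800 contrasting
```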
train_4800
Similarly, non-neural textual entailment models have been developed that incorporate knowledge bases.
these also require model-specific engineering (Raina et al., 2005; Silva et al., 2018).
contrasting
train_4801
It consists of automatically recommending sememes for words, which is expected to improve annotation efficiency and consistency.
existing methods of lexical sememe prediction typically rely on the external context of words to represent the meaning, which usually fails to deal with low-frequency and out-of-vocabulary words.
contrasting
train_4802
(1) The performances of SPSE, SPWE, and SPWE + SPSE decrease dramatically with low-frequency words compared to those with high-frequency words.
the performances of SPWCF, SPCSE, and SPWCF + SPCSE, though weaker than those on high-frequency words, are not strongly influenced in the long-tail scenario.
contrasting
train_4803
Humans can intuitively conclude that clock + craftsman → clockmaker.
the external model does not per-form well for this example.
contrasting
train_4804
We find that one of the 100 pairs -'good' and 'bad' -shows the highest AUC (92.4).
another random pair ('happy' and 'evil') results in the worst performance with the AUC of 67.2.
contrasting
train_4805
Machine translation and bilingual word embeddings provide some relief through cross-lingual sentiment approaches.
they either require large amounts of parallel data or do not sufficiently capture sentiment information.
contrasting
train_4806
Machine translation (MT) can provide a way to transfer sentiment information from a resource-rich to resourcepoor languages (Mihalcea et al., 2007;Balahur and Turchi, 2014).
MT-based methods require large parallel corpora to train the translation system, which are often not available for under-resourced languages.
contrasting
train_4807
Given the simple classification strategy, all models perform relatively well on phrases with negation (all reach nearly 60 F1 in the binary setting).
while BLSE performs the best on negation in the binary setting (82.9 F1), it has more problems with negation in the 4-class setting (36.9 F1).
contrasting
train_4808
Thus, they should have similar embeddings across domains.
domain-specific word embeddings represent the fact that the sentiments or meanings across domains are different.
contrasting
train_4809
For example, the domain-common word "good" has high probability to be positive in different reviews across multiple domains.
for the word "lightweight", it would be positive in the electronics domain, but negative in the movie domain.
contrasting
train_4810
3 is not feasible in practice, given that the computation cost is proportional to the size of Λ.
similar to the skip-gram model, we can rely on negative sampling to address this issue.
contrasting
train_4811
It proposed to extract the domain-specific and the domain-invariant representations simultaneously.
it does not explore the usage of the domain-specific information.
contrasting
train_4812
It first computes the alignment scores between context vectors and the target vector, then carries out a weighted sum with the scores and the context vectors.
the context vectors have to encode both the aspect and sentiment information, and the alignment scores are applied across all feature dimensions regardless of the differences between these two types of information.
contrasting
train_4813
It has a convolutional layer generating aspect features via ReLU activation function, which controls the magnitude of the sentiment signals according to the given aspect information.
the sigmoid function in GTU and GLU has the upper bound +1, which may not be able to distill sentiment features effectively.
contrasting
train_4814
Hence, the figure confirms that our DM-MCNN approach is able to exploit and customize the provided sentiment weights for the target domain.
unlike the VADER data, our transfer learning approach results in multi-dimensional sentiment embeddings that can more easily capture multiple domains right from the start, thus making it possible to use them even without further fine-tuning.
contrasting
train_4815
There are already previous attempts to intricately combine both CNN and RNN (Zhou et al., 2016), resulting in a slower model.
HCWE resorts to combining them by simply concatenating the word vectors encoded from both CNN and RNN encoders, i.e.
contrasting
train_4816
e(w_i, u, p), has been proven to improve the performance of sentiment classifiers (Chen et al., 2016a).
this method assumes that the user and product vectors are always present.
contrasting
train_4817
Normally, a sentiment classifier transforms the final vector v_up, usually in a linear fashion, into a vector with a dimension equivalent to the number of classes C. A softmax layer is then used to obtain a probability distribution y over the sentiment classes.
finally, the full model uses a cross-entropy over all training documents D as the objective function L during training, where y is the gold probability distribution. HCSC has a nice architecture which can be used to improve the training.
contrasting
train_4818
Studies have shown the positive impact of deliberation on the quality of several document types, such as scientific papers, research proposals, political reports, and Wikipedia articles, among others (Kraut et al., 2012).
deliberative discussions may fail, either by agreeing on the wrong action, or by reaching no agreement.
contrasting
train_4819
We then used this corpus and other data in (Al-Khatib et al., 2017) to identify patterns of strategies across different general topics.
to those two studies targeting monological texts, here we address argumentation strategies in dialogical texts.
contrasting
train_4820
Conceptual Captions consists of about 3.3M image-description pairs.
with the curated style of the COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles.
contrasting
train_4821
For instance, they hallucinate "front of building" for the first image, "clock and two doors" for the second, and "birthday cake" for the third image.
conceptual-trained models do not seem to have this problem.
contrasting
train_4822
Past datasets have been limited to only a few high-resource languages and unrealistically easy translation settings.
we have collected by far the largest available dataset for this task, with images for approximately 10,000 words in each of 100 languages.
contrasting
train_4823
Their dataset is restricted to six high-resource languages and a small vocabulary of 557 English words.
we present results for over 260,000 English words and 32 foreign languages.
contrasting
train_4824
For example, only looking at the right lower region of the chest x-ray image (Figure 1) without accounting for other areas, we might not be able to recognize what we are looking at, not to even mention detecting the abnormalities.
the tags can always provide the needed high level information.
contrasting
train_4825
For the targeted caption method, we use the targeted caption S as the input of RNN.
for the targeted keyword method we no longer know the exact targeted sentence, but only require the presence of specified keywords in the final caption.
contrasting
train_4826
Similarly, Path Finding involves a search problem that requires simple spatial deductions such as 'A is east of B implies B is west of A'.
the questions in our datasets involve longer descriptions, more entities, and more relations; they are thus harder to answer with simple deductions.
contrasting
train_4827
Specifically, (Seo et al., 2015) combined information from textual questions and diagrams to build a model for solving SAT geometry questions.
our task is different as diagrams are not provided as part of the input, but are generated from the words/symbols themselves.
contrasting
train_4828
Moreover, supervised training of deep neural network models needs a large number of training samples while many interesting applications require rapid learning from a small amount of data, which poses an even greater challenge to the supervised setting.
humans learn in a way very different from the supervised setting (Skinner, 1957;Kuhl, 2004).
contrasting
train_4829
The best permutation between hypotheses and reference labels in terms of cross-entropy is selected and used for backpropagation.
this method still requires reference labels in the form of senone alignments, which have to be obtained on the clean isolated sources using a single-speaker ASR system.
contrasting
train_4830
It was evaluated under the same evaluation data and metric as in (Isik et al., 2016) based on the WSJ corpus.
there are differences in the size of training data and the options in decoding step.
contrasting
train_4831
Several previous works have considered an explicit two-step procedure (Isik et al., 2016; Chen et al., 2017, 2018).
with our work which uses a single objective function for ASR, they introduced an objective function to guide the separation of mixed speech.
contrasting
train_4832
(2017) trained a multi-speaker speech recognizer using permutation-free training without explicit objective function for separation.
with our work which uses an end-toend architecture, their objective function relies on a senone posterior probability obtained by aligning unmixed speech and text using a model trained as a recognizer for single-speaker speech.
contrasting
train_4833
Eq. 7 is still hard: it requires us to normalize the distribution p_θ, which, in turn, involves a sum over all lemmatizations and taggings.
it does lend itself to an efficient Monte Carlo approximation.
contrasting
train_4834
On both the tasks of morphological tagging and lemmatization, neural models have supplanted linear models in terms of performance in the high-resource case.
we are interested in producing an accurate approximation to the posterior in the presence of minimal annotated examples and potentially noisy samples produced during the sleep phase, where linear models still outperform non-linear approaches (Cotterell and Heigold, 2017).
contrasting
train_4835
Closest to our work is Zhou and Neubig (2017), who describe an unstructured variational autoencoder.
the exact use case of our respective models is distinct.
contrasting
train_4836
These settings are not optimal for all languages.
since hyperparameter exploration is computationally demanding due to the number of languages, we optimized these hyperparameters in initial development-data experiments over a few languages.
contrasting
train_4837
(2017) tends to significantly outperform the winners of the CONLL 2017 Shared Task.
in general, our models still obtain better results, outperforming Dozat et al.
contrasting
train_4838
The output space of this model consists of tag sets such as {POS: Adj, Gender: Masc, Number: Sing}, which are predicted for a token at each time step.
this model relies heavily on the fact that the entire space of tag sets for the LRL must match those of the HRL, which is often not the case, either due to linguistic divergence or small differences in the annotation schemes between the two languages.
contrasting
train_4839
We have used this work as a baseline model for comparing with our proposed method.
to earlier work on morphological tagging, we use a hybrid of neural and graphical model approaches.
contrasting
train_4840
(2011), they had been considered of theoretical interest only due to their incompatibility with rich feature models: incorporation of complex features resulted in jumps in asymptotic runtime complexity to impractical levels.
the recent popularization of bidirectional long short-term memory networks (bi-LSTMs; Hochreiter and Schmidhuber, 1997) to derive feature representations for parsing, given their capacity to capture long-range information, has demonstrated that one may not need to use complex feature models to obtain good accuracy (Kiperwasser and Goldberg, 2016; Cross and Huang, 2016).
contrasting
train_4841
As in the projective case, a mildly non-projective decoder has been known for several years (Cohen et al., 2011), corresponding to a variant of the transition-based parser of Attardi (2006).
its O(n^7) runtime, or the O(n^6) of a recently introduced improved-coverage variant (Shi et al., 2018), is still prohibitively costly in practice.
contrasting
train_4842
2011, all valid items of the form [i, i+1] are considered to be axioms.
we follow Kuhlmann et al.
contrasting
train_4843
2011 show how the items of a variant of MH3 can be given a transition-based interpretation under the "push computation" framework, yielding the arc-hybrid projective transition system.
such a derivation has not been made for the non-projective case (k > 3), and the known techniques used to derive previous associations between tabular and transition-based parsers do not seem to be applicable in this case.
contrasting
train_4844
This makes it possible to group derivation subtrees, and the transition sequences that they yield, into "push computations" that increase the length of the stack by a constant amount.
this does not seem possible in MH4.
contrasting
train_4845
(2018), there called ALLs0s1.
in that paper they show that it can be tabularized in O(n^6) using the push computation framework.
contrasting
train_4846
Here, we have derived it as an interpretation of the O(n^4) MH4 parser.
in this case the dynamic programming algorithm does not cover the full search space of the transition system: while each item in the MH4 parser can be mapped into a computation of this MH4 transition-based parser, the opposite is not true.
contrasting
train_4847
Exact inference for dependency parsing can be achieved in cubic time if the model is restricted to projective trees (Eisner, 1996).
nonprojectivity is needed for natural language parsers to satisfactorily deal with linguistic phenomena like topicalization, scrambling and extraposition, which cause crossing dependencies.
contrasting
train_4848
RNNs have largely replaced approaches such as the fixed-window-size feed-forward networks of Durrett and Klein (2015) in part due to their ability to capture global context.
RNNs are not the only architecture capable of summarizing large global contexts: recent work by Vaswani et al.
contrasting
train_4849
The per-head weight matrices now multiply a matrix P containing the same position embeddings that are used at the input to the encoder, rather than the layer input X (as in Section 2.2.1).
value vectors remain unchanged and continue to carry content-related information.
contrasting
train_4850
We find this simple scheme remarkably effective: it is able to outperform pretagging and can operate even in the absence of word embeddings.
its performance is ultimately not quite as good as using a character LSTM.
contrasting
train_4851
Larger speedups are obtained for larger transducers because the GPU can utilize the streaming multiprocessors more fully.
the overhead created by CUDA calls, device synchronization, and memory transfers between the host CPU and the device might be too expensive when the inputs are too small.
contrasting
train_4852
On the one hand, the guiding-feature method projects the knowledge of the source-side treebank into the target-side treebank, and utilizes extra pattern-based features as guidance for the target-side parsing, mainly for traditional discrete-feature-based parsing.
the multi-task learning method simultaneously trains two parsers on two treebanks and uses shared neural network parameters for representing common-ground syntactic knowledge (Guo et al., 2016).
contrasting
train_4853
The UD (Universal Dependencies) project releases a more detailed language-generic guideline to facilitate cross-linguistically consistent annotation, containing 37 relation labels.
after in-depth study, we find that the UD guideline is very useful and comprehensive, but may not be completely compact for realistic annotation of Chinese-specific syntax.
contrasting
train_4854
As a strong baseline for the conversion task, the multi-task trained target-side parser ("multi-task") does not use d_src during either training or evaluation.
the conversion approaches use both the sentence x and d_src as inputs.
contrasting
train_4855
Since beam search explores analyses in descending order of probability, DISTANCE and SURPRISAL ought to be yoked, and indeed they are correlated at r = 0.33 or greater across all of the beam sizes k that we considered in this study.
they are reliably associated with different EEG effects.
contrasting
train_4856
Both models use the same information regarding the true question and answer and are trained using the same number of model parameters.
the EVPI model, unlike the neural baseline, additionally makes use of alternate question and answer candidates to compute its loss function.
contrasting
train_4857
We find that the neural models beat the non-neural baselines.
the differences between all the neural models are statistically insignificant.
contrasting
train_4858
Setup We use the same pretrained deep RNNs and feed-forward prediction network paradigm.
we change the input from the previous experiments, as this task is not at the word level, but rather concerns the relationship between two words; therefore, given a word pair w_c, w_p for which we have a dependency arc label, we input [w_c; w_p; w_c • w_p] into the classifier.
contrasting
train_4859
In scenarios such as media-monitoring, where the main objective is to have a robust monitoring system for specific programs, we plot the WER across the 24 programs in the test set, and we can see in figure 2 that both the glass-box and black-box estimation are following the gold-standard WER per program.
unlike predicting word count N or error count ERR, we can see that the black-box, in general, over-estimates the WER, while the glass-box system under-estimates WER, similar to Figure 1.
contrasting
train_4860
Using gender as an example, a discriminator will attempt to predict the gender, b, of each instance from h, such that training involves joint learning of both the model parameters, θ_M, and the discriminator parameters θ_D.
the aims of learning for these components are in opposition: we seek an h which leads to a good predictor of the target y, while being a poor representation for prediction of gender.
contrasting
train_4861
We need b forward and backward passes to compute derivatives at each step of the path, leading to O(br) queries.
a naive loss-based approach requires computing the exact loss for every possible change at every stage of the beam search, leading to O(brL|V|) queries.
contrasting
train_4862
Since this data set is part of a clinical trial, an exact text message cannot be provided as an example.
the following messages illustrate typical messages in this data set, "I've been clean for about 7 months but even now I still feel like maybe I won't make it."
contrasting
train_4863
Note that on the imbalanced A-CHESS data set, on the standard classification task, KCCA embeddings perform better than the other baselines across all three performance metrics.
from Table 2, GlvCC embeddings achieve a higher average F-score and AUC than KCCA embeddings, which obtain the highest precision.
contrasting
train_4864
However, it performs similarly to the random baseline in the other corpora.
the classifier strategy performs well on both ATIS and Social Network but poorly on NLMap.
contrasting
train_4865
Deep learning via triplet networks was first introduced in (Hoffer and Ailon, 2015), and has since become a popular technique in metric learning (Zieba and Wang, 2017;Yao et al., 2016;Zhuang et al., 2016).
previous usages of triplet networks were based on supervised data and were applied mainly to computer vision applications such as face verification.
contrasting
train_4866
Note that an additional interesting comparison is to a skip-thought version obtained by learning a linear transformation of the original vectors using the triplet datasets.
no off-the-shelf algorithm is available for learning such transformation, and we leave this experiment for future work.
contrasting
train_4867
FrameNet data enabled the development of wide-coverage frame parsers using supervised learning (Gildea and Jurafsky, 2002; Erk and Padó, 2006; Das et al., 2014, inter alia), as well as its application to a wide range of tasks, ranging from answer extraction in Question Answering (Shen and Lapata, 2007) to Textual Entailment (Burchardt et al., 2009; Ben Aharon et al., 2010).
frame-semantic resources are arguably expensive and time-consuming to build due to difficulties in defining the frames, their granularity and domain, as well as the complexity of the construction and annotation tasks requiring expertise in the underlying knowledge.
contrasting
train_4868
In this paper, we focus on the simplest setup with subject-verb-object (SVO) triples and two roles, but our evaluation framework can be extended to more roles.
to the recent approaches like the one by Jauhar and Hovy (2017), our approach induces semantic frames without any supervision, yet capturing only two core roles: the subject and the object of a frame triggered by verbal predicates.
contrasting
train_4869
As noted in (Poesio et al., 1997), bridging descriptions consider many different types of relationships between a definite description (definite generic NP) and its antecedent; e.g., synonymy, hyponymy, meronymy, events, compound nouns, etc.
in this paper we focus on the identity type of relationship only.
contrasting
train_4870
These models rely on paths between existing relations in order to infer new associations between entities in KGs.
for relation extraction from a sentence, related pairs are not predefined and consequently all entity pairs need to be considered to extract relations.
contrasting
train_4871
In the combined method of HITS and K-means algorithms, we first rank the instances and patterns based on their bipartite graph and then run K-means to cluster instances in our annotated dataset.
instead of choosing the instance nearest to the centroid, we retain the one that has the highest HITS hubness score in each cluster.
contrasting
train_4872
In the previous NBT model, system designers have to tune the belief state update mechanism manually whenever the model is deployed to new dialogue domains.
the proposed model learns to update the belief state automatically, relying on no domain-specific validation sets to optimise DST performance.
contrasting
train_4873
The nested term Chicago is annotated as location in the inner span annotation.
there are only a few inner span annotations.
contrasting
train_4874
For example, the typo "sterreichischen Außenministerlum" (should be "Außenministerium", Austrian foreign ministry) is manually annotated in the data but not detected by any of the models.
"tschechoslowakischen Presse" (engl.
contrasting
train_4875
in Chinese "shī-xiān" (god of poetry) refers to the poet Li-bai and "shī-shèng" (saint of poetry) refers to the poet Du-fu.
few attempts have been made in Chinese analogical reasoning.
contrasting
train_4876
The metaphor annotation is at the relational level, which involves identification of metaphorical relations between source and target domain vocabulary.
scholars have different opinions on the distinction between literal and metaphorical senses.
contrasting
train_4877
The datasets used in these works are typically not directly applicable in the context of article commenting, and are too small in scale to support the unique complexity of the new task.
our dataset consists of around 200K news articles and 4.5M human comments along with rich meta data for article categories and user votes of comments.
contrasting
train_4878
Despite their simplicity, they show rather good results.
when measuring the normplagdet score (Table 2), M1 and M2 exhibit poor results, while the tested state-of-the-art systems evenly achieve better recall and normplagdet scores.
contrasting
train_4879
The highlighted text indicates repetition; "#" refers to a masked number.
recent studies show that there are salient problems in the attention mechanism.
contrasting
train_4880
(2018) found that news stories that were fact-checked and found to be false spread faster and to more people than news items found to be true.
our methodology considers immediate reactions to news sources of varying credibility, so we can determine whether certain reactions or reactions to trusted or deceptive news sources evoke more or faster responses from social media users.
contrasting
train_4881
Our analysis of user reactions to trusted and deceptive sources on Twitter and Reddit shows significant differences in the distribution of reaction types for trusted versus deceptive news.
due to differences in the user interface, algorithmic design, or user-base, we find that Twitter users react to trusted and deceptive sources very differently than Reddit users.
contrasting
train_4882
A natural approach to perform text style transfer is to use a regular encoder-decoder network.
the training of such a network would require parallel data.
contrasting
train_4883
(2017)'s model changes many words in the original sentence, significantly modifying the content.
our model produces worse results in terms of perplexity values.
contrasting
train_4884
We observe that a lot of the topics typically associated with older social media users in the 2011 model, such as swearing, tiredness and sleep, changed their age bias.
topics popular among younger social media users -for instance, topics mentioning employable skills and meetings, percolated upward as the early adopters of social media grew older.
contrasting
train_4885
This suggests that in every subsequent year, the language of lateteens and early-twenties is more different from the language of their contemporaries from the year before.
compare the errors for 37-year-olds in 2011 (born in 1974) against those for 37-year-olds in 2012 (born in 1975).
contrasting
train_4886
Moreover, DQN agent outperforms SVM-ex by collecting additional implicit symptoms via conversing with patients.
there is still a gap between the performance of the DQN agent and SVM-ex&im in terms of accuracy, which indicates that there is still room for improvement of the dialogue system.
contrasting
train_4887
In our exploration of architectures for the sentence encoding model, we also tried using a self-attention layer, following the intuition that not all words are equally important for the meaning of a sentence.
we later realized that the cross-lingual sentence similarity objective is at odds with what we want the attention layer to learn.
contrasting
train_4888
Non-linear maps beyond those generated by feedforward neural networks have also been explored for this task (Lu et al., 2015;Shi et al., 2015;Wijaya et al., 2017;Shi et al., 2015).
no attempt was made to characterize the resulting maps.
contrasting
train_4889
Unsupervised or limited supervision methods for learning word translation maps have recently been proposed (Conneau et al., 2018;Zhang et al., 2017;Artetxe et al., 2017;Smith et al., 2017).
the underlying methods for learning the mapping function are similar to prior work (Xing et al., 2015).
contrasting
train_4890
Bouamor and Sajjad (2018) use averaged multilingual word embeddings to calculate a joint embedding of all sentences.
distances between all sentences are only used to extract a set of potential mutual translations.
contrasting
train_4891
As in other works, we use newstest-2014 as test set.
in order to follow the standard WMT evaluation setting, we use mteval-v14.pl on untokenized hypothesis to calculate case-sensitive BLEU scores.
contrasting
train_4892
In recent years, neural networks with distributed word representations (i.e., word embeddings) (Mikolov et al., 2013; Pennington et al., 2014) have been introduced to calculate word scores automatically for CRFs (Chiu and Nichols, 2016).
semi-Markov conditional random fields (SCRFs) (Sarawagi and Cohen, 2005) have been proposed for the tasks of assigning labels to the segments of input sequences, e.g., NER.
contrasting
train_4893
Traditionally, the input of LDA is a document-term matrix of term frequencies (TF), according to the bag-of-words model (BoW).
Wilson and Chew (2010) showed that the point-wise mutual information (PMI) term weighting model can be successfully applied to eliminate stop words from topic descriptors.
contrasting
train_4894
In general, NE Independent model showed improvement in coherence up to a certain value of α (different in each case), followed by a decline, reaching very low values for NE Independent (x10).
the NE Document Dependent model does not introduce new parameters into LDA and manages to achieve the best performance in the majority of settings, thus being more stable and easy to use.
contrasting
train_4895
AS Reader (Kadlec et al., 2016), which models the question and the paragraph using a GRU followed by an attention mechanism, got 74.1% accuracy on the SciQ test set.
for a question, they used a different corpus to extract the text passage.
contrasting
train_4896
Using the question-option tuple as input gave a significant advantage to our model.
there is a lot of scope for future work.
contrasting
train_4897
Indeed, the top systems were built using Support Vector Machines (SVMs) trained with Tree Kernels (TKs), which were applied to a syntactic representation of question text (Filice et al., 2016; Barrón-Cedeño et al., 2016).
NNs-based models struggled to get good accuracy as (i) large training sets are typically not available, and (ii) effectively exploiting full syntactic parse information in NNs is still an open issue.
contrasting
train_4898
This suggests that NNs require more data to learn complex relational syntactic patterns expressed by TKs.
the plot in Figure 1 shows that the improvement reaches a plateau around 100k examples.
contrasting
train_4899
set and almost 3% on the test set over TK models.
using too much automatically labeled data may hurt the performance on the test set.
contrasting
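As a brief usage note, the sketch below builds on parse_records from the snippet above (with the same hypothetical pairs.txt placeholder) to tally the label classes and collect (sentence1, sentence2, label) triples; it is illustrative only and not part of the dataset.

```python
# Minimal usage sketch, assuming parse_records() from the earlier snippet
# is in scope and "pairs.txt" is the same hypothetical placeholder file.

from collections import Counter

with open("pairs.txt", encoding="utf-8") as f:
    rows = parse_records(f.readlines())

# Count how often each of the 4 label classes occurs in this slice.
label_counts = Counter(r["label"] for r in rows)

# Collect sentence pairs with their labels for downstream use.
triples = [(r["sentence1"], r["sentence2"], r["label"]) for r in rows]

print(label_counts)   # every row shown in this preview is labeled 'contrasting'
print(triples[0][2])  # 'contrasting'
```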