Columns:
id: string (lengths 7 to 12)
sentence1: string (lengths 6 to 1.27k)
sentence2: string (lengths 6 to 926)
label: string class (4 values)
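Assuming this listing was exported from a dataset hosted on the Hugging Face Hub, a minimal sketch of loading and inspecting records with this schema might look as follows; the dataset ID "user/contrasting-pairs" is a placeholder assumption, not the real identifier.

```python
# Minimal sketch: load a split with this schema and print a few records.
# Assumption: the dataset lives on the Hub under the placeholder ID below.
from datasets import load_dataset

ds = load_dataset("user/contrasting-pairs", split="train")

# Each record mirrors the columns above: an id, two sentences, and a label.
for record in ds.select(range(3)):
    print(record["id"], record["label"])
    print("  s1:", record["sentence1"])
    print("  s2:", record["sentence2"])
```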
train_14900
For instance, when the question is relatively simple and does not have complex compositional structure, paths in the knowledge graph that connect the answers and the entities in the narrative can be interpreted as legitimate semantic parses.
as we will show in our experiments, learning from implicit supervision alone is not a viable strategy for algebra word problems.
contrasting
train_14901
While their algorithms are interesting, the experimental setting is somewhat unrealistic as the implicit signals are simulated.
the goal of our algorithm is to significantly improve a strong solver with a large quantity of unlabeled data.
contrasting
train_14902
These practical issues make it difficult to use the crawled equations for explicit supervision directly.
it is relatively easy to obtain question-solution pairs with simple heuristics.
contrasting
train_14903
Recent advances in deep learning techniques demonstrate that the error rate of machine learning models can decrease dramatically when large quantities of labeled data are presented (Krizhevsky et al., 2012).
labeling natural language data has been shown to be expensive, and it has become one of the major bottlenecks for advancing natural language understanding techniques (Clarke et al., 2010).
contrasting
train_14904
Social media especially contains time-sensitive information and requires accurate temporal analysis, for example, for detecting real-time cybersecurity events (Ritter et al., 2015;Chang et al., 2016), disease outbreaks (Kanhabua et al., 2012) and extracting personal information (Schwartz et al., 2015).
most work on social media simply uses generic temporal resolvers and therefore suffers from suboptimal performance.
contrasting
train_14905
(2016) also apply a similar idea of combining pointer networks and softmax output.
different from all these discriminative models above, we explore generative models for sentence compression.
contrasting
train_14906
For the related tasks of document and multidocument summarization, Graham (2015) provides a fine-grained comparison of automated evaluation methods.
to the best of our knowledge, no studies of automatic evaluation metrics exist for abstractive compression of shorter texts.
contrasting
train_14907
In particular, operations concerning deletion and verbal features are the most frequent ones both at lower and higher readability scores differences.
they have an opposite distribution: transformations of verbal features increase at higher readability differences (>0.6) while deletions decrease.
contrasting
train_14908
Nevertheless, those models built for single-turn response selection ignore the whole context, which makes them difficult to apply to multi-turn response selection tasks.
research on multi-turn response selection usually takes the whole context into consideration and views the context and response as word sequences.
contrasting
train_14909
The latent variable enables us to jointly model discourse arguments and their relations, rather than conditionally model y on x.
the incorporation of the latent variable makes the modeling difficult due to the intractable computation with respect to the posterior distribution.
contrasting
train_14910
With the parameters of Gaussian distribution, we can access the representation z using different sampling strategies.
traditional sampling approaches often break off the connection between recognizer and approximator, making the optimization difficult.
contrasting
train_14911
However, our work differs from theirs significantly, which can be summarized in the following three aspects: 1) they employ the recurrent neural network to represent the discourse arguments, while we use the simple feedforward neural network; 2) they treat the discourse relations directly as latent variables, rather than the underlying semantic representation of discourses; 3) their model is optimized in terms of the data likelihood, since the discourse relations are observed during training.
varNDRR is optimized under the variational theory.
contrasting
train_14912
When trained on NW and tested on DF, supervised methods encounter out-of-domain situations.
the MSEP system can adapt well.
contrasting
train_14913
(2015) proposed a super… Taxonomies, which serve as the backbone of structured knowledge, are useful for many NLP applications such as question answering (Harabagiu et al., 2003) and document clustering (Fodeh et al., 2011).
the hand-crafted, well-structured taxonomies including WordNet (Miller, 1995), Open-Cyc (Matuszek et al., 2006) and Freebase (Bollacker et al., 2008) that are publicly available may not be complete for new or specialized domains.
contrasting
train_14914
vised method to learn term embeddings based on pre-extracted taxonomic relation data.
this method is heavily dependent on the training data to discover all taxonomic relations, i.e.
contrasting
train_14915
Many works in the literature (Kozareva and Hovy, 2010;Navigli et al., 2011;Wentao et al., 2012) attempted to manually find these contextual patterns, or automatically learn them.
due to the wide range of complex linguistic structures, it is difficult to discover all possible contextual patterns between hypernyms and hyponyms in order to detect taxonomic relations effectively.
contrasting
train_14916
For example, in the Animal domain, our approach identifies 'wild sheep' as a hyponym of 'sheep', but in WordNet, they are siblings.
many references 1, 2 consider 'wild sheep' as a species of 'sheep'.
contrasting
train_14917
A method for inducing (binary) relations and the categories they connect was proposed by (Mohamed et al., 2011).
in that work, categories and their instances were known a-priori.
contrasting
train_14918
However, in that work, categories and their instances were known a-priori.
in case of SICTF, both categories and relations are to be induced.
contrasting
train_14919
we iteratively sample the partial changes χ^k_{(i,j)}.
we use equations 9 and 10 for predictive distributions of Dirichlet multinomials instead of 5 and 6.
contrasting
train_14920
In role-by-role probabilities (role conditioned on its parent role) we did not distinguish whether the role is on the left side or the right side of the parent.
this position keeps important information about the syntax of words (and their roles).
contrasting
train_14921
Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years.
tasks such as visual question answering require combining these vector representations with each other.
contrasting
train_14922
We compensated for this by stacking fully connected layers (with 4096 units per layer, ReLU activation, and dropout) after the non-bilinear pooling methods to increase their number of parameters.
even with similar parameter budgets, non-bilinear methods could not achieve the same accuracy as the MCB method.
contrasting
train_14923
CSP does not directly exploit this observation as it only updates its parameter vector with respect to the differences between complete assignments: w = w + ∆φ(x, y, z).
sWVP can exploit this observation in various ways.
contrasting
train_14924
Our approach is more amenable to such theoretical models than traditional distributed word embeddings.
we can also work with the expected word embeddings, which are vectors of probabilities, and can therefore be expected to hold the advantages of dense distributed representations (Bengio et al., 2013).
contrasting
train_14925
The original grammar performs slightly worse than the previous work in dependency tree-based translation; this can be ascribed to the difference between the implementation of the original grammar and the dependency parser used in the previous work.
the similarized grammar achieves a very significant improvement over the original grammar, and also significantly surpasses the previous work.
contrasting
train_14926
Additionally, their model maps task difficulty and review bias to a difficulty parameter in IRT.
we naturally extended the model so that standard analysis approaches can be applied to maintain interpretability.
contrasting
train_14927
The model described above is different from the original GRM, which assumed that the values of a are independent from question to question, and that each a belongs to exactly one question.
in our problem setting, the judges evaluate multiple segments, and discrimination parameter a is independent from segment j.
contrasting
train_14928
Consequently, systems with low θ tended to lose to the baseline on segment 1858 due to their wrong translation (see the translation of "hawaiano de Mauna Kea").
some of the low-ranked systems beat the baseline on segment 1818, and the segment contributed to discriminate them.
contrasting
train_14929
In contrast, translations generated by neural models (both GroundHog and VNMT) are much more fluent and comprehensible.
there are essential differences between GroundHog and our VNMT.
contrasting
train_14930
The translation of the clause "体现 了 双方 可贵 的 和 平 诚意 。" (which embodies both sides' valuable sincerity for peace) at the end of the source sentence is completely lost.
our VNMT model does not miss or mistake these fragments and can convey the meaning of entire source sentence to the target side.
contrasting
train_14931
Specifically, for the model in question the EM algorithm still applies but we have to use a gradient-based algorithm within the learning step.
since such a gradient-based method introduces the necessary complication of a learning rate, we could also optimize the objective by picking θ and λ via cross-validation and using a multinomial EM algorithm for the learning of the lexical t terms.
contrasting
train_14932
For example, suppose word w_0 has k senses and thus has k clusters of context windows; we denote the average embedding vectors for these clusters as ξ_1, …, ξ_k.
since the online dictionary uses some descriptions and example sentences to interpret each word sense, we can represent each word sense by the average embedding of those words including its description words and the words in the corresponding example sentences.
contrasting
train_14933
We could compare different models using the same number of total parameters.
this would inevitably introduce other biases, e.g., the number of hyper-parameters would become different.
contrasting
train_14934
2016) have manually annotated these questions with logical forms.
automatic characterization of questions is hard, while manual characterization is costly and requires expertise.
contrasting
train_14935
(2016) show that only 66% of the WEBQUESTIONS answers, which were collected via crowdsourcing, are completely correct.
although questions generated from a KB may not follow the distribution of user information needs, it has the advantage of explicit question characteristics, and enables programmatic configuration of question generation.
contrasting
train_14936
PARASEMPRE, in a reverse manner, enumerates a set of logical forms, generates a canonical utterance for each logical form, and ranks logical forms according to how well the canonical utterance paraphrases the input question.
jACANA follows the information extraction paradigm, and builds a classifier to directly predict whether an individual is the answer.
contrasting
train_14937
The following experiments will give more insights about where the performance difference comes from.
jACANA is much faster, showing an advantage of information extraction.
contrasting
train_14938
If data is sparse, the posterior will still be sharply peaked around distributions in the gamma family, reducing the effective capacity of the model and minimizing overfitting.
given enough evidence in the data, the model will also deviate from the gamma-centered prior: depending on the kernel function chosen for the GP prior, any density function can in principle be represented.
contrasting
train_14939
Clearly, this includes all observations of saccade amplitudes and fixation durations observed in the training and test set.
we also need to evaluate the normalizer in Equation 26, and (for f ∈ {α_1, ᾱ_1, α_2, α_3, α_4}) the additional normalizer required when truncating the distribution (see Equation 15).
contrasting
train_14940
It is well known that there is substantial sociolinguistic variation between different communities, whether these communities are defined geographically (Trudgill, 1974) or via underlying sociocultural differences (Labov, 2006).
no previous work has systematically investigated community-specific variation in word sentiment at a large scale.
contrasting
train_14941
Of course, the sentiment lexicons induced by SENTPROP are not perfect, which is reflected in the uncertainty associated with our bootstrap-sampled estimates.
we believe that these user-constructed, domain-specific lexicons, which quantify uncertainty, provide a more principled foundation for CSS research compared to domain-general sentiment lexicons that contain unknown biases.
contrasting
train_14942
Most of these studies focus on building sentiment classifiers with features, which include bag-of-words and sentiment lexicons, using SVM (Mullen and Collier, 2004).
the results highly depend on the quality of features.
contrasting
train_14943
By utilizing syntax structures of sentences, tree-based LSTMs have been proved to be quite effective for many NLP tasks.
such methods may suffer from syntax parsing errors which are common in resource-lacking languages.
contrasting
train_14944
3 Attention-based LSTM with Aspect Embedding 3.1 Long Short-term Memory (LSTM) Recurrent Neural Network (RNN) is an extension of the conventional feed-forward neural network.
the standard RNN has gradient vanishing or exploding problems.
contrasting
train_14945
(2015) proposed to combine deep learning and structured learning for language parsing which can be learned by structured perceptron.
they also separate neural network training from structured prediction.
contrasting
train_14946
Most existing works focus on either generic subjective expression or aspect expression extraction.
in opinion mining, it is often desirable to mine the aspect-specific opinion expressions (or aspect-sentiment phrases) containing both the aspect and the opinion.
contrasting
train_14947
In Yang and Cardie (2012), a semi-CRF-based approach is used which allows sequence labeling at the segment level, and Yang and Cardie (2014) employed a semi-CRF for opinion expression intensity and polarity classification.
all the above works focus on generic subjective expressions as opposed to aspect-specific opinion-sentiment phrases.
contrasting
train_14948
It is a bit unfair to compare PSM with sMC-GPU because PSM is lacking phrase rank optimization whereas sMC-GPU enforces it, and the p@n metric uses rank position as its goodness criterion.
we will see that in an actual application task, both PSM and PSM-GPU do better than sMC-GPU.
contrasting
train_14949
We note that the Topic Coherence (TC) metric in (Mimno et al., 2011), which is often used to approximate coherence in unigram topic models as it correlates with human notions of coherence, uses co-document frequency of individual words in topics.
in our problem, as phrases are sparse, their co-document frequency is far lower than that of words.
contrasting
train_14950
Automatically identifying users' emotional states from their texts and classifying emotions into finite categories such as joy, anger, disgust, etc., can be considered as a text classification problem.
it introduces a challenging learning scenario where multiple emotions with different intensities are often found in a single sentence.
contrasting
train_14951
In both models an annotator's response depends on an item only through its correct label.
iRT assumes a more sophisticated response mechanism involving both annotator qualities and item characteristics.
contrasting
train_14952
Then, it can be seen that if we sample uniformly from (w, c) ∈ Ω and c′ ∈ C \ {c}, then j(w, c, c′) does not contain any expensive summations and is an unbiased estimator of (13), i.e., E[j(w, c, c′)] = J(U, V, Ξ).
one can optimize ξ_{w,c} exactly by using (12).
contrasting
train_14953
The update (12) indicates that ξ_{w,c}^{-1} is proportional to rank(w, c).
one can observe that the loss function ℓ(·) in (14) is weighted by a ρ(ξ_{w,c}^{-1}) term.
contrasting
train_14954
Besides these state-of-the-art techniques, a few ranking-based approaches have been proposed for word embedding recently, e.g., (Collobert and Weston, 2008;Vilnis and McCallum, 2015;Liu et al., 2015).
all of them adopt a pair-wise binary classification approach with a linear ranking loss ρ_0.
contrasting
train_14955
On one hand, semantic patterns may exist in neural representations that have not been fully captured by state-of-the-art word embeddings derived from text corpora.
we can see that the variation of the correlation coefficients for the WordSim-353 dataset with different β values is small.
contrasting
train_14956
(2015) proposed an improved approach to the concept identification subtask by using a simple classifier over actions which generate these subgraphs.
the overall architecture is still based on the pipelined model.
contrasting
train_14957
The underlying learning algorithm has shown the effectiveness on some other Natural Language Processing (NLP) tasks, such as dependency parsing and extraction of entity mentions and relations (Collins and Roark, 2004;Hatori et al., 2012;Li and Ji, 2014).
compared to these NLP tasks, AMR parsing is more challenging in that the AMR graph is more complicated.
contrasting
train_14958
Comprehenders do not wait until the end of the sentence before they build a syntactic or semantic representation for the sentence.
the challenges of successfully applying the incremental joint model to this problem formulation are: 1) how can we design an effective decoding algorithm for identifying the relations between the nodes in an incremental fashion, given a partial sequence of spans, i.e., a partial sequence of gold-standard concept fragments; 2) further, if given a sentence, how can we design an incremental framework to perform concept identification and relation identification simultaneously.
contrasting
train_14959
When each concept fragment is processed, edges are added between the current concept fragment and its predecessors.
how to treat its predecessors is a difficult problem.
contrasting
train_14960
It is linear in the length of sequence of concept fragments.
the constant in the O is relatively large.
contrasting
train_14961
(2016) also presents an incremental AMR parser based on a simple transition system for dependency parsing.
compared to our parser, their parser cannot parse non-projective graphs, resulting in a limited coverage.
contrasting
train_14962
This effect holds when we run the same model on data subsets consisting only of dogmatic or non-dogmatic users, and also when we conservatively remove all words used by B from A's response (i.e., controlling for quoting effects).
to the computational models we have presented, dogmatism is usually measured in psychology through survey scales, in which study participants answer questions designed to reveal underlying personality attributes (Rokeach, 1954).
contrasting
train_14963
This information is pivotal when tailoring search results to the preferences of specific individuals.
in some cases a user may have very limited previous interactions with the system.
contrasting
train_14964
A similar approach was also explored in (Noll and Meinel, 2007) where the system performed re-ranking of Google search results based on social bookmarks and tags harvested from del.icio.us.
the data sparsity problem poses a challenge to this approach as not all Web pages returned by search engines are tagged in the del.icio.us dataset.
contrasting
train_14965
Researchers have considered tag-tag relationships for personalized query expansion, by selecting the most related tags from a user's profile (Bender et al., 2008;Bertier et al., 2009).
tags cannot be relied upon to consistently provide precise descriptions of resources for use when searching.
contrasting
train_14966
However, all of the previously mentioned systems consider constructing user profiles solely from past usage information.
in this paper we extend personalized web search using social data by exploiting an external knowledge base to enhance the user profile.
contrasting
train_14967
are mixed together to infer unified latent topics.
the MEUP model may miss important information when the topics are learned.
contrasting
train_14968
The results also show that the improvements over baseline models in the User-SMALL group are more noticeable than in the User-LARGE group.
the differences are small.
contrasting
train_14969
Hence, sophisticated approximation techniques based on algorithms such as belief propagation and dual decomposition have been employed.
we propose a simple greedy search approximation for this problem which is very intuitive and easy to implement.
contrasting
train_14970
We first present a basic greedy algorithm that relaxes the global graph-based objective (Section 3).
as this simple algorithm does not provide a realistic estimation of the impact of an arc selection on uncompleted high-order structures in the partial parse forest, it is not competitive with state-of-the-art approximations.
contrasting
train_14971
Both "国 航" and "中国国航" are appropriate abbreviations for "中 国 国 际 航 空 公 司(Air China)".
"国 航 司" is not a Chinese word and cannot be understood by humans.
contrasting
train_14972
Johansson (2013) introduced path-based feature templates in using one parser to guide another.
to the above discrete methods, our neural stacking method offers further feature integration by directly connecting the feature layer of the source tagger with the input layer of the target tagger.
contrasting
train_14973
The improvement over the best form model reaches up to 6 points (e.g., Czech nouns).
to form sum, LAMB improves over form opt on German nouns.
contrasting
train_14974
Thanks to the CTB-train sentences, the model may be able to choose the correct tag, but inevitably becomes more indecisive at the same time due to the PD-train sentences.
the offline pruning approach directly uses two baseline models for pruning, which is a job perfectly suitable for the baseline models.
contrasting
train_14975
This work is also closely related to multi-task learning, which aims to jointly learn multiple related tasks with the benefit of using interactive features under a shared representation (Ben-David and Schuller, 2003; Ando and Zhang, 2005; Parameswaran and Weinberger, 2010).
as far as we know, multi-task learning usually assumes the existence of data with labels for multiple tasks at the same time, which is unavailable in our scenario, making our problem particularly difficult.
contrasting
train_14976
Hence, these summaries are as coherent as the author intends them to be, but they are not informative.
cP3 is very close in coherence to Lead, indicating that our strategy is successful.
contrasting
train_14977
(2015b) is the only work exactly dealing with the news stream summarization challenge.
they studied the problem on a static timestamped corpus instead of on a dynamic text stream and their proposed pipeline-style approach cannot be applied on a real-time text stream due to high complexity in time and space.
contrasting
train_14978
In particular, in previous work (Zhang and Wallace, 2015) we observed that CNN outperforms SVM uniformly on sentence classification tasks (the average sentence-length in these datasets was about 10).
in the datasets we consider in the present paper, documents often comprise hundreds of sentences, each in turn containing multiple words.
contrasting
train_14979
4 with respect to μ_i and set it to zero, we obtain, which implies that the MLE of μ_i is the sample mean.
since we are dealing with a set of weighted samples, the sample mean is replaced by the weighted average (the weights w_l are assumed to add up to one): Our model yields an intuitive result: to estimate the mean vector μ_i of u_i, we can simply take the weighted average of the latent factors u_l from the nearest neighbors as an estimator, where the weights are the similarity scores between the textual profiles of user i and its neighbors.
contrasting
train_14980
For all baseline methods, we use K to denote the dimensionality of the latent variables.
when discussing about NT-MF, since the number of topics can be different from the number of user latent factors, we use T to denote the former and K to denote the latter to avoid confusion.
contrasting
train_14981
In Figure 6, most negative cues have good PCSs (higher than 70%).
"unable" has poor PCS of 16.67%.
contrasting
train_14982
data input (Vinyals et al., 2015;Zhou and Xu, 2015;Rocktäschel et al., 2015).
due to the complexity of the neural networks and the lack of effective analytic methodology, little is known about how a sequential model of sentence, such as recurrent neural network, captures linguistic knowledge.
contrasting
train_14983
Some of the internal neurons of the memory cell in both the language model and the sentiment model emerge to be sensitive to the length of the input sequence, as we expected, since the sentence length is a non-linguistic feature shared by all sequential inputs.
different optimization objectives force the models to represent and capture the linguistic properties of different aspects.
contrasting
train_14984
Their visualization results show some interesting properties of the memory neurons in LSTM unit.
their exploration of character-based models does not intend to correlate high-level linguistic knowledge, which is intuitively required for sequential modelling of a sentence.
contrasting
train_14985
In addition to the previously mentioned methods, a few researchers have studied the problem of extracting keyphrases from collections of tweets (Zhao et al., 2011;Bellaachia and Al-Dhelaan, 2012).
to traditional web applications, Twitter-like services usually limit the content length to 140 characters.
contrasting
train_14986
Because multiple tweets are usually organized by topic, many document-level approaches can also be adopted to achieve the task.
with the previous methods, Marujo et al.
contrasting
train_14987
They used several unsupervised methods and word embeddings to construct features.
the proposed method worked on the word level.
contrasting
train_14988
One way to capture the contextual information of a word sequence is to concatenate neighboring features as input features for a deep neural network.
the number of parameters rapidly increases according to the input dimension.
contrasting
train_14989
CRF inference with NN models cited previously do improve exact phrase labeling.
better ways of modeling the pairwise potential functions of CRFs might lead to improvements in labeling rare entities and detecting exact phrase boundaries.
contrasting
train_14990
Although the works (Wang et al., 2011; Rayana and Akoglu, 2015) have tried to employ graph structure to consider the interactions among the reviewers and products, it is a kind of local interaction defined within the same product review page.
the interaction among the reviewers and products from different review pages also provides much useful and global information, which is ignored by the previous work.
contrasting
train_14991
They find that the average review length of the spammers is quite short compared with non-spammers.
once a crafty spammer realizes that he left this type of footprint, he could produce a review that is as long as the non- spammers to pretend to be a normal reviewer.
contrasting
train_14992
For example, the tweet "@realDonaldTrump is the only honest voice of the @GOP" expresses a positive stance towards the target Donald Trump.
when stance is annotated with respect to Hillary Clinton as the implicit target, this tweet expresses a negative stance, since supporting candidates from one party implies negative stance towards candidates from other parties.
contrasting
train_14993
A common stance detection approach is to treat it as a sentence-level classification task similar to sentiment analysis (Pang and Lee, 2008;Socher et al., 2013).
such an approach cannot capture the stance of a tweet with respect to a particular target, unless training data is available for each of the test targets.
contrasting
train_14994
The correct event type for the trigger word "leave" in this case is "End-Org".
the previous CNN models might not be able to detect "leave" as an event trigger or incorrectly predict its type as "Movement".
contrasting
train_14995
For these types, TransInit successfully recognises PERSON as the most related type to DOCTOR, as well as CARDINAL as the most related type to ID.
ccA often fails to identify meaningful type alignments, especially for small training data sizes.
contrasting
train_14996
For example, "burning tires" is a concrete phrase that would not fall into the category of more abstract events.
"gathered at site" is certainly more ambiguous.
contrasting
train_14997
Much recent research in event timeline generation suggests the usefulness of subevents in improving quality and completeness of automatically generated event summaries.
they often focus on a different notion of subevents that broadly covers pre-condition events and consequence events and is temporally-based.
contrasting
train_14998
In conventional ZSL, a novel word can be matched against images by projecting it into image space, and sorting images by their distance to the word (vector), or likelihood under the word (Gaussian).
results may be unreliable when used with polysemous words, or words with large appearance variability.
contrasting
train_14999
With regards to efficiency, our model is fast to train if fixing pre-trained word-Gaussians and optimising only the cross-modal mapping A.
training the mapping jointly with the word-Gaussians comes at the cost of updating the representations of all words in the dictionary, and is thus much slower.
contrasting
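As a rough check, the column summary at the top of this listing (string-length ranges for the text columns and the number of label classes) could be recomputed from the loaded split with a sketch like the one below; the dataset ID is again the same placeholder assumption, and the exact numbers will depend on the real data.

```python
# Minimal sketch: recompute the per-column length ranges and label classes
# summarized in the schema block above. Placeholder dataset ID assumed.
from datasets import load_dataset

ds = load_dataset("user/contrasting-pairs", split="train")

for col in ("id", "sentence1", "sentence2"):
    lengths = [len(value) for value in ds[col]]
    print(f"{col}: lengths {min(lengths)}-{max(lengths)}")

# Distinct label values; this listing shows only the "contrasting" class.
print("label classes:", sorted(set(ds["label"])))
```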