Dataset schema (reconstructed from the viewer's column summary):
  id         string, 7 to 12 characters (e.g., "train_14800")
  sentence1  string, 6 to 1.27k characters
  sentence2  string, 6 to 926 characters
  label      string, one of 4 classes (every record below: "contrasting")
Note: sentence2 entries deliberately begin lowercase because the discourse connective (e.g., "However,", "In contrast") has been stripped from the data.
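The records below use a flat four-line layout: id, sentence1, sentence2, label. A minimal sketch of grouping such a dump back into structured records in Python (the `train_<digits>` id pattern and the strict four-line grouping are assumptions about this particular export, not a documented format):

```python
import re

# Record boundaries are detected by the id pattern, e.g. "train_14800".
ID_RE = re.compile(r"^train_\d+$")

def parse_records(lines):
    """Group a flat dump into dicts with keys id, sentence1, sentence2, label.

    Assumes each record occupies exactly four consecutive lines starting
    with an id line; any header/stats lines before the first id are skipped.
    """
    records = []
    i = 0
    while i < len(lines):
        if ID_RE.match(lines[i]) and i + 3 < len(lines):
            records.append({
                "id": lines[i],
                "sentence1": lines[i + 1],
                "sentence2": lines[i + 2],
                "label": lines[i + 3],
            })
            i += 4
        else:
            i += 1  # not a record start: skip residue lines
    return records
```

On a well-formed dump this yields one dict per record, which can then be fed to whatever downstream classifier or analysis the sentence pairs are intended for.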
train_14800
While this neural network structure captures the idea of compositionality over the RST tree, the most deeply embedded discourse units can be heavily down-weighted by the recursive composition (assuming K s < K n ): in the most extreme case of a right-branching or left-branching structure, the recursive operator may be applied N times to the most deeply embedded EDU.
discourse depth reweighting applies a uniform weight of 0.5 to all discourse units with depth ≥ 3.
contrasting
train_14801
Sentiment polarity analysis has typically relied on a "preponderance of evidence" strategy, hoping that the words or sentences representing the overall polarity will outweigh those representing counterpoints or rhetorical concessions.
with the availability of off-the-shelf RST discourse parsers, it is now easy to include documentlevel structure in sentiment analysis.
contrasting
train_14802
Expert-annotated datasets of implicit discourse relations are expensive to produce, so it would be preferable to use weak supervision, by automatically labeling instances with explicit connectives (Marcu and Echihabi, 2003).
sporleder and Lascarides (2008) show that models trained on explicitly marked examples generalize poorly to implicit relation identification.
contrasting
train_14803
2 Related Work Marcu and Echihabi (2003) train a classifier for implicit intra-sentence discourse relations from explicitly-marked examples in the rhetorical structure theory (RST) treebank, where the relations are automatically labeled by their discourse connectives: for example, labeling the relation as CONTRAST if the connective is but.
sporleder and Lascarides (2008) argue that explicitly marked relations are too different from implicit relations to serve as an adequate supervision signal, obtaining negative results in segmented discourse representation theory (sDRT) relations.
contrasting
train_14804
The simplest way is taking the intersection of the corresponding constraints.
we should consider the fact that the properties assigned automatically can be erroneous, since none of the analyzer is perfect.
contrasting
train_14805
Maybe RAE needs more labeled training data for better results.
sCNN models perform remarkably well, producing comparable and even better results.
contrasting
train_14806
A promising result for language learning has been shown in (Yu and Siskind, 2013) and a quite challenging effort to describe cooking activities was made in (Regneri et al., 2013).
these studies rely only on visual information, while we aim to build a system that is able to describe everyday activities using multimodal information.
contrasting
train_14807
Other options for representing text structure such as full-text discourse parsers (Marcu, 2000) may be not available or don't have satisfied performance, especially for non-English languages.
modeling local coherence alone is not adequate to distinguish discourse elements in persuasive essays.
contrasting
train_14808
Our method provides a promising solution when retraining a system is impossible or difficult.
it may raise a question of the computing cost for tuning penalty scores especially with the large number of constraints.
contrasting
train_14809
This leads to a drop in lemmatization performance in all languages except Spanish (English has no additional attributes).
preliminary experiments showed that correct morphological attributes would substantially improve lemmatization as they help in cases of ambiguity.
contrasting
train_14810
In their case, the best results produced an accuracy of 93%.
their system is augmented with a dictionary, and the distribution of accents and grammatical behaviour are also quite different from Hungarian.
contrasting
train_14811
Sometimes such forms could be the correct ones, but the more productive compounding and derivation there is in a word, the lower score it should get.
the frequencies of the lemma and the inflectional pattern should increase the score of a candidate, thus these components were given positive weights.
contrasting
train_14812
Morphological analysis including word segmentation has been widely and actively studied, and for example, Japanese word segmentation accuracy is in the high 90s.
we often observe that strange outputs of downstream NLP applications such as machine translation and question answering come from incorrect word segmentations.
contrasting
train_14813
That is, if high-quality morphological analysis is available, we can learn a high-quality language model from a morphologically analyzed large corpus.
if a high-quality language model is available, we can achieve highquality morphological analysis by looking for a segmented word sequence with a large language model score.
contrasting
train_14814
The RNNLM is trained on an automatically analyzed corpus of ten million sentences, which possibly includes incorrect segmentations such as " (foreign)/ (carrot)/ (regime)."
on semantically generalized level, it is an unnatural semantic sequence like nation vegetable politics.
contrasting
train_14815
The recurrent model makes decoding harder than nonrecurrent neural network language models.
we use RNNLM because the model outperforms other NNLMs (Mikolov, 2012) and the result suggests that the model is more likely to capture semantic plausibility.
contrasting
train_14816
Murawaki and Kuro-hashi (2008) proposed an online method in a similar setting.
to these studies, this paper proposes to use other modalities, game states as the first trial, than languages.
contrasting
train_14817
For example, a verb and its corresponding direct object can be far away in terms of tokens if many adjectives lies in between, but they are adjacent in the parse tree (Irsoy and Cardie, 2013).
we do not know if this advantage is truly important, and if so for which tasks, or whether other issues are at play.
contrasting
train_14818
RNNs encode, to some extent, structural information by recursive semantic composition along a parse tree.
they may have difficulties in learning deep dependencies because of long propagation paths (Erhan et al., 2009).
contrasting
train_14819
Bag-of-words models A simple and intuitive method is the Neural Bag-of-Words (NBOW) model, in which the representation of sentences or documents can be generated by averaging constituent word representations.
the main drawback of NBOW is that the word order is lost.
contrasting
train_14820
In many of these settings, the polygraph test has been used as the main method to identify deceptive behavior.
this method requires the use of skin-contact devices and human expertise, making it infeasible for large-scale applications.
contrasting
train_14821
For example, the interviewer asks a random individual on his opinion on a non-existing film where the interviewee fabricates a story.
truthful videos are collected from individuals asked on their opinions on real movies.
contrasting
train_14822
Keyword search is often used as the first step.
that is not sufficient due to low precision and low recall.
contrasting
train_14823
For example, in an application, the training data has no negative examples about sports.
in testing, some sports posts show up.
contrasting
train_14824
In order to balance between model over-fitting and under-fitting, Tax and Duin (2001) proposed a method that tries to use artificially generated outliers to optimize the model parameters.
their experiments suggest that the procedure to generate artificial outliers in a hyper-sphere is only feasible for up to 30 dimensions.
contrasting
train_14825
A fully supervised baseline that uses 100% of the training set achieves an F1-score of 0.720 (using content) and 0.738 (using citation contexts).
co-training requires only 15% of the labeled training set to outperform the fully supervised content baseline and 30% of the training set to outperform the fully supervised citation contexts baseline.
contrasting
train_14826
As can be seen in the figure, overall, the co-training approach significantly outperforms both variations of EM.
the co-training method falls short when using 5% of the training instances, where EM Content and EM Citations methods are achieving higher F1-score values.
contrasting
train_14827
For example, words like learning, multi-agent or interface are more important in the content view.
words such as document or text achieve a higher information gain score for the citation contexts view.
contrasting
train_14828
Most puns are well structured and play with contrasting or incongruous meaning.
humor sentences in the 16000 One Liners often rely on the reader's awareness of attention-catching sounds (Mihalcea and Strapparava, 2005).
contrasting
train_14829
This may be evidence that the two communities have already independently identified appropriate dimensionality reduction techniques for their respective data sources.
our results support that the speech community can benefit from broader use of sparsity-inducing graphical models such as SAGE in tasks like spoken topic discovery and recommendation, in which humaninterpretable representations are desired.
contrasting
train_14830
Many model components of competitive statistical machine translation (SMT) systems are based on rather simplistic definitions with little linguistic grounding, which includes the definitions of phrase pairs, lexicalized reordering, and n-gram language models.
earlier work has also shown that statistical MT can benefit from additional linguistically motivated models.
contrasting
train_14831
Ge (2010) captures reordering patterns by defining soft constraints based on the currently translated word's POS tag and the words structurally related to it.
target syntax is more challenging to use in PBSMT, since a target-side syntactic model does not have access to the whole target sentence at decoding.
contrasting
train_14832
As for the varying features defining different BiSLM versions, we again see little effect of the labeling type or subtree completeness definition.
we see the opposite pattern for the unalign-adjoin feature, where unalign-adjoin+ is preferred.
contrasting
train_14833
As always, there is a trade-off between accuracy, space, and time, with recent papers considering small but approximate lossy LMs (Chazelle et al., 2004;Talbot and Osborne, 2007;Guthrie and Hepple, 2010), or loss-less LMs backed by tries (Stolcke et al., 2011), or related compressed structures (Germann et al., 2009;Heafield, 2011;Pauls and Klein, 2011;Sorensen and Allauzen, 2011;Watanabe et al., 2009).
none of these approaches scale well to very high-order m or very large corpora, due to their high memory and time requirements.
contrasting
train_14834
For 2-grams, D-CST is 3 times slower than a 2-gram SRILM index as the expensive N 1+ ( • α • ) is not computed.
for large mgrams, our indexes are much slower than SRILM.
contrasting
train_14835
(2010) defined three single similarities (i.e., Name similarity, Profile similarity and Structural similarity) based on the descriptions of an entity, then they employed a harmony-based method to aggregate the single similarities to get a final similarity for extracting the final mappings.
treating different kinds of descriptions of an entity separately suffers from two limitations.
contrasting
train_14836
And in the classifying phase, each pair of elements from two to-be-matched ontologies is predicted as matched or not according to its attributes.
eR-SOM is an unsupervised approach, but it does not exclude using external resources and training data to help learning the representations of entities and provide the initial similarity matrix for the SP method to further improve the performance.
contrasting
train_14837
DEP performs slightly worse than NG2 on CWS and Cilin in P@1 and P@5.
it achieves better results on Cilin in P@10 to P@100 when more candidate similar words are evaluated.
contrasting
train_14838
However, it achieves better results on Cilin in P@10 to P@100 when more candidate similar words are evaluated.
nG5 and nG2 mix more semantically related words.
contrasting
train_14839
These results show that dependency embeddings are relatively weak for answering analogy questions.
the performance also varies across different relation types.
contrasting
train_14840
Other WSI approaches use various forms of clustering techniques.
previous studies of the intrinsic dimensionality of distributional semantic spaces using fractal dimensions indicate that neighborhoods in semantic space have a filamentary rather than clustered structure (Karlgren et al., 2008).
contrasting
train_14841
Although feature norms have also been used, raw image data has become the de-facto perceptual modality in multi-modal models.
if the objective is to ground semantic representations in perceptual information, why stop at image data?
contrasting
train_14842
In the case of the full datasets this difference is only marginal, which is to be expected given how few of the words in the datasets are auditory-relevant.
the results indicate that adding auditory input even for words that are not directly auditoryrelevant is not detrimental to overall performance.
contrasting
train_14843
habitualLikewise, we mark modalized sentences as habitual if they have a strong implicature that an event has actually happened regularly (Hacquard, 2009), as in (11).
(7) is static as it does not imply that Mary actually swims regularly.
contrasting
train_14844
Our model would presumably benefit from a similar coupling mechanism which we could enforce as a constraint in the ILP.
we leave this to future work.
contrasting
train_14845
In the data used in (Dong et al., 2014a), one sentence contains only one aspect.
two or more aspects can be appeared in one sentence in SemEval 2014 data.
contrasting
train_14846
There have been several successful attempts at sentiment polarity detection in the past (Turney, 2002;Pang et al., 2002;Pang and Lee, 2004;Mohammad et al., 2013;Svetlana Kiritchenko and Mohammad, 2014).
prediction of star ratings still considered as a challenging task (Qu et al., 2010;Gupta et al., 2010;Boteanu and Chernova, 2013).
contrasting
train_14847
They exploited linguistic knowledge available in the corpora to compute similarity between adjectives.
their approach did not consider polarity orientation of adjectives, they provided ordering among non-polar adjectives like, cold, lukewarm, warm, hot.
contrasting
train_14848
3 To rule out that the lack of any syntactic information (which human annotators use) disadvantages the model, we also experimented with including dependency triples (dobj and nsubj, the most frequent dependencies) using the Stanford Parser (Klein and Manning, 2003).
performance did not improve, so due to limited space, we did not further explore this option.
contrasting
train_14849
Slightly more than 5% of ratings are more than two steps off.
comparing individual annotator ratings instead of mean ratings, some crowdsource annotators are a full nine steps off, and in a single case, even one of the trained annotators was eight steps off.
contrasting
train_14850
The aligned-distribution results also indicate that the model is biased towards mean ratings: MAE improves for author labels, since the relatively high variation is eliminated, but worsens for the annotator labels, as variance increases.
alignment also creates problems.
contrasting
train_14851
M AE M values are all similar across languages, again confirming what has been observed on agreement.
m AE m values on experiments are sensibly worse than those measured on agreement, possibly due to the fact that we used very basic features, with limited use of sentimentrelated information.
contrasting
train_14852
Given a training corpus with hand-annotated sentiment polarity labels, following Kim (2014), we train a deep convolutional neural network (CNN) on it.
instead of using it as a classifier, as Kim did, we use the values from its hidden layer as features for a much more advanced classifier, which gives superior accuracy.
contrasting
train_14853
The table shows that the best results were obtained for textual modality; the visual modality performed worse, and the audio was least useful.
even the worst of our results is much better than the state-of-the-art (Pérez-Rosas et al., 2013).
contrasting
train_14854
In case of the decision level fusion experiment, the coupling of Sentic Patterns to determine the weight of textual modality has enriched the performance of multimodal sentiment analysis framework considerably.
the parameter selection for decision level fusion produced suboptimal results.
contrasting
train_14855
Fortunately, if we have the global context of good like interesting or amazing, the sentiment meaning of the embedding will be explicit.
the training of log-linear neural language model is based on local word dependencies (e.g., the co-occurrence of the words in a local window).
contrasting
train_14856
That is because PV-DBOW tends to regard ibm and mac both as computers.
the two different computer brands are distinguished in Glo-PV-DBOW.
contrasting
train_14857
They concluded that false rumours are more likely to receive a comment with link to Snopes.com website.
none of the above attempted to automatically classify rumours.
contrasting
train_14858
Perfect precision was found for claims of renewable freshwater for which one textual pattern was responsible for all the claims identified and it was correct.
the zero precision for claims of internet user % was due to identifying correctly sentences listing countries and their respective values for this property but not identifying the country-value pairs correctly.
contrasting
train_14859
As explained, we tackle claim identification as an instance of information extraction, and propose a baseline able to perform both tasks.
it is important to distinguish between them.
contrasting
train_14860
On the other hand, after the submission of our paper we became aware of a parallel work (Coavoux and Crabbé, 2016) that also proposed a dynamic oracle for their own incremental constituency parser.
it is not optimal due to dummy non-terminals from binarization.
contrasting
train_14861
In GHKM, a tree fragment and a sequence of words are extracted together if they are minimal and their word alignments do not fall outside of their respective boundaries.
given that alignment violations are not allowed, the quality of the extracted rules degrades as the rate of misaligned words increases.
contrasting
train_14862
(2012) extended the previous method with dual decomposition and HPSG parsing.
to these symmetry-directed efforts, Kawahara et al.
contrasting
train_14863
Note that DEP and LEN are closely related; generally center-embedded constructions are accompanied by longer dependencies so LEN also penalizes center-embedding implicitly.
the opposite is not true and there exist many constructions with longer dependencies without center-embedding.
contrasting
train_14864
(Future) Second, we intentionally created the EventStatus corpus to concentrate on one particular event frame (class of events): civil unrest.
previous temporally annotated corpora focus on a wide variety of events.
contrasting
train_14865
Example 3 In the statement from Example 1, the extraction patterns capture the dependency path connecting the head words: Iraq, administrator and Paul Bremer.
to capture the contextual information, further qualification of the argument node, administrator, is required.
contrasting
train_14866
NESTIE uses an approach similar to OLLIE and WOE to learn dependency parse based syntactic patterns.
there are significant differences.
contrasting
train_14867
Linking an incorrect proposition generates more incorrect propositions which hurt the system performance.
we hope this problem can be alleviated to some extent as parsers become more robust.
contrasting
train_14868
It is a fundamental task which can serve as a pre-existing system and provide prior knowledge for information ex-traction, natural language understanding, information retrieval, etc.
automatic recognition of semantic relation is challenging.
contrasting
train_14869
Recursive Neural Network (RNN) (Socher et al., 2012) and Convolutional Neural Network (CNN) (Zeng et al., 2014) have proven powerful in relation classification.
to traditional approaches, neural network based methods own the ability of automatic feature learning and alleviate the problem of severe dependence on human-designed features and kernels.
contrasting
train_14870
In contrast to traditional approaches, neural network based methods own the ability of automatic feature learning and alleviate the problem of severe dependence on human-designed features and kernels.
previous researches (Socher et al., 2012) imply that some features exploited by traditional methods are still informative and can help enhance the performance of neural network in relation classification.
contrasting
train_14871
These models are still ambiguous to some degree, for example when an O-node has two child nodes and two parents, we cannot decide which of the parent node is paired with which child node.
in this paper we argue that: • This model is less ambiguous compared to the linear-chain model, as we will show later theoretically and empirically.
contrasting
train_14872
The last example shows a very hard case of overlapping 2 It is tempting to just ignore these entities since the N type does not convey any specific information about the entities in it.
due to the dataset size, excluding this type will lead to very small number of interactions between types.
contrasting
train_14873
In addition, our analysis shows that the AB model produces a significant positive correlation with the PD acceptability rating.
the AB model has no correlation with the verb bias score.
contrasting
train_14874
Specifically, let − → f () and ← − f () be the forward and backward recurrent unit, respectively, then Independent but context dependent selection of words is often sufficient.
the model is unable to select phrases or refrain from selecting the same word again if already chosen.
contrasting
train_14875
Neural network based models have achieved impressive results on various specific tasks.
in previous works, most models are learned separately based on single-task supervised objectives, which often suffer from insufficient training data.
contrasting
train_14876
LSTM has an internal memory to keep useful information for specific task, some of which may be beneficial to other tasks.
it is non-trivial to share information stored in internal memory.
contrasting
train_14877
", which has a negative sentiment, while the standard LSTM gives a wrong prediction due to not understanding the informative words "cookie-cutter" and "cut-and-paste".
our model makes a correct prediction and the reason can be inferred from the activation of fusion gates.
contrasting
train_14878
al (2013) incorporate a dedicated causal component into their system, and note that it improves the overall performance.
their model is limited by the need for lexical overlap between a causal construction found in their knowledge base and the question itself.
contrasting
train_14879
(2015), in our vanilla alignment model.
due to the directionality inherent in causality, they do not apply to our causal model so there we omit them.
contrasting
train_14880
In their approach, types are included as a part of unary lexicon for building the logical forms from natural language questions.
no explicit type inference is exploited.
contrasting
train_14881
The final vec-tor for this word, < 0.37, 0.18, 0.0 > with TF-IDF or < 0.36, 0.06, 0.0 > with PPMI-IDF, is intended to guide the implicit model toward a contrastive relation, thus potentially helping in identifying the relation in example (1b).
the word "week" is more likely to be found in the arguments of temporal relations that can be triggered by before but also while, an ambiguity kept in our representation whereas approaches based on using explicit examples as new training data generally choose to annotate them using the most frequent sense associated with the connective, often limiting themselves to the less ambiguous ones (Marcu and Echihabi, 2002;Sporleder and Lascarides, 2008;Lan et al., 2013;Braud and Denis, 2014;.
contrasting
train_14882
Discourse connectives are words (e.g., but, since) or grammaticalized multi-word expressions (e.g., as soon as, on the other hand) that may trigger a discourse relation.
these forms can also appear without any discourse reading, such as because in: He can't sleep because of the deadline.
contrasting
train_14883
use of raw tokens (One-hot), a conclusion in line with the results reported in (Braud and Denis, 2015) for binary systems.
contrary to their findings, in multiclass, the best results are not obtained using the Brown clusters, but rather the dense, real valued representations (Embed.
contrasting
train_14884
We have described our neural attention framework and a content-based model in previous subsection.
the model mentioned above ignores the location information between context word and aspect.
contrasting
train_14885
This case shows the effects of multiple hops.
in Table 4(b), the content-based model also gives a larger weight to "dreadful" when the target we focus on is "food".
contrasting
train_14886
Past research has proposed many techniques to extract opinion targets (we will just call them targets hereafter for simplicity) and also to classify sentiment polarities on the targets.
a target can be an entity or an aspect (part or attribute) of an entity.
contrasting
train_14887
Basically, they used topics generated from past domains to help current domain model inference.
they are just for aspect extraction.
contrasting
train_14888
We can regard the extracted terms from a NER system as entities and the rest of the targets as aspects.
a NER system cannot identify entities such as "this car" from "this car is great."
contrasting
train_14889
But our type modifier (TM) does that, i.e., if an opinion target appears after "this" or "these" in at least two sentences, TM labels the target as an entity; otherwise an aspect.
tM cannot extract named entities.
contrasting
train_14890
(2007) first chose pivot words which have high mutual information with the sentiment labels, and then set up the pivot prediction tasks to be the predictions of each of these pivot words using the other words.
the original SCL method is based on traditional discrete feature representations and linear classifiers.
contrasting
train_14891
They use Stacked Denoising Auto-encoders (SDA) to induce a hidden representation that presumably works well across domains.
sDA is fully unsupervised and does not consider the end task we need to solve, i.e., the sentiment classification task.
contrasting
train_14892
However, SDA is fully unsupervised and does not consider the end task we need to solve, i.e., the sentiment classification task.
the idea behind SCL is to use carefullychosen auxiliary tasks that correlate with the end task to induce a hidden representation.
contrasting
train_14893
However, these methods are still based on traditional discrete representation and do not exploit the idea of using auxiliary tasks that are related to the end task.
the sentence embeddings learned from our method are derived from real-valued feature vectors and rely on related auxiliary tasks.
contrasting
train_14894
Most of the state-of-the-art sentiment classification methods are based on supervised learning algorithms which require large amounts of manually labeled data.
the labeled resources are usually imbalanced in different languages.
contrasting
train_14895
There are two classes of mainstreaming sentiment classification algorithms: unsupervised methods which usually require a sentiment lexicon (Taboada et al., 2011) and supervised methods (Pang et al., 2002) which require manually labeled data.
both of these sentiment resources are unbalanced in different languages.
contrasting
train_14896
"I felt it could have been a lot better with a little less comedy and a little more drama to get the point across.
its still a must see for any Jim Carrey fan. "
contrasting
train_14897
As a general tendency, the performance of all approaches worsens as sentence length increases.
for sentences longer than 35 words we see that NMT quality degrades more markedly than in PBMT systems.
contrasting
train_14898
As for lexical errors, a number of existing taxonomies further distinguish among translation errors due to missing words, extra words, or incorrect lexical choice.
given the proven difficulty of disambiguating between these three subclasses (Popović and Ney, 2011;Fishel et al., 2012), we prefer to rely on a more coarse-grained linguistic error classification where lexical errors include all of them (Farrús Cabeceran et al., 2010).
contrasting
train_14899
than using it for source pre-ordering, as done by the HPB and SPB systems.
this only results in a moderate reduction of verb reordering errors (-12% and -25% vs. HPB and SPB respectively).
contrasting