Columns:
  id          string, length 7–12
  sentence1   string, length 6–1.27k
  sentence2   string, length 6–926
  label       string (categorical), 4 distinct values
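Each record below follows this schema in order: id, sentence1, sentence2, label. A minimal sketch for loading and inspecting such records with the Hugging Face `datasets` library follows; the dataset path "user/dataset" is a placeholder assumption, not the real Hub identifier for this corpus.

```python
# Minimal loading sketch. "user/dataset" is a placeholder path, not the
# actual Hub identifier for this corpus; substitute the real one or point
# load_dataset at local files with the same four columns.
from datasets import load_dataset

ds = load_dataset("user/dataset", split="train")  # placeholder path

# Each record carries the four columns described in the schema above.
for record in ds.select(range(3)):
    print(record["id"], "->", record["label"])
    print("  sentence1:", record["sentence1"])
    print("  sentence2:", record["sentence2"])
```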
train_9100
(2013) proposed to use translation process to model this task.
to the methods that focuses on topic modelling, Shi et al.
contrasting
train_9101
The input memory representation m i of tweet t i is a matrix of size N × dim, where N is the length of the tweet.
because not all words contribute equally to the tweets meaning, and the importance degree of the word w ij should be considered in our models, we propose a probability layer to achieve the goal.
contrasting
train_9102
CNN-Attention is the latest method used for this task, and it showed a good performance in this work.
cNN-Attention only considers the content of the tweets and uses an attention mechanism to find important words in tweets.
contrasting
train_9103
From Table 3, we observe that changing the number of hops has some impact on the overall performance.
the results disprove that more hops are better, because when the number of hops is larger than 2, the performances of HMemN2N and MemN2N are both decreasing.
contrasting
train_9104
2011used an IR approach, by querying English Wikipedia with the top-N topic terms.
in order to do this, they required external resources (two search APIs, one of which is no longer publicly available), limiting the general-purpose utility of the method.
contrasting
train_9105
As shown in Table 1, initial accuracy measures are worse than all four competitors.
our system generates exclusively binary-branching output, while all competitors can produce the higher arity trees attested in the PTB-like evaluation standard (notice that our recall measure for the binary branching output beats both CCL and UPPARSE).
contrasting
train_9106
Unfortunately, due to the flat representation of these constructions in the gold standard trees, this insight on the part of our learner is not reflected in the accuracy measures in Table 1. short length of child-directed utterances, and therefore the right-branching baseline (RB) outperforms all systems by a wide margin on this corpus.
we argue that such utterances are a more realistic model of input to human language learners than newswire text, and therefore preferable for evaluation of systems that purport to model human language acquisition.
contrasting
train_9107
We note that certain error types, having recurrent error patterns, allow for straightforward artificial error data generation.
we experimentally show that quality artificial data cannot be so easily generated for content words (see §6).
contrasting
train_9108
This is over the human upper bound of 74%, and it shows that the topic coherence features perform well in contextualising the OOC annotation.
since a 'real-world' pipeline system does not have access to the gold annotations, we replace the gold annotations with the output of COMPDIST.
contrasting
train_9109
As both coherence and cohesion are important text properties that are known to influence the readability of texts, readability studies have attempted to exploited both dimensions.
most studies focused on phenomena that falls inside the category of cohesion as defined in Section 2.1 which is why we decided to focus on cohesive features in this paper.
contrasting
train_9110
This variable was also included in Coh-Metrix (Graesser et al., 2004), along with similar measures such as word overlap, noun overlap, stem overlap, and argument overlap.
the efficiency of this variable for readability was not assessed before Pitler and Nenkova (2008), who measured its association with text difficult and obtained a non significant correlation (r = −0.1).
contrasting
train_9111
(2013) computed other characteristics of lexical chains and co-reference pairs (such as the number of chains, the distance between entities, the average word length of entities, etc.).
with these features, they only reached a precision of respectively 0.367 and 0.384 for a six-class classification problem.
contrasting
train_9112
For example, a proper noun is the subject of sentence n and the anaphoric pronoun referring to it is often the subject of sentence n+1 (" Subject to Subject" transition).
the syntactic functions of mentions might change across sentences : the object of the sentence n becomes the subject of the next sentence.
contrasting
train_9113
First, all classification models perform better than their regression counterparts.
even for the former, no model using coherence or cohesive features is able to overcome a simple model based on sentence length and word frequency.
contrasting
train_9114
On the one hand, 6 features only were found to be significant by semi-partial correlation (when sentence length and word frequency were controlled for).
integrating the best cohesive features in a readability model did not bring significant improvement over a simple baseline on our French data.
contrasting
train_9115
The development of semantic resources such as FrameNet (Baker et al., 1998) and PropBank (Palmer et al., 2005) made the training of models for semantic role labelers using supervised techniques possible.
as a consequence of the considerable manual efforts needed to build proposition banks, they exist only for a few languages.
contrasting
train_9116
Most previous approaches have used professionally translated parallel corpora, mainly EuroParl (Koehn, 2005) and United Nations Corpora (Rafalovitch and Dale, 2009), to transfer semantic annotation.
creating these resources requires manual efforts; they are thus limited in size and in the number of languages they cover.
contrasting
train_9117
(2015), we introduced the concept of using entities as a method for aligning sentences and transferring semantic content in loosely parallel corpora.
the presented approach has the following limitations: (1) it was evaluated on one language only and (2) the evaluation was performed on the generated PropBank itself.
contrasting
train_9118
Most previous work relies on word alignments or uses bilingual dictionaries to transfer the predicate annotation between languages.
when applied to new languages and domains, these approaches face a scaling problem requiring either training on parallel corpora or otherwise dictionaries which may not be available for every language.
contrasting
train_9119
We assign the argument role to the governing token in the token span covered by each entity.
if the argument token in s SL is dominated by a preposition, we search for a preposition in s T L governing the entity and assign it the argument role.
contrasting
train_9120
We observe that the precision increases with the number of entities used in the alignments.
this increase is followed by a decrease in the number of alignments created.
contrasting
train_9121
The database also has a cognacy judgment for each word.
the database is not in an uniform transcription.
contrasting
train_9122
Comparative summarization is not a new task.
to our best knowledge there is no public benchmark data set available.
contrasting
train_9123
(2010) Extracts Heterogeneous en 24 352 ≈ 2,560 characters Lloret et al.
(2013) Extracts Heterogeneous en 310 10 100-200 words Our work Coherent Extracts Heterogeneous de 10 4-14 ≈ 500 words † there are corpora containing only extracts, which are suitable for evaluating extractive MDS systems.
contrasting
train_9124
Assuming that we extract the previous word as a feature in line 12, this would result in learning that predicting "hotel" after "dog" is beneficial.
this appears to be the case due to the the loss function and the incorrect partial prediction.
contrasting
train_9125
Comparing the two systems, we see that LOLS performs worse in terms of BLEU and ROUGE, but also has lower ERR(%); this indicates that our system tries to include more attributes in the NL sentences.
when looking at the scores in the unique MR subset, the difference between the two systems narrows, with almost no difference in the SF hotel dataset.
contrasting
train_9126
(Ganesan et al., 2012) proposed some heuristic rules to generate phrases, they used a modified mutual information function and an n-gram language model to ensure the representativeness and readability of the phrases.
their method didn't consider the descriptiveness of the phrases.
contrasting
train_9127
Past studies in education showed that higher-level questions, in contrast to simple factoid questions, have more educational benefits for reading comprehension (Anderson and Biddle, 1975;Andre, 1979;Hamaker, 1986).
most of existing approaches to question generation have focused on generating questions from a single sentence, relying heavily on syntax and shallow semantics with an emphasis on grammaticality (Mitkov and Ha, 2003;Chen et al., 2009;Heilman and Smith, 2010;Curto et al., 2011;Becker et al., 2012;Lindberg et al., 2013;Mazidi and Nielsen, 2014).
contrasting
train_9128
Therefore, our ultimate goal is to generate multiple-choice questions from plain texts in an arbitrary domain.
the state-of-the-art in extracting semantic representations of event and entity relations from text does not perform well enough to support our question generation approach.
contrasting
train_9129
The distractor generation component needs to generate distractors as close to a rating of 3 as possible.
distractors labeled as 2 ("easily eliminated") often occur because they come from events preceding the event described in the question or from events following the results of the events described in the question.
contrasting
train_9130
(ii) Different from syntactic parsers that concentrate on the syntactic accuracy of the syntax trees that their grammars derive, computational construction grammars focus on semantic accuracy as a metric that better meets their objectives.
semantic accuracy is harder to measure than syntactic accuracy because it requires textual corpora annotated with large formal meaning representations that are agreed upon by different grammar developers.
contrasting
train_9131
The S-score obtained for a specific phenomenon should not be presented on its own, because it would lead to an erroneous understanding of the grammar accuracy over the whole corpus.
together they give a better understanding how the grammar is working.
contrasting
train_9132
Methods for text simplification using the framework of statistical machine translation have been extensively studied in recent years.
building the monolingual parallel corpus necessary for training the model requires costly human annotation.
contrasting
train_9133
Recent studies have treated text simplification as a monolingual machine translation problem in which a simple synonymous sentence is generated using the framework of statistical machine translation (Specia, 2010;Zhu et al., 2010;Coster and Kauchak, 2011a;Coster and Kauchak, 2011b;Wubben et al., 2012;Štajner et al., 2015a;Štajner et al., 2015b;Goto et al., 2015).
unlike statistical machine translation, which uses bilingual parallel corpora, text simplification requires a monolingual parallel corpus for training.
contrasting
train_9134
Therefore, they may obtain similar sentence pairs effectively.
our simple proposed method achieved equivalent or higher performance than their method without considering any ordering of sentences.
contrasting
train_9135
Considering the average correlation scores obtained, the configurations METEOR Vector and ME-TEOR DBnary are comparable, except on German language, for which METEOR Vector obtained a better correlation score.
when we combine lexical data with Vector module (METEOR DBnary + Vector), we observe a small increase of the correlation score, in particular when threshold is tuned, which suggests a tunable version of METEOR.
contrasting
train_9136
We aim to capture this drift in the BTC by collecting posts over a number of years, 3 as well as being taken from different times of years, days in the month, and times of day.
previous social media NE corpora were gathered during narrow contiguous time periods (Ritter et al., 2011).
contrasting
train_9137
That is to say, crowd recall over the oracle annotation of the data was higher.
agreement was lower in the crowd.
contrasting
train_9138
We also see that celebrity figures, such as @justinbieber and Kate Middleton, are more prevalent in the social media top ranking -as are journalists, such as @timhudak and David Speers.
the CoNLL data does contain a large number of sportsmen, as it is rich in cricket reportage; e.g.
contrasting
train_9139
In order to perform on this task with a good accuracy, the systems will be required to exhibit a deeper semantic understanding of the linguistic tail of the disambiguation tasks we analyze in this paper.
the only task that will explicitly be evaluated is the QA task itself, which means that the annotation task would be largely reduced to the components necessary for the questions and answers.
contrasting
train_9140
Using this cross-document information fusion, they find improved performance over monolingual systems.
this work relies on having documents across multiple languages that describe the exact same event, which is an unrealistic case in practice.
contrasting
train_9141
(2004) presents an approach for extracted patterns in a source language and translating these patterns for use on a target language.
these works are limited to entity extraction, whereas our focus is on event extraction.
contrasting
train_9142
The resultant labels are then used together with word length, existence of special characters in the word, current, previous and next words to train a CRF model that predicts the token level classes of words in a given sentence/tweet.
for within language varieties, AIDA (Elfardy et al., 2014) and AIDA2 (Al-Badrashiny et al., 2015) are the best published systems attacking this problem in Arabic.
contrasting
train_9143
Once we complete the analysis of a question's parse tree, not all words in the question are of further relevance to the task of QC.
so as to maximise the number of words that we have rules for, we try to create rules for all words that appear in training set.
contrasting
train_9144
Once we have the SM of a question, we use rules to identify the relevant QC.
before we can match appropriate words, we require a way of identifying the correct sense of a word.
contrasting
train_9145
(2015) all achieved not bad results using simple classifiers with manually constructed sparse features.
feature engineering is time consuming and has low extensibility to other domains.
contrasting
train_9146
They appended an LSTM layer on top of a Hidden Markov Model in automatic speech recognition.
the LSTM they applied was only a shallow architecture.
contrasting
train_9147
The best results were obtained using all the three sub-feature sets.
the contribution of the linguistic feature set was negligible (80.47% vs. 80.29%).
contrasting
train_9148
Consider the RC example in Figure 1(b), the context word "moved" is a strong indicator for classifying the relation of ⟨People, downtown⟩ as Entity-Destination.
most of the conventional approaches in SRL and RC only considers local context features through feature engineering, which might be incomplete.
contrasting
train_9149
In most of the prior work, ILP was only used for AC inference.
this approach limits the interaction of AI and AC when making decisions.
contrasting
train_9150
If the predicate is observed in the training data, then the syntactic patterns may still be useful.
in the worst case, if the predicate is unseen, both of the clues become weak leading to the most difficult case for SRL.
contrasting
train_9151
So are the verb → adjective patterns that form present participles (+end) and past participles (+t).
the agentive/instrumental nominalization pattern +er (fahren → Fahrer / drive → driver), where argument structure changes, is associated with a loss in performance.
contrasting
train_9152
Obvious remaining differences are the language and the type of the distributional model used.
these factors were outside the scope of the current study, so we leave them for future work.
contrasting
train_9153
While a lot of valuable information is contained in these linguistic studies, this information is often not readily usable by NLP due to factors such as information overlap and differing definitions across studies.
there is also a current trend towards systematically collecting typological information from individual studies in publicly-accessible databases, which are suitable for direct application in NLP (e.g., for defining features and their values).
contrasting
train_9154
For question answering and information retrieval tasks, sentence similarities between query-answer pairs are used for assessing the relevance and ranking all the candidate answers (Severyn and Moschitti, 2015;Wang and Ittycheriah, 2015).
sentence similarity learning has following challenges: 1.
contrasting
train_9155
Supervised, unsupervised, and knowledge-based approaches have been studied for WSD (Navigli, 2009).
for all-words WSD, where all words in a corpus need to be annotated with word senses, it has proven extremely challenging to beat the strong baseline, which always assigns the most frequent sense of a word without considering the context (Pradhan et al., 2007a;Navigli, 2009;Navigli et al., 2013;Moro and Navigli, 2015).
contrasting
train_9156
Figure 3 shows strong positive correlation between F1 and the capacity of the language model.
larger models are slower to train and use more memory.
contrasting
train_9157
Taking the CVC approach as an example, the adapted context vector h CV C t is only used for predicting outputs in the output layer at step t, but not in the adaptation operation at step t + 1.
the DAGRU approach can achieve sequential adaptation.
contrasting
train_9158
Generally, these metrics have been focused on translation into English.
there has been little attention into their direct applicability to languages with rich morphology.
contrasting
train_9159
System combination has also been successfully applied to statistical machine translation system (SMT) (Och and Ney, 2001;Matusov et al., 2006;Schwartz, 2008;Schroeder et al., 2009).
system combination methods in the phrase-based (PBSMT) (Koehn et al., 2003) and hierarchical (HSMT) frameworks (Chiang, 2007) tend to be rather complex, requiring potentially non-trivial mappings between the partial hypotheses across the search spaces of the individual systems.
contrasting
train_9160
The mixture model allows to back propagate errors both to the gating network and the experts themselves.
considering the small size of the training data and the complexity of the experts, in terms of number of parameters, full back propagation is likely to result in over-fitting.
contrasting
train_9161
Of course, our result are also influenced by several other factors such as the choice of languages, training data, etc.
we should point out that hierarchically combining a set of 4 systems does improve translation quality.
contrasting
train_9162
The drawback of LSTMs or BLSTMs is that we represent a very long sentence as a single vector which is the output of the last time step.
using BLSTM with Neural Attention (NA) mechanism, we represent the sequence of vectors as a combined weighted representation vector by selectively attending to the past outputs.
contrasting
train_9163
For example, the Chinese equivalent of English phrase "staged demonstrations" in Figure 4 is "进行 示威".
there is not a direct dependency relation between "进行" and "示威" in the dependency tree.
contrasting
train_9164
The most straightforward idea is executing the projection procedure in Section 3.1 many times.
in practice, the growth rate of the number of newly learned phrases is far beyond our imagination.
contrasting
train_9165
Second, we do get more useful phrases.
the test data is not large enough so that all newly learned phrases can be found in the test data.
contrasting
train_9166
(2012) produced an event-driven corpus on the ACE 2005 English corpus.
"the annotator was not required to annotate all pairs of event mentions, but as many as possible", as stated in their paper.
contrasting
train_9167
There are 9 directed relations and an undirected default relation Other; thus, we have 19 different labels in total.
the Other class is not taken into consideration when we compute the official measures.
contrasting
train_9168
Initially, the performance increases if the depth is larger in both settings with and without augmentation.
if we do not augment data, the performance peaks when the depth is 3.
contrasting
train_9169
They have a similar problem with ambiguous expressions which they need to solve to determine the correct class (e.g., China can belong to several classes such as LOCATION or PERSON).
they do not identify the actual real world entity of the expression (e.g., there are several cities called China in the US and other countries, but they all belong to the class LOCATION).
contrasting
train_9170
Like us, (Biran and McKeown, 2015) focus on DBpedia data and use bigram models.
their approach investigate discourse planning not content selection and relatedly, the basic units of their bigram models are discourse relations rather than triples.
contrasting
train_9171
For instance, trees such as (1a) where the subject entity is shared by two triples, will naturally induce the use of an adjective modifier (1b).
trees such as (1d) where the object entity of a triple is the subject of another triple naturally suggests the use of a participial or a relative clause (1d-e).
contrasting
train_9172
BL solutions also often include properties such as "source" which are generic rather than specific to the type of entity being described.
s-Model solutions often contain sets of topically related properties (e.g., birth date and birth place) while C-Model solutions enumerate facts (affiliations, mascot, president, battle) about related entities (University of Texas, Austin and United states Navy Reserve).
contrasting
train_9173
On the one hand, this corresponds to the monotonic nature of the OpenCCG search space, where new edges are being added without removing the old ones.
delete effects would be needed to capture empty coverage overlap in combination-rule applications.
contrasting
train_9174
In case 1a, "Winter comes" and "It comes" are valid solutions.
in case 1b it is impossible to convey all the semantics because it is not possible to use both "Winter" and "Summer" as required (assuming that there are no "and" connectives in our lexicon).
contrasting
train_9175
The above compilation provides a necessary criterion for an OpenCCG task to be solvable.
our actual purpose requires a necessary criterion for an OpenCCG edge e 0 to be feasible, i. e., to form part of a solution.
contrasting
train_9176
While research on MDS has also considered genres other than newswire (e.g., opinionated blog posts in TAC 2008 or biomedical research papers in TAC 2014), MDS has almost exclusively focused on homogeneous document collections that belong to the same genre.
this homogeneous nature of the existing MDS benchmark corpora does not reflect application scenarios where topically related documents from different genres need to be summarized.
contrasting
train_9177
In this example, it is assumed that both of these opinions are expressed about a single restaurant which is not mentioned explicitly.
take the following synthetic example that ABSA is not addressing: "The design of the space is good in Boqueria but the service is horrid, on the other hand, the staff in Gremio are very friendly and the food is always delicious."
contrasting
train_9178
Adding an additional aspect of misc was considered.
in the initial round of annotations, we realised that it had a negative effect on the decisiveness of annotators and it led to a lower overall agreement.
contrasting
train_9179
Some annotators suggested that the sentence also implies the same opinion for Islington.
at the end all annotators agreed that in such cases no implicit assumptions should be made and only confined area should be labeled.
contrasting
train_9180
Results on single location sentences mainly show the ability of the model to detect the correct sentiment for an aspect.
results on two location sentences demonstrate the ability of the system not only on detecting the relevant sentiment of an aspect but also on recognising the target entity of the opinion.
contrasting
train_9181
The system correctly identifies that a "Positive" sentiment is expressed for the general aspect about location2.
no sentiment is expressed for this aspect for location1.
contrasting
train_9182
(Jiang et al., 2011) was the first to propose targeted sentiment analysis on Twitter and demonstrates the importance of targets by showing that 40% of sentiment errors are due to not considering them in classification.
this task only identifies the overall sentiment and the existing corpora for the task consist only of text with one single entity per unit of analysis.
contrasting
train_9183
Early efforts in building them automatically also yielded lexicons of moderate sizes such as the SentiWordNet (Esuli and Sebastiani, 2006;Baccianella et al., 2010).
recent results have shown that automatically extracted large-scale lexicons (e.g., up to a million words and phrases) offer important performance advantages, as confirmed at shared tasks on Sentiment Analysis on Twitter at SemEval 2013-2016 (Nakov et al., 2013;Rosenthal et al., 2014;Rosenthal et al., 2015;Nakov et al., 2016a), where over 40 teams participated four years in a row.
contrasting
train_9184
We tried to download all Macedonian tweets based on the Twitter language classification.
it turned out that in many cases, the returned tweets were in Bulgarian or Russian, which are also Slavic languages and share the same alphabet with Macedonian.
contrasting
train_9185
The last 50 years have seen an increasing number of immigrants to other countries...
i strongly believe that they are able to sustain their cultural identities and doing so help they keep their origin values.
contrasting
train_9186
In general, the happiness category obtains the highest results in two datasets while fear does best on the third.
the most difficult category to be classified correctly seems to be surprise.
contrasting
train_9187
N-gram captures word order in short context and NB feature assigns more weights to those important words.
nBSVM suffers from sparsity problem and is reported to be exceeded by newly proposed distributed (dense) text representations learned by neural networks.
contrasting
train_9188
Most neural models directly learn compositions upon word embeddings, and are reported to be powerful enough to learn high-quality distributed representations without human intervention.
in sparse case, heuristic weighting techniques designed by humans are shown to be able to bring significant improvements over raw BOW representation (Wang and Manning, 2012;Martineau and Finin, 2009).
contrasting
train_9189
For example, NBSVM (Wang and Manning, 2012) uses the ratio of the number of words in positive texts and negative texts to weight words, and achieves competitive results on a range of text classification tasks.
traditional sparse BOW representations take each word or n-gram as a unit and ignore the internal semantics of them.
contrasting
train_9190
Hidden layers of RNNs can preserve the historical sequential information, and can be used as the representations of the texts (Dai and Le, 2015).
both CNNs and RNNs are essentially 'flat' models, where structural information (e.g.
contrasting
train_9191
In theory, accuracy should benefit from the sequential and structural information of texts.
we surprisingly find that our models outperform other approaches, even though only n-gram information is exploited in our models.
contrasting
train_9192
Supporting the observations we have made from the experiments on Dataset 1, we see CNN-SVM outperforming CNN on Dataset 2.
when we use all the features, CNN alone (F1-score: 89.73%) does not outperform the state of the art (Ptácek et al., 2014) (F1-score: 92.37%).
contrasting
train_9193
In our generalizability test, when the pre-trained features are used with baseline features, we get 4.19% F1-score improvement over the baseline features.
when they are not used with the baseline features, together they produce 64.25% F1-score.
contrasting
train_9194
Constrained SMT uses less data than bilingual word embeddings or stacked denoising autoencoders and still outperforms both.
this approach uses higher quality, in-domain data as well as tuning parameters which adapt it to this domain.
contrasting
train_9195
Microblogs such as Twitter and Facebook have gained tremendous popularity in the past decade, they often contain extremely current, even breaking, information about world events.
the writing style of microblogs tends to be quite colloquial and nonstandard, unlike the style found in more traditional, edited genres .
contrasting
train_9196
Most focus on content information and use models such as convolutional neural networks (CNN) (Kim, 2014) or recursive neural networks (Socher et al., 2013).
for user-generated posts on social media like Facebook or Twitter, there is more information that should not be ignored.
contrasting
train_9197
In the Facebook dataset we study, we use comments instead of reply links.
as the ultimate goal in this paper is predicting not comment stance but post stance, we treat comments as extra information for use in predicting post stance.
contrasting
train_9198
With the transformed word embedding feature, SVM can achieve comparable performance as SVM with n-gram feature.
the much fewer feature dimension of the transformed word embedding makes SVM with word embeddings a more efficient choice for modeling the large scale social media dataset.
contrasting
train_9199
Table 4 reports the result of their best AD setting, which represents the full joint stance/disagreement collective model on posts and is hence more relevant to UTCNN.
to their model, the UTCNN user embeddings represent relationships between authors, but UTCNN models do not utilize link information between posts.
contrasting
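Every row in this excerpt carries the "contrasting" label, while the schema above lists four label classes in total. A quick sketch for tallying the full label distribution, under the same placeholder-path assumption as the loading example above:

```python
# Label-distribution sketch; "user/dataset" is again a placeholder path.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/dataset", split="train")
print(Counter(ds["label"]))  # expect four distinct label values per the schema
```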