Column      Type            Range
id          stringlengths   7 – 12
sentence1   stringlengths   6 – 1.27k
sentence2   stringlengths   6 – 926
label       stringclasses   4 values
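The rows below follow the four-column schema above (id, sentence1, sentence2, label). As a minimal sketch of how such rows might be consumed, assuming the dump has been exported to a CSV file with exactly these column names (the file name rows.csv is hypothetical and not part of the original dump):

import csv

# Minimal sketch: iterate over rows exported from this dump.
# Assumes a hypothetical file "rows.csv" with columns: id, sentence1, sentence2, label.
with open("rows.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Each example pairs a sentence with its (contrasting) follow-up sentence.
        print(row["id"], row["label"])
        print("  s1:", row["sentence1"])
        print("  s2:", row["sentence2"])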
train_12300
Some of the popular features used in the existing studies include linguistic features such as morphological, syntactic and semantic information of words and domain-specific features from biomedical ontologies such as Bio-Thesaurus (Liu et al., 2006) and UMLS (Unified Medical Language System) (Bodenreider, 2004).
these features heavily contribute to the problem of data sparsity.
contrasting
train_12301
We introduce MIML algorithms for neural networks.
to most MI/MIML methods, which are applied in relation extraction, we apply MIML to the task of fine-grained entity typing.
contrasting
train_12302
probability for e, we define: where w_t ∈ R^h is the output weight vector and b_t the bias scalar for type t, and a_t is the aggregated representation of all contexts c_i of e for type t, computed as follows: where α_{i,t} is the attention score of context c_i for type t and a_t ∈ R^h can be interpreted as the representation of entity e for type t. α_{i,t} is defined as: where M ∈ R^{h×d_t} is a weight matrix that measures the similarity of c and t. t ∈ R^{d_t} is the representation of type t. Table 1 summarizes the differences of our MIML methods with respect to the aggregation function they use to get corpus-level probabilities.
for optimization of all MIML methods, we use the binary cross entropy loss function, to the loss function of distant supervision in Eq.
contrasting
train_12303
The parameters of the PMI framework are statistically estimated using a large amount (more than 1 000 000 word pairs) of cross-linguistically diverse data.
LexStat's initial alignment algorithm is based on manually assigned parameters, and the final parameters are estimated empirically from the word pairs in the doculects being compared, and no external information is applied.
contrasting
train_12304
Hauer and Kondrak (2011) were the first to introduce this measure to test the accuracy of multilingual cognate detection algorithms.
to pair scores such as ARI, B-Cubed scores have the advantage of being independent of the evaluation data itself.
contrasting
train_12305
Their definition of success was a function of the number of downloads from Project Gutenberg.
downloading a book is not by itself an indicator of a highly liked or a commercially successful book.
contrasting
train_12306
However, the results in Table 3 show an unimpressive F1-score of 0.610 for sentiment features.
the bag of sentic concepts model with average scores for sensitivity, attention, pleasantness, aptitude, and polarity gave a more impressive F1-score of 0.670, much higher than the baseline.
contrasting
train_12307
Table 8 shows a breakdown of fragment types in the first fold.
with n-grams, we also see a large proportion of purely syntactic fragments, and fragments mixing both lexical elements and substitution sites.
contrasting
train_12308
On the one hand smaller fragments corresponding to one or two grammar productions are most common, and are predominantly positively correlated with the literary ratings.
there is a significant negative correlation between fragment size and literary ratings (r ≈ −0.2, p < 0.001); i.e., smaller fragments tend to be positively correlated with the literary ratings.
contrasting
train_12309
In the first large-scale study of sociolinguistic variation on social media in the UK, we identify distinctively Scottish terms in a data-driven way, and find that these terms are indeed used at a higher rate by users of pro-independence hashtags than by users of anti-independence hashtags.
we also find that in general people are less likely to use distinctively Scottish words in tweets with referendum-related hashtags than in their general Twitter activity.
contrasting
train_12310
While some people still use distinctive elements of Scots in their speech, until recently the average Scottish person's exposure to written Scots would have been largely confined to a select few literary domains such as poetry and comic narrative (Corbett et al., 2003).
social media has given rise to a new genre of casual, communicative writing that is potentially visible to large and diverse audiences, providing both a platform and an impetus to express one's identity through the use of written language.
contrasting
train_12311
Perhaps other tweets with many Scottish terms were filtered out, in which case we will underestimate the probability that users choose Scottish variants.
this issue should not cause us to find differences in use between different groups where there are none.
contrasting
train_12312
Due to the very low rates of Scottish variants overall, our data set is too small to study differences between individual variables or even conclusively say whether there may be effects of both topic and audience size on the use of Scottish language.
we hope to be able to answer these questions in future by collecting a more complete set of data for the particular users studied here.
contrasting
train_12313
In a similar work, Sangati and Zuidema (2009) proposed entropy minimization and greedy familiarity maximization techniques to obtain lexical heads from labeled phrase-structure trees in an unsupervised manner.
we used neural attention to obtain the "head rules" in the GA-RNNG; the whole model is trained end-to-end to maximize the log probability of the correct action given the history.
contrasting
train_12314
From an empirical point of view, a transition system that over-generates is necessary for robustness, and is desirable for fast approximate linear-time inference.
from a formal point of view, the relationship of the SR-GAP transition system with automata explicitly designed for LCFRS parsing (Villemonte de La Clergerie, 2002; Kallmeyer and Maier, 2015) requires further investigations.
contrasting
train_12315
To overcome this, one can use a tree-structured stack (TSS) to factorize the representations of common prefixes in the stack as described by for projective dependency parsing.
the discontinuities entail that a limited amount of copying cannot be entirely avoided.
contrasting
train_12316
It would be interesting to see if SR-SWAP is improved when swapping non-terminals is allowed.
it would make feature extraction more complex, because it would no longer be assumed that the buffer contains only tokens.
contrasting
train_12317
Neural models, with various neural architectures (Bengio et al., 2001;Mikolov et al., 2010;Chelba et al., 2014;Józefowicz et al., 2016), have recently achieved great success.
most of these neural architectures have a common issue: large output vocabularies cause a computational bottleneck due to the output normalization.
contrasting
train_12318
For the three noise distributions, the NCE score seems to converge.
for the unigram distribution, the log-partition function does not decrease, thus neither does the log-likelihood.
contrasting
train_12319
We suspect this is possible because a community's language in social media may capture economic-related community characteristics that are not otherwise easily available.
the challenge is incorporating noisy high-dimensional language features in such a way that they can contribute beyond the robust low-dimensional traditional predictors (i.e.
contrasting
train_12320
It shows that our approach correctly simplifies the largest number of problems, while making the fewest errors of type 3A and 4.
it can be noticed that NNLS makes many errors of type 5.
contrasting
train_12321
There have been recent advances in supervised summarisation mainly with respect to supervised learning using neural networks (for example (Rush et al., 2015;Chopra et al., 2016)).
due to data size requirements, these systems are constrained to title generation systems and therefore not in the scope of this work.
contrasting
train_12322
Evaluation against multiple, rather than single reference summaries is generally recognised as leading to fairer, better quality, evaluation: different human summaries appear to be good even though they do not have identical content (Nenkova and Passonneau, 2004).
averaging ROUGE scores across multiple summaries, as is standard practice, makes a perfect 100% score unattainable, even for abstractive systems.
contrasting
train_12323
Since the last step in Pipeline is to run PredPatt, Pipeline generates no malformed output.
15% of its … (in linearized predicates, arguments are replaced by placeholders).
contrasting
train_12324
Both architectures use a nonlinearity to modulate the contents of a product; in an RAE this is an outer (bilinear) product while in an LSTM it is a Hadamard (element-wise) product.
LSTM memory gates represent an internal hidden state of the network, while RAE gates are part of the network input.
contrasting
train_12325
The model architecture is the same as an RAE with untied encoder and decoder weights (Eq 2).
the training signal differs from a classic RAE in two ways.
contrasting
train_12326
Distributional semantics (Turney and Pantel, 2010), and data-driven, continuous approaches to language in general including neural networks (Bengio et al., 2003), are a success story in both Computational Linguistics and Cognitive Science in terms of modeling conceptual knowledge, such as the fact that cats are animals (Baroni et al., 2012), similar to dogs (Landauer and Dumais, 1997), and shed fur (Erk et al., 2010).
distributional representations are notoriously bad at handling discrete knowledge (Fodor and Lepore, 1999;Smolensky, 1990), such as information about specific instances.
contrasting
train_12327
's (2014) results that DDSq works best for hypernymy detection in the NOTHYP setup.
for instantiation detection it is the concatenation of the input vectors that works best (cf.
contrasting
train_12328
This indicates that geospatial information does provide some useful semantic information.
the GEOD embeddings underperformed the TEXT5 embeddings in all cases.
contrasting
train_12329
This means that if an emoji is frequent, it is more likely to be on top of the possible choices even if it is a mistake.
the F-measure does not seem to depend on frequency, as the highest F-measures are scored by a mix of common and uncommon emojis ( , , , and ) which are respectively the first, second, the sixth and the second last emoji in terms of frequencies.
contrasting
train_12330
English is broadly considered to be a morphologically impoverished language, and there are certainly many regularities in morphological patterns, e.g., the common usage of -able to transform a verb into an adjective, or -ly to form an adverb from an adjective.
there is considerable subtlety in English derivational morphology, in the form of: (a) idiosyncratic derivations; e.g.
contrasting
train_12331
information-theoretic constraints, this should be reflected in production preferences and therefore in corpora of text types which allow for the respective omissions.
accurately finding all instances of article omission is not a trivial issue, as there are several special cases of singular nouns which allow for or even require article omission even in standard written German, e.g.
contrasting
train_12332
(2006), there are no bounds for LSP at all.
for CR, as can be seen from our experiments, values of T for LSP and LSPA can be reliably selected on a validation set for a fixed training data size and a choice of features/instances.
contrasting
train_12333
As we show, tying the input and the output embeddings is indeed detrimental in word2vec.
it improves performance in NNLMs.
contrasting
train_12334
In NLP, multi-task learning typically involves very heterogeneous tasks.
while great improvements have been reported (Luong et al., 2016;Klerke et al., 2016), results are also often mixed (Collobert and Weston, 2008;Martínez Alonso and Plank, 2017), and theoretical guarantees no longer apply.
contrasting
train_12335
As discussed in Section 1, there have been studies to reduce the time complexity of spell correction by various methods.
the recent work of de Amorim and Zampieri (2013) is closest to our work in terms of the goal of the study.
contrasting
train_12336
We do indeed regularize in this way throughout all our experiments, tuning the multiplier on a held-out development set.
regularization has only minor effects with large training corpora, and is not in the original word2vec implementation of skip-gram.
contrasting
train_12337
Having seen that skip-gram is a form of matrix factorization, we can generalize it to tensors.
to the matrix case, there are several distinct definitions of tensor factorization (Kolda and Bader, 2009).
contrasting
train_12338
Hence, all of the possible replies to X must be stored within the decoder's probability distribution P (Y |X), and during decoding it is hard to disentangle these possible replies.
our model contains a stochastic component z in the decoder P (Y |z, X), and so by sampling different z and then performing ML decoding on P (Y |z, X), we hope to tease apart the replies stored in the probability distribution P (Y |X), without resorting to sampling from the decoder.
contrasting
train_12339
For this reason we cannot directly compare the scores.
we follow the steps to replicate the challenge's pipeline on our unaltered data and use the score as our baseline.
contrasting
train_12340
One of the issues in the area of affective computation is that the amount of annotated data is very limited.
the number of ways that the same emotion can be expressed verbally is enormous due to variability between speakers.
contrasting
train_12341
Pretraining a convolutional neural network (Ebrahimi Kahou et al., 2015) on an external dataset of faces improves the performance of the emotion classification model.
the problem of augmenting emotional datasets with audio samples to improve the performance of solely audio processing models remained unsolved.
contrasting
train_12342
In our implementation the MLP computes a whole vector with values for each v_k at once.
in this notation we use just the value corresponding to v_j.
contrasting
train_12343
Marmot's edit-tree method of candidate selection favors fewer lemmas, which allows the lemmatization module to run efficiently.
DIRECTL+ has no bias towards lemmas or tags.
contrasting
train_12344
Both systems correctly identify it as a plural subjunctive form of the verb lacrar.
Marmot also outputs an alternative analysis that involves a bizarre lemma lacr.
contrasting
train_12345
We additionally compared with WN-Domains-3.2 (Magnini and Cavaglià, 2000;Bentivogli et al., 2004), which is the latest released version of WordNet Domains.
this approach involves manual curation, both in the selection of seeds and correction of errors.
contrasting
train_12346
With low-resource languages, we cannot presume the availability of tag dictionaries.
we often have high-quality bilingual dictionaries with translations of common words into a resource-rich language such as English.
contrasting
train_12347
We see the evidence for that in the very wide confidence intervals in our Wiktionary sampling.
even the smallest of frequency-aware Wiktionaries prove to be much more reliable.
contrasting
train_12348
By doing this, we are transferring the semantics of LKIF to Wikipedia entities and populating the LKIF ontology with Wikipedia entities and their mentions.
even using Wikipedia, many of the classes have few instances.
contrasting
train_12349
However, we find that curriculum learning does not produce the expected improvements.
reversed curriculum learning, learning from most specific to most general, produces better results, and it helps to indicate that there may be incoherences in the mappings between ontologies.
contrasting
train_12350
In contrast, the NERC does a better job at distinguishing YAGO classes, which are natively built from Wikipedia, even if the classification problem is more difficult because of the bigger number of classes.
the fact that smaller classes are recognized better than bigger classes indicates that bigger classes are ill-delimited.
contrasting
train_12351
This is the case, for example, of the detection of the main components in scholarly publications (Teufel et al., 2009;Liakata et al., 2012;De Waard and Maat, 2012;Burns et al., 2016) or the annotation of content zones, i.e., functional constituents of texts (Bieler et al., 2007;Stede and Kuhn, 2009;Baiamonte et al., 2016).
the notion of Content Types that we have adopted applies across genres.
contrasting
train_12352
The proposed model ignores long-range dependencies that could conceivably be captured using alternative architectures, such as recurrent neural networks (RNN) (Mikolov et al., 2010;Luong et al., 2013).
topical and stylistic information is contained in shorter word and character sequences for which the shallow neural network architectures with n-gram feature representations are likely to be sufficient, while having the advantage of being much faster to run.
contrasting
train_12353
In the real estate domain, user-generated free text descriptions form a highly useful but unstructured representation of real estate properties.
there is an increasing need for people to find useful (structured) information from large sets of such descriptions, and for companies to propose sales/rentals that best fit the clients' needs, while keeping human reading effort limited.
contrasting
train_12354
For example, real estate descriptions in natural language may not be directly suited for specific search filters that potential buyers want to apply.
a hierarchical data structure representing the real estate property enables specialized filtering (e.g., based on the number of bedrooms, number of floors, or the requirement of having a bathroom with a toilet on the first floor), and is expected to also benefit related applications such as automated price prediction (Pace et al., 2000;Nagaraja et al., 2011).
contrasting
train_12355
The conditional distribution over all dependency structures y ∈ T (x) is: normalized by the partition function Z(x; θ), which would require a summation over the exponentially large number of all possible dependency structures in T (x).
the MTT allows directly computing Z(x; θ) as det(L(θ)), in which L(θ) is the Laplacian matrix of the graph.
contrasting
train_12356
As the results show, the generic PBMT system outperforms its NMT counterpart in all the domains by a very large margin; and as the NMT system becomes more specific by observing more domain-specific data, the gap between the performances narrows until the NMT outperforms, which confirms the results of the previous works in this field (Luong and Manning, 2015).
it is interesting to see what the reason is behind the very low performance of the generic NMT system compared to the generic PBMT.
contrasting
train_12357
Recently, Junczys-Dowmunt et al. (2016) performed a very extensive experiment in which the performance of NMT is compared with PBMT and hierarchical SMT on multiple language directions, and showed that NMT systems outperform their SMT counterparts in almost all cases; to solve the only remaining issue, the decoding time of NMT systems, they introduce an efficient neural decoder which makes it feasible to deploy NMT systems in production.
all their experiments are performed on one single domain for which there exists a very large training corpus.
contrasting
train_12358
A related task to TLS is the TREC update summarization task (Aslam et al., 2015).
to TLS, this task requires online summarization by presenting the input as a stream of documents.
contrasting
train_12359
For r, we sum up all of the features of the input given by the encoder (Line 2) and estimate the frequency.
for g, we expect Lines 5 and 6 to work as a kind of voting for both positive and negative directions since g needs just occurrence information, not frequency.
contrasting
train_12360
Note that its model structure is nearly identical to our baseline.
MRT trained a model with a sequence-wise minimum risk estimation, while we trained all the models in our experiments with standard (pointwise) log-likelihood maximization.
contrasting
train_12361
They also show that the task of PP attachment with multiple noun candidates is considerably more difficult than the traditional binary classification task.
de Kok et al.
contrasting
train_12362
• Multipass: In the Multipass method, we train two separate models like the Monolingual method.
we apply these models on the code-mixed data differently.
contrasting
train_12363
At inference time, our transliteration model would predict the most likely word form for each input word.
the single-best output from the model may not always be the best option considering an overall sentential context.
contrasting
train_12364
For the most part, the advances are due to general improvements in parsing technology or feature representation, and do not explicitly target any specific language or syntactic construction.
despite the high overall accuracy, parsers are still persistently wrong in attaching certain relations.
contrasting
train_12365
Lattice minimum Bayes-risk (LMBR) decoding has been applied successfully to translation lattices in traditional SMT to improve translation performance of a single system (Kumar and Byrne, 2004;Tromble et al., 2008;.
minimum Bayes-risk (MBR) decoding is also a very powerful framework for combining diverse systems (Sim et al., 2007;de Gispert et al., 2009).
contrasting
train_12366
We argue that MBR-based methods in their present form are not well-suited for NMT because of the following reasons: • Previous approaches work well with rich lattices and diverse hypotheses.
NMT decoding usually relies on beam search with a limited beam and thus produces very narrow lattices (Li and Jurafsky, 2016;Vijayakumar et al., 2016).
contrasting
train_12367
2 allows to build up hypotheses from left to right.
our definition of the evidence in Eq.
contrasting
train_12368
1 contains a sum over the (unordered) set of all n-grams.
we can rewrite our objective function in Eq.
contrasting
train_12369
Rescoring a 10-best list already yields a large improvement of 1.2 BLEU.
the hypotheses are still close to the SMT baseline.
contrasting
train_12370
Although attention is an intuitively appealing concept and has been proven in practice, existing models of attention use contentbased addressing and have made only limited use of the historical attention masks.
lessons from better word alignment priors in latent variable translation models suggest value in modeling attention independently of content.
contrasting
train_12371
While other work has sought to address this problem (Cohn et al., 2016;Tu et al., 2016;Feng et al., 2016), these models either rely on explicitly engineered features (Cohn et al., 2016), resort to indirect modeling of the previous attention decisions as by looking at the content-based RNN states that generated them (Tu et al., 2016), or only model coverage rather than coverage together with ordering patterns (Feng et al., 2016).
we propose to model the sequences of attention levels for each word with an RNN, looking at a fixed window of previous alignment decisions.
contrasting
train_12372
One major limitation is that attention at each time step is not directly dependent of each other.
in machine translation, the next word to attend to highly depends on previous steps; for example, neighboring words are more likely to be selected in the next time step.
contrasting
train_12373
Recent work has mainly focused on morphologically complex rare words and has often tried to alleviate the problem by spreading the available knowledge across words that share the same morpheme (Luong et al., 2013;Botha and Blunsom, 2014;Soricut and Och, 2015).
these techniques are unable to induce representations for words whose morphemes are not seen during training, such as infrequent domain-specific terms.
contrasting
train_12374
Dependency-based vector spaces steer the induced WEs towards functional similarity (e.g., tiger:cat) rather than topical similarity/relatedness (e.g., tiger:jungle). They support a variety of similarity tasks in monolingual settings, typically outperforming BOW contexts for English (Bansal et al., 2014;Hill et al., 2015;Melamud et al., 2016).
despite the steadily growing landscape of CL WE models, each requiring a different form of cross-lingual supervision to induce a SCLVS, syntactic information is still typically discarded in the SCLVS learning process.
contrasting
train_12375
Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities.
textual similarity detection can be used to detect plagiarism.
contrasting
train_12376
By removing stopwords before training, we obtain a slightly higher MI(d, k) than removing stopwords after training, in support of hypothesis 1 in Section 2.
this difference is relatively small compared to including more stopwords or changing the number of topics.
contrasting
train_12377
Of these, the best results have been obtained using tree models (Mou et al., 2015;Tai et al., 2015), which use sentence syntactic trees originating from grammar to help construct sentence embeddings.
noisy text (such as found in online reviews) does not always contain much grammatical structure, which reduces the effectiveness of tree models.
contrasting
train_12378
Applying softmax to the attention scores indicated that the receiving neural network must focus on a certain part of the input.
this assumption might not hold in the GRA framework as phrase information is not always needed at each timestep of RNN training.
contrasting
train_12379
For instance, in softmax, each input is simply weighted by its contribution to the sum.
in GRA, the sum of similarity scores is not a good scaling factor since phrase vectors are sometimes omitted.
contrasting
train_12380
We suppose that the drop of accuracy in the not-aligned version is a result of phrase vectors being over-counted with large weights, and thus reducing the effectiveness of the sequence learning ability in RNN.
with soft-alignment, GRA can incorporate CNN phrase vectors into an RNN without impacting the sequence learning effectiveness.
contrasting
train_12381
Latent Semantic Analysis (Deerwester et al., 1990) and Latent Dirichlet Allocation (Blei et al., 2003) are the main employed methods for this task.
these methods do not systematically yield improved performance compared to the BOW representation.
contrasting
train_12382
Most hyperparameters in Table 1 are also shared by the training of DV-LSTM.
due to the limitation of the current experiment platform and the cost of grid searches, we do not tune these hyperparameters in training DV-LSTM.
contrasting
train_12383
Another plausible reason is that PTB's genres are almost unrelated to the topics and it likely requires more abstract sequential information for their classification.
the Brown Corpus has a relatively strong overlapping between topics and genres.
contrasting
train_12384
Task-oriented dialogue focuses on conversational agents that participate in dialogues with user goals on domain-specific topics.
to chatbots, which simply seek to sustain open-ended meaningful discourse, existing task-oriented agents usually explicitly model user intent and belief states.
contrasting
train_12385
One line of work has tackled the problem using partially observable Markov decision processes and reinforcement learning with carefully designed action spaces (Young et al., 2013).
the large, hand-designed action and state spaces make this class of models brittle and unscalable, and in practice most deployed dialogue systems remain hand-written, rule-based systems.
contrasting
train_12386
While reasoning over memory is appealing, these models simply choose among a set of utterances rather than generating text and also must have temporal dialogue features explicitly encoded.
the present literature lacks results for now standard sequence-to-sequence architectures, and we aim to fill this gap by building increasingly complex models of text generation, starting with a vanilla sequence-to-sequence recurrent architecture.
contrasting
train_12387
(2017) recently showed that UTD can be improved using the translations themselves as a source of information, which suggests joint learning as an attractive area for future work.
poor precision is most likely due to the simplicity of our MT model, and designing a model whose assumptions match our data conditions is an important direction for future work, which may combine our approach with insight from recent, quite different audio-to-translation models (Adams et al., 2016a;Adams et al., 2016b;Berard et al., 2016).
contrasting
train_12388
The description of the discourse structure is still an open issue.
some low level characteristics have already been clearly identified, e.g.
contrasting
train_12389
Supervised approaches to DA recognition have been successfully investigated by many authors (Stolcke et al., 2000;Klüwer et al., 2010;Kalchbrenner and Blunsom, 2013).
annotating training data is both a slow and expensive process.
contrasting
train_12390
This encourages the embedding of a word to be representative of the context surrounding it.
a careful look reveals that the context around a word can be split into multiple categories, specifically that each word has at least 2c contexts, one for each position in the window being considered.
contrasting
train_12391
Le and Mikolov (2014)'s Paragraph Vector framework also focuses on capturing word order in their embeddings.
our method is more efficient since ECO does not require training the n-gram embeddings.
contrasting
train_12392
Success is usually measured in terms of correlation with human similarity judgments.
evaluating measures of phrase-level similarity directly against human judgments of similarity ignores the problem that it is not always possible to determine meaning in a compositional manner.
contrasting
train_12393
Using multi-sense embeddings in a hard clustering (rather than single-sense embeddings in a soft clustering) avoids the usage of a cluster membership threshold, which most soft clustering algorithms require.
the clustering algorithm outputs a membership degree for each element and each cluster, i.e., a fuzzy membership.
contrasting
train_12394
Stance Detection Over the last decade, there has been active research in modeling stance (Thomas et al., 2006;Somasundaran and Wiebe, 2009;Anand et al., 2011;Walker et al., 2012a;Hasan and Ng, 2013;.
all of these previous works treat each target independently, ignoring the potential dependencies that could exist among related targets.
contrasting
train_12395
However, the spectrum kernel obtains lower results in all the cases.
to the other two kernels, the spectrum one assigns a high score even when only one of the texts has a high frequency for a particular n-gram (see Section 3).
contrasting
train_12396
Previous work focus on learning a target specific representation for a given input sentence which is used for classification.
they do not explicitly model the contribution of each word in a sentence with respect to targeted sentiment polarities.
contrasting
train_12397
Interestingly, compared to BILSTM-ATT without contextualized attention, BILSTM-ATT-C loses accuracy on positive (-1.1%).
BILSTM-ATT-G gives large improvements on positive (+4.2%) and neutral (+1.2%) targets but loses accuracy on negative (-2.4%).
contrasting
train_12398
(2016) who propose a two-view label propagation approach instead.
to our knowledge, only Mohammad and Turney (2013) investigated the effects of these perspectives on annotation quality, finding differences in interannotator agreement (IAA) relative to the exact phrasing of the annotation task.
contrasting
train_12399
They used hand-crafted regular expressions to identify tweets indicating that a user may have bought or wanted a product.
their dataset was biased towards bought/want tweets and their patterns covered only a subset of possible bought/want phrases.
contrasting