id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_21200 | Our analysis so far suggests that some syntactic units are relatively impervious to the automatic sentence segmentation, while others are more susceptible to error. | when we examine the mean values given in Table 3, we observe that even in cases when the complexity metrics are significantly different in the automatic transcripts, the differences appear to be systematic. | contrasting |
train_21201 | For the sake of example, annotator A5 in the CONFLICT setting will annotate documents with the true class B as B exactly 10% of the time but might annotate B as C 85% of the time. | annotator A4 might annotate B as D most of the time. | contrasting |
train_21202 | In information extraction, for example, one needs to predict the relation labels y that an entity-pair x can have based on the hidden relation mentions h, i.e., the relation labels for occurrences of the entity-pair in a given corpus. | these models are often trained by optimizing performance measures (such as conditional log-likelihood or error rate) that are not directly related to the task-specific non-linear performance measure, e.g., the F1-score. | contrasting |
train_21203 | However, these models are often trained by optimizing performance measures (such as conditional log-likelihood or error rate) that are not directly related to the task-specific non-linear performance measure, e.g., the F1-score. | better models may be trained by optimizing the task-specific performance measure while allowing latent variables to adapt their values accordingly. | contrasting |
train_21204 | The goal of our learning problem is to find $w \in \mathbb{R}^d$ which minimizes the expected loss, aka risk, over a new sample D of size N: Generally, the loss function $\Delta$ cannot be decomposed into a linear combination of a loss function $\delta$ over individual training samples. | most discriminative large-margin learning algorithms assume for simplicity that the loss function is decomposable and the samples are i.i.d. | contrasting |
train_21205 | We will show below that this pretraining is critical for getting good performance in the paraphrase task. | the general design principle of this type of unsupervised pretraining should be widely applicable given that next-word prediction training is possible in many NLP applications. | contrasting |
train_21206 | Parsing is a potentially powerful tool for identifying the important meaning units of a sentence, which can then be the basis for determining meaning equivalence. | reliance on parsing makes these approaches less flexible. | contrasting |
train_21207 | These vectors can then be used to perform query classification or web search. | to existing representation learning methods which employ either unsupervised or single-task supervised objectives, our model learns these representations using multi-task objectives. | contrasting |
train_21208 | In the following, we elaborate the model in detail: Word Hash Layer ($l_1$): Traditionally, each word is represented by a one-hot word vector, where the dimensionality of the vector is the vocabulary size. | due to the large size of vocabulary in real-world tasks, it is very expensive to learn such kinds of models. | contrasting |
train_21209 | For such languages, many forms will not be attested even in a large corpus. | different lemmas often exhibit the same inflectional patterns, called paradigms, which are based on phonological, semantic, or morphological criteria. | contrasting |
train_21210 | Allomorphic representations have the potential advantage of reducing the complexity of transductions by virtue of being similar to the correct form of the affix. | we found that allomorphic affixes tend to obfuscate differences between distinct inflections, so we decided to employ abstract tags instead. | contrasting |
train_21211 | Our Basic model outperforms DDN for both nouns and verbs, despite training on less data. | reranking actually decreases the accuracy of our system on Czech nouns. | contrasting |
train_21212 | A popular surrogate is an $\ell_1$ penalty, $\Omega(\theta) \stackrel{\text{def}}{=} \sum_w \vert\theta_w\vert$. | $\ell_1$ would not recognize that $\theta$ is simpler with the features {ab, abc, abd} than with the features {ab, pqr, xyz}. | contrasting |
train_21213 | They report superior performance of their hybrid model over both component models. | their model does not consider the coherence of the target word during the generation process, nor other important features that have been shown to significantly improve machine transliteration (Li et al., 2004; Jiampojamarn et al., 2010). | contrasting |
train_21214 | and combine the pivot model with a grapheme-based model, which works better than either of the two approaches alone. | their model is not able to incorporate more than two languages. | contrasting |
train_21215 | Unfortunately, phonetic transcriptions are rarely available, especially for words which originate from other languages, and generating them on the fly is less likely to help. | transliterations from other languages constitute another potential source of information that could be used to approximate the pronunciation in the source language. | contrasting |
train_21216 | In general, compatible terms will be semantically related (dog and animal). | relatedness does not suffice: many semantically related, even very similar terms are not compatible (dog and cat). | contrasting |
train_21217 | A high accuracy was reported on a word translation task, where a word projected to the vector space of the target language is expected to be as close as possible to its translation (Mikolov et al., 2013b). | we note that the 'closeness' of words in the projection space is measured by the cosine distance, which is fundamentally different from the Euler distance in the objective function (3) and hence causes inconsistency. | contrasting |
train_21218 | The component monolingual n-gram LMs must be trained on monolingual corpora in their respective languages. | due to the lack of codified orthographic conventions concerning spelling, diacritic usage, and spacing, compounded by the liberal use of now-obsolete shorthand notations by printers, statistics gleaned from available modern corpora provide a poor representation of the language used in the printed documents. | contrasting |
train_21219 | Our system, as currently designed, attempts to faithfully transcribe text. | for the purposes of indexability and searchability of these documents, it may be desirable to also produce canonicalized transcriptions, for example collapsing spelling variants to their modern forms. | contrasting |
train_21220 | Citation sentences (citances) to a reference article have been extensively studied for summarization tasks. | citances might not accurately represent the content of the cited article, as they often fail to capture the context of the reported findings and can be affected by epistemic value drift. | contrasting |
train_21221 | Such improvement is expected, as UMLS-expand augments the citance with all possible formulations of the detected biomedical concepts. | its precision is only comparable with the baseline, as it does not remove any noisy terms from the citance. | contrasting |
train_21222 | These two approaches can, of course, be combined. | to our knowledge, the issues of how to combine the approaches and when that is likely to be useful have not been thoroughly studied. | contrasting |
train_21223 | We are not aware of an appropriate significance test for experiments where subsets of the training data are used. | the benefits of stacking seem unlikely to be due to chance. | contrasting |
train_21224 | Simply combining the outputs from a timeline generation system and a comment summarization system may lead to timelines that lack cohesion. | articles and comments are from intrinsically different genres of text: articles emphasize facts and are written in a professional style; comments reflect opinions in a less formal way. | contrasting |
train_21225 | An exception is the unsupervised model of Guinaudeau and Strube (2013) (G&S), which converts the document into a graph of sentences, and evaluates the text coherence by computing the average out-degree over the entire graph. | despite the apparent success of these methods, they rely merely on matching mentions of the same entity, but neglect the contribution from semantically related but not necessarily coreferential entities. | contrasting |
train_21226 | For example, the text in Figure 1a has no common entity in $s_2$ and $s_3$. | the transition between them is perfectly coherent, because there exists close semantic relatedness between two distinct entities, Gates in $s_2$ and Microsoft in $s_3$, which can be captured by the world knowledge that Gates is the person who created Microsoft (represented by Gates-create-Microsoft). | contrasting |
train_21227 | For a given pair of entities in the text, the chance is rather low to find instances in the knowledge bases where the two arguments perfectly match the pair of entities, because entities in the source document might appear in aliases or abbreviations. | partial matching between arguments and entities usually increases coverage but at the risk of introducing more noise. | contrasting |
train_21228 | This representation captures not only the distribution information of individual entities but also the semantic relatedness between different entities. | the original graph-based model by G&S (Figures 2a and 2b) includes common-entity edges only and misses the semantic relatedness information. | contrasting |
train_21229 | As mentioned previously, numerous extensions have been proposed to the original entity-based model of B&L. | those extensions mostly rely on entity matching and thus fail to incorporate the information from semantically related yet distinct entities. | contrasting |
train_21230 | Intuitively, Feature 1 is a recall-enhancing feature: it encodes a condition whose satisfaction can help discover many event coreference links. | it is not designed to be precision-oriented, as it is computed based solely on the triggers and not their surrounding contexts. | contrasting |
train_21231 | Coreference is a core NLP problem. | newswire data, the primary source of existing coreference data, lack the richness necessary to truly solve coreference. | contrasting |
train_21232 | We argue in Section 2 that to truly solve coreference resolution, the research community needs high-quality datasets that contain many challenging cases such as nested coreferences and coreferences that can only be resolved using external knowledge. | newswire is deliberately written to contain few coreferences, and those coreferences should be easy for the reader to resolve. | contrasting |
train_21233 | Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. | usually a large set of such formulae is necessary to achieve generalization. | contrasting |
train_21234 | The degree to which $r_s \Rightarrow r_t$ is captured is quite high for all models (0.94, 0.96, and 0.97 for matrix factorization, pre-factorization inference, and joint optimization respectively). | the probability of $r_t \Rightarrow r_s$ is also relatively high for matrix factorization and pre-factorization inference (0.81 and 0.83 respectively), suggesting that these methods are primarily capturing symmetric similarity between relations. | contrasting |
train_21235 | Specifically, approaches based on Markov Logic Networks (MLNs) (Richardson and Domingos, 2006) encode logical knowledge in dense, loopy graphical models, making structure learning, parameter estimation, and inference hard for the scale of our data. | in our model the logical knowledge is captured directly in the embeddings, leading to efficient inference. | contrasting |
train_21236 | The basic intuition is that the entity referents of m and related mentions should be similarly connected in the KB. | there might be many entity mentions in the context of a target entity mention that could potentially be leveraged for disambiguation. | contrasting |
train_21237 | But this approach usually introduces many irrelevant mentions, and it's very difficult to automatically determine the scope of discourse. | some recent work exploited more restricted measures by only choosing those mentions which are topically related (Cassidy et al., 2012; Xu et al., 2012), bear a relation from a fixed set (Cheng and Roth, 2013), coreferential (Nguyen et al., 2012; Huang et al., 2014), socially related (Cassidy et al., 2012; Huang et al., 2014), dependent (Ling et al., 2014), or a combination of these through meta-paths (Huang et al., 2014). | contrasting |
train_21238 | News and social media are source text genres that tend to focus on new information, trending topics, breaking events, or even mundane details about the entity. | the KB usually provides a snapshot summarizing only the entity's most representative and important facts. | contrasting |
train_21239 | (2013) with artificial paragraph tokens, which accumulate the meaning of words appearing in the respective paragraphs. | to these shallow methods, other approaches employ deep multi-layer networks for the processing of sentences. | contrasting |
train_21240 | Although attractive from this perspective, the kernel-based approach comes with a high computational cost. | to prior work, our approach effectively learns low-dimensional representation of words and their roles, eliminating the need for heavy manual feature engineering. | contrasting |
train_21241 | The predicates in each sentence are also given during both training and testing. | we neither predict nor use the sense for each predicate. | contrasting |
train_21242 | A micro-averaged accuracy measure is used so as not to disproportionately weigh short recipes. | in order to allow comparison to mean Kendall's Tau, commonly used in works on order learning, we further report a macro-averaged $\mathrm{acc}_2$ by computing $\mathrm{acc}_2$ for each recipe separately, and taking the average of resulting accuracy levels. | contrasting |
train_21243 | Derivations generated for these documents are such that both translation model features (with or without the LM) and retrieval features agree on a path close to the SMT Viterbi translation. | other relevant documents require more non-standard lexical choices that are harder to achieve in a +LM search space, since the strong weight on the language model, plus a language model-driven pruning technique, strongly favor lexical choices that agree with the language model's concept of fluency. | contrasting |
train_21244 | For evaluation of metrics that operate at the system or document level, such as BLEU, inconsistency in individual human judgments can, to some degree, be overcome by aggregation of individual human assessments over the segments within a document. | for evaluation of segment-level metrics, there is no escaping the need to boost the consistency of human annotation of individual segments. | contrasting |
train_21245 | For a sample mean for which the variance is known, the required sample size can be computed for a specified standard error. | due to the large number of distinct translations we deal with, the variance in sample score distributions may change considerably from one translation to the next. | contrasting |
train_21246 | The reconstruction of the copy history of manuscript texts is largely similar to that of DNA, which is why phylogenetic approaches have been adopted (Robinson and O'Hara, 1996; Robinson et al., 1998; van Reenen et al., 1996; van Reenen et al., 2004; Spencer et al., 2004; Roos and Heikkilä, 2009; Roelli and Bachmann, 2010; Andrews and Macé, 2013). | the main goal of the philological work on ancient manuscripts is not the reconstruction of the copy history but compiling an edition of a historical text. | contrasting |
train_21247 | This may be seen as a computer-philological co-loan from bio-informatics. | it might correspond to a more Bédierian edition practice, where stemmatology is emphasized because it can point to the most important manuscript, which is however implicit and unlikely. | contrasting |
train_21248 | Bootstrapped classifiers iteratively generalize from a few seed examples or prototypes to other examples of target labels. | sparseness of language and limited supervision make the task difficult. | contrasting |
train_21249 | To train good word alignment models, we require access to a large parallel corpus. | collection of parallel corpora has mostly focused on a small number of widely-spoken languages. | contrasting |
train_21250 | They compare their automated reconstructions with the ones reconstructed by historical linguists and find that their model beats an edit-distance baseline. | their model has a requirement that the tree structure between the languages under study has to be known beforehand. | contrasting |
train_21251 | PLSA and LDA are the typical unsupervised topic models, that is, non-knowledgeable models. | biterm topic model (bTM) (Yan et al., 2013) leverages self-contained knowledge into semantic analysis. | contrasting |
train_21252 | vast amount of lexical knowledge about words and their relationships, denoted as LR-sets, available in online dictionaries or other resources can be exploited by this model to generate more coherent topics. | for external knowledge-based models, the incorporated knowledge is too general to be consistent with the short text in the semantic space. | contrasting |
train_21253 | Here, in both plots, it is clearly shown that, as the frequency of nouns in the corpus increases, our approach outperforms baselines for both Hindi and English WSD. | SemCor baseline accuracy decreases for those words which occur more than 8 times in the test corpus. | contrasting |
train_21254 | Our approach is language independent. | due to time and space constraints we have performed our experiments on only Hindi and English languages. | contrasting |
train_21255 | When faced with a new domain, one option is to try to leverage available unlabeled data. | rather than resorting to pure self-training approaches (self-labeling), we here resort to another source of information. | contrasting |
train_21256 | Dev and test sets: Our approach is basically parameter free. | we did experiment with different ways of extending Wiktionary and hence used an average over three English Twitter dev sections as development set (Ritter et al., 2011; Gimpel et al., 2011; Foster et al., 2011), all mapped and normalized following . | contrasting |
train_21257 | Our approach is similar to mining high-precision items. | previous approaches on this in NLP have mainly focused on well-defined classification tasks, such as PP attachment (Pantel and Lin, 2000; Kawahara and Kurohashi, 2005), or discourse connective disambiguation (Marcu and Echihabi, 2002). | contrasting |
train_21258 | It is thus verified that the density peaks clustering algorithm is able to handle MDS effectively. | this work is still preliminary. | contrasting |
train_21259 | Efforts have been made to port the existing semantic annotation system to other languages (Finnish and Russian) (Löfberg et al., 2005; Mudraya et al., 2006), so a prototype software framework could be used. | manually developing semantic lexical resources for new languages from scratch is a time-consuming task. | contrasting |
train_21260 | Recent developments in this area include Zhang and Rettinger's work (2014), in which they tested a toolkit for Wikipedia-based annotation (wikification) of multilingual texts. | in the work described here we employ a lexicographically-informed semantic classification scheme and we perform all-words annotation. | contrasting |
train_21261 | Compiled without professional editing, these bilingual word lists contain errors and inaccurate translations, and hence they introduced noise into the mapping process. | they provided wider lexical coverage of the languages involved and complemented the limited sizes of the high-quality dictionaries used in our experiment. | contrasting |
train_21262 | Our experiment demonstrates that, if appropriate high-quality bilingual lexicons are available, it is feasible to rapidly generate prototype systems with good lexical coverage using our automatic approach. | our experiment also shows that, in order to achieve a high precision, parallel/comparable corpus based disambiguation is needed for identifying precise translation equivalents, and a certain amount of manual cleaning and improvement of the automatically generated semantic lexicons is indispensable. | contrasting |
train_21263 | The most intuitive way to integrate the similarities between terms is averaging them (Equation 2). This similarity averages all the pairwise similarities between the terms $a_i$ and $b_j$. | we can expect a lot of the similarities $\phi(a_i, b_j)$ to be close to zero. | contrasting |
train_21264 | For example, the suffix ed is generally indicative of the past tense in English. | distributional similarity has also been shown to be an important cue for morphology (Yarowsky and Wicentowski, 2000; Schone and Jurafsky, 2001). | contrasting |
train_21265 | (Prabhakaran et al., 2012) achieves a commendable accuracy in detecting overt display of "power". | by our definitions, this is a lower-level attribute and is similar to authoritative behavior, which is a lower-level concept than Leadership or Status. | contrasting |
train_21266 | We assume the following ordering exists: authoritative behavior > motivational behavior > negative deference > positive deference in the opposite direction > closeness. | we do not assume such an ordering for the SC Leadership. | contrasting |
train_21267 | We achieved a very high recall (close to 1.0) for most indicators with these rules on test data. | in a few cases, the frequency of such indicators (such as politeness) was very low, rendering the set of regular expressions incomplete. | contrasting |
train_21268 | Then, the probability of predicting the word $w_o$ given the word $w_i$ is defined as: $p(w_o \mid w_i) = \frac{e^{o_{w_i}(w_o)}}{\sum_{w \in V} e^{o_{w_i}(w)}}$. This is referred to as the softmax objective. | for larger vocabularies it is inefficient to compute $o_{w_i}$, since this requires the computation of a $\vert V\vert \times d_w$ matrix multiplication. | contrasting |
train_21269 | It also shows that careful initialization of model parameters can bring further improvements. | we also find that words that are close to the centroid are not necessarily representative of what linguists consider to be prototypical. | contrasting |
train_21270 | The MPQA opinion annotated corpus (Wilson, 2007) is entirely span-based, and contains no eTarget annotations. | it provides an infrastructure for sentiment annotation that is not provided by other sentiment NLP corpora, and is much more varied in topic, genre, and publication source. | contrasting |
train_21271 | The target is the event Rushdie insulting the Prophet. | the assertion that the Imam is negative toward the insult event is within the scope of this article. | contrasting |
train_21272 | The parallel computation of a large number of small dense matrix operations (multiply, LU decomposition, triangular solve) is a perfect fit for implementation on a GPU, which can achieve vastly more throughput than modern CPUs can. | using the CPU's neighborhood structure on the GPU has a crippling bottleneck. | contrasting |
train_21273 | In initial experiments, we found that when this approach is implemented on a GPU it is even slower than the CPU-based implementation. | memory bandwidth is extremely high on modern GPUs. | contrasting |
train_21274 | We find that both techniques of enhancing the lexical coverage of the semantic parsers result in improved parsing performance, and that the improvements add up nicely. | improved parsing performance does not correspond to improved F1-score in answer retrieval when using the respective parser in a response-based learning framework. | contrasting |
train_21275 | The most straightforward strategy to perform model selection for the task of response-based learning for SMT is to rely on parsing evaluation scores that are standardly reported in the literature. | as we will show experimentally, if precision is taken as the percentage of correct answers out of instances for which a parse could be produced, recall as the percentage of total examples for which a correct answer could be found, and F1 score as their harmonic mean, the metrics are not appropriate for model selection in our case. | contrasting |
train_21276 | Developing a system that can automatically respond to a user's utterance has recently become a topic of research in natural language processing. | most works on the topic take into account only a single preceding utterance to generate a response. | contrasting |
train_21277 | Open domain relation extraction systems identify relation and argument phrases in a sentence without relying on any underlying schema. | current state-of-the-art relation extraction systems are available only for English because of their heavy reliance on linguistic tools such as part-of-speech taggers and dependency parsers. | contrasting |
train_21278 | Possible inconsistencies are resolved by adjudication, and models are induced assuming there is one single ground truth. | there exist linguistically hard cases where there is no clear answer (Zeman, 2010; Manning, 2011), and incorporating such disagreements into the training of a model has proven helpful for POS tagging (Plank et al., 2014a; Plank et al., 2014b). | contrasting |
train_21279 | For FCM, the size of T is $1.92 \times 10^7$, potentially yielding a high-variance estimator. | for LRFCM with 20-dimensional feature embeddings, the size of T is $1.28 \times 10^5$, significantly smaller with lower variance. | contrasting |
train_21280 | To the best of our knowledge, our approach is novel in using the Google Books Ngram corpus for the word context, Open Thesaurus for the synonyms, and real web frequencies for disambiguating synonym candidates. | Google Ngram has been previously used to find synonyms, for instance to expand user queries by including synonyms (Baker and Lamping, 2011). | contrasting |
train_21281 | As can be observed from the frequency band results and the complexity measure, LexSiS offers better synonyms for high-frequency and not for low-frequency words. | our method improves with low-frequency complex words. | contrasting |
train_21282 | These systems show an acceptable level of accuracy; they are easy to build and are highly computationally efficient, as the only operations required to assign a polarity label are word lookups and averaging. | the information about word polarities in a document is best exploited when using machine learning models to train a sentiment classifier. | contrasting |
train_21283 | as having the highest positive sentiment score. | an SVM model assigns higher scores to bigrams containing the negative words *problem*, *bad*, *worries*, to outweigh their negative impact. | contrasting |
train_21284 | Although a performance drop was expected due to the big genre differences, results suggest the presence of some corpus-independent features that capture cross-linguistic influence. | they also suggest that a large portion of the features helpful for NLI are genre-dependent. | contrasting |
train_21285 | Equations (4) and (5) show the role of the mix variables: if $p_{ij} = 0$, then the sense embedding $E(s_{ij})$ is completely determined by the neighbors of the sense (that is, it is equal to the weighted centroid). | if $p_{ij} = 1$, then the sense embedding becomes equal to the embedding of the lemma, $F(l_i)$. | contrasting |
train_21286 | Such ideas have been explored in the past as subcomponents of extractive summarizers (Schiffman et al., 2002; Hong and Nenkova, 2014) or as features derived from small datasets for sentence compression (Woodsend and Lapata, 2012). | in our work we rely on large corpora and exclusively focus on the task of acquiring input-independent indicators of importance. | contrasting |
train_21287 | Due to the large number of "false positives" and time constraints, we cannot impose on our Assyriologist informant the task of verifying all of the system-reported names for us at the moment. | the current evaluation result reveals that the systematic lemmatization on CDLI, as discussed in Section 2, follows an extremely conservative approach. | contrasting |
train_21288 | As a result, WordNet has enabled a wide variety of NLP techniques such as Word Sense Disambiguation (Agirre et al., 2014), information retrieval (Varelas et al., 2005), semantic similarity (Pedersen et al., 2004; Bär et al., 2013), and sentiment analysis (Baccianella et al., 2010). | semantic knowledge bases such as WordNet are expensive to produce; as a result, their scope and domain are often constrained by the resources available and may omit highly-specific concepts or lemmas, as well as new terminology that emerges after their construction. | contrasting |
train_21289 | Finally, we create a feature vector in the target language that is used as input for the text classifier. | due to the translation ambiguity of a word in the source language, it is important to carefully choose the translation probability for calculating the expected frequencies of the target words. | contrasting |
train_21290 | For e = restrict, e = restrain, and e = custody: $p(e \mid f, F_1)$ = 0.33, 0.10, 0.57 and $p(e \mid f, F_2)$ = 0.02, 0.00, 0.98 (Table 2: shows the translation probabilities for the source word f = , within document $F_1$ (military related, class is "foreign policy") and document $F_2$ (terror related, class is not "foreign policy")). | to most previous work, we focused on the word translation problem, rather than the domain-adaptation problem for cross-lingual text classification. | contrasting |
train_21291 | Indeed, some recent work suggests that drawing attention to framing may help mitigate framing effects (Baumer et al., 2015). | such reflection is no mean feat. | contrasting |
train_21292 | Here, different framings are used to support the same position on an issue. | framing involves an ensemble of rhetorical elements to create an "interpretive package" (Gamson and Modigliani, 1989) that functions by altering the relative salience or importance of different aspects of an issue (Chong and Druckman, 2007). | contrasting |
train_21293 | Single words such as "but," "all," or "not" could arguably be related to framing. | accounts from our pilot study participants made us less likely to believe that other single words, such as "an," "of," or "that," instantiated framing. | contrasting |
train_21294 | The features used here are informed by a combination of theoretical literature on framing, our own pilot studies, and prior work in computational linguistics. | we have little means of knowing a priori which of these features will be most important or even necessary. | contrasting |
train_21295 | Precision for all feature sets is around 34% (all statistically significantly better than the dummy and significantly indistinguishable from one another), while human average precision is 91.5%. | the three top-performing feature sets (All, Lexical, and Theoretical) all achieve recall around 70%, while average recall for the human annotators is only 49.3%. | contrasting |
train_21296 | Since these models first predicted the content (SVO triples) and then generated the sentences, the S,V,O accuracy captured the quality of the content generated by the models. | in our case the sequential LSTM directly outputs the sentence, so we extract the S,V,O from the dependency parse of the generated sentence. | contrasting |
train_21297 | BLEU is the metric that is seen more commonly in image description literature, but a more recent study (Elliott and Keller, 2014) has shown METEOR to be a better evaluation metric. | since both metrics have been shown to correlate well with human evaluation… (Table 1, SVO accuracy: binary SVO accuracy compared against any valid S,V,O triples in the ground truth descriptions; Model, S%, V%, O%: HVC (Thomason et al., 2014) 86.87, 38.66, 22.09; FGM (Thomason et al., 2014) 88, …) | contrasting |
train_21298 | This is likely due to the fact that previous work explicitly optimizes to identify the best subject, verb and object for a video, whereas the LSTM model is trained on objects and actions jointly in a sentence and needs to learn to interpret these in different contexts. | with regard to the generation metrics BLEU and METEOR, training based on the full sentence helps the LSTM model develop fluency and vocabulary similar to that seen in the training descriptions and allows it to outperform the template-based generation. | contrasting |
train_21299 | We also showed that exploiting image description data improves performance compared to relying only on video description data. | our approach falls short in better utilizing the temporal information in videos, which is a good direction for future work. | contrasting |
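
All 100 rows excerpted above carry the `contrasting` label, one of the 4 label classes declared in the header. Below is a minimal sketch of how a dataset shaped like this table could be loaded and filtered with the Hugging Face `datasets` library; the Hub repository id `username/scientific-sentence-pairs` is a placeholder assumption, not this dataset's actual name.

```python
# Minimal sketch: loading and inspecting a dataset shaped like the table above
# (columns: id, sentence1, sentence2, label). The Hub repository id below is a
# placeholder assumption, not the dataset's actual name.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("username/scientific-sentence-pairs", split="train")  # hypothetical repo id

# Inspect one row; each row has the four columns shown in the header.
row = ds[0]
print(row["id"], "|", row["label"])
print(row["sentence1"][:80], "...")

# The header declares 4 label classes; every row excerpted above is "contrasting".
print(Counter(ds["label"]))

# Keep only the contrasting pairs, as in this excerpt.
contrasting = ds.filter(lambda r: r["label"] == "contrasting")
print(f"{len(contrasting)} contrasting pairs")
```

The same `filter` call works for any of the other label classes once their names are known, e.g. for building a per-class evaluation split.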