Columns:
  id         string, length 7 to 12
  sentence1  string, length 6 to 1.27k
  sentence2  string, length 6 to 926
  label      string, 4 classes
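Each record below is one (id, sentence1, sentence2, label) tuple. A minimal sketch of reading such records, assuming the split is exported as JSON Lines with these four fields; the file name train.jsonl is a placeholder, not a path given by this listing:

```python
import json
from collections import Counter

def read_rows(path):
    """Yield one dict per record with keys: id, sentence1, sentence2, label."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

if __name__ == "__main__":
    label_counts = Counter()
    for row in read_rows("train.jsonl"):  # placeholder file name
        # Sanity-check that every record carries the four schema fields above.
        assert {"id", "sentence1", "sentence2", "label"} <= row.keys()
        label_counts[row["label"]] += 1
    print(label_counts)  # the label column has 4 distinct classes
```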
train_97500
To address this difficulty through EP (section 4), we will need the ability to approximate any probability distribution p that is given by a WFSA, by choosing a "simple" distribution from a family Q.
that bookkeeping can be handled with an expectation semiring (Eisner, 2002), or simply with backpointers.
neutral
train_97501
For each arc or final state a in A, we can define a feature function "How many times is a used when A accepts v?"
each message update is found by minimizing a certain KL-divergence (Minka, 2001a).
neutral
train_97502
According to BP, the belief b_V is the pointwise product of all "incoming" messages to V.
at a given step, we only have to compute the gradient with respect to the currently nonzero features (green nodes in Figure 2) and their immediate children (yellow nodes).
neutral
train_97503
Transcriptions obtained from a third language are not only noisy because of the imperfect G2P conversion, but often also lossy, in the sense of missing some phonetic information present in the source pronunciation.
the entries that have no supplemental transliterations are removed from the test sets, which results in 2,321 and 1,226 test entries.
neutral
train_97504
Since we may need to leverage information from other sources, e.g., phonemes of supplemental transliterations, each training instance can be composed of a source word, a target word, and a list of supplemental strings.
we train our generalized joint model on the graphemes of the source word, as well as on the graphemes of supplemental transliterations.
neutral
train_97505
For example, all animal pairs would be treated as compatibles, whereas 54% of them are actually incompatible.
compatibility is also central to recognizing entailment (and contradiction): Standard DSMs are of relatively little use in recognizing entailment as they treat antonymous, contradictory words such as dead and alive as highly related (Adel and Schütze, 2014; Mohammad et al., 2013), with catastrophic results for the inferences that can be drawn (antonyms are just the tip of the incompatibility iceberg: dog and cat are not antonyms, but one still contradicts the other).
neutral
train_97506
These features are relics of the Hearst (1992) pattern "y such as x".
we then add contextual-features (as described in §2.1), on top of the lexical features, and train classifiers analogously.
neutral
train_97507
For Skip-gram and CBOW, a 5-word window size is used to allow them to get the same amount of raw information; words appearing 5 times or fewer are also filtered out.
both full English Wikipedia and Simple English Wikipedia are used as training corpora with minimal preprocessing procedures: XML tags are removed and infoboxes are filtered out, thus yielding four models: Full English Wikipedia - CBOW (FW-CBOW), Full English Wikipedia - Skip-gram (FW-SG), Simple English Wikipedia - CBOW (SW-CBOW) and Simple English Wikipedia - Skip-gram (SW-SG).
neutral
train_97508
As pointed out by Agirre et al. (2009) and Levy & Goldberg (2014), relatedness may actually be measuring topical similarity and be better predicted by a bag-of-words model, and similarity may be measuring functional or syntactic similarity and be better predicted by a context-window model.
comparing FW-SG with SW-SG and SW-CBOW, there is almost no sign of performance gain from training using full Wikipedia instead of the much smaller Simple Wikipedia.
neutral
train_97509
We therefore develop a low-resource approach that relies on sourceside dependency parses only.
a schematic representation is shown in Figure 1.
neutral
train_97510
This work largely follows the methodology and experimental settings of (Mikolov et al., 2013b), while we normalize the embedding and use an orthogonal transform to conduct bilingual translation.
we seek a simple approximation in this work.
neutral
train_97511
The main purpose of preordering is to achieve better translation performance under fast decoding conditions.
additionally, we report results on the English-to-Hindi WMT 2014 shared task (Bojar et al., 2014a) using the data provided.
neutral
train_97512
The new mechanism beyond Huang and Sagae (2010) is the non-trivial dynamic programming treatment of unary actions (un x and st), which is not found in dependency parsing.
experiments on both English and Chinese treebanks show that our DP parser outperforms almost all other parsers except that of Carreras et al.
neutral
train_97513
To alleviate the propagation of errors from part-of-speech tagging, we also extend the parser to take a tag lattice instead of a fixed tag sequence.
we first present an odd-even shift-reduce constituency parser which always finishes in the same number of steps, eliminating the complicated asynchronicity issue in previous work (Zhu et al., 2013; Wang and Xue, 2014), and then develop dynamic programming on top of that.
neutral
train_97514
Our implementation of the reference-based approach ("ref" in §4) uses SVR to estimate a model to predict human scores from various measures of the similarity between the response and information from the scoring guidelines provided to the human scorers.
also, the models with response-based features outperform those with just reference-based features, as observed previously by Heilman and Madnani (2013).
neutral
train_97515
We treat this model as a strong baseline to which we will add reference-based features.
we use the following information from §2: (a) sentences expressing key concepts that should be present in correct responses, and (b) small sets of exemplar responses for each score level.
neutral
train_97516
Turkers are presented with a timeline consisting of five consecutive days' article summaries and four variations of the accompanying comment summary. [Table 4 caption: ROUGE-2 (R-2) and ROUGE-SU4 (R-SU4) scores (multiplied by 100) for different timeline generation approaches on four event datasets.]
the full objective function consists of the three parts discussed above. Furthermore, using the following notation, we can show a closed-form solution to Equation 4. [Flattened feature table. Basic Features: num of words; absolute/relative position; overlaps with headline; avg/sum TF-IDF scores; num of NEs; TF/TF-IDF similarity with comments; JS/KL divergence (div) with article; JS/KL div with comments. Social Features: avg/sum frequency of words appearing in comments; avg/sum frequency of dependency relations appearing in comments; contains URL; user rating (pos/neg). Sentiment Features: num/proportion of positive/negative/neutral words (MPQA (Wilson et al., 2005), General Inquirer (Stone et al., 1966)); num/proportion of sentiment words.] Now we present an optimization framework for timeline generation.
neutral
train_97517
Let's slide it up, you mind?
special thanks to Bharat Ambati, Lea Frermann, and Daniel Renshaw for their help with system evaluation.
neutral
train_97518
We observe that SceneSum summaries are overall more informative compared to those created by the baselines.
examples of the features we used for the classification task include the barycenter of a character (i.e., the sum of its distances to all other characters), PageRank (Page et al., 1999), an eigenvector-based centrality measure, absolute/relative interaction weight (the sum of all interactions a character is involved in, divided by the sum of all interactions in the network), absolute/relative number of sentences uttered by a character, number of times a character is described by other characters (e.g., He is a monster or She is nice), number of times a character talks about other characters, and type-token ratio of sentences uttered by the character (i.e., rate of unique words in a character's speech).
neutral
train_97519
We interpret the term scene in the screenplay sense.
AMT participants are able to answer more questions regarding the story of the movie when reading SceneSum summaries.
neutral
train_97520
In both cases, the graphs are constructed based on surface text; it is not a representation of propositional semantics like AMR.
given an initial step size η, the update for β on iteration t is: [equation omitted]. Generation from AMR-like representations has received some attention, e.g., by Langkilde and Knight (1998), who described a statistical method.
neutral
train_97521
Though we know of work in progress driven by the goal of machine translation using AMR, there is currently no system available.
our work operates on semantic graphs, taking advantage of the recently developed AMR Bank.
neutral
train_97522
In total, we have 96 summaries (for more details, see B&L).
an exception is the unsupervised model of Guinaudeau and Strube (2013) (G&S), which converts the document into a graph of sentences, and evaluates the text coherence by computing the average out-degree over the entire graph.
neutral
train_97523
For a given pair of entities in the text, the chance is rather low to find instances in the knowledge bases where the two arguments perfectly match the pair of entities, because entities in the source document might appear in aliases or abbreviations.
we then use the frequencies of these distribution patterns over the entire document as additional features into the entity-based model.
neutral
train_97524
In a well-written document, sentences are organized and presented in a logical and coherent form, which makes the text fluent and easily understood.
moreover, for sentence ordering, world knowledge is shown to be especially useful on short documents.
neutral
train_97525
To the best of our knowledge, the only exception is the unsupervised method proposed by G&S, which transforms the entity grid into a sentence graph and measures text coherence by computing the average out-degree of the graph.
we incorporate world knowledge into two existing frameworks: (1) the unsupervised graph-based model (G&S), and (2) the supervised entity-grid model (B&L).
neutral
train_97526
Somewhat unfortunately, simplifying assumptions have to be made when a sentence containing multiple noncoreferent event mentions is encountered.
feature 1 encodes whether c_t and e_t, the trigger words of c and e, satisfy any of the following three conditions: 1. c_t and e_t are lexically identical; 2. c_t and e_t contain the same basic verb (BV) and their verb structures are compatible; 3. the similarity between c_t and e_t is greater than a certain threshold (which we set to 0.8 in our experiments).
neutral
train_97527
To start the induction process, we initialize all parameters with uniform values.
further make the simplifying assumption that event coreference chains are all and only those coreference chains that involve at least one verb.
neutral
train_97528
Since logical connections between relations are modeled explicitly, such approaches are generally hard to scale.
we present two techniques for injecting logical background knowledge, pre-factorization inference ( §3.1) and joint optimization ( §3.2), and demonstrate in subsequent sections that they generalize better than direct logical inference, even if such inference is performed on the predictions of the matrix factorization model.
neutral
train_97529
Towell and Shavlik (1994) introduce Knowledge-Based Artificial Neural Networks whose topology is isomorphic to a knowledge base of facts and inference formulae.
joint optimization leads to low-rank logic embeddings that outperform all other methods in the 0 to 30% Freebase training data interval.
neutral
train_97530
We plan to combine collaborator and coherence methods into a unified approach, and to use edge labels in knowledge networks for context comparison (note that the last of these is quite challenging due to normalization, polysemy, and semantic distance issues).
if an AMR parse includes no time information, we use the document creation time as an additional collaborator for the mention in question.
neutral
train_97531
Highly weighted term-match features are then used to find a decoding path that gives highest score to the document that is optimal with respect to both relevance and translational adequacy.
the sentences do not agree on a common set of documents.
neutral
train_97532
Other CLIR approaches such as probabilistic structured queries (Darwish and Oard, 2003;Ture et al., 2012b) try to mitigate this early disambiguation by keeping enumerated translation alternatives at retrieval time.
most of a translation's structural information is lost during retrieval, and lexical choices may not be optimal for the retrieval task.
neutral
train_97533
Evaluation of segment-level machine translation metrics is currently hampered by: (1) low inter-annotator agreement levels in human assessments; (2) lack of an effective mechanism for evaluation of translations of equal quality; and (3) lack of methods of significance testing improvements over a baseline.
on the one hand, if we specify a standard error that's lower than is required, and subsequently collect more repeat assessments than is needed, we would be wasting resources that could, for example, be targeted at the annotation of additional translation segments.
neutral
train_97534
Scores are sampled according to annotation time to simulate a realistic setting.
for evaluation of segment-level metrics, there is no escaping the need to boost the consistency of human annotation of individual segments.
neutral
train_97535
With the increasing size of parallel corpora it has become possible to achieve very high quality translation.
we show using the other languages as additional pivots leads to the construction of better phrase tables and better translation results.
neutral
train_97536
Let your eyes run rapidly over several lines of print at a time.
in our experiment, we use 343 triggers in total, and for each fact there are about 38 triggers on average.
neutral
train_97537
Furthermore, we carefully investigated the TAC-KBP SF 2012 ground truth corpus and find that 94.36% of the biographical facts are mentioned in a sentence containing indicative fact-specific triggers.
we compare with two successful approaches: (1) the combination of distant supervision and rules (e.g., Grishman, 2013); (2) patterns based on dependency paths (e.g., Yu et al., 2013).
neutral
train_97538
However, bacterial asexual reproduction is generally more similar to manuscript copying than mammalian sexual reproduction.
both tasks could thus profit from each other provided they are understood as separate and developed each in its own right.
neutral
train_97539
This distributed representation can inform an inductive bias to generalize in a bootstrapping system.
labeling entities solely based on similarity scores resulted in lower performance.
neutral
train_97540
Both our methods outperform the baseline and the interpolation approach.
this combination process is referred to as triangulation (see §5).
neutral
train_97541
We argue that our joint training procedure can be seen as optimizing the posterior likelihood of the three models.
we argue that our joint training procedure can be seen as optimizing the posterior likelihood of the three models.
neutral
train_97542
Based on the above analysis, we first identify key-phrases from short text, which can be deemed as self-contained knowledge, then propose phrase topic model (PTM), which constrains same topic for terms in key-phrase and sample topics for non-phrase terms from mixture of keyphrase's topic.
the hidden variables consist of z_{m,n} and δ_{m,s}.
neutral
train_97543
vast amount of lexical knowledge about words and their relationships, denoted as LR-sets, available in online dictionaries or other resources can be exploited by this model to generate more coherent topics.
BTM learns topics over short texts by modeling the generation of biterms in the whole corpus.
neutral
train_97544
As seen in Figure 1, the word embedding of the word table is closer to the centroid C_2 than to the centroids C_1 and C_3.
this is also justified when we consider only synset members, gloss members, hypernym/hyponym synset members, hypernym/hyponym gloss members which give a score close to the best obtained score.
neutral
train_97545
The other reason is to bring down the impact of topic drift which may have occurred because of polysemous synset members.
due to time and space constraints we have performed our experiments on only Hindi and English languages.
neutral
train_97546
This data set consists of 10017 sentences and nine types of relations between nominals (Hendrickx et al., 2010).
recursive Neural Network (RNN) has proven to be highly successful in capturing semantic compositionality in text and has improved the results of several Natural Language Processing tasks (Socher et al., 2012; Socher et al., 2013).
neutral
train_97547
We use dictionaries extended with Brown clusters to collect labeled training data from unlabeled data, saving additional annotation work.
for evaluation, we use three domains: tweets, spoken data and queries.
neutral
train_97548
For evaluation, we use three domains: tweets, spoken data and queries.
we mine for sequences of unambiguous tokens in a structured prediction task.
neutral
train_97549
It is thus verified that the density peaks clustering algorithm is able to handle MDS effectively.
we further revise the clustering algorithm to address the summary length constraint.
neutral
train_97550
In this work, we measure diversity in the ranking model.
we define the following function to calculate the representativeness score s_REP(i) for each sentence s_i: [equation omitted], where sim_{ij} denotes the similarity value between the i-th and j-th sentence, and K denotes the number of sentences in the datasets.
neutral
train_97551
In addition, we used large English-Italian and English-Portuguese bilingual lexicons available from FreeLang site (http://www.freelang.net/dictionary) as well as an English-Chinese bilingual word list available from LDC (Linguistic Data Consortium).
in this paper, we report on an experiment to develop prototype semantic annotation tools for Italian, Chinese and Brazilian Portuguese based on an existing English annotation tool.
neutral
train_97552
In this paper, we have investigated the feasibility of rapidly bootstrapping semantic annotation tools for new target languages by mapping an existing semantic lexicon and software architecture.
bank as river bank vs. money bank), translation errors and missing of the translation words in the English semantic lexicons.
neutral
train_97553
We can expect that these two texts are related but the similarity value does not reflect that.
then we identified the concepts that appear both in the document ESA representation and in the label ESA representation.
neutral
train_97554
Note that this similarity is not symmetric.
in addition to the original documents, we also split each document into 2, 4, 8, 16 equal length parts, computed the ESA representation of each, and then the intersection with the ESA representation of the label.
neutral
train_97555
We can think about S_A(x, y) as leveraging many-to-many term mapping, while S_M(x, y) uses only one-to-many term mapping.
we propose to align different indices of x and y together to increase the similarity value.
neutral
train_97556
In September 2014, Twitter users unequivocally reacted to the Ray Rice assault scandal by unleashing personal stories of domestic abuse via the hashtags #WhyIStayed or #WhyILeft.
the vast majority of cases can be classified accurately with ngrams alone.
neutral
train_97557
To the best of our knowledge, no previous word-embedding techniques have attempted to incorporate morphological tags into embeddings in a supervised fashion.
we conducted experiments on the TIGER corpus of newspaper German (Brants et al., 2004).
neutral
train_97558
Our framework supports determination of various important Social Constructs such as Leadership, Status, Group Cohesion and Sub-Group Formation.
*" to detect LIs such as "Command".
neutral
train_97559
The Structured Skip-gram and CWindow models can process 34.64k and 124.43k words per second, respectively.
when defining a window size of 5, the actual window size used for each sample is a random value between 1 and 5.
neutral
train_97560
The left-most point (α = 0) for each figure corresponds to simple regression.
the knowledge on numerical attributes is also very useful on many other occasions.
neutral
train_97561
Sometimes sellers are encouraged to find similar products to those they sell and adopt this category to their products.
we conducted an empirical evaluation on 445,408 product titles and used a rich product taxonomy of 319 categories organized into 6 levels.
neutral
train_97562
Such process will both alleviate human labor and further improve product categorization consistency in e-Commerce websites.
this mechanism leads to two main problems: (1) it takes a lot of time for a merchant to categorize items and (2) such taggings can be inconsistent since different sellers might categorize the same product differently.
neutral
train_97563
The most straightforward strategy to perform model selection for the task of response-based learning for SMT is to rely on parsing evaluation scores that are standardly reported in the literature.
statistical significance is measured using an Approximate Randomization test (Noreen, 1989;Riezler and Maxwell, 2005).
neutral
train_97564
Indeed, the response generated by applying SMT to the most recent stimulus "What?"
grammaticality of the output is handled by the language model, and the language model is constructed upon the target language only, which in our case corresponds to the target utterances that remain untouched.
neutral
train_97565
If there are multiple phrase-pairs in P that correspond to the same target phrase phr t , we select the shortest source phrase (phr s ).
followed by Hindi and Russian (64.0%).
neutral
train_97566
(2014a) propose IAA-weighted costsensitive learning for POS tagging.
we use the doubly-annotated data to regularize our model, hopefully preventing overfitting to annotators' biases.
neutral
train_97567
This might be due to: (a) our use of @-mention features; (b) l_1 regularisation, which is essential to preventing overfitting for large feature sets; or (c) our use of l_2 normalisation of rows in the design matrix, which we found reduced errors by about 20% on GEOTEXT, in keeping with results from text categorisation (Lee, 1995).
there is no clear consensus on whether text-or network-based methods are empirically superior at the user geolocation task.
neutral
train_97568
Jurgens (2013) defined an undirected network from interactions among Twitter users based on @-mentions in their tweets, a mechanism typically used for conversations between friends.
we thank the anonymous reviewers for their insightful comments and valuable suggestions.
neutral
train_97569
Section 5 presents experimental setup and results.
it comes up with a new scheme TF-KLD-KNN to learn the discriminative weights of words and phrases specific to paraphrase task, so that a weighted sum of embeddings can represent sentences more effectively.
neutral
train_97570
The key idea in FCM is that it gives similar words (i.e.
for each instance (y, x) we compute the gradient of the log-likelihood, log P(y | x; T, W_f).
neutral
train_97571
Instead of having a target word with different senses, we included the target word in each sense, and we kept a list of unique senses, including for each word its frequency in the Web using a large search engine index.
hence, we use it here as the state-of-the-art in our evaluation.
neutral
train_97572
See two possible contexts for noche and fortuna in the examples below: "era una noche oscura de" ('it was a dark night of'); "de probar fortuna en el" ('to try fortune in the'). Third, we define the complexity of a word using the relative frequency of the synonyms within the same sense in the List of Senses.
our method improves upon LexSiS and the baseline for all the measures.
neutral
train_97573
We also compare our results to those reported by Berg-Kirkpatrick et al.
in the process, the model learns to cluster words into soft equivalence classes (words that have similar distributions).
neutral
train_97574
Regarding Twitter sentiment analysis, the top performing system from Semeval-2013 Twittter Sentiment Analysis task (Nakov et al., 2013) follows this recipe by training an SVM on various surface form, sentiment and semantic features.
with more than 40 systems participating in the Semeval-2014 challenge, we note that the majority of systems perform well only on a few test sets at once while failing on the others.
neutral
train_97575
While the idea to model statistical correlations between the words and tweet labels using PMI or any other metric is rather intuitive, we believe there is a more effective way to exploit noisy labels for estimating the word-sentiment association scores.
(Tang et al., 2014) showed that learning sentiment-specific word embeddings and using them as features can boost the accuracy of existing sentiment classifiers.
neutral
train_97576
The influence of this head-direction parameter on English acquisition has been previously investigated (Flynn, 1989).
frequencies of 400 English function words are extracted as features.
neutral
train_97577
In order to align the CAREFUL SCOTUS and ORIGINAL OYEZ transcripts, we use a dynamic programming algorithm for sequence alignment with matching scores as given in [table omitted]. The ANNOTATED OYEZ training set is a very small dataset, and other work has shown that Switchboard (SWBD) is useful for cross-domain training for SCOTUS (Zayats et al., 2014).
in this work, we use a simple self-training approach.
neutral
train_97578
The authors also thank Sangyun Hahn for his contribution in the two-stage model.
those transcripts are identical to the original OYEZ transcripts, but in addition contain disfluency annotation derived from CAREFUL SCOTUS.
neutral
train_97579
This time for the word "state": state : e→t.
for evaluation, we follow Zettlemoyer and Collins (2005): Recall = (# of correctly parsed questions) / (# of questions).
neutral
train_97580
Several works in literature (Zettlemoyer and Collins, 2005;Zettlemoyer and Collins, 2007;Wong and Mooney, 2007;Kwiatkowski et al., 2013) employ some primitive type hierarchies and parse with typed lambda calculus.
in its outermost lambda abstraction, variable P needs to be grounded on-the-fly before we push the expression onto the stack.
neutral
train_97581
Semantic parsing has made significant progress, but most current semantic parsers are extremely slow (CKY-based) and rather primitive in representation.
argmax is a polymorphic function, and to assign a correct type for it we have to introduce type variables: where type variable 'a is a place-holder for "any type".
neutral
train_97582
We next show how to use the template kernels within a reranker.
each slot is associated with a set of properties.
neutral
train_97583
A key observation to make is that the v generated by the PA algorithm will depend on two parameters.
to previous works on parsing with kernels (Collins and Duffy, 2002), in which the kernels are defined over trees and count the number of shared subtrees, our focus is on feature combinations.
neutral
train_97584
Table 2 shows the result of this search, and again the result is very useful.
these two ideas are stated as an optimization problem where the first becomes the objective and the second a constraint.
neutral
train_97585
It uses two ideas: first, that vectors for polysemous words can be decomposed into a convex combination of sense vectors; secondly, that the vector for a sense is kept similar to those of its neighbors in the network.
a very large number of words are monosemous, and the procedure will leave the embeddings of these words unchanged.
neutral
train_97586
Characterizing verbs on the personal vs. nonpersonal dimension indeed turned out to be beneficial for explaining domain-level importance of verbs in world news: personal narratives are not considered important in this domain, and verbs that tended to get excluded from summaries also tended to appear more frequently in personal blog entries.
the goal is to collect evidence of verb importance globally, without regard to a particular input or its context.
neutral
train_97587
The first rule indicates that a person is acting as an intermediary in the transaction.
it aims to provide an open access to electronic documentation of ancient cuneiform, consisting of texts, images, transliterations and glossaries of 3500 years of human history.
neutral
train_97588
The classifier with Web Context Features achieved an F score > 70% using only 10 training texts, and approached its best performance with just 100 training texts.
to our knowledge, classifying medication mentions with respect to administration use categories has not yet been studied.
neutral
train_97589
The second group of approaches performs taxonomy induction to learn hypernymy relationships between words (Moro and Navigli, 2012; Meyer and Gurevych, 2012).
a new synset is created for that lemma and a hypernym relation is added to the appropriate WordNet synset.
neutral
train_97590
2013, ADW, which first represents a given linguistic item (such as a word or a concept) using random walks over the WordNet semantic network, where random walks are initialized from the synsets associated with that item.
structure-based approaches are limited only to the concepts appearing in Wikipedia article titles, which almost always correspond to noun concepts.
neutral
train_97591
The corpora NEWS contains news articles in English and [...]. [Table caption: shows the break-even point (F1-score) of the proposed method Co and three baselines for each pair of corpora.]
text classification hugely relies on manually annotated training data in one language.
neutral
train_97592
Others argued that al-Assad's actions were a threat to regional security, also arguing for military action.
we excluded sources outside the US (e.g., BBC), news aggregators (e.g., Yahoo News, Google News), blogs (Huffington Post), and sites without a dedicated politics feed (e.g., USA Today, weather.com).
neutral
train_97593
Framing is often instantiated by specific "keywords, stock phrases," (Entman, 1993, p. 52) or "catchphrases" (Gamson and Modigliani, 1989, p. 3).
for example, "depictions," "visual images," and figurative language such as metahpor (Gamson and Modigliani, 1989) often invoke frames.
neutral
train_97594
The results above suggest that, when included, such features also emerge as important for identifying the language of framing.
bias involves a clear, often intentional, preference shown in writing for one position or opinion on an issue.
neutral
train_97595
We did experiment with different post-hoc decision thresholds to make the classifier less aggressive.
moreover, this precision-recall trade off may have different ramifications in different applications.
neutral
train_97596
Imagery & Context: imagery rating of the word and average imagery rating of its context (Paivio et al., 1968; van der Veur, 1975).
the words and phrases the classifier highlights should align with that population's perception of framing.
neutral
train_97597
As a next step, we conducted another analysis on the distribution of "extreme cases", i.e.
finally, the average homogeneity scores are significantly (p < .001) higher in persuasive sentences in all datasets except CORPS, where the scores of non-persuasive sentences are significantly higher (p < .01) than persuasive ones.
neutral
train_97598
Another known problem with RNNs is that it can be difficult to train them to learn long-range dependencies (Hochreiter et al., 2001).
the dataset comes with several human generated descriptions in a number of languages; we use the roughly 40 available English descriptions per video.
neutral
train_97599
Our approach follows the general paradigm of SMT with two important differences.
the value of the term is maximum when the selected objects are ranked first.
neutral