Columns:
  id          string (length 7–12)
  sentence1   string (length 6–1.27k)
  sentence2   string (length 6–926)
  label       string (4 classes)
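The schema above fixes a simple four-field record for every row in the listing. As a minimal, illustrative sketch (not part of the original dump), the Python snippet below shows one way such rows could be represented and filtered in code; the `Example` class and the hard-coded sample entry are assumptions for illustration, with the field values copied from the train_14000 row listed below. A real loader would of course read all rows from the underlying dataset rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class Example:
    """One row of the dump: an id, a sentence pair, and a discourse label."""
    id: str         # e.g. "train_14000" (7-12 characters)
    sentence1: str  # first sentence (6 to ~1.27k characters)
    sentence2: str  # second sentence (6 to 926 characters)
    label: str      # one of the 4 label classes, e.g. "contrasting"

# Sample row copied verbatim from the first entry in the listing below.
rows = [
    Example(
        id="train_14000",
        sentence1="Several formulae to compute prior polarities starting from "
                  "posterior polarities scores have been used in the literature.",
        sentence2="their performance varies significantly depending on the "
                  "adopted variant.",
        label="contrasting",
    ),
]

# Keep only the sentence pairs labelled as contrasting.
contrasting = [r for r in rows if r.label == "contrasting"]
print(len(contrasting), contrasting[0].id)
```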
train_14000
Several formulae to compute prior polarities starting from posterior polarities scores have been used in the literature.
their performance varies significantly depending on the adopted variant.
contrasting
train_14001
That works because typed queries are often fairly precise, and tabular responses are easily skimmed.
spoken queries, and in particular open-domain spoken queries for unrestricted spoken content, pose new challenges that call for new thinking about interaction design.
contrasting
train_14002
The research on automatic multi-document summarization (MDS) (Goldstein et al., 2000) has helped a lot when we generate a description for a specific event, for example, given a large collection of texts and images related to a specified news event (e.g., the East Japan earthquake).
it traditionally exhibits in a very simple style like a "0-dimensional" point.
contrasting
train_14003
Feng and Lapata (2010) use visual words to describe visual features and then propose a probabilistic model based on the assumption that images and their co-occurring textual data are generated by mixtures of latent topics.
to the best of our knowledge, no existing research manages to generate a 2-dimensional story map automatically and integrate images and texts into a unified framework at the same time.
contrasting
train_14004
The way to reconstruct a more effective layout of the story map requires further study and provides a good research direction in the future.
in this paper we order the sentences/images in each story node according to their authority scores.
contrasting
train_14005
The events with short life-cycles prefer a smaller value of σ to dominate the influence from neighbors, as well as the intramodal bias.
long-living events prefer lager σ and more inter-modal bias to get information replenishment from different dates and modality.
contrasting
train_14006
We also tried generating the meta features from the training data only, shown as TrainData in Table 9.
the results show that the parsers performed worse than the baselines.
contrasting
train_14007
(2008) and a condensed feature representation.
our approach is much simpler than theirs and we believe that our meta parser can be further improved by combining their methods.
contrasting
train_14008
Eq. 2 is reformulated so that the notion of disruption can be embedded in a term that is defined as a probability.
arbitrary scores which do not correspond to probabilities can be used instead as the search for the best path in the graph of possible segmentations makes no use of probability theory.
contrasting
train_14009
Moreover, smooth topic shifts can be found, in particular at the beginning of each program with different reports dedicated to the headline.
transcripts significantly differ from written texts: no punctuation signs or capital letters; no sentence structure but rather utterances which are only loosely syntactically motivated; presence of transcription errors which may imply an accentuated lack of word repetitions.
contrasting
train_14010
In image processing, researchers have studied the problem of brand identification from image using histogram comparison (Pelisson et al., 2003).
to the best of our knowledge, even though textual data is vastly available, the problems of automatic brand identification from raw text and computational branding analytics, are new.
contrasting
train_14011
To solve this problem, graph methods seem to be a good solution, because they are simple, generalizable, and are often used to model such complex dependency structures (Cohen, 2012).
combining the sparse modeling and spectral graphical modeling approaches in a principled way is challenging.
contrasting
train_14012
Aligned with previous study in marketing science (Moon and Quelch, 2006), an informative set of features related to Starbucks store decorations showed up in our model: "store", "restroom", "public", "bathroom", and "spacious".
these features stopped showing up on the list for Dunkin' Donuts.
contrasting
train_14013
Also, it is possible to consider the higher order n-gram features for better exploratory data analysis.
since the focus of this paper is a proof of concept for Laplacian structured sparsity models and computational branding analytics, we have not yet explored various multiview representations to augment our model.
contrasting
train_14014
Measures such as the Minnesota Multiphasic Personality Inventory (MMPI) are based on norm-referenced self-report, and therefore depend on patients' willingness and ability to report symptoms.
some individuals are motivated to underreport symptoms to avoid negative consequences (e.g.
contrasting
train_14015
We also explored including essays' average sentence length and total wordcount, as an initial proxy for language complexity, which often figures into psychological assessments.
results adding these features to LIWC did not differ significantly from LIWC alone, and for brevity we do not report them.
contrasting
train_14016
To implement the extraction step, we use Z&N's and K&Z's observation: ZPs can only occur before a VP node in a syntactic parse tree.
according to K&Z, ZPs do not need to be extracted from every VP: if a VP node occurs in a coordinate structure or is modified by an adverbial node, then only its parent VP node needs to be considered.
contrasting
train_14017
Like an overt pronoun, a ZP whose closest overt antecedent is far away from it is harder to resolve than one that has a nearby overt antecedent.
a corpus study of our training data reveals that only 55.2% of the AZPs appear in the same sentence as their closest overt antecedent, and 22.7% of the AZPs appear two or more sentences away from their closest overt antecedent.
contrasting
train_14018
The smaller vocabulary size allows us to efficiently model larger context, so in addition to the 4-gram LM, we also train a 7-gram LM based on word classes.
to an LM of the same size trained on word identities, the increase in computational resources needed for translation is negligible for the 7-gram word class LM (wcLM).
contrasting
train_14019
The strongest degradation can be seen when replacing the TM, while replacing the HRM only leads to a small drop in performance.
when the word class models are added as additional features to the baseline, we observe improvements.
contrasting
train_14020
In most of these operations, the examples in a minibatch can be processed in parallel.
in the sparse-dense products used when updating the parameters D and D , we found it was best to divide the vocabulary into blocks (16 per thread) and to process the blocks in parallel.
contrasting
train_14021
We assumed above that the words are generated independently from the grammatical relations.
we are likely to ignore valuable information in doing so.
contrasting
train_14022
This constrains the utility of many traditional tree kernels in two ways: i) two sentences that are syntactically identical, but have no semantic similarity can receive a high matching score (see Table 1, top) while ii) two sentences with only local syntactic overlap, but high semantic similarity can receive low scores (see Table 1, bottom).
distributional vector representations of words have been successful in capturing finegrained semantics, but lack syntactic knowledge.
contrasting
train_14023
Lexical approaches using pairwise semantic similarity of SENNA embeddings (DSM), as well as Wordnet Affective Database-based (WNA) labels perform poorly (Carrillo de Albornoz et al., 2010), showing the importance of syntax for this particular problem.
a syntactic tree kernel (SSTK) that ignores distributional semantic similarity between words, fails as expected.
contrasting
train_14024
In addition, WSD systems are not suitable for newly created words, new senses of existing words, or domainspecific words.
WSI systems can learn new senses of words directly from texts because these programs do not rely on a predefined set of senses.
contrasting
train_14025
When a system has higher V-Measure and paired F-Score on nouns than another system, it achieves a higher supervised recall on nouns too.
this behavior is not observed on verbs.
contrasting
train_14026
First, most prior work aims to monitor a specific illness, e.g., influenza or food-poisoning by paying attention to a relatively small set of keywords that are directly relevant to the corresponding sickness.
we examine all words people use in online reviews, and draw insights on correlating terms and concepts that may not seem immediately relevant to the hygiene status of restaurants, but nonetheless are predictive of the outcome of the inspections.
contrasting
train_14027
2012, which transformed words from different languages to WordNet synset identifiers as interlingual sense-based representations.
multilingual WordNet resources are not always available for different language pairs.
contrasting
train_14028
Though MT can greatly increase the test accuracies compared to the other four methods, TB, CL-Dict, CLD-LSA, and CL-SCL, the benefit is obtained at the cost of whole document translations.
our proposed approach does not require whole document translations, but relies on the same simple word-pair translations used in CL-Dict.
contrasting
train_14029
e_7 is a satellite (child) of the nucleus (parent) e_8.
we cannot judge whether we have to drop e_9 or e_10 because the parent-child relationships are not explicitly defined between e_8 and e_9, or between e_8 and e_10.
contrasting
train_14030
He may also issue a query like "travel in Edinburgh" to search relevant questions.
both the browsing and the searching give the user a list of relevant contents (e.g., questions shown in Table 1), not the direct knowledge.
contrasting
train_14031
In this paper, given a root topic, subtopics and lower-level topics are extracted from UGC, which form a hierarchical structure to organize corresponding UGC.
in (Zhu et al., 2013) more external sources are utilized to identify subtopics.
contrasting
train_14032
Besides, (Singh, 2012b) proposed an entity-based translation language model and demonstrated that it outperformed the classical translation language model in question retrieval.
to the best of our knowledge, no previous study leverages entities to organize UGC in social media.
contrasting
train_14033
Note that Set EC only covers a small set of real entities and clustering on Set EC is partial clustering.
it leverages Freebase labels and avoids manual labeling, which is time-consuming.
contrasting
train_14034
We can find that Stanford NER and FIGER get a relatively high precision in extracting entities.
their recalls are very low and only about 15% of entities are recognized.
contrasting
train_14035
The reason is that the CETbased program clusters similar results in the same group, and if the user finds one answer she can easily get more answers.
the list-based program returns a list of questions, and users need to find answers question-by-question.
contrasting
train_14036
Intuitively, questions sharing similar topics should be ranked similarly.
traditional question retrieval models (Cao et al., 2010) such as QLLM and VSM do not capture key semantics and give more weights for entity terms.
contrasting
train_14037
The number of word types in WEBQUESTIONS is larger than in datasets such as ATIS and GeoQuery (Table 3), making lexical mapping much more challenging.
in terms of structural complexity WEBQUESTIONS is simpler and many questions contain a unary, a binary and an entity.
contrasting
train_14038
(Kate and Mooney, 2006; Wong and Mooney, 2007; Muresan, 2011; Kwiatkowski et al., 2010, 2011, 2012; Jones et al., 2012).
these techniques require training data with hand-labeled domain-specific logical expressions.
contrasting
train_14039
The model learns to correctly produce complex forms that join multiple relations.
there are a number of systematic error cases, grouped into four categories as seen in Figure 9.
contrasting
train_14040
The cohesion model built on these noisy super target lexical chains may select incorrect words rather than the proper lexical chain words.
if we set the threshold too large (e.g., 0.3 or 0.4), we may take the risk of not selecting the appropriate chain word translations into the super target lexical chains.
contrasting
train_14041
This suggests that stress has an effect on phoneme alteration, something we discuss in more detail in Section 5.
while providing a large gain in the p2p condition, pronunciation modeling gives small or negative effects elsewhere.
contrasting
train_14042
This idea allows the words to be represented by vectors of statistics collected from a sufficiently large corpus of text; each element of the vector reflects how many times a word co-occurs in the same context with another word of the vocabulary.
due to the generative power of natural language, which is able to produce infinite new structures from a finite set of resources (words), no text corpus, regardless of its size, can provide reliable distributional representations for anything longer than single words or perhaps very short phrases consisting of two words; in other words, this technique cannot scale up to the phrase or sentence level.
contrasting
train_14043
So in a sense the output element embraces both input elements, resembling a union of the input features.
the elementwise multiplication of two vectors can be seen as the intersection of their features: a zero element in one of the input vectors will eliminate the corresponding feature in the output, no matter how high the other input component was.
contrasting
train_14044
An ambiguous vector for 'run' will have non-zero values for every component.
we would expect the vector for 'horse' to have high values for the 'race', 'gallop', and 'move' components, and very low values (but not necessarily zero) for the dissolving-related ones-it is always possible for the word 'horse' to appear in the same context with the word 'painting', for example.
contrasting
train_14045
Our second set of experiments is based on the phrase similarity task of Mitchell and Lapata (2010).
with the task of Section 8.1, this one does not involve any assumptions about disambiguation, and thus it seems like a more genuine test of models aiming to provide appropriate phrasal or sentential semantic representations; the only criterion is the degree to which these models correctly evaluate the similarity between pairs of sentences or phrases.
contrasting
train_14046
Despite the obvious benefits of the tensor-based approaches, this work suggests once more that vector mixture models might constitute a hard-to-beat baseline; similar observations have been made, for example, in the comparative study of Blacoe and Lapata (2012).
when trying to interpret the mixed results regarding the effectiveness of the tensor-based models compared to vector mixtures, we need to take into account that the tensor-based models tested in this work were all "hybrid", in the sense that they all involved some element of point-wise operation; in other words, they constituted a trade-off between transformational power and complexity.
contrasting
train_14047
As a generalization of LSA, MRLSA is also a linear projection model.
while the words are represented by vectors as well, multiple relations between words are captured separately by matrices.
contrasting
train_14048
This confirms our claim that when given the same amount of information, MRLSA performs at least comparably to PILSA.
the true power of MRLSA is its ability to incorporate other semantic relations to boost the performance of the target task.
contrasting
train_14049
The standard way of building a bilingual vector space is to use bilingual lexicon entries (Rapp, 1999;Fung and Cheung, 2004;Gaussier et al., 2004) as dimensions of the space.
there seems to be an apparent flaw in logic, since the methods assume that there exist readily available bilingual lexicons that are then used to induce bilingual lexicons!
contrasting
train_14050
According to the results from tables 1 and 2, regardless of the seed lexicon size, the bootstrapping approach does not suffer from semantic drift, i.e., if we seed the process with high-quality symmetric translation pairs, it is able to recover more pairs and add them as new dimensions of the bilingual vector space.
we also study the influence of applying different confidence estimation functions on top of the symmetry constraint (see sect 2.3), but we do not observe any improvement in the BLE results, regardless of the actual choice of a confidence estimation function.
contrasting
train_14051
Because they cannot capture the meaning of longer phrases properly, compositionality in semantic vector spaces has recently received a lot of attention (Mitchell and Lapata, 2010;Socher et al., 2010;Zanzotto et al., 2010;Yessenalina and Cardie, 2011;Grefenstette et al., 2013).
progress is held back by the current lack of large and labeled compositionality resources and models to accurately capture the underlying phenomena presented in such data.
contrasting
train_14052
An alternative to RNTNs would be to make the compositional function more powerful by adding a second neural network layer.
initial experiments showed that it is hard to optimize this model and vector interactions are still more implicit than in the RNTN.
contrasting
train_14053
In (1), although there is a positive sentiment, the target of the sentiment is an event (Kentucky losing to Tennessee).
from the positive sentiment toward this event, we can infer that the speaker has a negative sentiment toward Kentucky and a positive sentiment toward Tennessee.
contrasting
train_14054
Yang and Cardie (2013) is similar in spirit to our own, where the identification of opinion holders, opinion targets, and opinion expressions is modeled as a sequence tagging problem using a CRF.
similar to previous work applying CRFs to extract sentiment, Yang and Cardie use syntactic relations to connect an opinion target to an opinion expression.
contrasting
train_14055
Theoretically, one can directly apply EM to solve the problem (Knight et al., 2006).
EM has time complexity O(N · V_e^2) and space complexity O(V_f · V_e), where V_f and V_e are the sizes of ciphertext and plaintext vocabularies respectively, and N is the number of cipher bigrams.
contrasting
train_14056
Since the deciphering model described by Dou and Knight (2012) does not consider word reordering, it needs to decipher the bigram into "nations united" in order to get the right word translations "naciones"→"nations" and "unidas"→"united".
the English language model used for decipherment is built from English adjacent bigrams, so it strongly disprefers "nations united" and is not likely to produce a sensible decipherment for "naciones unidas".
contrasting
train_14057
As in that work, we model morphological features rather than directly inflected forms.
that work may be criticized for providing no mechanism to translate surface forms directly, even when evidence for a direct translation is available in the parallel data.
contrasting
train_14058
Similar gains are obtained by model combination of the DT approach with the best Boost model.
a combination of the SMT-based CLIR approaches DT and PSQ barely improved results over the best input model.
contrasting
train_14059
Both models adopt a recurrent language model for the generation of the target translation (Mikolov et al., 2010).
to other n-gram approaches, the recurrent language model makes no Markov assumptions about the dependencies of the words in the target sentence.
contrasting
train_14060
Timeline construction involves identifying temporal relations between events (Do et al., 2012;Mc-Closky and Manning, 2012;D'Souza and Ng, 2013), and is thus related to process extraction as both focus on event-event relations spanning multiple sentences.
events in processes are tightly coupled in ways that go beyond simple temporal ordering, and these dependencies are central for the process extraction task.
contrasting
train_14061
The triggers bind and binds cannot denote the same event if a third trigger secrete is temporally between them.
local predicts they are the same event, as they share a lemma.
contrasting
train_14062
Their system is fully automatic, domain-independent, and scales to large text corpora.
we identify several limitations in the schemas produced by their system.
contrasting
train_14063
produces three tuples: Relational triples provide a more specific representation which is less ambiguous when compared to (subj, verb) or (verb, obj) pairs.
using relational triples also increases sparsity.
contrasting
train_14064
Prior work by Chambers and Jurafsky (2008; 2009) showed that event sequences (narrative chains) mined from text can be used to induce event schemas in a domain-independent fashion.
our manual evaluation of their output showed key limitations which may limit applicability.
contrasting
train_14065
Both latent and generative topic models attempt to find topics from the data and it has been found that in some cases they are equivalent (Ding et al., 2006).
this approach suffers from the problem that the topics might be artifacts of the training data rather than coherent semantic topics.
contrasting
train_14066
Furthermore, in the case that W ≫ N and X has n non-zeroes, the calculation of the SVD is of complexity O(nN + WN^2) and requires O(WN) bytes of memory.
ONETA requires computation time of O(N^a) for a > 2, which is the complexity of the matrix inversion algorithm, and only O(n + N^2) bytes of memory.
contrasting
train_14067
order O(N^{a_1}), being much more efficient than the eigenvalue methods.
it is potentially more error-prone as it requires that a left-inverse of C exists.
contrasting
train_14068
The variance analysis result on the six groups of scores (the scores given by five runs of five-fold cross-validation and the scores provided by the human rater) shows no significant difference, suggesting the robustness of our proposed approach.
although the preference ranking based approach (SVM for ranking) and the regression based approach (SVM for regression) give very good results in human-machine agreement, their variance analysis results indicate that there exists a significant difference between the scores given by human and machine raters.
contrasting
train_14069
We have proposed a novel Joint, Additive, Sequential (JAS) model of conversational topics and speech acts.
to previous approaches to modeling conversational exchanges, this model factors both the current topic and the current speech act into token emission and state transition probabilities.
contrasting
train_14070
If an entity or an entity pair appears significantly more frequently in one day's news than in recent history, the corresponding event candidates are likely to be good to generate paraphrase.
the temporal burstiness heuristic implies that a good EEC (a_1, a_2, t) tends to have a spike in the time series of its entities a_i, or argument pair (a_1, a_2), on day t. Even if we have selected a good EEC for paraphrasing, it is likely that it contains a few relation phrases that are related to (but not synonymous with) the other relations included in the EEC.
contrasting
train_14071
A total of 349 training reports and 477 test reports were made available to the participants.
data which came from UPMC (more than 50% of the data) was not made available for public use.
contrasting
train_14072
extended the ILP formulation and used soft constraints within the Constrained Conditional Model formulation (Chang, 2011).
their implementation performed only approximate inference.
contrasting
train_14073
Label propagation (Zhu and Ghahramani, 2002) aims to spread label distributions from a small training set throughout the graph.
our unsupervised algorithm leverages the connection between two adjacent unlabeled nodes to find the correct labels for both of them.
contrasting
train_14074
The segmentation errors especially on opinion target words will directly influence the results of part-of-speech tagging and candidate extraction.
some of the opinion target words in a topic are often included in the hashtag.
contrasting
train_14075
In the topic #90 后打老人# (means "A young man hits an old man"), "90 后" (literally "90 later" and means a young man born in the 90s) is an important word because it is the opinion target of many sentences.
existing Chinese word segmentation tools will regard it as two separate words "90" and "后" ("later").
contrasting
train_14076
So far in our model topics and events are not related.
many events are highly related to certain topics.
contrasting
train_14077
TimeUserLDA also models topics and events by separating topic tweets from event tweets.
it groups event tweets into a fixed number of bursty topics and then uses a two-state machine in a postprocessing step to identify events from these bursty topics.
contrasting
train_14078
Thus, events are not directly modeled within the generative process itself.
events are inherent in our generative model.
contrasting
train_14079
This is because this method mixes topics and events first and only detects events from bursty topics in a second stage of postprocessing.
our model performs well for topic-oriented events.
contrasting
train_14080
Each warp of threads shares a program counter and executes code in lock-step.
execution is not SIMD -all threads do not execute all instructions.
contrasting
train_14081
REG is related to content selection, which has been studied for generating text from databases (Konstas and Lapata, 2012), event streams (Chen et al., 2010), images (Berg et al., 2012;Zitnick and Parikh, 2013), and text (Barzilay and Lapata, 2005;Carenini et al., 2006).
most approaches to this problem output bags of concepts, while we construct full logical expressions, allowing our approach to capture complex relations between attributes.
contrasting
train_14082
The RANKBOOST is a boosted decision stump where, in each boosting iteration, the stump is found by maximizing the weighted exponential rank loss.
both the EXPENS and LAMBDAMART make use of tree learners in the ensemble classifier they produce.
contrasting
train_14083
If two users rate the same movies with equal ratings, then these similarities will be maximal.
they may have rated identically but for completely different reasons, making them not alike at all.
contrasting
train_14084
In replicating Hopkins and May's experiments, we confirm that existing search algorithms for MERT-including coordinate ascent, Powell's algorithm (Powell, 1964), and random direction sets (Cer et al., 2008)-perform poorly in this experimental condition.
when using our gradient-based direction finder, MERT has no problem finding the true optimum even in a 1000-dimensional space.
contrasting
train_14085
On the other hand, regularized MERT only requires one hyperparameter to tune: a regularization penalty for ℓ2 or ℓ0.
since PRO optimizes translation length on the Dev dataset and MERT does so using the Tune set, a comparison of the two systems would yield a discrepancy in length that would be undesirable.
contrasting
train_14086
First, unregularized MERT can achieve competitive results with a small set of highly engineered features, but adding a large set of more than 200 features causes MERT to perform poorly, particularly on the test set.
unregularized MERT can recover much of this drop of performance if it is given a good sparse initializer w_0.
contrasting
train_14087
In Section 4, we gave a simple set of features that yielded a high-performance coreference system; this high performance is possible because features targeting only superficial properties in a fine-grained way can actually model complex linguistic constraints.
while our existing features capture syntactic and discourse-level phenomena surprisingly well, they are not effective at capturing semantic phenomena like type compatibility.
contrasting
train_14088
To test how much of this performance could be obtained by a simpler iterated network, we experimented with ablated systems that don't fork or join, i.e., our classic "baby steps" schema (chaining together 15 optimizers), using both DBM and DBM 0 , with and without a transform in-between.
all such "linear" networks scored well below 50%.
contrasting
train_14089
In this limit, posterior regularization degenerates into the convex log-likelihood objective normally used for supervised data, J_Q(θ) = L(θ).
in the general case, the PR objective J_Q is not necessarily convex.
contrasting
train_14090
2012; Lerman and Hogg 2010; Szabo and Huberman 2010), which is closely related to the highlighting task.
to these approaches, we strive to predict what term a user is likely to be interested in when reading content, which may or may not be the same as the most popular content that is related to the current document.
contrasting
train_14091
However, exploiting this extra information does not always need to result in a better model, as the target side words are only derived from the given source side, which is available to both TMs and JMs.
including future source words in a bidirectional model clearly improves the performance further.
contrasting
train_14092
The ability to formulate persuasive arguments is not only the foundation for convincing an audience of novel ideas but also plays a major role in general decision making and in analyzing different stances.
current writing support is limited to feedback about spelling, grammar, or stylistic properties and there is currently no system that provides feedback about written argumentation.
contrasting
train_14093
To the best of our knowledge, there is currently only one approach that aims at identifying argumentative discourse structures proposed by Mochales-Palau and Moens (2009).
it relies on a manually created context-free grammar (CFG) and is tailored to the legal domain, which follows a standardized argumentation style.
contrasting
train_14094
We are only aware of one approach (Mochales-Palau and Moens, 2009;Wyner et al., 2010) that also focuses on the identification of argumentative discourse structures.
this approach is based on a manually created CFG that is tailored to documents from the legal domain, which follow a standardized argumentation style.
contrasting
train_14095
Error analysis: The system performs well for separating argumentative and non-argumentative text units as well as for identifying premises.
the identification of claims and major claims yields lower performance.
contrasting
train_14096
It turned out that structural features are the most effective ones for this task.
some of those features are unique to persuasive essays, and it is an open question if there are general structural properties of arguments which can be exploited for separating claims from premises.
contrasting
train_14097
This is because in the current version of our system, the confidence estimation of the intention identifier for domain-dependent dialogue queries is less reliable due to the lack of context information.
the confidence scores returned by the domain experts will be more informative at this point.
contrasting
train_14098
Given a state, a most straightforward policy is to select the action that corresponds to the maximum mean Q-value estimated by the GP.
since the objective is to learn the Q-function associated with the optimal policy by interacting directly with users, the policy must exhibit some form of stochastic behaviour in order to explore alternatives during the process of learning.
contrasting
train_14099
Next, a tree subtraction algorithm was used to extract the arguments.
as pointed out in Dinesh et al.
contrasting